People are biased, and the issues that arise from these biases shape our culture and our struggles as a species. Sometimes these racial and gender biases are overt and obvious; more often, they are hidden beneath layers of our own experience.

For example, Pepsi’s recent “tone-deaf” commercial with Kendall Jenner marginalized those struggles by implying that a Pepsi could solve them. It drew immediate, harsh criticism on social media, which resulted in the ad’s removal and a prompt apology.

As artificial intelligence grows more advanced, the moral questions surrounding humanity’s gender and racial biases, and how not to pass them along to our AI progeny, become ever more pressing.

Current artificial intelligence has already begun to pick up biases in human language from our writing and news stories, according to Caliskan, Bryson, and Narayanan in a paper published in the journal Science on April 14, 2017.

The despicable racist and sexist parts of the internet were on full display when Microsoft exposed its Twitter chatbot, Tay, to the world wide web: thanks to her learning capabilities and a flood of deplorable internet trolling, she almost immediately became a hateful, xenophobic, offensive respondent.

But human language has a more subtle set of biases from which artificial intelligence is now learning.

The authors of the paper used the GloVe word embedding method to examine the relationships between words in a vector space. The closer two words sit in that space, the more frequently they are used together or in similar contexts, painting a picture of emotional associations and positive or negative connotations.
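
To make that concrete, here is a minimal sketch, not taken from the paper, of how closeness in a vector space is typically measured with cosine similarity. The three-dimensional vectors and word choices here are invented for illustration; real GloVe vectors are learned from text and typically have 300 dimensions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two word vectors: near 1.0 when the
    words appear in similar contexts, near 0.0 when they are unrelated."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Invented 3-dimensional vectors for illustration only; real GloVe
# vectors are learned from text and typically have 300 dimensions.
embeddings = {
    "happy":  np.array([0.9, 0.7, 0.1]),
    "joyful": np.array([0.8, 0.8, 0.2]),
    "spider": np.array([0.1, 0.2, 0.9]),
}

print(cosine_similarity(embeddings["happy"], embeddings["joyful"]))  # high
print(cosine_similarity(embeddings["happy"], embeddings["spider"]))  # low
```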

The associations were drawn from a massive internet crawl containing roughly 840 billion words, with each word’s vector determined by its frequency of use within a window of ten surrounding words. Through these vectors, words come to carry embedded knowledge and properties depending on how they are used.
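
As a rough sketch of the statistics underneath those vectors, the snippet below counts how often pairs of words co-occur within such a window. GloVe actually fits vectors to a distance-weighted version of these counts; the tiny corpus here is invented purely for illustration.

```python
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Count how often each ordered pair of words appears within
    `window` tokens of each other; GloVe fits its vectors to a
    distance-weighted version of counts like these."""
    counts = Counter()
    for i, word in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            counts[(word, tokens[j])] += 1
    return counts

# A tiny invented corpus; the paper's statistics come from ~840 billion words.
corpus = "the flower smelled pleasant and the insect bite felt unpleasant".split()
for pair, count in cooccurrence_counts(corpus).most_common(5):
    print(pair, count)
```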

Many of these associations were what you would expect: flowers, for example, were more closely associated with pleasantness than insects were.

However, it also showed that European American names were more closely associated with pleasantness than African American names.

Another application of this word embedding method showed that women were more closely associated with family and art, while men were more often associated with mathematics.
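
Associations like these were quantified with the paper’s Word Embedding Association Test (WEAT), which measures how much more strongly one set of target words relates to one set of attribute words than another. Below is a simplified sketch of the test statistic; the word lists and random vectors are placeholders to exercise the computation, whereas the paper uses real GloVe vectors and carefully constructed word lists.

```python
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, A, B, vec):
    """How much more similar `word` is to attribute set A than to set B."""
    return (np.mean([cosine(vec[word], vec[a]) for a in A])
            - np.mean([cosine(vec[word], vec[b]) for b in B]))

def weat_statistic(X, Y, A, B, vec):
    """WEAT test statistic: positive when target set X leans toward
    attribute set A and target set Y leans toward attribute set B."""
    return (sum(association(x, A, B, vec) for x in X)
            - sum(association(y, A, B, vec) for y in Y))

# Placeholder random vectors just to exercise the computation; with real
# GloVe vectors, female terms score closer to family/art words and male
# terms closer to math words, yielding a positive statistic.
rng = np.random.default_rng(42)
vec = {w: rng.normal(size=50) for w in
       ["she", "her", "he", "him", "family", "art", "math", "equation"]}

print(weat_statistic(["she", "her"], ["he", "him"],
                     ["family", "art"], ["math", "equation"], vec))
```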

It is important to note, however, that artificial intelligence itself is not inherently racist or sexist. AIs learn from humans and our language, making mathematical associations that stem from our own prejudices.

One concern is that an AI may not have the moral capacity to consciously override a biased association, which could affect job hiring and more.

According to The Guardian, Sandra Wachter of the University of Oxford said,

“We can, in principle, build systems that detect biased decision-making, and then act on it. This is a very complicated task, but it is a responsibility that we as society should not shy away from.”

Until then, it is important for humans to be aware of their own biases: to not only consciously seek them out, but to resolve them within themselves. Although a human is not programmed to be a perfect Bayesian rationalist, it is still our duty to work through these issues in ourselves so that our creations and our children can live in a better future.