Artificial Intelligence Can Be Sexist and Racist Just Like Humans

A robot toy is seen at the Bosnian War Childhood museum exhibition in Zenica, Bosnia and Herzegovina, June 21, 2016. REUTERS/Dado Ruvic

Many of us like to think that artificial intelligence could help eradicate bias: that algorithms could help humans avoid hiring or policing according to stereotypes about gender or race. But a new study suggests that when computers acquire knowledge from text written by humans, they also absorb the same racial and gender prejudices, thereby perpetuating them.

Researchers at Princeton University and Britain's University of Bath found that machine learning "absorbs stereotyped biases" when trained on words from the internet. Their findings, published in the journal Science on Thursday, showed that machines learn word associations from written texts that mirror those learned by humans.

"Don't think that AI is some fairy godmother," said study co-author Joanna Bryson. "AI is just an extension of our existing culture."

For the study, Bryson and her fellow researchers used GloVe, a widely used word-embedding algorithm, trained on a corpus of 840 billion words drawn from the internet.
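The study worked with pretrained GloVe vectors rather than raw text. As a rough illustration, here is a minimal Python sketch of a loader for the plain-text format in which such vectors are commonly distributed; the file name is illustrative, matching the publicly released 300-dimensional Common Crawl vectors.

```python
import numpy as np

def load_glove(path, dim=300):
    """Load pretrained word vectors from a whitespace-separated text file.

    Each line holds a token followed by `dim` floating-point values.
    """
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split(" ")
            # Assume the last `dim` fields are the vector; anything before
            # them is the token (some tokens in web corpora contain spaces).
            word = " ".join(parts[:-dim])
            vectors[word] = np.asarray(parts[-dim:], dtype=np.float32)
    return vectors

# Illustrative file name for the 300-dimensional Common Crawl release
embeddings = load_glove("glove.840B.300d.txt")
```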

A psychological tool called the implicit association test (IAT), which measures a person's unconscious associations between certain words, inspired the development of a similar test for machines: the word-embedding association test (WEAT).

"Word embeddings" established a computer's definition of a word, based on the contexts in which it usually appears. The WEAT found that male names were associated with work, math, and science, and female names with family and the arts, meaning these stereotypes were held by the computer.

To some extent, the results are unsurprising: machine learning systems have no choice but to learn from the data supplied to them by humans. A landmark 2004 study found that when identical résumés were sent to employers, those bearing European-American names drew more callbacks than those bearing African-American names; the new research found the same name associations encoded in the word embeddings.

The embeddings also encoded benign sentiments, such as the relative pleasantness of flowers and insects, as well as verifiable facts about the labor force, such as the gender distribution of occupations, highlighting how easily prejudices and biases can be transferred from humans to machines.

"Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names," the paper's abstract states.

"Our methods hold promise for identifying and addressing sources of bias in culture, including technology."
