Language as a Litmus Test for AI

And the problem of bias



Language, which clearly played an important role in human evolution, has long been considered a hallmark of human intelligence, and when Barbara Grosz started working on problems in artificial intelligence (AI) in the 1970s, it was the litmus test for defining machine intelligence. The idea that conversation could be used to identify intelligent computers dates to 1950, when Alan Turing, the British scientist who cracked Nazi Germany’s encrypted military communications, suggested that the ability to carry on a conversation in a manner indistinguishable from a human’s could serve as a proxy for intelligence. Turing raised the idea as a philosophical question, because intelligence itself is difficult to define, but his proposal was soon memorialized as the Turing test. Whether it is a reasonable test of intelligence is debatable. Regardless, Grosz says that even the most advanced, language-capable AI systems now available, such as Siri, Alexa, and Google Assistant, fail to pass it.

The Higgins professor of natural sciences has witnessed a transformation of her field. For decades, computers lacked the power, speed, and storage capacity to drive neural networks—modeled on the wiring of the human brain—that are able to learn from processing vast quantities of data. Grosz’s early language work therefore involved developing formal models and algorithms to create a computational model of discourse: telling the computer, in effect, how to interpret and create speech and text. Her research has led to the development of frameworks for handling the unpredictable nature of human communication, for modeling one-on-one human-computer interactions, and for advancing the integration of AI systems into human teams.

The currently ascendant AI approach, based on neural networks that learn, relies instead on computers’ ability to process vast quantities of data. In the case of language, for example, a neural network can sample a corpus, extending even to everything ever written that’s been posted online, to learn the “meaning” of words and their relationships to each other. A dictionary created using this approach, explains assistant professor of computer science Alexander “Sasha” Rush, contains mathematical representations of words rather than language-based definitions. Each word is a vector: a relational definition of a word in terms of its relationships to other words. Thus the vectors describing the relationship between the words “man” and “woman” are mathematically analogous to those describing the relationship between “king” and “queen.”
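Rush’s description can be made concrete in a few lines of Python. The sketch below uses invented three-dimensional vectors and a hypothetical most_similar helper purely for illustration; real embedding models such as word2vec or GloVe learn vectors with hundreds of dimensions automatically from a corpus.

```python
import numpy as np

# Toy three-dimensional word vectors, invented for illustration only.
# Real embeddings have hundreds of dimensions learned from large corpora.
vectors = {
    "man":   np.array([0.9, 0.1, 0.4]),
    "woman": np.array([0.9, 0.9, 0.4]),
    "king":  np.array([0.2, 0.1, 0.95]),
    "queen": np.array([0.2, 0.9, 0.95]),
}

def most_similar(target, vocab):
    """Return the word whose vector is closest to `target` by cosine similarity."""
    def cosine(a, b):
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(vocab, key=lambda word: cosine(vocab[word], target))

# The classic analogy: king - man + woman lands nearest to queen.
candidates = {w: v for w, v in vectors.items() if w != "king"}
print(most_similar(vectors["king"] - vectors["man"] + vectors["woman"], candidates))
# -> queen
```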

This approach to teaching language to computers has tremendous potential for translation services, for developing miniaturized chips that would allow voice control of all sorts of devices, and even for creating AIs that could write a story about a sporting event based purely on the data. But the mathematical representations also capture all the human biases embedded in the text they are trained on: whatever the associations among culturally freighted words like “man,” “woman,” and “CEO” embody with respect to gender, power dynamics, and inequality. Those learned associations can lead neural-network-based AI systems to produce biased results.
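Such bias can be probed directly. The sketch below shows one common auditing idea, assuming vectors taken from a model trained on web text: project a word’s vector onto the axis running from “man” to “woman” and see which way it leans. The function name lean is hypothetical, not part of any library.

```python
import numpy as np

def lean(word_vec, man_vec, woman_vec):
    """Score a word along the man-to-woman axis of an embedding space.

    Positive values lean toward "man," negative toward "woman,"
    and near zero is roughly neutral. The vectors would come from
    a trained model rather than being set by hand.
    """
    axis = man_vec - woman_vec
    axis = axis / np.linalg.norm(axis)
    return float((word_vec / np.linalg.norm(word_vec)) @ axis)

# With embeddings learned from web text, occupation words such as
# "CEO" frequently score toward the "man" end of this axis: the
# model has absorbed the gender associations of its corpus.
```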

Rush considers his work developing language capabilities for microscopic computer chips to be purely engineering, and his translation work to be functional rather than literary, even though work like his undoubtedly advances the goal of developing an AI that can pass the Turing test. But significant obstacles remain.

How can a computer be taught to recognize inflection, such as the rising tone that turns a statement into a question, or an interruption to discipline kids (“Hey, stop that!”), of the sort that humans understand immediately? These are the kinds of theoretical problems Grosz has been grappling with for years. And although she is agnostic about which approach will ultimately succeed in building systems able to participate in everyday human dialogue, probably decades hence, she does allow that it might well have to be a hybrid of neural-network learning and human-developed models and rules for understanding language in all its complexity.
