
Google Sidelines Engineer Who Claims Its A.I. Is Sentient

Blake Lemoine, the engineer, says that Google’s language model has a soul. The company disagrees.

Some artificial intelligence researchers have made optimistic claims about technologies soon reaching sentience, but many others quickly dismiss those assertions. Credit: Laura Morton for The New York Times

By Nico Grant and Cade Metz

SAN FRANCISCO — Google placed an engineer on paid leave recently after dismissing his claim that its artificial intelligence is sentient, surfacing yet another fracas about the company’s most advanced technology.

Blake Lemoine, a senior software engineer in Google’s Responsible A.I. organization, said in an interview that he was put on leave Monday. The company’s human resources department said he had violated Google’s confidentiality policy. The day before his suspension, Mr. Lemoine said, he handed over documents to a U.S. senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination.

Google said that its systems imitated conversational exchanges and could riff on different topics, but did not have consciousness. “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our A.I. Principles and have informed him that the evidence does not support his claims,” Brian Gabriel, a Google spokesman, said in a statement. “Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.” The Washington Post first reported Mr. Lemoine’s suspension.

For months, Mr. Lemoine had tussled with Google managers, executives and human resources over his surprising claim that the company’s Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul. Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Mr. Lemoine did. Most A.I. experts believe the industry is a very long way from computing sentience.

Some A.I. researchers have long made optimistic claims about these technologies soon reaching sentience, but many others are extremely quick to dismiss them. “If you used these systems, you would never say such things,” said Emaad Khwaja, a researcher at the University of California, Berkeley, and the University of California, San Francisco, who is exploring similar technologies.

While chasing the A.I. vanguard, Google’s research organization has spent the last few years mired in scandal and controversy. The division’s scientists and other employees have regularly feuded over technology and personnel matters in episodes that have often spilled into the public arena. In March, Google fired a researcher who had sought to publicly disagree with two of his colleagues’ published work. And the dismissals of two A.I. ethics researchers, Timnit Gebru and Margaret Mitchell, after they criticized Google language models, have continued to cast a shadow on the group.

Image: Blake Lemoine in 2005, when he was a U.S. Army specialist. Credit: Alex Grimm/Reuters

Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict and an A.I. researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program’s consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company’s human resources department discriminated against.

“They have repeatedly questioned my sanity,” Mr. Lemoine said. “They said, ‘Have you been checked out by a psychiatrist recently?’” In the months before he was placed on administrative leave, the company had suggested he take a mental health leave.

Yann LeCun, the head of A.I. research at Meta and a key figure in the rise of neural networks, said in an interview this week that these types of systems are not powerful enough to attain true intelligence.

Google’s technology is what scientists call a neural network, which is a mathematical system that learns skills by analyzing large amounts of data. By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat.
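To make the idea concrete, here is a minimal sketch, not Google’s code, of what “learning skills by analyzing large amounts of data” looks like in practice. It assumes the open-source PyTorch library and uses random stand-in pixels in place of real cat photos; a production system would train on millions of labeled images.

```python
# A minimal sketch (not Google's system) of a neural network learning to
# separate "cat" from "not cat" by adjusting its weights to reduce errors
# on labeled example images. Assumes the open-source PyTorch library.
import torch
import torch.nn as nn

# Stand-in data: 64 tiny 32x32 grayscale "photos" of random pixels,
# each labeled 1 (cat) or 0 (not cat). Real systems use vast photo sets.
images = torch.randn(64, 32 * 32)
labels = torch.randint(0, 2, (64,)).float()

# A small network: pixels -> hidden layer -> a single "cat" score.
model = nn.Sequential(
    nn.Linear(32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

# Each pass, the model guesses, measures how wrong it was, and nudges
# its internal weights to be slightly less wrong the next time.
for step in range(100):
    optimizer.zero_grad()
    predictions = model(images).squeeze(1)
    loss = loss_fn(predictions, labels)
    loss.backward()
    optimizer.step()
```

The pattern-finding described in the article is exactly this loop repeated at enormous scale: guess, measure the error, adjust, and repeat.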

Over the past several years, Google and other leading companies have designed neural networks that learned from enormous amounts of prose, including unpublished books and Wikipedia articles by the thousands. These “large language models” can be applied to many tasks. They can summarize articles, answer questions, generate tweets and even write blog posts.
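As a rough illustration of how one model can be pointed at several of those tasks, the sketch below uses the open-source Hugging Face `transformers` library and small public models; LaMDA itself is internal to Google, so the specific models and prompts here are illustrative assumptions.

```python
# Illustrative only: small public models standing in for a large language
# model applied to different text tasks. Assumes `pip install transformers`.
from transformers import pipeline

# Summarization: condense a passage of prose into a shorter one.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
article = (
    "Google placed an engineer on paid leave after dismissing his claim that "
    "its conversational artificial intelligence system had become sentient, "
    "the latest controversy to emerge from the company's A.I. research arm."
)
print(summarizer(article, max_length=40)[0]["summary_text"])

# Open-ended generation: continue a prompt, e.g. drafting a blog post.
generator = pipeline("text-generation", model="gpt2")
print(generator("The question of machine sentience", max_length=40)[0]["generated_text"])
```

The same underlying model family handles both jobs because each task reduces to predicting plausible next words from patterns in the training text.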

But they are extremely flawed. Sometimes they generate perfect prose. Sometimes they generate nonsense. The systems are very good at recreating patterns they have seen in the past, but they cannot reason like a human.

Cade Metz is a technology correspondent, covering artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas. He previously wrote for Wired magazine.

A version of this article appears in print in Section B, Page 5 of the New York edition with the headline: At Heart of Google’s Feud With Worker: His Claim That Its A.I. Has a Soul.
