Blake Lemoine, an engineer at Google, has claimed that the firm’s LaMDA artificial intelligence is sentient, but the expert consensus is that this is not the case

Technology 13 June 2022

Google is developing a range of artificial intelligence models

KENZO TRIBOUILLARD/AFP via Getty Images

A Google engineer has reportedly been placed on suspension from the company after claiming that an artificial intelligence (AI) he helped to develop had become sentient. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid,” Blake Lemoine told the Washington Post.

Lemoine released transcripts of conversations with the AI, called LaMDA (Language Model for Dialogue Applications), in which it appears to express fears of being switched off, talk about how it feels happy and sad, and attempt to form bonds with humans by talking about situations that it could never actually have experienced. Here’s everything you need to know.

Is LaMDA really sentient?

In a word, no, says Adrian Weller at the Alan Turing Institute.


“LaMDA is an impressive model, it’s one of the most recent in a line of large language models that are trained with a lot of computing power and huge amounts of text data, but they’re not really sentient,” he says. “They do a sophisticated form of pattern matching to find text that best matches the query they’ve been given that’s based on all the data they’ve been fed.”
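To illustrate the pattern-matching idea in the simplest possible terms, here is a toy sketch (my own example, nothing like LaMDA’s actual architecture or scale): a bigram model that “predicts” the next word purely from how often words followed each other in its training text, with no understanding of meaning.

```python
from collections import Counter, defaultdict

# Tiny illustrative training text; a real large language model is
# trained on vastly more data with a far more sophisticated method.
corpus = "i feel happy . i feel sad . i feel happy today".split()

# Count, for each word, which words followed it in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Return the word that most often followed `word` in training,
    # chosen purely by frequency, not by any grasp of what it means.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("feel"))  # "happy" (seen twice vs "sad" once)
```

A model like this can produce text that sounds like its training data, including sentences about feelings, without there being anything that actually feels.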

Adrian Hilton at the University of Surrey, UK, agrees that sentience is a “bold claim” that’s not backed up by the facts. Even noted cognitive scientist Steven Pinker weighed in to shoot down Lemoine’s claims, while Gary Marcus at New York University summed it up in one word: “nonsense”.

So what convinced Lemoine that LaMDA was sentient?

Neither Lemoine nor Google responded to New Scientist’s request for comment. But it’s certainly true that the output of AI models in recent years has become surprisingly, even shockingly good.

Our minds are susceptible to perceiving such ability – especially when it comes to models designed to mimic human language – as evidence of true intelligence. Not only can LaMDA make convincing chit-chat, but it can also present itself as having self-awareness and feelings.

“As humans, we’re very good at anthropomorphising things,” says Hilton. “Putting our human values on things and treating them as if they were sentient. We do this with cartoons, for instance, or with robots or with animals. We project our own emotions and sentience onto them. I would imagine that’s what’s happening in this case.”

Read more: Is DeepMind’s Gato AI really a human-level intelligence breakthrough?

Will AI ever be truly sentient?

It remains unclear whether the current trajectory of AI research, where ever-larger models are fed ever-larger piles of training data, will see the genesis of an artificial mind.

“I don’t believe at the moment that we really understand the mechanisms behind what makes something sentient and intelligent,” says Hilton. “There’s a lot of hype about AI, but I’m not convinced that what we’re doing with machine learning, at the moment, is really intelligence in that sense.”

Weller says that, given that human emotions rely on sensory inputs, it might eventually be possible to replicate them artificially. “It potentially, maybe one day, might be true, but most people would agree that there’s a long way to go.”

How has Google reacted?

The Washington Post reports that Lemoine was placed on suspension after seven years at Google, having attempted to hire a lawyer to represent LaMDA and sent executives a document claiming the AI was sentient. Google also says that publishing the transcripts broke confidentiality policies.

Google told the Washington Post that: “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Lemoine responded on Twitter: “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.”
