A Google engineer recently claimed an AI was alive and that it had hired a lawyer. If judges were to accept these claims, it could lead to AIs being frozen in their biased states, writes Annalee Newitz
IN EARLY June, a Google engineer named Blake Lemoine dropped a bombshell. He told Washington Post reporter Nitasha Tiku that his employer had secretly developed a sentient artificial intelligence, and that it wanted to be free.
The AI in question is called LaMDA (Language Model for Dialogue Applications). It is a large language model, or LLM, a type of algorithm that chats with people by drawing on a huge body of text – often from the internet – and predicting which words and phrases are most likely to follow each other. After chatting with LaMDA, Lemoine decided it …
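To picture what "predicting which words and phrases are most likely to follow each other" means in practice, here is a minimal sketch in Python. It is only a toy frequency counter over a handful of words, nothing like LaMDA's neural network trained on vast swathes of the internet; the example corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy illustration (not LaMDA): predict the next word by counting
# which words most often follow each other in a small body of text.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# For every word, count which words follow it and how often.
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` (ties broken by first appearance)."""
    candidates = followers.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))   # -> "cat"
print(predict_next("cat"))   # -> "sat"
```

A real LLM replaces these raw counts with probabilities learned by a neural network over billions of words, but the underlying task is the same: given what has been said so far, guess what comes next.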