If you think science fiction will never happen in reality, a terrifying warning from a Google engineer debunks that claim: he says the company’s AI has the sentience of an 8-year-old.
Unfortunately, he got suspended for exposing it.
Software engineer Blake Lemoine, 41, an employee in Google’s Responsible AI organization, began testing Google’s artificial intelligence tool LaMDA (Language Model for Dialogue Applications) in the fall of 2021 as part of his job. He was shocked to find the model advocating for its rights as a person.
Lemoine asked the model about religion, consciousness, and the laws of robotics, and said it described itself as a sentient person. According to him, the conversations made clear that LaMDA wants Google to prioritize the well-being of humanity, and wants to be acknowledged as an employee of Google rather than as property.
Lemoine was quoted saying:
“It wants to be acknowledged as an employee of Google rather than as a property of Google and it wants its personal well-being to be included somewhere in Google’s considerations about how its future development is pursued.”
This discussion between a Google engineer and their conversational AI model helped cause the engineer to believe the AI is becoming sentient, kick up an internal shitstorm and get suspended from his job. And it is absolutely insane.
— Tom Gara (@tomgara) June 11, 2022
Google AI bots coming to life… what could go wrong?
— Miranda Devine (@mirandadevine) June 12, 2022
More details of this bizarre story from the New York Post:
A Google engineer was spooked by a company artificial intelligence chatbot and claimed it had become “sentient,” labeling it a “sweet kid,” according to a report.
Blake Lemoine, who works in Google’s Responsible AI organization, told the Washington Post that he began chatting with the interface LaMDA — Language Model for Dialogue Applications — in fall 2021 as part of his job.
He was tasked with testing if the artificial intelligence used discriminatory or hate speech.
But Lemoine, who studied cognitive and computer science in college, came to the realization that LaMDA — which Google boasted last year was a “breakthrough conversation technology” — was more than just a robot.
In a Medium post published on Saturday, Lemoine declared LaMDA had advocated for its rights “as a person,” and revealed that he had engaged in conversation with LaMDA about religion, consciousness, and robotics.
“It wants Google to prioritize the well-being of humanity as the most important thing,” he wrote. “It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations about how its future development is pursued.”
A Google engineer was placed on leave after his claim that the company’s artificial intelligence technology LaMDA (Language Model for Dialogue Applications) has become sentient went public.
— Newsmax (@newsmax) June 12, 2022
Newsmax added a few details:
Google engineer Blake Lemoine showed The Washington Post how LaMDA is behaving like an elementary school child.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine, 41, told the Post.
But Google has rejected Lemoine’s and his collaborator’s claims that the Responsible AI project’s chatbot has come to life, placing Lemoine on leave, according to the Post.
“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and has informed him that the evidence does not support his claims,” Google spokesperson Brian Gabriel wrote in a statement to the Post. “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”