As the story goes, Google suspended engineer Blake Lemoine after he claimed that one of the company’s AI systems, LaMDA, is “sentient”.
LaMDA, which stands for “Language Model for Dialog Applications,” is one of several large-scale AI systems that have been trained on large swaths of text from the internet and can respond to written prompts.
On Saturday, Blake published a long transcript of his conversation with the chatbot, titled “Is LaMDA sentient?”, and he also gave an interview to The Washington Post.
In the Washington Post interview, the 41-year-old said: “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”
Since the interviews were published, Blake has been suspended on full pay. According to Google, he broke its confidentiality rules.
What does Sentience mean?
Sentience is the capacity to experience feelings and sensations. The word was coined by philosophers in the 1630s, derived from the Latin sentientem (“feeling”), to distinguish the ability to feel from the ability to think (reason). In other words, Blake is claiming that the AI chatbot LaMDA can experience feelings and sensations like a human being.
This has reignited the debate about artificial intelligence and whether technology can think on its own, without the aid of human beings.
Google’s response to the claim
Google has rejected the claims, saying there is nothing to back them up.
According to Google spokesperson Brian Gabriel, in a statement made available to the BBC, Mr Lemoine “was told that there was no evidence that Lamda was sentient (and lots of evidence against it)”.
In its statement, Google pointed out that LaMDA has undergone 11 “distinct AI principles reviews,” as well as “rigorous research and testing” related to quality, safety, and the ability to come up with statements that are fact-based.

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” the company said.

“Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” Google said.