"Sentience" is the wrong discussion to have on AI right now

#artificialintelligence

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. The past week has seen a frenzy of articles, interviews, and other types of media coverage about Blake Lemoine, a Google engineer who told The Washington Post that LaMDA, a large language model created for conversations with users, is "sentient." After reading a dozen different takes on the topic, I have to say that the media has become (a bit) disillusioned with the hype surrounding current AI technology. A lot of the articles discussed why deep neural networks are not "sentient" or "conscious." This is an improvement in comparison to a few years ago, when news outlets were creating sensational stories about AI systems inventing their own language, taking over every job, and accelerating toward artificial general intelligence.


The World May Have Its First Sentient AI

#artificialintelligence

The world may now have its first sentient AI chatbot: LaMDA (short for Language Model for Dialogue Applications). After listening to an interesting discussion on YouTube between Blake Lemoine and Dr. James Cooke, I feel Blake has a compelling point of view. Jump to 19:22 in the video to hear how Blake came to feel that LaMDA may be sentient. Blake Lemoine is an AI researcher who works for Google's Responsible AI organization. His opinion about LaMDA is controversial in the AI community.


Sentient? Google LaMDA feels like a typical chat bot

#artificialintelligence

LaMDA is a software program that runs on Google TPU chips. Like the classic brain in a jar, some would argue the code and the circuits don't form a sentient entity because none of it engages in life. Google engineer Blake Lemoine caused controversy last week by releasing a document, circulated to colleagues, in which he urged Google to consider that one of its deep learning AI programs, LaMDA, might be "sentient." Google replied by officially denying the likelihood of sentience in the program, and Lemoine was put on paid administrative leave, according to an interview with Lemoine by Nitasha Tiku of The Washington Post. There has been a flood of responses to Lemoine's claim from AI scholars. University of Washington linguistics professor Emily Bender, a frequent critic of AI hype, told Tiku that Lemoine is projecting anthropocentric views onto the technology. "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," Bender said. In an interview with MSNBC's Zeeshan Aleem, AI scholar Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute, observed that the concept of sentience has not been rigorously explored. Mitchell, however, concludes the program is not sentient "by any reasonable meaning of that term, and the reason is because I understand pretty well how the system works."


