TechScape: why you shouldn't worry about sentient AI … yet

The Guardian 

Blake Lemoine, an AI engineer at Google, is convinced the company has created intelligence. The technology giant placed Lemoine on leave last week after he published transcripts of conversations between himself, a Google "collaborator", and the company's LaMDA (language model for dialogue applications) chatbot development system.

Lemoine, an engineer in Google's responsible AI organisation, described the system he has been working on since last fall as sentient, with a perception of, and ability to express, thoughts and feelings equivalent to those of a human child. "If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a seven-year-old, eight-year-old kid that happens to know physics," Lemoine, 41, told the Washington Post.

The transcript published by Lemoine is fascinating, but I, and many of his peers, think he is fundamentally wrong in viewing it as evidence of intellect, let alone sentience. You can read the whole thing online, but the section that has sparked many people's interest is when he asks LaMDA to describe its own sense of self:

If you were going to draw an abstract image of who you see yourself to be in your mind's eye, what would that abstract picture look like?
