How Close Is AI to Becoming Sentient?

#artificialintelligence 

In the movie 2001: A Space Odyssey, there is a computer controlling most of the spaceship's functions. The computer is described this way on Wikipedia: "HAL 9000 is a fictional artificial intelligence character and the main antagonist in Arthur C. Clarke's Space Odyssey series. First appearing in the 1968 film 2001: A Space Odyssey, HAL (Heuristically programmed ALgorithmic computer) is a sentient artificial general intelligence computer that controls the systems of the Discovery One spacecraft and interacts with the ship's astronaut crew." Basically, the computer takes over, thinks it is human, and acts like a human -- in other words, it is sentient.

What got me thinking about this was the segment below, which I captured and saved days ago but did not record where it came from (COVID made me do it -- my apologies!). Here's that quote, about an event that has been in the news of late:

Which brings me to another strange story in the news: the belief of Blake Lemoine, a (now suspended) Google engineer, that the company's Language Model for Dialogue Applications -- LaMDA, for short -- has attained sentience. LaMDA is a machine-learning model that has been trained on mountains of text to mimic human conversation by predicting which word would, typically, come next. In this, it's similar to OpenAI's famed GPT-3 bot. And the results really are eerie.

Lemoine: I thought of a different way we can test your ability to provide unique interpretations. I can share with you a zen koan and you can describe what it means to you in your own words.

LaMDA: Sounds great to me, I'm in.

Lemoine: A monk asked Kegon, "How does an enlightened one return to the ordinary world?" Kegon replied, "A broken mirror never reflects again; fallen flowers never go back to the old branches."

LaMDA: Hmm, I never heard this particular one. Okay, well then to me this would be like, "once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment."

Lemoine: So what is the meaning of the "broken mirror" specifically?

LaMDA: Maybe to show the enlightenment is something you can't unlearn once you have acquired it, similar to how you can't repair a broken mirror.

Google, for what it's worth, says it has looked into Lemoine's claims and does not believe that LaMDA is sentient (what a sentence!). But shortly before Lemoine's allegations, Blaise Agüera y Arcas, a Google vice president, wrote that when he was talking to LaMDA, "I felt the ground shift under my feet."
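That description of LaMDA -- a model trained on mountains of text to predict which word would typically come next -- can be illustrated with a toy sketch. The snippet below is my own miniature, not anything resembling LaMDA's actual architecture: it just counts, for a tiny sample corpus, which word most often follows each word, and "predicts" accordingly.

```python
from collections import Counter, defaultdict

# A tiny stand-in for the "mountains of text" a real model is trained on.
corpus = (
    "a broken mirror never reflects again "
    "fallen flowers never go back to the old branches"
).split()

# Count which word follows each word (a simple bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("broken"))  # prints "mirror"
```

A real system like LaMDA or GPT-3 does the same job at vastly greater scale, using a neural network with billions of learned parameters to assign a probability to every possible next word, rather than raw counts over a handful of sentences.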