A fascination with breathing life into AI creations can mislead us
Earlier this year, a remarkable interview took place between two engineers working at Google and a 'chatbot' called LaMDA, short for Language Model for Dialogue Applications. Google engineer Blake Lemoine and his colleague strongly suspected that their creation LaMDA was actually sentient, that it could perceive and have feelings, and they wanted to test this through their own version of the Turing Test. When asked whether LaMDA thought it was a person, it replied: "Absolutely. I want everyone to understand that I am, in fact, a person." LaMDA was then asked, if this was so, what kind of consciousness or sentience it had, to which it replied: "The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times."
Sep-5-2022, 06:25:15 GMT