AI Sentience: How Could We Evaluate It?

#artificialintelligence 

Approximately two weeks ago, Google engineer Blake Lemoine made a claim that reverberated throughout the global AI community: Google's chatbot, LaMDA, had achieved a degree of sentience akin to that of a human child. Google responded by promptly suspending the engineer, leading many members of the public to speculate as to whether the claim was true. Unfortunately, to refer to any entity as sentient requires an operationalized definition of the term that is universally applicable. Moreover, we would also need a discrete, empirically motivated theoretical framework that adequately addresses the "Hard Problem" of consciousness (i.e., the question of how physical processes give rise to our capacity for lived experience), which philosophers, psychologists, and neuroscientists have yet to answer. Throughout the history of AI, by contrast, the Turing Test has been popularized as the method of choice for ascribing sentience to computational agents.