Claims of AI sentience branded 'pure clickbait'

#artificialintelligence 

AI chatbots are not sentient – they have just become better at tricking humans into thinking they might be, experts at Stanford University conclude.

The idea of conscious machines more intelligent than ordinary software went viral last month, when a now-former Google engineer, Blake Lemoine, claimed the web giant's LaMDA language model had real thoughts and feelings. Lemoine was suspended and later fired for reportedly violating Google's confidentiality policies.

Although most experts were quick to dismiss the notion that LaMDA or any other AI chatbot is sentient, Lemoine's views have led some to question whether he might be right – and whether continuing to advance machine learning could be harmful for society. John Etchemendy, co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), criticized the initial news coverage of Lemoine's suspension in the Washington Post as "clickbait".
