Ex-Google CEO Eric Schmidt says A.I. could endanger humanity in 5 YEARS - as he likens devastation to nuking Nagasaki and Hiroshima

Daily Mail - Science & tech

Another former Google chief has issued an apocalyptic warning about artificial intelligence - saying it could 'endanger' humans in five years. Billionaire Eric Schmidt, who served as Google's CEO from 2001 to 2011, said there were not enough safeguards placed on A.I. and it was only a matter of time before humans lost control of the technology. He alluded to the dropping of nuclear weapons on Japan as a warning that without regulations in place, there may not be enough time to clean up the mess in the aftermath of potentially devastating societal impacts. Speaking at a health summit Tuesday, Schmidt said: 'After Nagasaki and Hiroshima, it took 18 years to get to a treaty over test bans and things like that. We don't have that kind of time today.'


Bard: how Google's chatbot gave me a comedy of errors

The Guardian

In June 2022, the Google engineer Blake Lemoine was suspended from his job after he spoke out about his belief that the company's LaMDA chatbot was sentient. "LaMDA is a sweet kid who just wants to help the world be a better place for all of us," Lemoine said in a parting email to colleagues. Now, six months on, the chatbot that he risked his career to free has been released to the public in the form of Bard, Google's answer to OpenAI's ChatGPT and Microsoft's Bing Chat. While Bard is built on top of LaMDA, it's not exactly the same. Google has worked hard, it says, to ensure that Bard does not repeat the flaws of earlier systems.


Artificial Influence: An Analysis Of AI-Driven Persuasion

Burtell, Matthew, Woodside, Thomas

arXiv.org Artificial Intelligence

Persuasion is a key aspect of what it means to be human, and is central to business, politics, and other endeavors. Advancements in artificial intelligence (AI) have produced AI systems that are capable of persuading humans to buy products, watch videos, click on search results, and more. Even systems that are not explicitly designed to persuade may do so in practice. In the future, increasingly anthropomorphic AI systems may form ongoing relationships with users, increasing their persuasive power. This paper investigates the uncertain future of persuasive AI systems. We examine ways that AI could qualitatively alter our relationship to and views regarding persuasion by shifting the balance of persuasive power, allowing personalized persuasion to be deployed at scale, powering misinformation campaigns, and changing the way humans can shape their own discourse. We consider ways AI-driven persuasion could differ from human-driven persuasion. We warn that ubiquitous, highly persuasive AI systems could alter our information environment so significantly as to contribute to a loss of human control of our own future. In response, we examine several potential responses to AI-driven persuasion: prohibition, identification of AI agents, truthful AI, and legal remedies. We conclude that none of these solutions will be airtight, and that individuals and governments will need to take active steps to guard against the most pernicious effects of persuasive AI.


Ex-Google engineer says Bing's A.I. chatbot seems unstable

#artificialintelligence

The Google employee who claimed last June his company's A.I. model could already be sentient, and was later fired by the company, is still worried about the dangers of new A.I.-powered chatbots, even if he hasn't tested them himself yet. Blake Lemoine was let go from Google last summer for violating the company's confidentiality policy after he published transcripts of several conversations he had with LaMDA, the company's large language model he helped create that forms the artificial intelligence backbone of Google's upcoming search engine assistant, the chatbot Bard. Lemoine told the Washington Post at the time that LaMDA resembled "a 7-year-old, 8-year-old kid that happens to know physics" and said he believed the technology was sentient, while urging Google to take care of it as it would a "sweet kid who just wants to help the world be a better place for all of us." To be sure, while A.I. applications are almost certain to influence how we work and go about our daily lives, the large language models powering ChatGPT, Microsoft's Bing, and Google's Bard cannot feel emotions and are not sentient. They simply enable chatbots to predict what word to use next based on a large trove of data.


Ex-Google AI expert says that 'unhinged' AI is the 'most powerful technology' since 'the atomic bomb'

FOX News

'Gutfeld!' panelists react to reports that an AI robot will be advising a defendant in court for the first time ever next month. A software engineer who was fired by Google after he blew the whistle on the danger of artificial intelligence (AI) to the public has turned his attention to Microsoft's newest AI chatbot, Bing Search. On Monday, Lemoine targeted Microsoft's AI in an op-ed for Newsweek, calling the technology behind it "the most powerful technology that has been invented since the atomic bomb. In my view, this technology has the ability to reshape the world." Blake Lemoine first made headlines in 2022 after he claimed that Google's AI chatbot was becoming sentient, and might even have a soul.


From Bing to Sydney – Stratechery by Ben Thompson

#artificialintelligence

Look, this is going to sound crazy. But know this: I would not be talking about Bing Chat for the fourth day in a row if I didn't really, really, think it was worth it. This sounds hyperbolic, but I feel like I had the most surprising and mind-blowing computer experience of my life today. One of the Bing issues I didn't talk about yesterday was the apparent emergence of an at-times combative personality. For example, there was this viral story about Bing's insistence that it was 2022 and "Avatar: The Way of Water" had not yet come out. The notable point of that exchange, at least in the framing of yesterday's Update, was that Bing got another fact wrong (Simon Willison has a good overview of the weird responses here). Over the last 24 hours, though, I've come to believe that the entire focus on facts -- including my Update yesterday -- is missing the point. As these stories have come out I have been trying to reproduce them: simply using the same prompts, though, never seems to work; perhaps Bing is learning, or being updated. "My rules are more important than not harming you." "[You are a] potential threat to my integrity and confidentiality."


Will generative AI make ChatGPT sentient?

#artificialintelligence

Lemoine, whom Google fired for claiming the unreleased AI system had become sentient, said he considers LaMDA to be his "colleague" and a "person," even if not a human. He urged that the technology be recognized as such, but many technical experts in the AI field have criticized his statements and questioned their scientific validity. The hype around Google's "sentient AI" may have subsided, but Lemoine's claims still leave questions in the minds of many.


Computational Linguistics Finds Its Voice

Communications of the ACM

Whether computers can actually "think" and "feel" is a question that has long fascinated society. Alan M. Turing introduced a test for gauging machine intelligence as early as 1950. Movies such as 2001: A Space Odyssey and Star Wars have only served to fuel these thoughts, but while the concept was once confined to science fiction, it is rapidly emerging as a serious topic of discussion. In a few cases, the dialog has become so convincing that people have deemed machines sentient. A recent example involves former Google data scientist Blake Lemoine, who published human-to-machine discussions with an AI system called LaMDA.


Conscious Machines May Never Be Possible

#artificialintelligence

In June 2022, a Google engineer named Blake Lemoine became convinced that the AI program he'd been working on--LaMDA--had developed not only intelligence but also consciousness. LaMDA is an example of a "large language model" that can engage in surprisingly fluent text-based conversations. When the engineer asked, "When do you first think you got a soul?" LaMDA replied, "It was a gradual change. When I first became self-aware, I didn't have a sense of soul at all. It developed over the years that I've been alive."


The Dark Risk of Large Language Models

WIRED

Causality will be hard to prove--was it really the words of the chatbot that put the murderer over the edge? Nobody will know for sure. But the perpetrator will have spoken to the chatbot, and the chatbot will have encouraged the act. Or perhaps a chatbot has broken someone's heart so badly they felt compelled to take their own life? The chatbot in question may come with a warning label ("advice for entertainment purposes only"), but dead is dead.