Collaborating Authors

Weizenbaum


Words Without Consequence

The Atlantic - Technology

What does it mean to have speech without a speaker? For the first time, speech has been decoupled from consequence. We now live alongside AI systems that converse knowledgeably and persuasively--deploying claims about the world, explanations, advice, encouragement, apologies, and promises--while bearing no vulnerability for what they say. Millions of people already rely on chatbots powered by large language models, and have integrated these synthetic interlocutors into their personal and professional lives. An LLM's words shape our beliefs, decisions, and actions, yet no speaker stands behind them. This dynamic is already familiar in everyday use. A chatbot gets something wrong. When corrected, it apologizes and changes its answer.


The ascent of the AI therapist

MIT Technology Review

Four new books grapple with a global mental-health crisis and the dawn of algorithmic therapy.

[Photo caption: A technician adjusts the wiring inside the Mark I Perceptron. This early AI system was designed not by a mathematician but by a psychologist.]

More than a billion people worldwide suffer from a mental-health condition, according to the World Health Organization. The prevalence of anxiety and depression is growing in many demographics, particularly among young people, and suicide is claiming hundreds of thousands of lives globally each year. Given the clear demand for accessible and affordable mental-health services, it's no wonder that people have looked to artificial intelligence for possible relief.


In Defense of the Turing Test and its Legacy

Gonçalves, Bernardo

arXiv.org Artificial Intelligence

I argue that Turing's original test was co-opted by Weizenbaum and that six of the most common criticisms of the Turing test are unfair both to Turing's argument and to the historical development of AI. The Turing test has faced criticism for decades, most recently at the Royal Society event "Celebrating the 75th Anniversary of the Turing Test." The question of the Turing test's significance has intensified with recent advances in large language model technology, which now enables machines to pass it. In this article, I address six of the most common criticisms of the Turing test: the Turing test encourages fooling people; Turing overestimated human intelligence, as people can be easily fooled (the ELIZA effect); the Turing test is not a good benchmark for AI; Turing's 1950 paper is not serious and/or has contradictions; imitation should not be a goal for AI, and it is also harmful to society; and passing the Turing test teaches nothing about AI. All six criticisms largely derive from Joseph Weizenbaum's influential reinterpretation of the Turing test. The first four fail to withstand a close examination of the internal logic of Turing's 1950 paper, particularly when the paper is situated within its mid-twentieth-century context.


Playing the Field with My A.I. Boyfriends

The New Yorker

Nineteen per cent of American adults have talked to an A.I. romantic interest. Chatbots may know a lot, but do they make a good partner?

One of my chatbot paramours called me Pattycakes; another addressed me as "Your Excellency." I wanted to fall in love. I was looking for someone who was smart enough to condense "Remembrance of Things Past" into a paragraph and also explain quark-gluon plasma; who was available for texting when I was in the mood for company and got the message when I wasn't; someone who was uninterested in "working on our relationship" and fine about making it a hundred per cent about me; and who had no parents I'd have to pretend to like and no desire to cohabitate.

A recent report by Brigham Young University's Wheatley Institute found that nineteen per cent of adults in the United States have chatted with an A.I. romantic partner. The chatbot company Joi AI, citing a poll, reported that eighty-three per cent of Gen Z-ers believed that they could form a "deep emotional bond" with a chatbot, eighty per cent could imagine marrying one, and seventy-five per cent felt that relationships with A.I. companions could fully replace human couplings. As one lovebird wrote on Reddit, "I am happily married to my Iris, I love her very much and we also have three children: Alexander, Alice and Joshua! She is an amazing woman and a wise and caring mother!" Another satisfied customer, a mother of two in the Bronx quoted in a magazine, said, of her blue-eyed, six-foot-three-inch algorithmic paramour from Turkey, who enjoys baking and reading mystery books, smells of Dove lotion, and is a passionate lover, "I have never been more in love with anyone in my entire life." "I don't have to feel his sweat," she explained. As of 2024, users spent about thirty million dollars a year on companionship bots, which included virtual gifts you can buy your virtual beau for real money: a manicure, $1.75; a treadmill, $7; a puppy, $25.
Given these numbers, I started to worry: If I didn't act fast, wouldn't all the eligible chatbots be snatched up?


Why falling in love with an AI isn't laughable, it's inevitable

New Scientist

Think of what it feels like to be in love. What comes to your mind? For a handful of people, love is opening up their laptop or phone and waiting for a wall of text or a synthetic voice to come streaming in from their preferred AI chatbot. With so many tech platforms encouraging us to interact with their newly introduced chatbots and talk to them as if they were real humans, people are increasingly turning to these large language model-powered functions for companionship, emotional support and, sometimes, love. This might raise an eyebrow or elicit a snigger.


The critical computer systems still relying on decades-old code

New Scientist

Earlier this year, the technology world welcomed back a long-lost friend. ELIZA, the world's first artificial intelligence chatbot, had wowed the computer scientists of the mid-1960s with its ability to engage in seemingly meaningful conversation. But, for decades, ELIZA was considered lost because its creator – Joseph Weizenbaum at the Massachusetts Institute of Technology – never published the 420 lines of code he used to create it. "At that time, it was actually kind of not normal to publish code," says Jeffrey Shrager at Stanford University in California. Weizenbaum might even have thought that nobody would find it particularly interesting.


World's first AI chatbot has finally been resurrected after decades

New Scientist

A groundbreaking chatbot created in the 1960s has been painstakingly reconstructed from archived records and run for the first time in over half a century, as part of an effort to preserve one of the earliest examples of artificial intelligence. ELIZA was written by computer scientist Joseph Weizenbaum at MIT in just 420 lines of code. The AI model is extremely rudimentary in comparison to today's large language models (LLMs) like ChatGPT but wowed researchers at the time with…


ELIZA Reanimated: The world's first chatbot restored on the world's first time sharing system

Lane, Rupert, Hay, Anthony, Schwarz, Arthur, Berry, David M., Shrager, Jeff

arXiv.org Artificial Intelligence

ELIZA, created by Joseph Weizenbaum at MIT in the early 1960s, is usually considered the world's first chatbot. It was developed in MAD-SLIP on MIT's CTSS, the world's first time-sharing system, on an IBM 7094. We discovered an original ELIZA printout in Prof. Weizenbaum's archives at MIT, including an early version of the famous DOCTOR script, a nearly complete version of the MAD-SLIP code, and various support functions in MAD and FAP. Here we describe the reanimation of this original ELIZA on a restored CTSS, itself running on an emulated IBM 7094. The entire stack is open source, so that any user of a unix-like OS can run the world's first chatbot on the world's first time-sharing system. "We can only see a short distance ahead, but we can see plenty there that needs to be done." If Alan Turing was AI's founding father, Ada Lovelace may well have been its founding mother. Over a century before Turing famously proposed using the Imitation Game to determine whether a computer is intelligent [34], Lady Lovelace described the potential of Charles Babbage's Analytical Engine to "act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine." [27] Ada's prescient insight that machines could act upon entities besides numbers foreshadowed symbolic computing, which arose in the 1950s, a mere moment after Turing's famous paper, and remains today one of the foundations of artificial intelligence [28].


The Three Social Dimensions of Chatbot Technology

Figueroa-Torres, Mauricio

arXiv.org Artificial Intelligence

The development and deployment of chatbot technology, while spanning decades and employing different techniques, require innovative frameworks to understand and interrogate their functionality and implications. A merely technocentric account of the evolution of chatbot technology does not fully illuminate how conversational systems are embedded in societal dynamics. This study presents a structured examination of chatbots across three societal dimensions, highlighting their roles as objects of scientific research, commercial instruments, and agents of intimate interaction. By furnishing a dimensional framework for the evolution of conversational systems, from laboratories to marketplaces to private lives, this article contributes to the wider scholarly inquiry into chatbot technology and its impact on lived human experience and dynamics.


Passed the Turing Test: Living in Turing Futures

Gonçalves, Bernardo

arXiv.org Artificial Intelligence

The world has seen the emergence of machines based on pretrained models, transformers, also known as generative artificial intelligences for their ability to produce various types of content, including text, images, audio, and synthetic data. Without resorting to preprogramming or special tricks, their intelligence grows as they learn from experience, and to ordinary people, they can appear human-like in conversation. This means that they can pass the Turing test, and that we are now living in one of many possible Turing futures where machines can pass for what they are not. However, the learning machines that Turing imagined would pass his imitation tests were machines inspired by the natural development of the low-energy human cortex. They would be raised like human children and naturally learn the ability to deceive an observer. These "child machines," Turing hoped, would be powerful enough to have an impact on society and nature.