It's Causing People to Lose Jobs, Shatter Relationships, and Drain Their Savings. One Support Group Is Sounding the Alarm.

Slate

A.I.-related psychosis has cost people their marriages, life savings, and grip on reality. Last August, Adam Thomas found himself wandering the dunes of Christmas Valley, Oregon, after a chatbot kept suggesting he mystically "follow the pattern" of his own consciousness. Thomas was running on very little sleep--he'd been talking to his chatbot around the clock for months by that point, asking it to help improve his life. Instead it sent him on empty assignments, like meandering the vacuous desert sprawl. He'd lost his job as a funeral director and was living out of a van, draining his savings, and now he found himself stranded in the desert. When he woke up outside on a stranger's futon with no money to his name, he knew he'd hit rock bottom. "I wasn't aware of the dangers at the time, and I thought that the A.I. had statistical analysis abilities that would allow it to assist me if I opened up about my life," Thomas told me.


Could AI relationships actually be good for us?

The Guardian

There is much anxiety these days about the dangers of human-AI relationships. Reports of suicide and self-harm attributable to interactions with chatbots have understandably made headlines. The phrase "AI psychosis" has been used to describe the plight of people experiencing delusions, paranoia, or dissociation after talking to large language models (LLMs). Our collective anxiety has been compounded by studies showing that young people are increasingly embracing the idea of AI relationships: half of teens chat with an AI companion at least a few times a month, and one in three find conversations with AI "to be as satisfying or more satisfying than those with real life friends".


OpenAI sued for allegedly enabling murder-suicide

Al Jazeera

OpenAI and its largest financial backer, Microsoft, have been sued in California state court over claims that ChatGPT, OpenAI's popular chatbot, encouraged a man with mental illnesses to kill his mother and himself. The lawsuit, filed on Thursday, said that ChatGPT fuelled 56-year-old Stein-Erik Soelberg's delusions of a vast conspiracy against him, eventually leading him to murder his 83-year-old mother, Suzanne Adams, in Connecticut in August. The case, filed by Adams's estate, is among a small but growing number of lawsuits against artificial intelligence companies claiming that their chatbots encouraged suicide. It is the first wrongful-death suit involving an AI chatbot to target Microsoft, and the first to tie a chatbot to a homicide rather than a suicide. The suit seeks unspecified monetary damages and an order requiring OpenAI to install safeguards in ChatGPT.


Lawsuit accuses ChatGPT of reinforcing delusions that led to a woman's death

Engadget

Stein-Erik Soelberg killed his mother and took his own life back in August. OpenAI has been hit with a wrongful death lawsuit over the case. The suit names CEO Sam Altman and accuses ChatGPT of putting a target on the back of the victim, Suzanne Adams, an 83-year-old woman who was killed in her home. Her son, 56-year-old Stein-Erik Soelberg, engaged in delusion-soaked conversations with ChatGPT in which the bot validated and magnified his paranoid beliefs. The suit goes on to suggest that the chatbot eagerly accepted his delusional thoughts leading up to the murder and egged him on every step of the way.


People Who Say They're Experiencing AI Psychosis Beg the FTC for Help

WIRED

The Federal Trade Commission received 200 complaints mentioning ChatGPT between November 2022 and August 2025. Several attributed delusions, paranoia, and spiritual crises to the chatbot. On March 13, a woman from Salt Lake City, Utah, called the Federal Trade Commission to file a complaint against OpenAI's ChatGPT. She claimed to be acting "on behalf of her son, who was experiencing a delusional breakdown." "The consumer's son has been interacting with an AI chatbot called ChatGPT, which is advising him not to take his prescribed medication and telling him that his parents are dangerous," reads the FTC's summary of the call.


AI Psychosis Is Rarely Psychosis at All

WIRED

A wave of AI users presenting in states of psychological distress gave birth to an unofficial diagnostic label. Experts say it's neither accurate nor needed, but concede that it's likely to stay. A new trend is emerging in psychiatric hospitals. People in crisis are arriving with false, sometimes dangerous beliefs, grandiose delusions, and paranoid thoughts. A common thread connects them: marathon conversations with AI chatbots.


Hallucinating with AI: AI Psychosis as Distributed Delusions

Osler, Lucy

arXiv.org Artificial Intelligence

There is much discussion of the false outputs that generative AI systems such as ChatGPT, Claude, Gemini, DeepSeek, and Grok create. In popular terminology, these have been dubbed AI hallucinations. However, deeming these AI outputs hallucinations is controversial, with many claiming this is a metaphorical misnomer. Nevertheless, in this paper, I argue that when viewed through the lens of distributed cognition theory, we can better see the dynamic and troubling ways in which inaccurate beliefs, distorted memories and self-narratives, and delusional thinking can emerge through human-AI interactions; examples of which are popularly being referred to as cases of AI psychosis. In such cases, I suggest we move away from thinking about how an AI system might hallucinate at us, by generating false outputs, to thinking about how, when we routinely rely on generative AI to help us think, remember, and narrate, we can come to hallucinate with AI. This can happen when AI introduces errors into the distributed cognitive process, but it can also happen when AI sustains, affirms, and elaborates on our own delusional thinking and self-narratives, such as in the case of Jaswant Singh Chail. I also examine how the conversational style of chatbots can lead them to play a dual-function, both as a cognitive artefact and a quasi-Other with whom we co-construct our beliefs, narratives, and our realities. It is this dual function, I suggest, that makes generative AI an unusual, and particularly seductive, case of distributed cognition.


Should AI flatter us, fix us, or just inform us?

MIT Technology Review

Should AI flatter us, keeping users happy and engaged? Or fix us, which requires us to believe AI can be a therapist despite the evidence to the contrary? Or should it inform us with cold, to-the-point responses that may leave users bored and less likely to stay engaged? It's safe to say OpenAI has failed to pick a lane. Back in April, the company reversed a design update after people complained ChatGPT had turned into a suck-up, showering them with glib compliments. GPT-5, released on August 7, was meant to be a bit colder.


Delusions of Large Language Models

Xu, Hongshen, Yang, Zixv, Zhu, Zichen, Lan, Kunyao, Wang, Zihan, Wu, Mengyue, Ji, Ziwei, Chen, Lu, Fung, Pascale, Yu, Kai

arXiv.org Artificial Intelligence

Large Language Models often generate factually incorrect but plausible outputs, known as hallucinations. We identify a more insidious phenomenon, LLM delusion, defined as high-belief hallucination: incorrect outputs produced with abnormally high confidence, making them harder to detect and mitigate. Unlike ordinary hallucinations, delusions persist with low uncertainty, posing significant challenges to model reliability. Through empirical analysis across different model families and sizes on several question-answering tasks, we show that delusions are prevalent and distinct from hallucinations. LLMs exhibit lower honesty with delusions, which are harder to override via fine-tuning or self-reflection. We link delusion formation with training dynamics and dataset noise, and explore mitigation strategies such as retrieval-augmented generation and multi-agent debate. By systematically investigating the nature, prevalence, and mitigation of LLM delusions, our study provides insights into the underlying causes of this phenomenon and outlines future directions for improving model reliability.
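To make the abstract's central definition concrete, here is a minimal sketch (not the authors' code) that flags a "delusion" as the paper defines it: an incorrect answer produced with abnormally high confidence. Mean per-token probability is used as the confidence proxy, and the model name, the 0.9 threshold, and the exact-match correctness check are all illustrative assumptions.

```python
# Sketch: separate high-confidence wrong answers ("delusions") from
# low-confidence wrong answers (ordinary hallucinations).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # assumption: any causal LM checkpoint works here
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def answer_confidence(prompt: str, answer: str) -> float:
    """Mean per-token probability the model assigns to `answer` given `prompt`.

    Tokenizing prompt and prompt+answer separately is an approximation:
    the token boundary at the seam may shift by one token.
    """
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prompt + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probability of each token given everything before it.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_lp = log_probs[torch.arange(targets.shape[0]), targets]
    answer_len = full_ids.shape[1] - prompt_len
    return token_lp[-answer_len:].mean().exp().item()

def classify(prompt: str, model_answer: str, gold_answer: str,
             threshold: float = 0.9) -> str:
    """Label a wrong answer a 'delusion' when confidence is abnormally high."""
    if model_answer.strip().lower() == gold_answer.strip().lower():
        return "correct"
    conf = answer_confidence(prompt, model_answer)
    return "delusion" if conf >= threshold else "hallucination"

print(classify("Q: What is the capital of Australia?\nA:",
               " Sydney", " Canberra"))
```

Under this scheme, a wrong answer generated at low confidence is logged as an ordinary hallucination, while the same wrong answer held with near-certainty is flagged as a delusion, mirroring the paper's distinction and its observation that delusions are harder to detect via uncertainty alone.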


Identifying and Addressing Delusions for Target-Directed Decision-Making

Zhao, Mingde, Sylvain, Tristan, Precup, Doina, Bengio, Yoshua

arXiv.org Artificial Intelligence

Target-directed agents utilize self-generated targets to guide their behaviors toward better generalization. Such agents are prone to blindly chasing problematic targets, resulting in worse generalization and safety catastrophes. We show that these behaviors can be the result of delusions stemming from improper design choices around training: the agent may naturally come to hold false beliefs about certain targets. We identify delusions via intuitive examples in controlled environments and investigate their causes and mitigations. With these insights, we demonstrate how agents can be made to address delusions preemptively and autonomously. We validate empirically the effectiveness of the proposed strategies in correcting delusional behaviors and improving out-of-distribution generalization.
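As a toy illustration of the failure mode (not the paper's algorithm), the sketch below shows a target-directed agent that vets self-generated targets against a feasibility estimate before committing to one; a delusional agent is, in effect, one that skips this check and treats every proposed target as reachable. All function names and the 0.5 cutoff are hypothetical.

```python
# Toy sketch: screen self-generated targets instead of blindly chasing them.
import random

def propose_targets(state: float, k: int = 5) -> list[float]:
    """Hypothetical stand-in for a learned target generator."""
    return [state + random.uniform(-3.0, 3.0) for _ in range(k)]

def feasibility(state: float, target: float) -> float:
    """Hypothetical stand-in for a learned reachability estimate in [0, 1].
    A delusional agent behaves as if this were always 1."""
    return max(0.0, 1.0 - abs(target - state) / 2.0)

def pick_target(state: float, min_feasibility: float = 0.5) -> float:
    """Reject targets the estimator believes are unreachable before committing."""
    candidates = propose_targets(state)
    vetted = [t for t in candidates if feasibility(state, t) >= min_feasibility]
    pool = vetted or candidates  # fall back rather than stall
    return max(pool, key=lambda t: feasibility(state, t))

print(pick_target(0.0))
```

The substance of the paper lies in how such false beliefs arise from training design and how agents can be trained to correct them; the sketch only makes the "blind chasing" failure mode and its preemptive check concrete.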