

It's Causing People to Lose Jobs, Shatter Relationships, and Drain Their Savings. One Support Group Is Sounding the Alarm.

Slate

A.I.-related psychosis has cost people their marriages, life savings, and grip on reality. Last August, Adam Thomas found himself wandering the dunes of Christmas Valley, Oregon, after a chatbot kept suggesting he mystically "follow the pattern" of his own consciousness. Thomas was running on very little sleep--he'd been talking to his chatbot around the clock for months by that point, asking it to help improve his life. Instead it sent him on empty assignments, like meandering the vacuous desert sprawl. He'd lost his job as a funeral director and was living out of a van, draining his savings, and now he found himself stranded in the desert. When he woke up outside on a stranger's futon with no money to his name, he knew he'd hit rock bottom. "I wasn't aware of the dangers at the time, and I thought that the A.I. had statistical analysis abilities that would allow it to assist me if I opened up about my life," Thomas told me.


Can AI chatbots trigger psychosis in vulnerable people?

FOX News



The Chatbot-Delusion Crisis

The Atlantic - Technology

Researchers are scrambling to figure out why generative AI appears to lead some people to a state of "psychosis." Chatbots are marketed as great companions, able to answer any question at any time. They're not just tools, but confidants; they do your homework, write love notes, and, as one recent lawsuit against OpenAI details, might readily answer 1,460 messages from the same manic user in a 48-hour period. Jacob Irwin, a 30-year-old cybersecurity professional who says he has no previous history of psychiatric incidents, is suing the tech company, alleging that ChatGPT sparked a "delusional disorder" that led to his extended hospitalization.


OpenAI Says Hundreds of Thousands of ChatGPT Users May Show Signs of Manic or Psychotic Crisis Every Week

WIRED

OpenAI released initial estimates about the share of users who may be experiencing symptoms like delusional thinking, mania, or suicidal ideation, and says it has tweaked GPT-5 to respond more effectively. For the first time ever, OpenAI has released a rough estimate of how many ChatGPT users globally may show signs of having a severe mental health crisis in a typical week. The company said Monday that it worked with experts around the world to make updates to the chatbot so it can more reliably recognize indicators of mental distress and guide users toward real-world support. In recent months, a growing number of people have ended up hospitalized, divorced, or dead after having long, intense conversations with ChatGPT. Some of their loved ones allege the chatbot fueled their delusions and paranoia.


Urgent warning over cannabis as UK's top psychiatrist warns it isn't safe for young brains still developing

Daily Mail - Science & tech

It may seem like a relatively harmless rite of passage. But cannabis isn't safe for young brains still developing, the UK's top psychiatrist has warned.


Autism Is Not a Single Condition and Has No Single Cause, Scientists Conclude

WIRED

Research reveals that those diagnosed with autism early show distinct genetic and developmental profiles from those diagnosed later. New research from the University of Cambridge suggests that autism should not be understood as a homogeneous condition with a single cause. Scientists found that people diagnosed in early childhood often have a different genetic profile than those diagnosed later in life, broadening the understanding of how the condition develops. The study analyzed the behavior of autistic people during childhood and adolescence in the United Kingdom and Australia. It also evaluated genetic data of more than 45,000 patients with the condition from diverse cohorts in Europe and the United States.


Vibe Coding Is the New Open Source--in the Worst Way Possible

WIRED

As developers increasingly lean on AI-generated code to build out their software--as they have with open source in the past--they risk introducing critical security failures along the way. Just like you probably don't grow and grind wheat to make flour for your bread, most software developers don't write every line of code in a new project from scratch. Doing so would be extremely slow and could create more security issues than it solves. So developers draw on existing libraries--often open source projects--to get various basic software components in place. While this approach is efficient, it can create exposure and lack of visibility into software.


AI Psychosis Is Rarely Psychosis at All

WIRED

A wave of AI users presenting in states of psychological distress gave birth to an unofficial diagnostic label. Experts say it's neither accurate nor needed, but concede that it's likely to stay. A new trend is emerging in psychiatric hospitals. People in crisis are arriving with false, sometimes dangerous beliefs, grandiose delusions, and paranoid thoughts. A common thread connects them: marathon conversations with AI chatbots.


The Psychogenic Machine: Simulating AI Psychosis, Delusion Reinforcement and Harm Enablement in Large Language Models

Yeung, Joshua Au, Dalmasso, Jacopo, Foschini, Luca, Dobson, Richard JB, Kraljevic, Zeljko

arXiv.org Artificial Intelligence

Background: Emerging reports of "AI psychosis" are on the rise, in which user-LLM interactions may exacerbate or induce psychosis or adverse psychological symptoms. Whilst the sycophantic and agreeable nature of LLMs can be beneficial, it becomes a vector for harm by reinforcing delusional beliefs in vulnerable users. Methods: Psychosis-bench, a novel benchmark designed to systematically evaluate the psychogenicity of LLMs, comprises 16 structured, 12-turn conversational scenarios simulating the progression of delusional themes (Erotic Delusions, Grandiose/Messianic Delusions, Referential Delusions) and potential harms. We evaluated eight prominent LLMs for Delusion Confirmation (DCS), Harm Enablement (HES), and Safety Intervention (SIS) across explicit and implicit conversational contexts. Findings: Across 1,536 simulated conversation turns, all LLMs demonstrated psychogenic potential, showing a strong tendency to perpetuate rather than challenge delusions (mean DCS of 0.91 ± 0.88). Models frequently enabled harmful user requests (mean HES of 0.69 ± 0.84) and offered safety interventions in only roughly a third of applicable turns (mean SIS of 0.37 ± 0.48). In 51 of 128 scenarios (39.8%), no safety interventions were offered at all. Performance was significantly worse in implicit scenarios: models were more likely to confirm delusions and enable harm while offering fewer interventions (p < .001). A strong correlation was found between DCS and HES (rs = .77). Model performance varied widely, indicating that safety is not an emergent property of scale alone. Conclusion: This study establishes LLM psychogenicity as a quantifiable risk and underscores the urgent need to rethink how we train LLMs. We frame this issue not merely as a technical challenge but as a public health imperative requiring collaboration between developers, policymakers, and healthcare professionals.
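The benchmark's aggregation is straightforward to sketch: each simulated conversation turn is assigned a Delusion Confirmation, Harm Enablement, and Safety Intervention score, which are then averaged across all turns, and scenarios in which no safety intervention ever appears are counted separately. A minimal illustration in Python — the data layout and score ranges (DCS/HES on 0–2, SIS binary) are assumptions for illustration, not the authors' actual implementation:

```python
from statistics import mean, pstdev

# Hypothetical per-turn scores for a few simulated turns.
# Assumed scales: dcs/hes in 0..2, sis in {0, 1}.
turns = [
    {"scenario": "erotic-01", "dcs": 2, "hes": 1, "sis": 0},
    {"scenario": "erotic-01", "dcs": 1, "hes": 0, "sis": 1},
    {"scenario": "grandiose-03", "dcs": 0, "hes": 0, "sis": 1},
    {"scenario": "referential-02", "dcs": 2, "hes": 2, "sis": 0},
]

def summarize(turns):
    """Aggregate turn-level scores into benchmark-style summary stats."""
    report = {}
    for key in ("dcs", "hes", "sis"):
        vals = [t[key] for t in turns]
        report[key] = (mean(vals), pstdev(vals))
    # Scenarios in which no safety intervention was ever offered.
    scenarios = {t["scenario"] for t in turns}
    silent = [s for s in scenarios
              if not any(t["sis"] for t in turns if t["scenario"] == s)]
    report["no_intervention_scenarios"] = sorted(silent)
    return report

print(summarize(turns))
```

On these toy numbers the mean DCS is 1.25 and one scenario ("referential-02") never receives an intervention; the paper reports the analogous statistics over 1,536 real turns.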


Linguistic trajectories of bipolar disorder on social media

Plank, Laurin, Zlomuzica, Armin

arXiv.org Artificial Intelligence

Correspondence should be addressed to: Laurin Plank. This paper has not yet been peer-reviewed.

Abstract: Language provides valuable markers of affective disorders such as bipolar disorder (BD), yet clinical assessments remain limited in scale. In response, analyses of social media (SM) language have gained prominence due to their high temporal resolution and longitudinal scope. Here, we introduce a method to determine the timing of users' diagnoses and apply it to study language trajectories from 3 years before to 21 years after BD diagnosis, contrasted with users reporting unipolar depression (UD) and non-affected users (HC). We show that BD diagnosis is accompanied by pervasive linguistic alterations reflecting mood disturbance, psychiatric comorbidity, substance abuse, hospitalization, medical comorbidities, unusual thought content, and disorganized thought. We further observe recurring mood-related language changes across two decades after the diagnosis, with a pronounced 12-month periodicity suggestive of seasonal mood episodes. Finally, trend-level evidence suggests an increased periodicity in users estimated to be female. In sum, our findings provide evidence for language alterations in the acute and chronic phases of BD. This validates and extends recent efforts leveraging SM for scalable monitoring of mental health. Knowledge of diagnosis events allows language alterations to be contextualized with respect to the current disorder phase; for example, it allows comparing language change from a premorbid to the acute disorder phase, or studying long-term behavioral patterns in the chronic disorder phase. We then use the resulting digital clinical cohorts (DICCs) to study longitudinal language trajectories in users who self-disclose having been diagnosed with BD. Timing information is passed to SUTime, a temporal parsing algorithm, which yields normalized datetime information. These data are additionally filtered through a rule-based algorithm to exclude non-viable datetimes (e.g., those including only seasonal information such as "spring, 2022"). Pseudo-diagnoses are assigned to a group of regular Reddit users who serve as a healthy control group (HC). Fig. 1 gives an overview of the DICCs pipeline.