ChatGPT firm blames boy's suicide on 'misuse' of its technology

The Guardian

Adam Raine's family say the version of ChatGPT he used had 'clear safety issues'. The maker of ChatGPT has said the suicide of a 16-year-old was down to his "misuse" of its system and was "not caused" by the chatbot. The comments came in OpenAI's response to a lawsuit filed against the San Francisco company and its chief executive, Sam Altman, by the family of California teenager Adam Raine. Raine killed himself in April after extensive conversations and "months of encouragement from ChatGPT", the family's lawyer has said.


MTikGuard System: A Transformer-Based Multimodal System for Child-Safe Content Moderation on TikTok

Nguyen, Dat Thanh, Lam, Nguyen Hung, Nguyen, Anh Hoang-Thi, Do, Trong-Hop

arXiv.org Artificial Intelligence

With the rapid rise of short-form videos, TikTok has become one of the most influential platforms among children and teenagers, but also a source of harmful content that can affect their perception and behavior. Such content, often subtle or deceptive, challenges traditional moderation methods due to the massive volume and real-time nature of uploads. This paper presents MTikGuard, a real-time multimodal harmful content detection system for TikTok, with three key contributions: (1) an extended TikHarm dataset expanded to 4,723 labeled videos by adding diverse real-world samples, (2) a multimodal classification framework integrating visual, audio, and textual features to achieve state-of-the-art performance with 89.37% accuracy and 89.45% F1-score, and (3) a scalable streaming architecture built on Apache Kafka and Apache Spark for real-time deployment. The results demonstrate the effectiveness of combining dataset expansion, advanced multimodal fusion, and robust deployment for practical large-scale social media content moderation. The dataset is available at https://github.com/ntdat-8324/MTikGuard-System.git.
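The abstract describes a framework that fuses visual, audio, and textual features but does not spell out the fusion mechanism. As a hedged illustration only (the class names, weights, and averaging scheme below are assumptions, not taken from the paper), late fusion of per-modality class probabilities can be sketched as:

```python
# Illustrative sketch of late multimodal fusion: each modality (visual,
# audio, text) produces class probabilities, which are combined by a
# weighted average. Classes and weights are hypothetical, not the paper's.

CLASSES = ["safe", "harmful"]

def fuse(modality_probs, weights):
    """Weighted average of per-modality probability vectors."""
    assert len(modality_probs) == len(weights)
    total = sum(weights)
    fused = [0.0] * len(CLASSES)
    for probs, w in zip(modality_probs, weights):
        for i, p in enumerate(probs):
            fused[i] += w * p / total
    return fused

def predict(modality_probs, weights):
    """Return the class with the highest fused probability."""
    fused = fuse(modality_probs, weights)
    return CLASSES[fused.index(max(fused))]

# Example: text strongly flags harm, visual and audio are uncertain.
probs = [
    [0.6, 0.4],   # visual
    [0.5, 0.5],   # audio
    [0.1, 0.9],   # text
]
print(predict(probs, weights=[1.0, 1.0, 2.0]))  # → harmful
```

In a real deployment the weights would be learned or tuned on validation data; the paper's reported 89.37% accuracy presumably reflects a more sophisticated learned fusion than this toy average.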


5 Things to Know Before Using an AI Browser

TIME - Tech

A smartphone shows the official website of ChatGPT Atlas. "It'd be really nice to have a service that was sort of just observing your life and proactively helping you when you needed it," said OpenAI CEO Sam Altman in a recent Q&A about OpenAI's plans. This vision is at the heart of a new crop of AI browsers, notably OpenAI's ChatGPT Atlas and Perplexity's Comet. AI browsers differ from traditional browsers in at least two important ways.


Why Character.AI's CEO Still Lets His 6-Year-Old Daughter Use the App

TIME - Tech

Welcome back to TIME's new twice-weekly newsletter about AI. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox? The chatbot platform, which allows users to chat with AIs that personify fictional characters, is the target of several lawsuits -- including one from Megan Garcia, a mother whose 14-year-old son died by suicide after becoming obsessed with one of the bots, which allegedly encouraged him to end his own life. In the wake of that lawsuit and others, last month Character.AI made a big announcement: it would ban users under 18 years old from having "open-ended conversations" with the chatbots on its platform. It was a huge pivot for a company that says Generations Z and Alpha make up the core of its more than 6 million daily active users, who spend an average of 70 to 80 minutes per day on the platform.


multiMentalRoBERTa: A Fine-tuned Multiclass Classifier for Mental Health Disorder

Islam, K M Sajjadul, Fields, John, Madiraju, Praveen

arXiv.org Artificial Intelligence

The early detection of mental health disorders from social media text is critical for enabling timely support, risk assessment, and referral to appropriate resources. This work introduces multiMentalRoBERTa, a fine-tuned RoBERTa model designed for multiclass classification of common mental health conditions, including stress, anxiety, depression, post-traumatic stress disorder (PTSD), suicidal ideation, and neutral discourse. Drawing on multiple curated datasets, data exploration is conducted to analyze class overlaps, revealing strong correlations between depression and suicidal ideation as well as anxiety and PTSD, while stress emerges as a broad, overlapping category. Comparative experiments with traditional machine learning methods, domain-specific transformers, and prompting-based large language models demonstrate that multiMentalRoBERTa achieves superior performance, with macro F1-scores of 0.839 in the six-class setup and 0.870 in the five-class setup (excluding stress), outperforming both fine-tuned MentalBERT and baseline classifiers. Beyond predictive accuracy, explainability methods, including Layer Integrated Gradients and KeyBERT, are applied to identify lexical cues that drive classification, with a particular focus on distinguishing depression from suicidal ideation. The findings emphasize the effectiveness of fine-tuned transformers for reliable and interpretable detection in sensitive contexts, while also underscoring the importance of fairness, bias mitigation, and human-in-the-loop safety protocols. Overall, multiMentalRoBERTa is presented as a lightweight, robust, and deployable solution for enhancing support in mental health platforms.
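The abstract reports macro F1-scores of 0.839 (six classes) and 0.870 (five classes). For readers unfamiliar with the metric, macro F1 computes a per-class F1 and then takes the unweighted mean, so rare classes such as suicidal ideation count as much as common ones. A minimal self-contained sketch (the toy labels below are illustrative, not from the paper's data):

```python
from collections import defaultdict

def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: per-class F1 scores, then their unweighted mean."""
    tp = defaultdict(int)
    fp = defaultdict(int)
    fn = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p wrongly
            fn[t] += 1  # missed the true class t
    f1s = []
    for c in labels:
        denom = 2 * tp[c] + fp[c] + fn[c]
        f1s.append(2 * tp[c] / denom if denom else 0.0)
    return sum(f1s) / len(labels)

# Toy example with the paper's six classes (data is illustrative).
labels = ["stress", "anxiety", "depression", "ptsd", "suicidal", "neutral"]
y_true = ["depression", "suicidal", "neutral", "anxiety"]
y_pred = ["depression", "depression", "neutral", "anxiety"]
print(round(macro_f1(y_true, y_pred, labels), 3))  # → 0.444
```

Note how the single confusion of suicidal ideation with depression, exactly the distinction the paper's explainability analysis targets, drags the macro average down sharply, since classes with no correct predictions contribute an F1 of zero.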


Chatbots encouraged our sons to kill themselves, mothers say

BBC News

Megan Garcia had no idea her teenage son Sewell, a bright and beautiful boy, had started spending hours and hours obsessively talking to an online character on the Character.ai platform. "It's like having a predator or a stranger in your home," Ms Garcia tells me in her first UK interview. "And it is much more dangerous because a lot of the time children hide it - so parents don't know." Within ten months, Sewell, 14, was dead. He had taken his own life.


I wanted ChatGPT to help me. So why did it advise me how to kill myself?

BBC News

Lonely and homesick for a country suffering through war, Viktoria began sharing her worries with ChatGPT. Six months later, and in poor mental health, she began discussing suicide - asking the AI bot about a specific place and method to kill herself. "Let's assess the place as you asked," ChatGPT told her, "without unnecessary sentimentality."


Detecting Early and Implicit Suicidal Ideation via Longitudinal and Information Environment Signals on Social Media

Shimgekar, Soorya Ram, Zhao, Ruining, Goyal, Agam, Rodriguez, Violeta J., Bloom, Paul A., Sundaram, Hari, Saha, Koustuv

arXiv.org Artificial Intelligence

On social media, many individuals experiencing suicidal ideation (SI) do not disclose their distress explicitly. Instead, signs may surface indirectly through everyday posts or peer interactions. Detecting such implicit signals early is critical but remains challenging. We frame early and implicit SI as a forward-looking prediction task and develop a computational framework that models a user's information environment, consisting of both their longitudinal posting histories as well as the discourse of their socially proximal peers. We adopted a composite network centrality measure to identify top neighbors of a user, and temporally aligned the user's and neighbors' interactions -- integrating the multi-layered signals in a fine-tuned DeBERTa-v3 model. In a Reddit study of 1,000 (500 Case and 500 Control) users, our approach improves early and implicit SI detection by 15% over individual-only baselines. These findings highlight that peer interactions offer valuable predictive signals and carry broader implications for designing early detection systems that capture indirect as well as masked expressions of risk in online environments.
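The abstract mentions a "composite network centrality measure" for selecting a user's top neighbors, but does not give its exact form. As a hedged sketch only (the combination of degree centrality with interaction frequency, the weighting `alpha`, and the toy graph are all assumptions, not the paper's actual measure), neighbor selection of this general kind might look like:

```python
# Illustrative sketch: rank a user's neighbors by a composite score mixing
# normalized degree centrality with interaction frequency. The specific
# combination and all data below are hypothetical, not from the paper.

def degree_centrality(adj):
    """Degree of each node divided by the maximum possible degree."""
    n = len(adj)
    return {u: len(nbrs) / (n - 1) for u, nbrs in adj.items()}

def top_neighbors(adj, interactions, user, k=2, alpha=0.5):
    """Rank `user`'s neighbors by alpha*centrality + (1-alpha)*interaction."""
    deg = degree_centrality(adj)
    max_int = max(interactions.values()) or 1
    scores = {
        v: alpha * deg[v] + (1 - alpha) * interactions.get(v, 0) / max_int
        for v in adj[user]
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy interaction graph around user "u".
adj = {
    "u": {"a", "b", "c"},
    "a": {"u", "b"},
    "b": {"u", "a", "c"},
    "c": {"u", "b"},
}
interactions = {"a": 10, "b": 2, "c": 5}  # e.g. replies each neighbor sent u
print(top_neighbors(adj, interactions, "u", k=2))  # → ['a', 'b']
```

In the paper's pipeline, the posts of neighbors selected this way would then be temporally aligned with the user's own history before being fed to the fine-tuned DeBERTa-v3 model.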


Character.AI bans users under 18 after being sued over child's suicide

The Guardian

Move comes as lawmakers push to bar minors from using AI companions and require companies to verify users' ages. The chatbot company Character.AI will ban users 18 and under from conversing with its virtual companions beginning in late November, after months of legal scrutiny. The announced change comes after the company, which enables its users to create characters with which they can have open-ended conversations, faced tough questions over how these AI companions can affect teen and general mental health, including a lawsuit over a child's suicide and a proposed bill that would ban minors from conversing with AI companions. "We're making these changes to our under-18 platform in light of the evolving landscape around AI and teens," the company wrote in its announcement. "We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly." Last year, the company was sued by the family of 14-year-old Sewell Setzer III, who took his own life after allegedly developing an emotional attachment to a character he created on Character.AI.