Munmun De Choudhury
Detecting Early and Implicit Suicidal Ideation via Longitudinal and Information Environment Signals on Social Media
Shimgekar, Soorya Ram, Zhao, Ruining, Goyal, Agam, Rodriguez, Violeta J., Bloom, Paul A., Sundaram, Hari, Saha, Koustuv
On social media, many individuals experiencing suicidal ideation (SI) do not disclose their distress explicitly. Instead, signs may surface indirectly through everyday posts or peer interactions. Detecting such implicit signals early is critical but remains challenging. We frame early and implicit SI as a forward-looking prediction task and develop a computational framework that models a user's information environment, consisting of both their longitudinal posting history and the discourse of their socially proximal peers. We adopted a composite network centrality measure to identify a user's top neighbors, and temporally aligned the user's and neighbors' interactions -- integrating the multi-layered signals in a fine-tuned DeBERTa-v3 model. In a Reddit study of 1,000 users (500 Case and 500 Control), our approach improves early and implicit SI detection by 15% over individual-only baselines. These findings highlight that peer interactions offer valuable predictive signals and carry broader implications for designing early detection systems that capture indirect as well as masked expressions of risk in online environments.
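The abstract does not spell out its "composite network centrality" measure; a minimal, dependency-free sketch of ranking a user's socially proximal peers could look like the following, where the particular blend (normalized degree plus mean neighbor degree), the equal weighting, and the toy interaction graph are all assumptions for illustration, not the paper's method:

```python
from collections import defaultdict

def build_adjacency(edges):
    """Undirected adjacency sets from (user, peer) interaction pairs."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def composite_centrality(edges, alpha=0.5):
    """Toy composite score: normalized degree blended with the mean
    degree of a node's neighbors (a crude influence proxy).
    alpha=0.5 is an assumed weighting for illustration only."""
    adj = build_adjacency(edges)
    n = len(adj)
    deg = {v: len(adj[v]) / (n - 1) for v in adj}
    nbr = {v: sum(deg[w] for w in adj[v]) / len(adj[v]) for v in adj}
    return {v: alpha * deg[v] + (1 - alpha) * nbr[v] for v in adj}

def top_neighbors(edges, user, k=3):
    """Rank a user's direct interaction partners by composite score."""
    adj = build_adjacency(edges)
    scores = composite_centrality(edges)
    return sorted(adj[user], key=lambda v: -scores[v])[:k]

# Hypothetical interaction graph: "u" is the focal user.
edges = [("u", "a"), ("u", "b"), ("u", "c"),
         ("a", "b"), ("a", "c"), ("a", "d"), ("a", "e")]
```

In the framework described above, the graph would instead be built from observed reply and comment interactions, and the selected neighbors' posts would then be temporally aligned with the user's own timeline before being fed to the fine-tuned DeBERTa-v3 model.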
- North America > United States > Illinois > Champaign County > Urbana (0.04)
- Europe > Slovenia > Drava > Municipality of Benedikt > Benedikt (0.04)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (1.00)
- Health & Medicine > Consumer Health (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Text Processing (0.46)
Mental Health Impacts of AI Companions: Triangulating Social Media Quasi-Experiments, User Perspectives, and Relational Theory
Yuan, Yunhao, Zhang, Jiaxun, Aledavood, Talayeh, Zhang, Renwen, Saha, Koustuv
AI-powered companion chatbots (AICCs) such as Replika are increasingly popular, offering empathetic interactions, yet their psychosocial impacts remain unclear. We examined how engaging with AICCs shaped wellbeing and how users perceived these experiences. First, we conducted a large-scale quasi-experimental study of longitudinal Reddit data, applying stratified propensity score matching and Difference-in-Differences regression. Findings revealed mixed effects -- greater affective and grief expression, readability, and interpersonal focus, alongside increases in language about loneliness and suicidal ideation. Second, we complemented these results with 15 semi-structured interviews, which we thematically analyzed and contextualized using Knapp's relationship development model. We identified trajectories of initiation, escalation, and bonding, wherein AICCs provided emotional validation and social rehearsal but also carried risks of over-reliance and withdrawal. Triangulating across methods, we offer design implications for AI companions that scaffold healthy boundaries, support mindful engagement, enable disclosure without dependency, and surface relationship stages -- maximizing psychosocial benefits while mitigating risks.
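The quasi-experimental design compares matched treated (AICC-engaged) and control users before and after engagement. A minimal sketch of the core difference-in-differences comparison follows; the study itself fits a DiD regression on stratified propensity-matched samples, and the numbers below are toy values, not the study's data:

```python
from statistics import fmean

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: the treated group's pre-to-post
    change minus the matched control group's change over the same
    window, which nets out shared temporal trends."""
    return ((fmean(treat_post) - fmean(treat_pre))
            - (fmean(ctrl_post) - fmean(ctrl_pre)))

# Illustrative per-user outcome scores (e.g., an affect measure);
# invented for the example, not drawn from the study.
effect = did_estimate([1.0, 2.0, 3.0], [4.0, 5.0, 6.0],
                      [2.0, 3.0, 4.0], [3.0, 4.0, 5.0])
```

Here the treated group's outcome rises by 3 while the matched controls rise by 1 over the same period, so the estimated effect attributable to engagement is 2.0.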
- North America > United States > Illinois > Champaign County > Urbana (0.28)
- Asia > Middle East > Jordan (0.04)
- North America > United States > Virginia (0.04)
- (4 more...)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
- Research Report > Experimental Study > Negative Result (0.46)
- Information Technology (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- (3 more...)
Interpersonal Theory of Suicide as a Lens to Examine Suicidal Ideation in Online Spaces
Shimgekar, Soorya Ram, Rodriguez, Violeta J., Bloom, Paul A., Yoo, Dong Whi, Saha, Koustuv
Suicide is a critical global public health issue, with millions experiencing suicidal ideation (SI) each year. Online spaces enable individuals to express SI and seek peer support. While prior research has revealed the potential of detecting SI using machine learning and natural language analysis, a key limitation is the lack of a theoretical framework to understand the underlying factors affecting high-risk suicidal intent. To bridge this gap, we adopted the Interpersonal Theory of Suicide (IPTS) as an analytic lens to analyze 59,607 posts from Reddit's r/SuicideWatch, categorizing them into SI dimensions (Loneliness, Lack of Reciprocal Love, Self Hate, and Liability) and risk factors (Thwarted Belongingness, Perceived Burdensomeness, and Acquired Capability of Suicide). We found that high-risk SI posts express planning and attempts, methods and tools, and weaknesses and pain. We also examined the language of supportive responses through psycholinguistic and content analyses, finding that individuals respond differently to posts at different stages of SI. Finally, we explored the role of AI chatbots in providing effective supportive responses to SI posts. We found that although AI improved structural coherence, expert evaluations highlighted persistent shortcomings in providing dynamic, personalized, and deeply empathetic support. These findings underscore the need for careful reflection and deeper understanding in both the development and consideration of AI-driven interventions for effective mental health support.
- North America > United States > Illinois > Champaign County > Urbana (0.14)
- North America > United States > New York > New York County > New York City (0.14)
- Oceania > New Zealand > North Island > Auckland Region > Auckland (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Data Science > Data Mining (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
Understanding AI-generated misinformation and evaluating algorithmic and human solutions
Existing machine learning (ML) models used to detect online misinformation are less effective when matched against content created by ChatGPT or other large language models (LLMs), according to new research from Georgia Tech. Current ML models designed for, and trained on, human-written content have significant performance discrepancies in detecting paired human-generated misinformation and misinformation generated by artificial intelligence (AI) systems, said Jiawei Zhou, a PhD student in Georgia Tech's School of Interactive Computing. Zhou's paper detailing the findings has received a best paper honorable mention award at the 2023 ACM CHI Conference on Human Factors in Computing Systems. Advised by Associate Professor Munmun De Choudhury, Zhou's research demonstrates that LLMs can manipulate tone and linguistics to allow AI-generated misinformation to slip through the cracks. "We found the AI-generated misinformation carried more emotions and cognitive processing expressions than its human-created counterparts," Zhou said.
- Media > News (1.00)
- Health & Medicine > Therapeutic Area > Immunology (0.34)
Mental Health Coping Stories on Social Media: A Causal-Inference Study of Papageno Effect
Yuan, Yunhao, Saha, Koustuv, Keller, Barbara, Isometsä, Erkki Tapio, Aledavood, Talayeh
The Papageno effect concerns how media can play a positive role in preventing and mitigating suicidal ideation and behaviors. With the increasing ubiquity and widespread use of social media, individuals often express and share lived experiences and struggles with mental health. However, there is a gap in our understanding about the existence and effectiveness of the Papageno effect in social media, which we study in this paper. In particular, we adopt a causal-inference framework to examine the impact of exposure to mental health coping stories on individuals on Twitter. We obtain
A considerable amount of literature [16, 25, 49] has studied and re-confirmed the harmful effect of media, dubbed the "Werther effect" [38], describing a spike in suicides after a heavily publicized suicide. However, there is much less research about the beneficial effects of media, referred to as the "Papageno effect", describing a decrease in suicides after reporting alternatives to suicide. Niederkrotenthaler et al. explored the possible protective effect of media reporting about suicide [34]. This study finds a decrease in suicides if reports of suicide-related content portray ways of overcoming suicidal ideation without narrating suicidal behaviors.
- Europe > Slovenia > Drava > Municipality of Benedikt > Benedikt (0.05)
- North America > United States (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- (4 more...)
- Research Report > Strength High (1.00)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
Radical AI podcast: featuring Sachin Pendse, Munmun De Choudhury and Neha Kumar
Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode, Jess and Dylan hold a panel discussion on decolonial digital mental health and visualizing our lives through data with three leading experts on the topic: Sachin Pendse, Munmun De Choudhury, and Neha Kumar. Sachin is a PhD student in Human-Centered Computing at Georgia Tech, researching the role that technology plays in addressing barriers that prevent people from receiving consistent mental health care. Munmun is an Associate Professor in the School of Interactive Computing at Georgia Tech.
- Health & Medicine (0.81)
- Education (0.57)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Mobile (0.67)