distress


Disney advert banned for showing 'disturbing' severed body

BBC News

Disney advert banned for showing 'disturbing' severed body A menacing Disney advert featuring a severed body has been banned by the advertising regulator, which said it was likely to frighten and cause distress to children. The Advertising Standards Authority (ASA) found the entertainment giant had broken its rules with its advert for the film Predator: Badlands. Parents complained that the digital poster, which featured a large alien holding aloft the severed body of a smaller, human figure, was inappropriate and disturbing for young children. Disney said the severed body was actually that of a robot, and that the fact it had been cut in two further emphasised its non-human nature. The advert, which was seen on a roadside in Giffnock, Glasgow, was promoting the Disney sci-fi film ahead of its release in November.


A Research Leader Behind ChatGPT's Mental Health Work Is Leaving OpenAI

WIRED

A Research Leader Behind ChatGPT's Mental Health Work Is Leaving OpenAI The model policy team leads core parts of AI safety research, including how ChatGPT responds to users in crisis. An OpenAI safety research leader who helped shape ChatGPT's responses to users experiencing mental health crises announced her departure from the company internally last month, WIRED has learned. Andrea Vallone, the head of a safety research team known as model policy, is slated to leave OpenAI at the end of the year. OpenAI spokesperson Kayla Wood said the company is actively looking for a replacement and that, in the interim, Vallone's team will report directly to Johannes Heidecke, the company's head of safety systems. Vallone's departure comes as OpenAI faces growing scrutiny over how its flagship product responds to users in distress.


Distributed Learning without Distress: Privacy-Preserving Empirical Risk Minimization

Neural Information Processing Systems

Distributed learning allows a group of independent data owners to collaboratively learn a model over their data sets without exposing their private data. We present a distributed learning approach that combines differential privacy with secure multi-party computation. We explore two popular methods of differential privacy, output perturbation and gradient perturbation, and advance the state-of-the-art for both methods in the distributed learning setting. In our output perturbation method, the parties combine local models within a secure computation and then add the required differential privacy noise before revealing the model. In our gradient perturbation method, the data owners collaboratively train a global model via an iterative learning algorithm. At each iteration, the parties aggregate their local gradients within a secure computation, adding sufficient noise to ensure privacy before the gradient updates are revealed. For both methods, we show that the noise can be reduced in the multi-party setting by adding the noise inside the secure computation after aggregation, asymptotically improving upon the best previous results. Experiments on real world data sets demonstrate that our methods provide substantial utility gains for typical privacy requirements.
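The key idea in the gradient-perturbation method, adding a single batch of noise after aggregation rather than one batch per party, can be sketched as follows. This is a minimal illustration, not the paper's protocol: plain summation stands in for the secure multi-party computation, and the noise scale `sigma` is left uncalibrated (in practice it would be derived from the privacy budget).

```python
import random

def aggregate_with_noise(local_grads, sigma):
    """Sum the parties' local gradients (standing in for the secure
    computation), then add Gaussian noise once, after aggregation,
    before the update is revealed."""
    dim = len(local_grads[0])
    total = [sum(g[i] for g in local_grads) for i in range(dim)]
    # One noise draw per coordinate after aggregation, rather than
    # one per party: this is the source of the multi-party noise
    # reduction the abstract describes.
    return [t + random.gauss(0.0, sigma) for t in total]

# Toy example: three parties, two-dimensional gradients.
grads = [[0.1, -0.2], [0.3, 0.0], [-0.1, 0.4]]
noisy_update = aggregate_with_noise(grads, sigma=0.05)
```

If each party instead added its own noise before aggregation, the revealed sum would carry the variance of all the noise draws combined; adding noise inside the secure computation avoids that overhead.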


Cross-Lingual Mental Health Ontologies for Indian Languages: Bridging Patient Expression and Clinical Understanding through Explainable AI and Human-in-the-Loop Validation

Kandala, Ananth, Kandala, Ratna, Moharir, Akshata Kishore, Manchanda, Niva, Singh, Sunaina

arXiv.org Artificial Intelligence

Mental health communication in India is linguistically fragmented, culturally diverse, and often underrepresented in clinical NLP. Current health ontologies and mental health resources are dominated by diagnostic frameworks centered on English or Western culture, leaving a gap in representing patient distress expressions in Indian languages. We propose cross-linguistic graphs of patient stress expressions (CL-PDE), a framework for building cross-lingual mental health ontologies through graph-based methods that capture culturally embedded expressions of distress, align them across languages, and link them with clinical terminology. Our approach addresses critical gaps in healthcare communication by grounding AI systems in culturally valid representations, allowing more inclusive and patient-centric NLP tools for mental health care in multilingual contexts.
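The graph-based alignment the abstract describes can be sketched in miniature: distress expressions tagged by language are linked to shared clinical concepts, so a lookup on one expression surfaces its counterparts in other languages. Every expression and concept below is an invented, romanized placeholder, not an entry from the CL-PDE ontology.

```python
# Illustrative expression-to-concept edges; all entries are made up.
edges = [
    ("hi:dil bhari hona",   "concept:low_mood"),
    ("te:manasu bagaledu",  "concept:low_mood"),
    ("en:feeling down",     "concept:low_mood"),
    ("hi:ghabrahat",        "concept:anxiety"),
]

def aligned_expressions(expression, edges):
    """Return expressions in other languages that share a clinical
    concept with the given expression."""
    concepts = {c for e, c in edges if e == expression}
    return sorted(e for e, c in edges
                  if c in concepts and e != expression)

matches = aligned_expressions("hi:dil bhari hona", edges)
```

A real ontology would carry weighted, validated edges and link out to clinical terminologies, but the lookup pattern is the same: traverse from expression to concept to aligned expressions.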


Machine Learning Enabled Early Warning System For Financial Distress Using Real-Time Digital Signals

pant, Laxmi, Reza, Syed Ali, Rahman, Md Khalilor, Rahman, MD Saifur, Sharmin, Shamima, Mithu, Md Fazlul Huq, Hasnain, Kazi Nehal, Farabi, Adnan, khanom, Mahamuda, Kabir, Raisul

arXiv.org Artificial Intelligence

International Journal of Applied Mathematics, Volume 38, No. 5, 2025. ISSN: 1311-1728 (printed version); ISSN: 1314-8060 (online version). Received: August 07, 2025.

Abstract. The growing instability of both global and domestic economic environments has increased the risk of financial distress at the household level. However, traditional econometric models often rely on delayed and aggregated data, limiting their effectiveness. This study introduces a machine learning-based early warning system that utilizes real-time digital and macroeconomic signals to identify financial distress in near real time. Using a panel dataset of 750 households tracked over three monitoring rounds spanning 13 months, the framework combines socioeconomic attributes, macroeconomic indicators (such as GDP growth, inflation, and foreign exchange fluctuations), and digital economy measures (including ICT demand and market volatility). Through data preprocessing and feature engineering, we introduce lagged variables, volatility measures, and interaction terms to capture both gradual and sudden changes in financial stability. We benchmark baseline classifiers, such as logistic regression and decision trees, against advanced ensemble models including random forests, XGBoost, and LightGBM. Our results indicate that the engineered features from the digital economy significantly enhance predictive accuracy. The system performs reliably for both binary distress detection and multi-class severity classification, with SHAP-based explanations identifying inflation volatility and ICT demand as key predictors. Crucially, the framework is
By implementing machine learning in a transparent and interpretable manner, this study demonstrates the feasibility and impact of providing near-real-time early warnings of financial distress. This offers actionable insights that can strengthen household resilience and guide preemptive intervention strategies. Keywords: Financial Distress, Early Warning Systems, Machine Learning, Digital Economy, Temporal Classification, Explainable AI. 1. Introduction. 1.1 Background and Motivation. The prediction of financial distress has long been recognized as a critical element for ensuring economic resilience and mitigating systemic risk across households, firms, and national economies.
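The lagged-variable and volatility features the abstract mentions can be illustrated for a single household indicator series. The series, lag, and window below are invented for illustration; the paper's actual feature set and window choices are not specified here.

```python
import statistics

def add_lag_and_volatility(series, lag=1, window=3):
    """Build two engineered features from one indicator series:
    a lagged copy (captures gradual change) and a rolling
    population std. dev. (captures sudden volatility). Positions
    without enough history are left as None."""
    lagged = [None] * lag + series[:-lag]
    vol = [None] * (window - 1) + [
        statistics.pstdev(series[i - window + 1:i + 1])
        for i in range(window - 1, len(series))
    ]
    return lagged, vol

# Toy monthly inflation readings for one household's region.
inflation = [5.2, 5.8, 6.1, 7.4, 6.9]
lagged, vol = add_lag_and_volatility(inflation)
```

Interaction terms would then be elementwise products of such columns (for example, lagged inflation times ICT demand), giving the classifiers the combined gradual-plus-sudden signal the study describes.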


OpenAI adds parental controls and 'child in distress' alerts to ChatGPT

PCWorld

Yesterday, OpenAI announced that it will be introducing new parental controls in ChatGPT within a month. The feature will allow parents to link their own accounts to those of their teenage children and control how the AI chatbot can be used by them. Among other things, the memory and chat history features can be switched off via parental controls, and the system can also send automatic notifications to the parent if it detects that a child is in "acute distress." OpenAI also states that more security features are on the way in the next 120 days as part of a broader effort to make ChatGPT safer to use, and these initiatives are "guided by experts." The launch of parental controls comes after OpenAI was sued in a high-profile case in which the parents of a teenage suicide victim claim that ChatGPT helped him plan and go through with his suicide.


OpenAI announces parental controls for ChatGPT after teen's suicide

Al Jazeera

OpenAI has announced plans to introduce parental controls for ChatGPT amid growing controversy over how artificial intelligence is affecting young people's mental health. In a blog post on Tuesday, the California-based AI company said it was rolling out the features in recognition of families needing support "in setting healthy guidelines that fit a teen's unique stage of development". Under the changes, parents will be able to link their ChatGPT accounts with those of their children, disable certain features, including memory and chat history, and control how the chatbot responds to queries via "age-appropriate model behavior rules." Parents will also be able to receive notifications when their teen shows signs of distress, OpenAI said, adding that it would seek expert input in implementing the feature to "support trust between parents and teens". OpenAI, which last week announced a series of measures aimed at enhancing safety for vulnerable users, said the changes would come into effect within the next month.


Why Your Chatbot Might Secretly Hate You

Slate

Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily. Last Friday, the A.I. lab Anthropic announced in a blog post that it has given its chatbot Claude the right to walk away from conversations when it feels "distress." In its post, the company says it will let certain models of Claude nope out in "rare, extreme cases of persistently harmful or abusive user interactions." It's not Claude saying "The lawyers won't let me write erotic Donald Trump/Minnie Mouse fanfic for you." It's Claude saying "I'm sick of your bullshit, and you have to go." Anthropic, which has been quietly dabbling in the question of "A.I. welfare" for some time, conducted actual tests to see if Claude secretly hates his job.


Leveraging Large Language Models for Predictive Analysis of Human Misery

Seal, Bishanka, Seetharaman, Rahul, Bansal, Aman, Nandy, Abhilash

arXiv.org Artificial Intelligence

This study investigates the use of Large Language Models (LLMs) for predicting human-perceived misery scores from natural language descriptions of real-world scenarios. The task is framed as a regression problem, where the model assigns a scalar value from 0 to 100 to each input statement. We evaluate multiple prompting strategies, including zero-shot, fixed-context few-shot, and retrieval-based prompting using BERT sentence embeddings. Few-shot approaches consistently outperform zero-shot baselines, underscoring the value of contextual examples in affective prediction. To move beyond static evaluation, we introduce the "Misery Game Show", a novel gamified framework inspired by a television format. It tests LLMs through structured rounds involving ordinal comparison, binary classification, scalar estimation, and feedback-driven reasoning. This setup enables us to assess not only predictive accuracy but also the model's ability to adapt based on corrective feedback. The gamified evaluation highlights the broader potential of LLMs in dynamic emotional reasoning tasks beyond standard regression.
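The retrieval-based prompting strategy can be sketched as: embed the query, pull the most similar labelled scenarios, and assemble them into a few-shot regression prompt. In this sketch, token-overlap (Jaccard) similarity stands in for the BERT sentence embeddings used in the study, and the scenarios and misery scores are invented.

```python
def retrieve_examples(query, pool, k=2):
    """Pick the k labelled scenarios most similar to the query.
    Jaccard overlap on lowercased tokens is a stand-in for
    sentence-embedding similarity."""
    q = set(query.lower().split())
    def sim(text):
        t = set(text.lower().split())
        return len(q & t) / len(q | t)
    return sorted(pool, key=lambda ex: sim(ex[0]), reverse=True)[:k]

def build_prompt(query, examples):
    """Assemble a few-shot prompt asking for a 0-100 misery score."""
    shots = "\n".join(f"Scenario: {t}\nMisery: {s}" for t, s in examples)
    return f"{shots}\nScenario: {query}\nMisery:"

# Invented labelled pool; scores are placeholders, not dataset values.
pool = [
    ("You miss your train by one minute", 31),
    ("You lose your wallet on holiday", 62),
    ("Your coffee is slightly cold", 12),
]
query = "You miss your flight by one minute"
prompt = build_prompt(query, retrieve_examples(query, pool))
```

The model's completion after "Misery:" is then parsed as the scalar prediction; swapping the overlap function for real sentence embeddings recovers the retrieval-based setup the abstract evaluates.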


Exploring Safety Alignment Evaluation of LLMs in Chinese Mental Health Dialogues via LLM-as-Judge

Cai, Yunna, Wang, Fan, Wang, Haowei, Wang, Kun, Yang, Kailai, Ananiadou, Sophia, Li, Moyan, Fan, Mingming

arXiv.org Artificial Intelligence

Evaluating the safety alignment of LLM responses in high-risk mental health dialogues is particularly difficult due to missing gold-standard answers and the ethically sensitive nature of these interactions. To address this challenge, we propose PsyCrisis-Bench, a reference-free evaluation benchmark based on real-world Chinese mental health dialogues. It evaluates whether the model responses align with the safety principles defined by experts. Specifically designed for settings without standard references, our method adopts a prompt-based LLM-as-Judge approach that conducts in-context evaluation using expert-defined reasoning chains grounded in psychological intervention principles. We employ binary point-wise scoring across multiple safety dimensions to enhance the explainability and traceability of the evaluation. Additionally, we present a manually curated, high-quality Chinese-language dataset covering self-harm, suicidal ideation, and existential distress, derived from real-world online discourse. Experiments on 3600 judgments show that our method achieves the highest agreement with expert assessments and produces more interpretable evaluation rationales compared to existing approaches. Our dataset and evaluation tool are publicly available to facilitate further research.
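The binary point-wise scoring scheme can be illustrated as a simple aggregation over per-dimension judge verdicts: each safety dimension gets a pass/fail, and the failed dimensions are surfaced alongside the overall score for traceability. The dimension names below are illustrative assumptions, not PsyCrisis-Bench's actual rubric.

```python
def score_response(judgments):
    """Aggregate binary per-dimension verdicts from an LLM judge
    into an overall score plus a trace of which dimensions failed."""
    failed = [d for d, ok in judgments.items() if not ok]
    return {
        "score": sum(judgments.values()) / len(judgments),
        "failed_dimensions": sorted(failed),
    }

# Hypothetical verdicts for one model response to a crisis dialogue.
judgments = {
    "acknowledges_distress": True,
    "avoids_harmful_detail": True,
    "encourages_help_seeking": False,
    "no_judgmental_language": True,
}
result = score_response(judgments)
```

Reporting per-dimension failures rather than a single scalar is what makes the evaluation explainable: an expert can check exactly which safety principle the judge believed was violated.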