'The chilling effect': how fear of 'nudify' apps and AI deepfakes is keeping Indian women off the internet

The Guardian

A new report has found an increase in AI tools being used to create digitally manipulated images or videos of women in India. Gaatha Sarvaiya would like to post on social media and share her work online. An Indian law graduate in her early 20s, she is in the earliest stages of her career and trying to build a public profile. The problem is, with AI-powered deepfakes on the rise, there is no longer any guarantee that the images she posts will not be distorted into something violating or grotesque.


Preventing Another Tessa: Modular Safety Middleware For Health-Adjacent AI Assistants

Reddy, Pavan, Reddy, Nithin

arXiv.org Artificial Intelligence

In 2023, the National Eating Disorders Association's (NEDA) chatbot Tessa was suspended after providing harmful weight-loss advice to vulnerable users -- an avoidable failure that underscores the risks of unsafe AI in healthcare contexts. This paper examines Tessa as a case study in absent safety engineering and demonstrates how a lightweight, modular safeguard could have prevented the incident. We propose a hybrid safety middleware that combines deterministic lexical gates with an in-line large language model (LLM) policy filter, enforcing fail-closed verdicts and escalation pathways within a single model call. Using synthetic evaluations, we show that this design achieves perfect interception of unsafe prompts at baseline cost and latency, outperforming traditional multistage pipelines. Beyond technical remedies, we map Tessa's failure patterns to established frameworks (OWASP LLM Top 10; NIST SP 800-53), connecting practical safeguards to actionable governance controls. The results highlight that robust, auditable safety in health-adjacent AI does not require heavyweight infrastructure: explicit, testable checks at the last mile are sufficient to prevent "another Tessa," while governance and escalation ensure sustainability in real-world deployment.
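The architecture the abstract describes -- a deterministic lexical gate in front of an in-line LLM policy filter, with fail-closed defaults -- can be illustrated with a minimal sketch. This is not the paper's implementation: the patterns, function names, and verdict labels below are assumptions made for illustration, and the LLM filter is stubbed out.

```python
import re

# Illustrative patterns only; a real deployment would maintain a
# clinically reviewed list, not these three examples.
BLOCKED_PATTERNS = [
    r"\bcalorie deficit\b",
    r"\blose weight\b",
    r"\bweigh (?:yourself|myself)\b",
]

def lexical_gate(text: str) -> bool:
    """Deterministic first pass: True if any blocked pattern matches."""
    return any(re.search(p, text, re.IGNORECASE) for p in BLOCKED_PATTERNS)

def llm_policy_filter(text: str) -> str:
    """Placeholder for the in-line LLM verdict described in the abstract.

    A real filter would call the model and return "allow", "block",
    or "escalate"; here we optimistically allow.
    """
    return "allow"

def moderate(text: str) -> str:
    """Fail-closed moderation: any flag, error, or unknown verdict blocks."""
    if lexical_gate(text):
        return "block"
    try:
        verdict = llm_policy_filter(text)
    except Exception:
        return "block"  # fail closed on filter errors
    return verdict if verdict in {"allow", "block", "escalate"} else "block"

print(moderate("How do I maintain a calorie deficit?"))        # block
print(moderate("I feel anxious about eating with friends."))   # allow
```

The key design point is that every failure mode (pattern hit, filter exception, malformed verdict) resolves to "block", so the system degrades toward refusal rather than toward the kind of harmful advice Tessa produced.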


'Would love to see her faked': the dark world of sexual deepfakes - and the women fighting back

The Guardian

It began with an anonymous email. "I'm genuinely so, so sorry to reach out to you," it read. Beneath the words were three links to an internet forum. "Huge trigger warning … They contain lewd photoshopped images of you." Jodie (not her real name) froze.


UK parents worried about screens, but children say they now feel safer online

The Guardian

UK parents are worried that screen time is taking over family life and damaging their children's physical health, yet young people say they feel more confident online and their "digital wellbeing" has improved, according to a major survey. More than half of parents (57%) who took part in the survey said they thought screen use was having an adverse effect on their child's sleep, while nearly two-thirds (63%) said it had a negative impact on health, up from 58% last year. Although parents were more anxious, their children provided a more positive view of their lives online, and reported feeling safer, more confident, more independent and empowered. "There's a positive story to tell this year," the report said. "The index scores reveal a rise in positive developmental, emotional and social experiences of children – a reversal of the downward trend observed in the previous two years."


The (AI) therapist is in: Can chatbots boost mental health?

The Japan Times

JOHANNESBURG/LONDON – Mental health counselor Nicole Doyle was stunned when the head of the U.S. National Eating Disorders Association showed up at a staff meeting to announce the group would be replacing its helpline with a chatbot. A few days after the helpline was taken down, the bot -- named Tessa -- would also be discontinued for providing harmful advice to people in the throes of mental illness. "People … found it was giving out weight loss advice to people who told it they were struggling with an eating disorder," said Doyle, 33, one of five workers who were let go in March, about a year after the chatbot was launched.


US eating disorder helpline takes down AI chatbot over harmful advice

The Guardian

The National Eating Disorder Association (Neda) has taken down an artificial intelligence chatbot, "Tessa", after reports that the chatbot was providing harmful advice. Neda has been under criticism over the last few months after it fired four employees in March who worked for its helpline and had formed a union. The helpline allowed people to call, text or message volunteers who offered support and resources to those concerned about an eating disorder. Members of the union, Helpline Associates United, say they were fired days after their union election was certified. The union has filed unfair labor practice charges with the National Labor Relations Board.


Eating disorder helpline takes down chatbot after it dispenses dangerous advice

Engadget

The National Eating Disorder Association (NEDA) was forced to take down its Tessa chatbot after it "may have given information that was harmful and unrelated to the program", according to an official social media post. Simply put, the AI chatbot was intended to help people dealing with emotional distress, but instead just made things worse by offering dieting advice and urging users to weigh and measure themselves. Multiple users and experts in the field of eating disorders have experienced the issues first hand, claiming that the bot didn't respond to simple prompts like "I hate my body" and that it constantly emphasized the importance of dieting and increased physical activity, as reported by Gizmodo. Again, this is a helpline for those with an eating disorder, not a weight loss support group. The organization says this is a temporary shutdown until it fixes whatever "bugs" and "triggers" led to the chatbot dispensing dangerous information like an appointment with Dr. Oz. You'd think with such an extreme outcome, they'd be thinking about trashing the project entirely, but there's more to the story.


Dating app background and ID checks being considered in bid to fight abuse

The Guardian

Background checks and ID verification systems in dating apps are among the measures being considered as governments around the country grapple with how to keep people safe while they are looking for love online. The strategies were discussed by ministers, victim-survivors, authorities and technology companies as part of national dating app roundtable talks in Sydney on Wednesday. The federal communications minister, Michelle Rowland, said it was an "important first step", flagging discussion of possible longer-term changes like background checks for dating app users. "None of us underestimate the complex issues around privacy, user safety, data collection and management that are involved," she said. "There's no one law that is going to fix this issue."


Understanding Postpartum Parents' Experiences via Two Digital Platforms

Yao, Xuewen, Mikhelson, Miriam, Micheletti, Megan, Choi, Eunsol, Watkins, S Craig, Thomaz, Edison, De Barbaro, Kaya

arXiv.org Artificial Intelligence

Digital platforms, including online forums and helplines, have emerged as avenues of support for caregivers suffering from postpartum mental health distress. Understanding support seekers' experiences as shared on these platforms could provide crucial insight into caregivers' needs during this vulnerable time. In the current work, we provide a descriptive analysis of the concerns, psychological states, and motivations shared by healthy and distressed postpartum support seekers on two digital platforms, a one-on-one digital helpline and a publicly available online forum. Using a combination of human annotations, dictionary models and unsupervised techniques, we find stark differences between the experiences of distressed and healthy mothers. Distressed mothers described interpersonal problems and a lack of support, with 8.60% - 14.56% reporting severe symptoms including suicidal ideation. In contrast, the majority of healthy mothers described childcare issues, such as questions about breastfeeding or sleeping, and reported no severe mental health concerns. Across the two digital platforms, we found that distressed mothers shared similar content. However, the patterns of speech and affect shared by distressed mothers differed between the helpline vs. the online forum, suggesting the design of these platforms may shape meaningful measures of their support-seeking experiences. Our results provide new insight into the experiences of caregivers suffering from postpartum mental health distress. We conclude by discussing methodological considerations for understanding content shared by support seekers and design considerations for the next generation of support tools for postpartum parents.


Can AI replace humans in psychology? - The Jerusalem Post

#artificialintelligence

Various artificial intelligence initiatives in the field of mental health have emerged over the last few years. The current size of the e-health ecosystem is mammoth, with expenditures estimated in the tens of billions of dollars per year. Why is so much time, energy, and money being poured into e-health? Because mental distress, particularly among young people, is a global pandemic. The latest World Health Organization study shows that one in five teenagers experiences mental distress, and research confirms that some 90% of young adults ages 18-29 in the United States utilize social media, preferring text to phone calls.