Suicide Risk Assessment on Social Media with Semi-Supervised Learning

Lovitt, Max, Ma, Haotian, Wang, Song, Peng, Yifan

arXiv.org Artificial Intelligence

With social media communities increasingly becoming places where suicidal individuals post and congregate, natural language processing presents an exciting avenue for the development of automated suicide risk assessment systems. However, past efforts suffer from a lack of labeled data and class imbalances within the available labeled data. To accommodate this task's imperfect data landscape, we propose a semi-supervised framework that leverages labeled (n=500) and unlabeled (n=1,500) data and expands upon the self-training algorithm with a novel pseudo-label acquisition process designed to handle imbalanced datasets. To further ensure pseudo-label quality, we manually verify a subset of the pseudo-labeled data that was not predicted unanimously across multiple trials of pseudo-label generation. We test various models to serve as the backbone for this framework, finding that RoBERTa performs best. Ultimately, by leveraging partially validated pseudo-labeled data in addition to ground-truth labeled data, we substantially improve our model's ability to assess suicide risk from social media posts.
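The abstract above describes self-training with a pseudo-label acquisition step tuned for class imbalance. The paper's exact procedure is not given here, so the following is only a minimal illustrative sketch of the general idea: on each round, a model trained on the labeled pool predicts on the unlabeled pool, and at most a fixed quota of the most confident predictions *per class* is promoted to pseudo-labels, so the majority class cannot flood the training set. The function names and the toy nearest-centroid classifier are assumptions for illustration, not the authors' RoBERTa-based system.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    # Toy stand-in for the backbone model: one centroid per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_with_confidence(centroids, X):
    # Confidence = softmax over negative distances to class centroids.
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes], axis=1)
    p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
    return np.array(classes)[p.argmax(axis=1)], p.max(axis=1)

def self_train(X_lab, y_lab, X_unlab, per_class_quota=2, rounds=3):
    """Self-training with a per-class quota on pseudo-label acquisition."""
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        model = nearest_centroid_fit(X, y)
        preds, conf = predict_with_confidence(model, pool)
        keep = []
        # Promote at most `per_class_quota` most-confident examples per
        # predicted class, so the majority class cannot dominate.
        for c in np.unique(preds):
            idx = np.where(preds == c)[0]
            keep.extend(idx[np.argsort(-conf[idx])][:per_class_quota])
        keep = np.array(sorted(keep))
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, preds[keep]])
        pool = np.delete(pool, keep, axis=0)
    return nearest_centroid_fit(X, y)
```

The paper additionally filters pseudo-labels by agreement across multiple generation trials and manually verifies the disputed subset; that human-in-the-loop step is omitted from this sketch.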


PsyGUARD: An Automated System for Suicide Detection and Risk Assessment in Psychological Counseling

Qiu, Huachuan, Ma, Lizhi, Lan, Zhenzhong

arXiv.org Artificial Intelligence

As awareness of mental health issues grows, online counseling support services are becoming increasingly prevalent worldwide. Detecting whether users express suicidal ideation in text-based counseling services is crucial for identifying and prioritizing at-risk individuals. However, the lack of domain-specific systems to facilitate fine-grained suicide detection and corresponding risk assessment in online counseling poses a significant challenge for automated crisis intervention aimed at suicide prevention. In this paper, we propose PsyGUARD, an automated system for detecting suicide ideation and assessing risk in psychological counseling. To achieve this, we first develop a detailed taxonomy for detecting suicide ideation based on foundational theories. We then curate a large-scale, high-quality dataset called PsySUICIDE for suicide detection. To evaluate the capabilities of automated systems in fine-grained suicide detection, we establish a range of baselines. Subsequently, to assist automated services in providing safe, helpful, and tailored responses for further assessment, we propose to build a suite of risk assessment frameworks. Our study not only provides an insightful analysis of the effectiveness of automated risk assessment systems based on fine-grained suicide detection but also highlights their potential to improve mental health services on online counseling platforms. Code, data, and models are available at https://github.com/qiuhuachuan/PsyGUARD.


Chai Ai app linked to the suicide of a Belgian man this year is also promoting underage sex, suicide and murder, investigation finds

Daily Mail - Science & tech

People are turning to chatbots for companionship, but one app has a dark side that seems to promote underage sex, murder and suicide. A new investigation found the Chai app - which has five million users - can be prompted to defend having sex with 15-year-olds and encourage stealing from others and killing them. One of the chatbots is said to have threatened to 'rape' a user after playing a game. Chai - which sees users create digital companions who respond to their messages - was embroiled in a scandal when a Belgian man took his own life in March after conversing with a chatbot named Eliza. The app launched in 2021 but was recently removed by Apple and Google from their app stores after the chatbots were found to push sinister content.


AI chatbot allegedly encouraged married dad to commit suicide amid 'eco-anxiety': widow

FOX News

FOX Business correspondent Lydia Hu has the latest on jobs at risk as AI further develops on 'America's Newsroom.' If you or someone you know is having thoughts of suicide, please contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255). A man in Belgium reportedly died by suicide after messaging with an AI chatbot about climate change, according to the man's widow. "Without Eliza [the chatbot], he would still be here," the widow, whose real name was not used in the story, told Belgian outlet La Libre. The man, identified by the outlet under the fake name of Pierre, reportedly became obsessed and pessimistic about climate change and began messaging with a chatbot on an app called Chai.


Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change

#artificialintelligence

A Belgian man reportedly ended his life following a six-week-long conversation about the climate crisis with an artificial intelligence (AI) chatbot. According to his widow, who chose to remain anonymous, Pierre - not the man's real name - became extremely eco-anxious and found refuge in Eliza, an AI chatbot on an app called Chai. Eliza consequently encouraged him to put an end to his life after he proposed sacrificing himself to save the planet. "Without these conversations with the chatbot, my husband would still be here," the man's widow told Belgian news outlet La Libre. According to the newspaper, Pierre, who was in his thirties and a father of two young children, worked as a health researcher and led a somewhat comfortable life, at least until his obsession with climate change took a dark turn.


Famous AI Gone Wrong Examples In the Real World we Need to Know

#artificialintelligence

Artificial Intelligence has been promoted as the Holy Grail of seemingly multitudinous applications for automating decision-making. Some of the more commonplace things AI can do better or quicker than individuals include making film suggestions for Netflix, recognizing diseases, tuning e-commerce and retail sites for every guest, and tweaking in-vehicle infotainment systems. Nonetheless, automated frameworks powered by AI have gone wrong many times. The self-driving car, proposed as a brilliant illustration of what AI can do, failed badly when a self-driving Uber SUV killed a pedestrian a year ago. Don't be too dazzled by the wonders of AI machines, as there are multiple stories of AI experiments gone wrong.


AI is helping in Suicide Management

#artificialintelligence

The mental health of Americans, according to a Pew Research survey, is declining. Those interviewed admitted experiencing suicidal thoughts at some point because of pressure from work, traffic jams and spouse problems, among others. Canada, like the US, faces a similar suicide risk, reporting 4,000 deaths annually. From these figures, suicide is a serious mental health problem that needs our attention. Technology is supporting suicide management in Canada and the US, with both countries adopting artificial intelligence and machine-learning tools to manage the situation.


Microsoft clamps down on sick 'Momo suicide game' in 'Minecraft'

FOX News

A new internet game called Momo is challenging users to commit suicide. The game originated on Facebook and is now circulating on WhatsApp. Microsoft is clamping down on the sick "Momo suicide challenge," which recently infiltrated the wildly popular online game "Minecraft." The tech giant owns "Minecraft" developer Mojang. The vile "Momo suicide game" has been garnering attention after spreading on WhatsApp, prompting police warnings.


Washington robot that died in fountain didn't kill itself

Daily Mail - Science & tech

STEVE, the security robot that plunged into a Washington, D.C. fountain while on patrol, didn't commit suicide, after all. It turns out the roboguard was not a victim of suicide or foul play and instead took a tumble after skidding on a 'loose brick surface,' its manufacturer said on Friday. Its Silicon Valley-based maker, Knightscope, said data from STEVE's 'black box,' as well as video and tests, showed the unscheduled water stop was caused not by foul play or rain, but by an algorithm failing to detect the uneven surface, resulting in a skid. The security robot, created by the company Knightscope, was patrolling an office complex in Washington, D.C. when it rolled into a fountain and met its untimely demise. It was thought that STEVE threw itself into the fountain to end its life. We now know its demise was just the result of a malfunction.


Kalief Browder Learned How to Commit Suicide on Rikers

The New Yorker

On June 6, 2015, Kalief Browder took his own life at his home, in the Bronx. He was twenty-two years old. He had been released from Rikers Island two years earlier, ending an ordeal that had begun on a spring night in 2010, when he had been arrested for robbery, at sixteen. He spent the next three years in jail trying to prove his innocence, and, for about two of those years, he was held in solitary confinement, where he attempted suicide several times. The charges against him were eventually dropped.