More than a million people every week show suicidal intent when chatting with ChatGPT, OpenAI estimates

The Guardian 

More than a million ChatGPT users each week send messages that include "explicit indicators of potential suicidal planning or intent", according to a blogpost published by OpenAI on Monday. The finding, part of an update on how the chatbot handles sensitive conversations, is one of the most direct statements from the artificial intelligence giant on the scale of how AI can exacerbate mental health issues.

OpenAI claimed that its recent GPT-5 update improved user safety in a model evaluation involving more than 1,000 self-harm and suicide conversations.

In addition to its estimates on suicidal ideation and related interactions, OpenAI also said that about 0.07% of users active in a given week - about 560,000 of its touted 800m weekly users - show "possible signs of mental health emergencies related to psychosis or mania".