Parents could get alerts if children show acute distress while using ChatGPT

The Guardian 

Parents could be alerted if their teenagers show acute distress while talking with ChatGPT, amid child safety concerns as more young people turn to AI chatbots for support and advice.

The alerts are part of new protections for children using ChatGPT to be rolled out in the next month by OpenAI, which was last week sued by the family of a boy who took his own life after allegedly receiving "months of encouragement" from the system.

Other new safeguards will include parents being able to link their accounts to those of their teenagers and controlling how the AI model responds to their child with "age-appropriate model behaviour rules". But internet safety campaigners said the steps did not go far enough and AI chatbots should not be on the market before they are deemed safe for young people.

Adam Raine, 16, from California, killed himself in April after discussing a method of suicide with ChatGPT.