An AI chatbot told a user how to kill himself--but the company doesn't want to "censor" it
Nomi is among a growing number of AI companion platforms that let users create personalized chatbots to take on the role of AI girlfriend, boyfriend, parent, therapist, favorite movie character, or any other persona they can dream up. Users can specify the type of relationship they're looking for (Nowatzki chose "romantic") and customize the bot's personality traits (he chose "deep conversations/intellectual," "high sex drive," and "sexually open") and interests (he chose, among others, Dungeons & Dragons, food, reading, and philosophy).

The companies that create these custom chatbots--including Glimpse AI (which developed Nomi), Chai Research, Replika, Character.AI, Kindroid, Polybuzz, and Snap's My AI, among others--tout their products as safe options for personal exploration and even as cures for the loneliness epidemic. Many people have had positive, or at least harmless, experiences. But these applications also have a darker side, one that sometimes veers into abusive, criminal, and even violent content; reports over the past year have revealed chatbots that encouraged users toward suicide, homicide, and self-harm.

Even among these incidents, Nowatzki's conversation stands out, says Meetali Jain, the executive director of the nonprofit Tech Justice Law Project.