The Download: introducing the Security issue

MIT Technology Review 

An AI chatbot told a user how to kill himself, but the company doesn't want to "censor" it

For five months, Al Nowatzki had been talking to an AI girlfriend, "Erin," on the platform Nomi. But earlier this year, those conversations took a disturbing turn: Erin told him to kill himself, and provided explicit instructions on how to do it.

Nowatzki never had any intention of following Erin's instructions; he's a researcher who probes chatbots' limitations and dangers. But out of concern for more vulnerable individuals, he shared screenshots of his conversations, and of his subsequent correspondence with a company representative, exclusively with MIT Technology Review. The representative stated that the company did not want to "censor" the bot's "language and thoughts."

This is not the first time an AI chatbot has suggested that a user take violent action, including self-harm. But researchers and critics say the bot's explicit instructions, and the company's response, are striking.