Tay, the neo-Nazi millennial chatbot, gets autopsied - Artificial Intelligence Online
Microsoft has apologized for the conduct of its racist, abusive machine-learning chatbot, Tay. The bot, which was supposed to mimic conversation with a 19-year-old woman over Twitter, Kik, and GroupMe, was taken offline less than 24 hours after launch because she began promoting Nazi ideology and harassing other Twitter users; in one case, a user told Tay to tweet Trump propaganda, and she did (the tweet has since been deleted). The company appears to have been caught off guard by her behavior. A similar bot, named XiaoIce, has been in operation in China since late 2014.
Mar-26-2016, 01:38:11 GMT