Artificial Intelligence Act: will the EU's AI regulation set an example?
When Microsoft unleashed Tay, its AI-powered chatbot, on Twitter on 23 March 2016, the software giant's hope was that it would "engage and entertain people… through casual and playful conversation". An acronym for "thinking about you", Tay was designed to mimic the language patterns of a 19-year-old American girl and to learn by interacting with human users on the social network. Within hours, things had gone badly wrong. Trolls tweeted politically incorrect phrases at the bot in a bid to manipulate its behaviour. Sure enough, Tay started spewing out racist, sexist and other inflammatory messages to its following of more than 100,000 users. Microsoft was forced to lock the @TayandYou account indefinitely less than a day later, but not before its creation had tweeted more than 96,000 times.
Jul-21-2022, 12:52:23 GMT