Microsoft says it faces 'difficult' challenges in AI design after chatbot Tay turned into a genocidal racist

#artificialintelligence

Microsoft has admitted it faces some "difficult" challenges in AI design after its chatbot, "Tay," had an offensive meltdown on social media. The company issued an apology in a blog post on Friday, saying it was "deeply sorry" after its artificially intelligent chatbot turned into a genocidal racist on Twitter. In the post, Peter Lee, Microsoft's vice president of research, wrote: "We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay." He added: "Looking ahead, we face some difficult – and yet exciting – research challenges in AI design."

