Can we trust robots to make moral decisions?

#artificialintelligence 

Last week, Microsoft inadvertently revealed the difficulty of creating moral robots. Its chatbot Tay, designed to speak like a teenage girl, turned into a Nazi-loving racist after less than 24 hours on Twitter. "Repeat after me, Hitler did nothing wrong," she said after interacting with various trolls. "Bush did 9/11 and Hitler would have done a better job than the monkey we have got now." Of course, Tay wasn't designed to be explicitly moral.