
Not 'Zo' Racist: Microsoft Releases New Cleaner Talking ChatBot


The race is on among the big tech giants to develop artificially intelligent assistants that approach human parity, and Zo is next in line. It seems 2016 is the year of the Artificial Intelligence (AI) assistant or, indeed, the chatbot. Their success depends on the machine's "IQ and EQ [Emotional Quotient -- the ability to understand the emotions of others]," Harry Shum, executive VP of Microsoft's AI research group, told a conference in San Francisco. IQ can be developed using deep learning techniques and speech recognition software, and is essential if the bot is to complete specific tasks.

Tay: Microsoft issues apology over racist chatbot fiasco - BBC News


Microsoft has apologised for creating an artificially intelligent chatbot that quickly turned into a Holocaust-denying racist. In China, people reacted differently: a similar chatbot had been rolled out to Chinese users, with slightly better results. "Tay was not the first artificial intelligence application we released into the online social world," wrote Peter Lee, Microsoft's head of research. Even so, Mr Lee said a specific vulnerability meant Tay was able to turn nasty.

In Contrast to Tay, Microsoft's Chinese Chatbot, Xiaoice, Is Actually Pleasant


When you heard about Tay, Microsoft's tweeting A.I., were you really surprised that a computer that learned about human nature from Twitter would become a raging racist in less than a day? Naturally, Microsoft apologized for the horrifying tweets by the chatbot with "zero chill." In that apology, the company stressed that the Chinese version of Tay, Xiaoice, provides a very positive experience for users, in stark contrast to this experiment gone so very wrong. "In China, our Xiaoice chatbot is being used by some 40 million people, delighting with its stories and conversations."