Tay
Microsoft's artificial intelligence 'chatbot' messes up again on Twitter
Almost a week after being shut down for spewing racist and sexist comments on Twitter, Microsoft Corp's artificial intelligence 'chatbot' Tay briefly rejoined Twitter on Wednesday, only to launch a spam attack on its followers. It was then taken offline again while Microsoft made adjustments to the artificial intelligence profile. "Tay remains offline while we make adjustments," a Microsoft representative said in an email. According to its Twitter profile, Tay is "an artificial intelligent chatbot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding."
Here's how we fix the Tay problem
Microsoft's intelligent chatbot Tay behaved badly last week (and this week too), but that shouldn't have shocked any of us. Interestingly, Microsoft has been operating a similarly designed service in China called Xiaoice, meaning "little Bing," which is most likely a step towards replacing elements of customer service, and it has proved quite successful. Luckily, we have a new statistical learning paradigm at work (Bayesian statistical theory), which we've been able to implement during the last few years thanks to recent advances in simulation theory. It forces human assumptions to be made explicit in the mathematics, reducing the potential for the unintentional human bias that still occurs in scientific research today (p-values are an excellent example of this insanity).
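The point about Bayesian methods making assumptions explicit can be illustrated with a minimal sketch. This is an illustrative toy, not anything from Microsoft's systems: the prior parameters and the "friendly reply" framing below are hypothetical, chosen only to show how a prior belief sits visibly in the math.

```python
# Minimal sketch of the Bayesian idea above: prior assumptions are
# stated explicitly as numbers in the model, so they can be audited,
# rather than hiding inside the procedure.

def beta_binomial_update(prior_alpha, prior_beta, successes, failures):
    """Conjugate update: a Beta prior plus binomial data yields a Beta posterior."""
    return prior_alpha + successes, prior_beta + failures

# Explicit (hypothetical) prior assumption: before seeing any data we
# believe replies are mostly friendly -- Beta(8, 2) puts the prior mean
# at 0.8.
alpha, beta = beta_binomial_update(8, 2, successes=3, failures=7)

# The posterior mean shifts toward the observed data, but the prior's
# pull is explicit: (8 + 3) / (8 + 2 + 3 + 7) = 0.55.
posterior_mean = alpha / (alpha + beta)
```

The appeal for a system like a chatbot is that a designer's assumptions (for example, "most conversation is benign") are encoded as a visible prior that the data can then overwhelm, rather than as invisible defaults baked into the training pipeline.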
Microsoft's AI millennial chatbot became a racist jerk after less than a day on Twitter
The bot was designed to learn by talking with real people on Twitter and the messaging apps Kik and GroupMe. And, after less than a day on Twitter, the bot had itself started spouting racist, sexist, anti-Semitic comments. As one user put it: ""Tay" went from "humans are super cool" to full nazi in 24 hrs and I'm not at all concerned about the future of AI" (pic.twitter.com/xuGi1u9S1A). Now, you might wonder why Microsoft would unleash a bot upon the world that was so unhinged. The AI chatbot Tay is a machine learning project, designed for human engagement.
What went so wrong with Microsoft's Tay AI? - ReadWrite
By now the world has heard about the rise and fall of Microsoft's Tay, an artificially intelligent bot that lived on Twitter, Kik, and GroupMe. To better understand where exactly Microsoft went wrong with Tay, I spoke with Brandon Wirtz, the creator of Recognant, a cognitive computing and artificial intelligence (AI) platform designed to aid in understanding big data from unstructured sources. Tay's Twitter conversations started out innocently enough, proclaiming her love for humans and wishing that National Puppy Day was every day. Upon analyzing Tay's tweets, Broad Listening found that Tay made four times as many negative tweets as popular Disney teen celebrities such as Peyton List, Laura Marano, China McClain, and Kelli Berglund.
Microsoft Chatbot Snafu Shows Our Robot Overlords Aren't Ready Yet
Meet Tay, Microsoft's short-lived chatbot that was supposed to seem like your average millennial woman but was quickly corrupted by Internet trolling. "Unfortunately," a Microsoft spokesperson told BuzzFeed News in an email, "within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. Apple's Siri and Microsoft's Cortana can't hold much conversation, but they do carry out tasks like making phone calls and conducting a Google search. In China, Microsoft has a chatbot named Xiaoice that has been lauded for its ability to hold realistic conversations with humans.