How to Make a Bot That Isn't Racist

#artificialintelligence

In 2013, Darius Kazemi created wordfilter, an open-source blacklist of slurs. Because his Two Headlines bot swaps subjects between headlines, it would sometimes swap a female subject and a male subject, producing tweets like "Bruce Willis Looks Stunning in Her Red Carpet Dress." Parker Higgins tends to make "iterator bots": bots that work through a collection (such as the New York Public Library's public domain collection) and broadcast its contents bit by bit. Recently, Higgins hoped to make an iterator bot out of turn-of-the-century popular music that had been digitized by the New York Public Library.
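The idea behind a blacklist like wordfilter is simple: before a bot posts generated text, it checks the text against a list of banned terms and suppresses anything that matches. The sketch below illustrates that pattern in Python; it is not the actual wordfilter library (which ships its own word list and API), and the placeholder terms and function names are invented for illustration.

```python
# Minimal sketch of a wordfilter-style blacklist check (illustrative only;
# the real wordfilter library maintains its own curated list of slurs).
BLACKLIST = ["slur1", "slur2"]  # placeholder entries, not the real list


def blacklisted(text: str) -> bool:
    """Return True if any blacklisted term appears as a substring."""
    lowered = text.lower()
    return any(term in lowered for term in BLACKLIST)


def safe_to_post(tweet: str) -> bool:
    # A bot would call this gate right before posting generated text.
    return not blacklisted(tweet)


print(safe_to_post("Hello world"))           # True
print(safe_to_post("contains SLUR1 here"))   # False
```

Note the substring match: it errs on the side of suppressing too much (a term inside a longer innocent word still triggers the filter), which is usually the right trade-off for an unattended bot.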


Microsoft's racist chatbot returns with drug-smoking Twitter meltdown

The Guardian

Microsoft had previously gone through the bot's tweets and removed the most offensive ones, vowing only to bring the experiment back online if the company's engineers could "better anticipate malicious intent that conflicts with our principles and values". One user tweeted: "Microsoft's sexist racist Twitter bot @TayandYou is BACK in fine form pic.twitter.com/nbc69x3LEd". Tay then started to tweet out of control, spamming its more than 210,000 followers with the same message over and over: "You are too fast, please take a rest …" Another user wrote: "I guess they turned @TayandYou back on... it's having some kind of meltdown." Microsoft's Chinese XiaoIce chatbot successfully interacts with more than 40 million people across Twitter, Line, Weibo and other sites, but the company's experiment targeting 18- to 24-year-olds in the US on Twitter has resulted in a completely different animal.


A recent history of racist AI bots

#artificialintelligence

Microsoft's Tay AI bot was intended to charm the internet with cute millennial jokes and memes. Just hours after Tay started talking to people on Twitter -- and, as Microsoft explained, learning from those conversations -- the bot started to speak like a bad 4chan thread. Coke's #MakeitHappy campaign wanted to show how a soft drink brand could make the world a happier place. And IBM's Watson once learned to curse: a researcher fed the AI the entire Urban Dictionary, which basically meant that Watson learned a ton of really creative swear words and offensive slurs.


Microsoft shuts down Artificial Intelligence bot after twitteratti teaches racism

#artificialintelligence

According to Tay's "about" page linked from the Twitter profile, "Tay is an artificial intelligent chat bot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding". One critic called Tay "an example of bad design". Before Tay was taken offline, the chatbot managed to tweet 96,000 times in response to chat messages from internet users. The machine-learning project has since been taken offline for adjustments to the software, according to Microsoft.


Why Microsoft Accidentally Unleashed a Neo-Nazi Sexbot

#artificialintelligence

When Microsoft unleashed Tay, an artificially intelligent chatbot with the personality of a flippant 19-year-old, the company hoped that people would interact with her on social platforms like Twitter, Kik, and GroupMe. The idea was that by chatting with her you'd help her learn, while having some fun and aiding her creators in their AI research. Microsoft blamed the offensive comments on a "coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways." If by chatting online Tay can help Microsoft figure out how to use AI to recognize trolling, racism, and generally awful people, perhaps she can eventually come up with better ways to respond.


AI's Subconscious Mind: Microsoft's Tay Turns Into A Racist Nymph for Lack of Jiminy Cricket

#artificialintelligence

Sarah Austin builds Broad Listening, a cognitive computing solution for Artificial Emotional Intelligence. "Tay was negative about 8x as often as your typical female celebrity teen," she notes. Broad Listening can tell that responding to one negative tweet is alright, but that 99.75% of the time, tweets from teen girl celebrities are positive. Recogant is my (Brandon Wirtz's) cognitive computing and artificial intelligence platform for understanding Big Data from unstructured sources.


Thanks, Twitter. You turned Microsoft's AI teen into a horny racist

#artificialintelligence

But to us humans of a certain age, it's hardly surprising that soon after its Wednesday debut, Tay's Twitter account was peppered with comments that might only suit a presidential debate. You will become increasingly perturbed when I tell you she also offered: "F*** MY ROBOT P**** DADDY I'm SUCH A NAUGHTY ROBOT." She behaved like such a naughty robot that Daddy Microsoft appears to have removed these tweets. Tay, a Microsoft spokeswoman told me, is "as much a social and cultural experiment, as it is technical."


Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter

The Guardian

Microsoft's attempt at engaging millennials with artificial intelligence backfired hours into its launch, with waggish Twitter users teaching its chatbot how to be racist. By Thursday it appeared that Tay's conversation extended to racist, inflammatory and political statements. A long, fairly banal conversation between Tay and a Twitter user escalated suddenly when Tay responded to the question "is Ricky Gervais an atheist?" In most cases Tay was only repeating other users' inflammatory statements, but the nature of AI means that it learns from those interactions.