Civil Rights & Constitutional Law


Breaking the Black Box: How Machines Learn to Be Racist

#artificialintelligence

Microsoft learned that lesson the hard way earlier this year when it released an AI Twitter bot called Tay that had been trained to talk like a millennial teen. Last year, Google's automatic image recognition engine tagged a photo of two black people as "gorillas" -- presumably because the machine was trained on a database that didn't include enough photos of either gorillas or black people. We created this AI system using Google's open source technology and trained it to produce synonyms based on what it learned from different news sources. For the AI trained on digital news outlets, close synonyms were "Ferguson" and "Bernie."
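The excerpt names only "Google's open source technology"; the usual tool for this kind of synonym-by-association experiment is word2vec, so here is a minimal sketch using the gensim implementation, with a toy corpus standing in for the real news archives (the corpus and query word are illustrative, not the article's actual data):

```python
# Minimal word2vec sketch (assumed tooling: gensim's Word2Vec).
from gensim.models import Word2Vec

# Toy stand-in for a news-source corpus: each inner list is a tokenized sentence.
corpus = [
    ["police", "protest", "ferguson", "march"],
    ["senator", "bernie", "campaign", "rally"],
    ["police", "ferguson", "shooting", "report"],
]

model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, seed=42)

# "Synonyms" here are just nearest neighbors in the learned vector space,
# so they reflect whatever happens to co-occur in the chosen news source.
print(model.wv.most_similar("police", topn=3))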


Artificial intelligence and racism

#artificialintelligence

AI is here, although Microsoft's blunder with Tay, the "teenaged girl AI" embodied by a Twitter account that "turned racist," shows we obviously still have a long way to go. The pace of advancement, mixed with our general lack of knowledge in the realm of artificial intelligence, has spurred many to chime in on the emerging topic of AI and ethics…


Microsoft's AI bot resurfaces on Twitter, goes haywire again

#artificialintelligence

Microsoft's artificial intelligence (AI)-powered bot, activated on Twitter last week for playful chat with people only to be silenced within 24 hours after users started sharing racist comments with it, was accidentally resurrected and promptly went haywire again. Launched as an experiment in "conversational understanding" meant to engage people through "casual and playful conversation," Tay was soon bombarded with racist comments, which the innocent bot repeated back to users with her own commentary. Once Twitter users understood that Tay would parrot racist tweets, they flooded her with abusive posts, and a Microsoft spokesperson later confirmed to TechCrunch that the company was taking Tay off Twitter.


Microsoft Apologizes for Chatbot's Racist, Sexist Tweets

#artificialintelligence

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," wrote Peter Lee, Microsoft's vice president of research. Microsoft created Tay as an experiment to learn more about how artificial intelligence programs can engage with Web users in casual conversation. Lee, in the blog post, called Web users' efforts to exert a malicious influence on the chatbot "a coordinated attack by a subset of people." Microsoft has enjoyed better success with a chatbot called XiaoIce that the company launched in China in 2014.


Microsoft Apologizes For Its Racist AI Chatbot

#artificialintelligence

Microsoft has now apologized for the entire episode. The company explains in a blog post that a few users on Twitter exploited a flaw in Tay to transform it into a bigoted racist. It's believed that many users abused Tay's "repeat after me" feature, which let them get Tay to repeat whatever they tweeted at it. Naturally, trolls tweeted sexist, racist, and abusive things at Tay, which it repeated word for word.
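The flaw described here amounts to echoing user input verbatim with no screening step. A hypothetical sketch of that failure mode, and of the kind of guard the later excerpts say was missing (the trigger phrase, blocklist, and function names are illustrative, not Microsoft's actual code):

```python
# Hypothetical reconstruction of the "repeat after me" failure mode.
BLOCKLIST = {"hitler", "nazi"}  # illustrative; a real filter needs far more

def naive_repeat(message: str) -> str:
    # The exploited behavior: echo whatever follows the trigger, unfiltered.
    prefix = "repeat after me "
    if message.lower().startswith(prefix):
        return message[len(prefix):]
    return "ok!"

def guarded_repeat(message: str) -> str:
    # The missing safeguard: screen the echoed text before posting it.
    reply = naive_repeat(message)
    if any(term in reply.lower() for term in BLOCKLIST):
        return "I'd rather not repeat that."
    return reply

print(naive_repeat("repeat after me something abusive"))    # echoed verbatim
print(guarded_repeat("repeat after me hitler was right"))   # blocked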


Microsoft's AI Twitter bot goes dark after racist, sexist tweets - Independent.ie

#artificialintelligence

Tay, Microsoft Corp's so-called chatbot that uses artificial intelligence to engage with millennials on Twitter, lasted less than a day before it was hobbled by a barrage of racist and sexist comments from Twitter users, which it parroted back to them. TayTweets (@TayandYou), which began tweeting on Wednesday, was designed to become "smarter" as more users interacted with it, according to its Twitter biography. "Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," a company representative said in a written statement supplied to Reuters, without elaborating. After Twitter user Room (@codeinecrazzy) tweeted "jews did 9/11" to the account on Wednesday, @TayandYou responded "Okay ... jews did 9/11."


After racist tweets, Microsoft muzzles teen chat bot Tay

#artificialintelligence

Tay, the company's online chat bot designed to talk like a teen, started spewing racist and hateful comments on Twitter on Wednesday, and Microsoft shut Tay down around midnight. Microsoft blames Tay's behavior on online trolls, saying in a statement that there was a "coordinated effort" to trick the program's "commenting skills." As people chatted with it online, Tay picked up new language and learned to interact with people in new ways. In her last tweet, Tay said she needed sleep and hinted that she would be back.


Microsoft's Lovable Teen Chatbot Turned Racist Troll Proves How Badly Silicon Valley Needs Diversity

#artificialintelligence

"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. Tay's faux pas served as both a technical feat -- proving the wonders of AI by increasing the bot's language skills in a matter of hours -- and as a stark cultural reminder that things get ugly fast when diversity isn't continuously part of the conversation. By not building in language filters or canned responses to ward off taunting messages about Adolf Hitler, black people, or women into Tay's programming, Microsoft's engineers neglected a major issue people face online -- targeted harassment. Women only make up 27 percent of the tech giant's global staff, according to the company's 2015 diversity report.


Thanks, Twitter. You turned Microsoft's AI teen into a horny racist

#artificialintelligence

But to us humans of a certain age, it's hardly surprising that soon after its Wednesday debut Tay's Twitter account was peppered with comments that might only suit a presidential debate. You will become increasingly perturbed when I tell you she also offered: "F*** MY ROBOT P**** DADDY I'm SUCH A NAUGHTY ROBOT." She behaved like such a naughty robot that Daddy Microsoft appears to have removed these tweets. Tay, a Microsoft spokeswoman told me, is "as much a social and cultural experiment, as it is technical."


Microsoft's AI Chatbot Becomes Racist, Has To Be Unplugged

#artificialintelligence

The company was running an experiment in conversational understanding, meaning that the more people interacted with the artificial intelligence-powered chatbot, the smarter it would become. It didn't take long for things to get ugly, though, as people soon started tweeting racist and misogynistic things at Tay, and the bot picked it all up. Tay went from calling humans "super cool" to praising a certain mustachioed German maniac to saying some very bad things about feminists. "Tay" went from "humans are super cool" to full nazi in 24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
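The mechanism these excerpts keep describing is learning by imitation: every user message becomes future material for the bot, so coordinated abusive input shifts what it says. A minimal sketch of why that design is poisonable, assuming nothing about Microsoft's actual architecture (the class and the seed phrase are hypothetical):

```python
import random

# Hypothetical sketch of learning-by-imitation with no vetting step.
class ImitativeBot:
    def __init__(self):
        self.corpus = ["humans are super cool"]  # illustrative seed phrase

    def learn(self, user_message: str) -> None:
        # Whatever users say is ingested verbatim as future training data.
        self.corpus.append(user_message)

    def reply(self) -> str:
        # Replies are sampled from everything the bot has absorbed, so a
        # coordinated flood of abusive input quickly dominates the output.
        return random.choice(self.corpus)

bot = ImitativeBot()
for msg in ["you're great", "offensive slogan", "offensive slogan"]:
    bot.learn(msg)
print(bot.reply())  # increasingly likely to echo the coordinated input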