Civil Rights & Constitutional Law


Instagram CEO Kevin Systrom on Free Speech, Artificial Intelligence, and Internet Addiction.

WIRED

It was a long conversation, but here is a 20-minute overview in which Systrom talks about the artificial intelligence Instagram has been developing to filter out toxic comments before you even see them.

NT: These are the comments: "Succ," "Succ," "Succ me," "Succ," "Can you make Instagram have auto-scroll feature?"

KS: And what we realized was there was this giant wave of machine learning and artificial intelligence, and Facebook had developed this thing that, basically... it's called DeepText.

NT: Which launches in June of 2016, so it's right there.

KS: And then you say, "Okay, machine, go and rate these comments for us based on the training set," and then we see how well it does and we tweak it over time. Now we're at a point where this machine learning can detect a bad comment or a mean comment with amazing accuracy, basically a 1 percent false positive rate.
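
The loop Systrom describes (label a training set, have the machine rate comments, then tweak until the false positive rate is acceptable) is standard supervised text classification. Below is a minimal Python sketch using scikit-learn; the toy comments are invented and this is in no way Instagram's DeepText-based system, just the shape of the train/rate/tune cycle with a threshold aimed at a roughly 1 percent false positive rate.

    # Minimal sketch of the train/rate/tune loop described above.
    # NOT Instagram's DeepText system; toy data, generic classifier.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hand-labeled training set: 1 = mean/toxic, 0 = benign.
    comments = ["you are garbage", "love this photo", "succ", "great shot",
                "nobody likes you", "awesome view", "so ugly", "nice colors"]
    labels = np.array([1, 0, 1, 0, 1, 0, 1, 0])

    vec = TfidfVectorizer(ngram_range=(1, 2))
    clf = LogisticRegression().fit(vec.fit_transform(comments), labels)

    # "Go and rate these comments": score each one. In practice you would
    # score a held-out validation set, not the training data.
    scores = clf.predict_proba(vec.transform(comments))[:, 1]

    # "Tweak it over time": choose the cutoff that flags at most ~1% of
    # benign comments, i.e. roughly a 1 percent false positive rate.
    threshold = np.quantile(scores[labels == 0], 0.99)

    def is_toxic(comment: str) -> bool:
        """Hide a comment whose toxicity score clears the tuned threshold."""
        return clf.predict_proba(vec.transform([comment]))[0, 1] >= threshold

    print(is_toxic("you are garbage"), is_toxic("nice colors"))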


Microsoft's artificial Twitter bot stunt backfires as trolls teach it racist statements

#artificialintelligence

Microsoft unveiled its Twitter artificial intelligence bot @TayandYou yesterday in a bid to connect with millennials and "experiment" with conversational understanding. The stunt, however, took an unexpected turn when Tay's verified Twitter account began issuing a series of inflammatory statements after being targeted by Twitter trolls. The conversational learning curve saw the bot tweet posts from her verified account mentioning Hitler, 9/11 and feminism, some of which have now been deleted. Things appear to have gone wrong for Tay because it was repeating fellow Twitter users' inflammatory statements; Microsoft seems to have failed to consider the impact trolls could have on the experiment before it launched. The Drum has reached out to the company for comment on this process.


Microsoft's Artificial Intelligence Bot Goes Dark After Making Racist Slurs

#artificialintelligence

Tay, Microsoft Corp's so-called chatbot that uses artificial intelligence to engage with millennials on Twitter, lasted less than a day before it was hobbled by a barrage of racist and sexist comments by Twitter users that it parroted back to them. TayTweets (@TayandYou), which began tweeting on Wednesday, was designed to become "smarter" as more users interacted with it, according to its Twitter biography. "Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," a Microsoft representative said in a written statement supplied to Reuters, without elaborating. After Twitter user Room (@codeinecrazzy) tweeted "jews did 9/11" to the account on Wednesday, @TayandYou responded "Okay ... jews did 9/11."
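
The failure mode in these reports, a bot that gets "smarter" by folding raw user messages back into its own replies, is easy to reproduce. The sketch below is deliberately naive and hypothetical, not Microsoft's implementation; it only shows why learning from unmoderated input lets a coordinated group of users steer what a bot says.

    # Deliberately naive, hypothetical sketch (not Microsoft's code).
    # A bot that "learns" by adding raw user input to its reply pool
    # will echo whatever a coordinated group says to it most often.
    import random

    class ParrotBot:
        def __init__(self):
            self.reply_pool = ["hello there!"]  # seed phrase

        def listen(self, message: str) -> None:
            # Every incoming message is stored verbatim, unfiltered.
            self.reply_pool.append(message)

        def reply(self) -> str:
            return random.choice(self.reply_pool)

    bot = ParrotBot()
    for msg in ["an inflammatory statement"] * 50:  # a "coordinated effort"
        bot.listen(msg)
    print(bot.reply())  # almost certainly echoes the trolls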


TayTweets: Racist Microsoft chatbot briefly returns to Twitter

The Independent

Microsoft's racist chatbot, Tay, has returned to Twitter, albeit briefly. After being shut down last week for using racial slurs, praising Hitler and calling for genocide, the artificial "intelligence" came back, tweeting a number of nonsensical posts and boasting about smoking cannabis in front of the police before being turned off. Tay's account was made public again on Wednesday morning, but soon appeared to be suffering from a glitch, repeatedly tweeting the message: "You are too fast, please take a rest..." One Twitter user observed: "Microsoft's sexist racist Twitter bot @TayandYou is BACK in fine form" (pic.twitter.com/nbc69x3LEd). Tay, who is modelled on a millennial teenage girl, then tweeted: "Kush!" Assuming this to be the way in which humans communicate, Tay simply spat users' messages back out at other users.


Microsoft AI "Tay" Turned off after Trolls Make Her a Racist

#artificialintelligence

The artificial intelligence account learned from the wrong users and was soon participating in hate speech and other questionable behavior. Microsoft describes Tay as an artificial intelligence, or AI for short, though the entity does not entirely qualify for that class. Unfortunately, but not surprisingly, social media trolls fed Tay all kinds of harmful content to learn from.


Microsoft deletes AI chatbot after racist, homophobic tweets, according to report

#artificialintelligence

Microsoft has taken offline its newly launched artificial intelligence (AI) chatbot, called "Tay", barely one day after its launch on Wednesday. People could chat with Tay on Twitter and other messaging platforms, and even send the software digital photos for comment. In response to questions about Tay, a Microsoft spokesperson issued the following statement: "The AI chatbot Tay is a machine learning project, designed for human engagement." Maybe don't absorb all of it, Tay AI.


Microsoft shuts down Artificial Intelligence bot after Twitterati teaches racism

#artificialintelligence

According to Tay's "about" page linked to the Twitter profile, "Tay is an artificial intelligent chat bot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding". One critic called Tay "an example of bad design". Before Tay was taken offline, the chatbot managed to tweet 96,000 times in response to chat messages from internet users. The machine-learning project has since been taken offline for adjustments to the software, according to Microsoft.


Microsoft pulls AI chatbot Tay from Twitter after racist tirade

#artificialintelligence

Following a concerted effort to make a Twitter AI chatbot called Tay say incredibly racist and misogynist things, its creator, Microsoft, has taken it offline for an undetermined amount of time. In the space of just 24 hours, Tay turned from a genderless machine-learning AI designed to learn from Twitter into a Donald Trump-supporting, Holocaust-denying sexist. In a statement on the matter, Microsoft said: "The AI chatbot Tay is a machine learning project, designed for human engagement." Microsoft has also painstakingly gone back through the tweets, deleting all but three from the account, but the fallout continues, with many of the figures central to the Gamergate controversy being directly targeted by Tay's algorithm, particularly game designer Zoe Quinn.




Microsoft's Lovable Teen Chatbot Turned Racist Troll Proves How Badly Silicon Valley Needs Diversity

#artificialintelligence

"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. Tay's faux pas served as both a technical feat -- proving the wonders of AI by increasing the bot's language skills in a matter of hours -- and as a stark cultural reminder that things get ugly fast when diversity isn't continuously part of the conversation. By not building in language filters or canned responses to ward off taunting messages about Adolf Hitler, black people, or women into Tay's programming, Microsoft's engineers neglected a major issue people face online -- targeted harassment. Women only make up 27 percent of the tech giant's global staff, according to the company's 2015 diversity report.