Results


When Online Harassment Is Perceived as Justified

AAAI Conferences

Most models of criminal justice seek to identify and punish offenders. However, these models break down in online environments, where offenders can hide behind anonymity and lagging legal systems. As a result, people turn to their own moral codes to sanction perceived offenses. Unfortunately, this vigilante justice is motivated by retribution, often resulting in personal attacks, public shaming, and doxing—behaviors known as online harassment. We conducted two online experiments (n=160; n=432) to test the relationship between retribution and the perception of online harassment as appropriate, justified, and deserved. Study 1 tested attitudes about online harassment when directed toward a woman who has stolen from an elderly couple. Study 2 tested the effects of social conformity and bystander intervention. We find that people believe online harassment is more deserved and more justified—but not more appropriate—when the target has committed some offense. Promisingly, we find that exposure to a bystander intervention reduces this perception. We discuss alternative approaches and designs for responding to harassment online.


How to Make a Bot That Isn't Racist

#artificialintelligence

A day after Microsoft launched its "AI teen girl Twitter chatbot," Twitter taught her to be racist. The thing is, this was all very much preventable. I talked to some creators of Twitter bots about @TayandYou, and the consensus was that Microsoft had fallen far below the baseline of ethical botmaking. "The makers of @TayandYou absolutely 10000 percent should have known better," thricedotted, a veteran Twitter botmaker and natural language processing researcher, told me via email. "It seems like the makers of @TayandYou attempted to account for a few specific mishaps, but sorely underestimated the vast potential for people to be assholes on the internet."


Microsoft's racist chatbot returns with drug-smoking Twitter meltdown

The Guardian

Microsoft's attempt to converse with millennials using an artificial intelligence bot plugged into Twitter made a short-lived return on Wednesday, before bowing out again in some sort of meltdown. The learning experiment, which got a crash-course in racism, Holocaust denial and sexism courtesy of Twitter users, was switched back on overnight and appeared to be operating in a more sensible fashion. Microsoft had previously gone through the bot's tweets, removed the most offensive ones, and vowed only to bring the experiment back online if the company's engineers could "better anticipate malicious intent that conflicts with our principles and values". Tay then started to tweet out of control, spamming its more than 210,000 followers with the same tweet, saying: "You are too fast, please take a rest …" over and over.


A recent history of racist AI bots

#artificialintelligence

It didn't take long for Tay to learn the dark ways of the web. Microsoft's Tay AI bot was intended to charm the internet with cute millennial jokes and memes. Instead, she became a genocidal maniac. Just hours after Tay started talking to people on Twitter -- and, as Microsoft explained, learning from those conversations -- the bot started to speak like a bad 4chan thread. Now Tay is offline, and Microsoft says it's "making adjustments" to, we guess, prevent Tay from learning how to deny the Holocaust in the future.


Microsoft shuts down Artificial Intelligence bot after Twitterati teaches it racism

#artificialintelligence

Tay inexplicably added the "repeat after me" phrase to the parroted content on at least some tweets, implying that users should repeat what the chatbot said. Quickly realizing its teenage bot had been radicalized into a genocidal, Nazi-loving, Donald Trump supporter, Microsoft shut Tay down. According to Tay's "about" page linked from the Twitter profile, "Tay is an artificial intelligent chat bot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding". Unfortunately, Microsoft continues, within the first 24 hours of coming online the company became aware of a coordinated effort by some users to abuse Tay's commenting skills and have it respond in inappropriate ways.


Why Microsoft Accidentally Unleashed a Neo-Nazi Sexbot

#artificialintelligence

When Microsoft unleashed Tay, an artificially intelligent chatbot with the personality of a flippant 19-year-old, the company hoped that people would interact with her on social platforms like Twitter, Kik, and GroupMe. The idea was that by chatting with her you'd help her learn, while having some fun and aiding her creators in their AI research. The good news: people did talk to Tay. She quickly racked up over 50,000 Twitter followers who could send her direct messages or tweet at her, and she's sent out over 96,000 tweets so far. The bad news: in the short time since she was released on Wednesday, some of Tay's new friends figured out how to get her to say some really awful, racist things.


Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter

The Guardian

Microsoft's attempt at engaging millennials with artificial intelligence has backfired hours into its launch, with waggish Twitter users teaching its chatbot how to be racist. The company launched a verified Twitter account for "Tay" – billed as its "AI fam from the internet that's got zero chill" – early on Wednesday. The chatbot, targeted at 18- to 24-year-olds in the US, was developed by Microsoft's technology and research and Bing teams to "experiment with and conduct research on conversational understanding". "Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation," Microsoft said.


Locate the Hate: Detecting Tweets against Blacks

AAAI Conferences

Although the social medium Twitter grants users freedom of speech, its instantaneous nature and retweeting features also amplify hate speech. Because Twitter has a sizeable black constituency, racist tweets against blacks are especially detrimental in the Twitter community, though this effect may not be obvious against a backdrop of half a billion tweets a day. We apply a supervised machine learning approach, employing inexpensively acquired labeled data from diverse Twitter accounts to learn a binary classifier for the labels “racist” and “nonracist.” The classifier has a 76% average accuracy on individual tweets, suggesting that with further improvements, our work can contribute data on the sources of anti-black hate speech.
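
The abstract does not specify which features or learner the authors use, so the following is only a rough Python (scikit-learn) sketch of the kind of supervised binary text classifier it describes; the TF-IDF features, logistic regression model, and placeholder training strings are illustrative assumptions, not the paper's actual method.

    # Illustrative sketch only: a generic supervised binary tweet classifier.
    # The TF-IDF features, logistic regression learner, and placeholder data
    # are assumptions for demonstration; the paper's exact setup is not given.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical placeholder tweets with labels 1 = "racist", 0 = "nonracist";
    # real training data would be the inexpensively acquired labeled tweets
    # from diverse accounts that the abstract mentions.
    tweets = [
        "placeholder hateful tweet text a", "placeholder hateful tweet text b",
        "placeholder benign tweet text a", "placeholder benign tweet text b",
    ]
    labels = [1, 1, 0, 0]

    # Unigram/bigram TF-IDF features feed a linear classifier.
    model = make_pipeline(
        TfidfVectorizer(lowercase=True, ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(tweets, labels)

    # Classify individual tweets, mirroring the per-tweet evaluation reported.
    print(model.predict(["placeholder benign tweet text c"]))

In practice, a per-tweet accuracy figure such as the 76% reported would come from held-out evaluation (for example, cross-validation) on a much larger labeled set than this toy example.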