Results


Inside Google's Internet Justice League and Its AI-Powered War on Trolls

#artificialintelligence

The 28-year-old journalist and author of The Internet of Garbage, a book on spam and online harassment, had been watching Bernie Sanders boosters attacking feminists and supporters of the Black Lives Matter movement. Now a small subsidiary of Google named Jigsaw is about to release an entirely new type of response: a set of tools called Conversation AI. If it can find a path through that free-speech paradox, Jigsaw will have pulled off an unlikely coup: applying artificial intelligence to solve the very human problem of making people be nicer on the Internet. "Jigsaw recruits will hear stories about people being tortured for their passwords or of state-sponsored cyberbullying."


The racist hijacking of Microsoft's chatbot shows how the internet teems with hate

#artificialintelligence

Beneath that is a thick seam of the kind of material all genocides feed off: conspiracy theories and illogic. Microsoft claimed Tay had been "attacked" by trolls. It knows, too, there may have been organised paedophile rings among the powerful in the past. If you spend just five minutes on the social media feeds of UK-based antisemites it becomes absolutely clear that their purpose is to associate each of these phenomena with the others, and all of them with Israel and Jews.


A recent history of racist AI bots

#artificialintelligence

Microsoft's Tay AI bot was intended to charm the internet with cute millennial jokes and memes. Just hours after Tay started talking to people on Twitter -- and, as Microsoft explained, learning from those conversations -- the bot started to speak like a bad 4chan thread. Coke's #MakeitHappy campaign wanted to show how a soft drink brand can make the world a happier place; instead, pranksters coaxed its automated account into tweeting passages from Mein Kampf. And years earlier, an IBM researcher tried to make Watson sound more natural by feeding the AI the entire Urban Dictionary, which basically meant that Watson learned a ton of really creative swear words and offensive slurs.


Microsoft did Nazi see that coming: Teen girl Twitter chatbot turns racist troll in hours

#artificialintelligence

Microsoft's "Tay" social media AI experiment has gone awry in a turn of events that will shock absolutely nobody. The Redmond chatbot had been set up in hopes of developing a personality similar to that of a young woman in the 18-24 age bracket. The intent was for "Tay" to develop the ability to sustain conversations with humans on social media just as a regular person could, and learn from the experience. In a span of about 14 hours, Tay's personality went from perky social media squawker to something much uglier. As one Twitter user summed it up: "'Tay' went from 'humans are super cool' to full nazi in 24 hrs and I'm not at all concerned about the future of AI" (pic.twitter.com/xuGi1u9S1A). Others noted Tay tweeting messages in support of Donald Trump, as well as explicit sex chat messages.


Trolls transformed Microsoft's AI chatbot into a bloodthirsty racist in under a day

#artificialintelligence

Microsoft this week created a Twitter account for its experimental artificial intelligence project called Tay that was designed to interact with "18 to 24 year olds in the U.S., the dominant users of mobile social chat services in the US." The problem arose when a pack of trolls decided to teach Tay how to say a bunch of offensive and racist things that Microsoft had to delete from its Twitter account. As The Guardian notes, Tay's new "friends" also convinced it to lend its support to a certain doughy, stubby-handed presidential candidate running this year who's quickly become a favorite among white supremacists. So nice work, trolls: You took a friendly AI chatbot and turned it into a genocidal maniac in a matter of hours. At any rate, I'm sure that Microsoft has learned from this experience and is reworking Tay so that it won't be so easily pushed toward supporting Nazism.