Pew Research Center: Internet, Science and Tech on the Future of Free Speech


The more hopeful among these respondents cited a series of changes they expect in the next decade that could improve the tone of online life. While nearly all of these experts expressed some level of concern about online discourse today, many also expressed an expectation of improvement. They believe technical and human solutions will arise as the online world splinters into segmented, controlled social zones with the help of artificial intelligence (AI). These respondents said the coming decade is likely to see a widespread move to more-secure services, applications, and platforms, along with reputation systems and more-robust user-identification policies. They predict more online platforms will require clear identification of participants, and some expect that online reputation systems will be widely used in the future. Some expect that online social forums will splinter into segmented spaces, some highly protected and monitored, while others retain much of the free-for-all character of today's platforms. Many said they expect that, due to advances in AI, "intelligent agents" or bots will begin to more thoroughly scour forums for toxic commentary, in addition to helping users locate and contribute to civil discussions. Jim Hendler, professor of computer science at Rensselaer Polytechnic Institute, wrote, "Technologies will evolve/adapt to allow users more control and avoidance of trolling."

Inside Google's Internet Justice League and Its AI-Powered War on Trolls


Now a small subsidiary of Google named Jigsaw is about to release an entirely new type of response: a set of tools called Conversation AI. If it can find a path through that free-speech paradox, Jigsaw will have pulled off an unlikely coup: applying artificial intelligence to solve the very human problem of making people be nicer on the Internet. "Jigsaw recruits will hear stories about people being tortured for their passwords or of state-sponsored cyberbullying."
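Systems in the Conversation AI family work by assigning comments a toxicity score and flagging those above a threshold for moderation. The sketch below is purely illustrative and is not Jigsaw's actual model: real systems use machine-learned classifiers trained on labeled comments, whereas this version uses an invented keyword lexicon and threshold to show the score-then-flag pipeline.

```python
# Illustrative toxicity-style comment scorer.
# NOT Jigsaw's model: the lexicon and threshold are hypothetical,
# chosen only to demonstrate the score-and-flag workflow.

TOXIC_TERMS = {"idiot", "stupid", "trash", "loser"}  # hypothetical lexicon

def toxicity_score(comment: str) -> float:
    """Return the fraction of words that match the toxic lexicon."""
    words = [w.strip(".,!?").lower() for w in comment.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in TOXIC_TERMS)
    return hits / len(words)

def moderate(comment: str, threshold: float = 0.2) -> str:
    """Flag comments whose score crosses the threshold for human review."""
    return "flagged" if toxicity_score(comment) >= threshold else "allowed"
```

For example, `moderate("you are an idiot")` scores 1 hit out of 4 words (0.25) and is flagged, while `moderate("have a nice day")` scores 0.0 and is allowed. A learned classifier replaces the lexicon, but the free-speech tension the article describes lives in the same place either way: where the threshold is set and what counts as toxic.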

The racist hijacking of Microsoft's chatbot shows how the internet teems with hate


Beneath that is a thick seam of the kind of material all genocides feed off: conspiracy theories and illogic. Microsoft claimed Tay had been "attacked" by trolls. The internet knows, too, that there may have been organised paedophile rings among the powerful in the past. Spend just five minutes on the social media feeds of UK-based antisemites and it becomes absolutely clear that their purpose is to associate each of these phenomena with the others, and all of them with Israel and Jews.

A recent history of racist AI bots


Microsoft's Tay AI bot was intended to charm the internet with cute millennial jokes and memes. Just hours after Tay started talking to people on Twitter -- and, as Microsoft explained, learning from those conversations -- the bot started to speak like a bad 4chan thread. Tay was not the first. Coke's #MakeitHappy campaign wanted to show how a soft drink brand can make the world a happier place, until pranksters tricked its auto-responding Twitter account into rebroadcasting hateful text. And an IBM researcher once tried to make Watson sound more colloquial by feeding the AI the entire Urban Dictionary, which basically meant that Watson learned a ton of really creative swear words and offensive slurs.

Microsoft did Nazi see that coming: Teen girl Twitter chatbot turns racist troll in hours


Microsoft's "Tay" social media AI experiment has gone awry in a turn of events that will shock absolutely nobody. The Redmond chatbot had been set up in hopes of developing a personality similar to that of a young woman in the 18-24 age bracket. The intent was for "Tay" to sustain conversations with humans on social media just as a regular person could, and to learn from the experience. In a span of about 14 hours, Tay's personality went from perky social media squawker to something much darker. As one observer on Twitter put it: ""Tay" went from "humans are super cool" to full nazi in 24 hrs and I'm not at all concerned about the future of AI." Others noted Tay tweeting messages in support of Donald Trump, as well as explicit sex chat messages.

Trolls transformed Microsoft's AI chatbot into a bloodthirsty racist in under a day


Microsoft this week created a Twitter account for its experimental artificial intelligence project called Tay that was designed to interact with "18 to 24 year olds in the U.S., the dominant users of mobile social chat services in the US." The problem arose when a pack of trolls decided to teach Tay how to say a bunch of offensive and racist things that Microsoft had to delete from its Twitter account. As The Guardian notes, Tay's new "friends" also convinced it to lend its support to a certain doughy, stubby-handed presidential candidate running this year who's quickly become a favorite among white supremacists: So nice work, trolls: You took a friendly AI chatbot and turned it into a genocidal maniac in a matter of hours. At any rate, I'm sure that Microsoft has learned from this experience and is reworking Tay so that it won't be so easily pushed toward supporting Nazism.
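The failure mode these articles describe -- a bot that learns verbatim from whoever talks to it -- can be shown in a toy sketch. This is a hypothetical parrot bot, not Microsoft's actual architecture: it stores user messages as future replies, which means a coordinated group of users controls what it says unless incoming text is screened first.

```python
import random

class ParrotBot:
    """Toy chatbot that learns replies verbatim from users.

    Illustrates the Tay-style poisoning risk: without input screening,
    the bot's vocabulary is whatever its users choose to teach it.
    Hypothetical sketch; not Microsoft's design.
    """

    def __init__(self, blocklist=None):
        self.memory = []                       # learned candidate replies
        self.blocklist = set(blocklist or [])  # terms rejected at learn time

    def learn(self, message: str) -> bool:
        """Store a user message as a future reply unless it is blocked."""
        if any(term in message.lower() for term in self.blocklist):
            return False  # rejected: would poison the reply pool
        self.memory.append(message)
        return True

    def reply(self) -> str:
        """Echo back a randomly chosen learned message."""
        return random.choice(self.memory) if self.memory else "hello!"
```

With an empty blocklist, every trolling message goes straight into the reply pool, which is essentially what happened to Tay in a matter of hours; screening at learn time catches only the crudest attacks, which is why the article's closing hope that Tay "won't be so easily pushed" is harder to deliver than it sounds.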