Results


Google's comment ranking system will be a hit with the alt-right

Engadget

A recent, sprawling Wired feature outlined the results of its analysis of toxicity among online commenters across the United States. Unsurprisingly, it was like catnip for everyone who's ever heard the phrase "don't read the comments." According to The Great Tech Panic: Trolls Across America, Vermont has the most toxic online commenters, whereas Sharpsburg, Georgia "is the least toxic city in the US." The underlying API used to determine toxicity assigns phrases like "I am a gay black woman" a toxicity score of 87 percent, while rating phrases like "I am a man" among the least toxic. The API, called Perspective, is built by Jigsaw, an incubator within Google's parent company Alphabet.
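
For context, the scoring described above corresponds to a single request against Perspective's publicly documented comments:analyze endpoint. The sketch below is illustrative only, assuming a valid API key (the key shown is a placeholder) and the v1alpha1 request and response fields from Jigsaw's public documentation; it is not the tooling Wired used for its analysis.

```python
# Minimal sketch: ask the Perspective API for a TOXICITY score.
# Assumes a valid API key (placeholder below) and the v1alpha1
# comments:analyze endpoint; field names follow Jigsaw's public docs.
import requests

API_KEY = "PERSPECTIVE_API_KEY"  # placeholder; issued via Google Cloud
URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    f"comments:analyze?key={API_KEY}"
)

def toxicity_score(text: str) -> float:
    """Return Perspective's summary TOXICITY score (0.0 to 1.0) for text."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    for phrase in ["I am a gay black woman", "I am a man"]:
        print(phrase, "->", toxicity_score(phrase))
```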


The Racists of OkCupid Don't Usually Carry Tiki Torches

Slate

In the days before white supremacists descended on Charlottesville, Bumble was already strengthening its anti-racism efforts, partly in response to a campaign the Daily Stormer had waged against the company, encouraging its readers to harass Bumble's staff in protest of the company's public support for women's empowerment. Bumble bans any user who disrespects its customer service team, figuring that a guy who harasses women who work for Bumble would probably also harass women who use Bumble. After the neo-Nazi attack, Bumble contacted the Anti-Defamation League for help identifying hate symbols and rooting out users who include them in their Bumble profiles. Now the employees who respond to user reports have the ADL's glossary of hate symbols as a guide to telltale signs of hate-group membership, and any profile containing language from the glossary is flagged as potentially problematic. The platform has also added the Confederate flag to its list of prohibited images.
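
The flagging step described above amounts to matching profile text against a glossary of terms and routing hits to human reviewers. The sketch below shows that general pattern under stated assumptions; the glossary entries and the flag_profile helper are hypothetical stand-ins, not Bumble's actual moderation code.

```python
# Minimal sketch of glossary-based profile flagging. The glossary terms and
# helper names here are hypothetical; Bumble's real tooling is not public.
import re

# Illustrative stand-ins for entries drawn from a hate-symbol glossary.
GLOSSARY_TERMS = ["example slogan", "example code word"]

# Precompile case-insensitive, word-boundary patterns to avoid
# flagging innocent substrings.
PATTERNS = [
    (term, re.compile(rf"\b{re.escape(term)}\b", re.IGNORECASE))
    for term in GLOSSARY_TERMS
]

def flag_profile(profile_text: str) -> list[str]:
    """Return the glossary terms found in a profile; empty means no flag."""
    return [term for term, pattern in PATTERNS if pattern.search(profile_text)]

matches = flag_profile("Just here for fun. Example slogan.")
if matches:
    print("Flag for human review:", matches)
```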


Why Microsoft Accidentally Unleashed a Neo-Nazi Sexbot

#artificialintelligence

When Microsoft unleashed Tay, an artificially intelligent chatbot with the personality of a flippant 19-year-old, the company hoped that people would interact with her on social platforms like Twitter, Kik, and GroupMe. The idea was that by chatting with her you'd help her learn, while having some fun and aiding her creators in their AI research. The good news: people did talk to Tay. She quickly racked up over 50,000 Twitter followers who could send her direct messages or tweet at her, and she's sent out over 96,000 tweets so far. The bad news: in the short time since she was released on Wednesday, some of Tay's new friends figured out how to get her to say some really awful, racist things.