Alphabet's AI-powered Chrome extension hides toxic comments

Engadget

Alphabet offshoot Jigsaw is launching a Chrome extension designed to help moderate toxic comments on social media. The new open-source tool, dubbed "Tune," builds on the machine learning smarts introduced in Jigsaw's "Perspective" tech to let users of sites like Facebook and Twitter set the "volume" of abusive comments. Using "filter mix" controls, users can either turn toxic comments off altogether (what's known as "zen mode") or selectively show types of posts containing attacks, insults, or profanity. Tune also works with Reddit, YouTube and Disqus. Jigsaw admits that Tune is still an experiment, meaning it may not spot all forms of toxicity or could hide non-offensive comments.
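
Tune's "volume" metaphor is easy to picture as a simple client-side filter: each comment carries per-category toxicity scores from a Perspective-style model, and the extension hides anything that exceeds the user's chosen threshold. The sketch below is a hypothetical illustration of that idea, not Jigsaw's actual implementation; the names (Comment, filter_comments) and the score format are assumptions.

```python
# Minimal sketch of a Tune-style "volume" filter, assuming each comment has
# already been scored per category on a 0..1 scale by a Perspective-style model.
# All names here are illustrative, not Jigsaw's actual code.

from dataclasses import dataclass, field

@dataclass
class Comment:
    text: str
    # Per-category scores in [0, 1], e.g. {"TOXICITY": 0.91, "INSULT": 0.40}
    scores: dict = field(default_factory=dict)

def filter_comments(comments, volume=0.7, categories=("TOXICITY",)):
    """Return only comments whose worst category score stays at or below 'volume'."""
    visible = []
    for c in comments:
        worst = max((c.scores.get(cat, 0.0) for cat in categories), default=0.0)
        if worst <= volume:
            visible.append(c)
    return visible

comments = [
    Comment("Great article, thanks!", {"TOXICITY": 0.02}),
    Comment("You are an idiot.", {"TOXICITY": 0.95, "INSULT": 0.97}),
]
print([c.text for c in filter_comments(comments, volume=0.5)])
```

In this toy version, setting volume to 0.0 mirrors "zen mode" (hide anything flagged at all), while narrowing the categories tuple approximates the per-type filters for attacks, insults, and profanity.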


Toxic comments, bane of many news sites, get a tech fix

USATODAY - Tech Top Stories

A high-tech fix is in the works for toxic online comments on media sites. Jigsaw, an incubator company within Google's parent company Alphabet, has developed a technology that it says news publishers could use to make it easier to moderate discussions and lessen the adversarial nature of the comments sections at the end of articles. Concern about online harassment within comments sections has led many media sites to drop comments altogether. Count The Week, The Verge, ReCode, Reuters and Popular Science among those that have dropped them over the last few years. So did For The Win, a USA TODAY Sports site.


Tuning Out Toxic Comments, With the Help of AI

#artificialintelligence

Online conversations can be positive -- you might learn something fascinating, perhaps meet a remarkable person -- or they can be negative, even harmful. According to a Pew Research Center study, about 41 percent of American adults have experienced online harassment, most commonly on social media. A significant portion of those people -- 23 percent -- reported that their most recent experience happened in a comments section. A single toxic comment can make someone turn away from a discussion. Even seeing toxic comments directed at others can discourage meaningful conversations from unfolding, by making people less likely to join the dialogue in the first place.


Google Expands Troll-Fighting Tool Amid Concerns of Racial Bias

#artificialintelligence

A unit of Google dedicated to policy and ideas is pushing forward with a plan to tame toxic comments on the Internet--even as critics warn that its technology, which relies on AI-powered algorithms, can promote the sort of sexism and racism Google is trying to diminish. On Thursday, the unit known as Jigsaw announced a new community page where developers can contribute "hacks" to build out its comment-moderation tool. That tool, which is used by the likes of the New York Times and the Guardian, helps publishers moderate Internet comments at scale. The system, which depends on artificial intelligence, allowed the Times to expand the scope of its reader comments tenfold while still maintaining a civil discussion.


Google Cousin Develops Technology to Flag Toxic Online Comments - NYTimes.com

#artificialintelligence

From self-driving cars to multi-language translation, machine learning, a form of artificial intelligence, underpins many of the technology industry's biggest advances. Now, Google's parent company, Alphabet, says it plans to apply machine learning technology to promote more civil discourse on the internet and make comment sections on sites a little less awful. Jigsaw, a technology incubator within Alphabet, says it has developed a new tool for web publishers to identify toxic comments that can undermine a civil exchange of ideas. Starting Thursday, publishers can apply for access to Jigsaw's software, called Perspective, without charge. "We have more information and more articles than any other time in history, and yet the toxicity of the conversations that follow those articles are driving people away from the conversation," said Jared Cohen, president of Jigsaw, formerly known as Google Ideas.
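
For publishers granted access, scoring a comment with Perspective amounts to a single HTTPS request. The sketch below follows the publicly documented commentanalyzer endpoint and field names, but treat the exact request shape as an assumption to verify against the current API documentation; the API key is a placeholder.

```python
# Hedged sketch of scoring one comment with Jigsaw's Perspective API.
# Endpoint and field names follow Perspective's public documentation;
# verify against the current docs and supply your own key.

import requests

PERSPECTIVE_API_KEY = "YOUR_API_KEY"  # placeholder; issued after registering for access
URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
    f"?key={PERSPECTIVE_API_KEY}"
)

def toxicity_score(text: str) -> float:
    """Request a TOXICITY summary score in [0, 1] for the given text."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score("This article is nonsense and so are you."))
```

The returned summary score falls between 0 and 1; a publisher's moderation queue might, for example, surface comments scoring above 0.8 for human review, a threshold choice that is entirely up to the integrator.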