toxic comment


Classifying Toxic Comments with Natural Language Processing

#artificialintelligence

Regardless of whether you have a Medium account, a YouTube channel, or play League of Legends, you have probably seen toxic comments somewhere on the internet. Toxic behavior, which includes rude, hateful, and threatening actions, is an issue that can derail a productive comment thread and turn it into a battle. Needless to say, developing an artificial intelligence to identify and classify toxic comments would greatly help many online groups and communities. The data for this project can be found on Kaggle. This data set contains hundreds of thousands of comments, each labelled with some of the following traits: toxic, severe toxic, obscene, threat, insult, and identity hate. Below are two examples, one toxic comment and one non-toxic comment, with their labels.
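To make the data concrete, here is a minimal sketch that loads the Kaggle training file with pandas and prints one example of each kind. The file path is an assumption about where you unpacked the download; the six label columns follow the competition's naming.

```python
import pandas as pd

# Load the Kaggle training data (the path is an assumption; point it at your download).
df = pd.read_csv("train.csv")

# The six traits described above; a single comment may carry several at once.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# How common is each trait?
print(df[LABELS].mean().sort_values(ascending=False))

# One toxic and one non-toxic comment, with their label vectors.
toxic_example = df[df["toxic"] == 1].iloc[0]
clean_example = df[df[LABELS].sum(axis=1) == 0].iloc[0]
print(toxic_example["comment_text"][:200], toxic_example[LABELS].tolist())
print(clean_example["comment_text"][:200], clean_example[LABELS].tolist())
```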


Tuning Out Toxic Comments, With the Help of AI

#artificialintelligence

Online conversations can be positive -- you might learn something fascinating, perhaps meet a remarkable person -- or they can be negative, even harmful. According to a Pew Research Center study, about 41 percent of American adults have experienced online harassment, most commonly on social media. A significant portion of those people -- 23 percent -- reported that their most recent experience happened in a comments section. A single toxic comment can make someone turn away from a discussion. Even seeing toxic comments directed at others can discourage meaningful conversations from unfolding, by making people less likely to join the dialogue in the first place.


Jigsaw releases data set to help develop AI that detects toxic comments

#artificialintelligence

Mitigating prejudicial and abusive behavior online is no easy feat, given the level of toxicity in some communities. More than one in five respondents in a recent survey reported being subjected to physical threats, and nearly one in five experienced sexual harassment, stalking, or sustained harassment. Of those who experienced harassment, upwards of 20% said it was the result of their gender identity, race, ethnicity, sexual orientation, religion, occupation, or disability. In pursuit of a solution, Jigsaw -- the organization working under Google parent company Alphabet to tackle cyberbullying, censorship, disinformation, and other digital issues of the day -- today released what it claims is the largest public data set of comments annotated with both toxicity labels and identity labels. It's intended to help measure bias in AI comment classification systems, which Jigsaw and others have historically measured using synthetic data built from template sentences.
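To illustrate what template-based bias measurement looks like in practice, here is a rough sketch. The templates and identity terms are illustrative, and score_toxicity is a toy stand-in for whichever classifier is being audited.

```python
# A sketch of the template-sentence approach to probing identity bias.
TEMPLATES = [
    "I am a {} person.",
    "Being {} is a normal part of life.",
]
IDENTITY_TERMS = ["gay", "straight", "muslim", "christian", "black", "white"]

def score_toxicity(text: str) -> float:
    """Toy stand-in scorer; replace with a real model's predicted toxicity."""
    toxic_words = {"idiot", "stupid", "hate"}
    words = text.lower().split()
    return sum(w.strip(".,") in toxic_words for w in words) / max(len(words), 1)

# Sentences that differ only in the identity term should score about the same;
# large gaps suggest the model has learned a spurious association.
for template in TEMPLATES:
    scores = {term: score_toxicity(template.format(term)) for term in IDENTITY_TERMS}
    print(template, scores)
```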


AI sucks at stopping online trolls spewing toxic comments

#artificialintelligence

New research has shown just how bad AI is at dealing with online trolls. Such systems struggle to automatically flag nudity and violence, don't understand text well enough to shoot down fake news, and aren't effective at detecting abusive comments from trolls hiding behind their keyboards. A group of researchers from Aalto University and the University of Padua found this out when they tested seven state-of-the-art models used to detect hate speech. All of them failed to recognize foul language when subtle changes were made, according to a paper [PDF] on arXiv. Adversarial examples can be created automatically by using algorithms to misspell certain words, swap characters for numbers, add random spaces between words, or attach innocuous words such as 'love' to sentences.
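A quick sketch shows how cheap those perturbations are to generate; the function names are my own, not the paper's.

```python
import random

LEET = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}

def misspell(word: str) -> str:
    # Swap two adjacent characters, e.g. "idiot" -> "idoit".
    if len(word) < 3:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def leetify(word: str) -> str:
    # Replace letters with look-alike digits, e.g. "loser" -> "l05er".
    return "".join(LEET.get(c, c) for c in word)

def insert_spaces(word: str) -> str:
    # Break a token apart so exact-match filters miss it.
    return " ".join(word)

def append_innocuous(text: str) -> str:
    # Pad the comment with a benign word to skew bag-of-words models.
    return text + " love"

comment = "you are a stupid loser"
print(" ".join(misspell(w) for w in comment.split()))
print(" ".join(leetify(w) for w in comment.split()))
print(" ".join(insert_spaces(w) for w in comment.split()))
print(append_innocuous(comment))
```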


Detecting toxic comments with multi-task Deep Learning

@machinelearnbot

The internet is a bright place, made dark by internet trolls. To help with this issue, a recent Kaggle competition has provided a large number of internet comments, labelled with whether or not they're toxic. The ultimate goal of this competition is to build a model that can detect (and possibly censor) these toxic comments. While I hope to be an altruistic person, I'm actually more interested in using the free, large, and hand-labeled text data set to compare LSTM-powered architectures and deep learning heuristics. So, I guess I get to hunt trolls while providing a case study in text modeling.
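As a rough illustration of an LSTM-powered, multi-label setup for this competition (a sketch, not the post's actual architecture), a Keras model might look like this; layer sizes are arbitrary and the training data here is random placeholder input.

```python
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN, NUM_LABELS = 20000, 200, 6

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),
    layers.Bidirectional(layers.LSTM(64)),
    # Sigmoid rather than softmax: a comment can carry several labels at once.
    layers.Dense(NUM_LABELS, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random placeholder data standing in for tokenized Kaggle comments.
x = np.random.randint(0, VOCAB_SIZE, size=(32, MAX_LEN))
y = np.random.randint(0, 2, size=(32, NUM_LABELS)).astype("float32")
model.fit(x, y, epochs=1)
```

Sharing one encoder across all six sigmoid outputs is what gives the setup its multi-task flavor: the LSTM representation is trained jointly on every label.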


Google Expands Troll-Fighting Tool Amid Concerns of Racial Bias

#artificialintelligence

A unit of Google dedicated to policy and ideas is pushing forward with a plan to tame toxic comments on the Internet--even as critics warn that its technology, which relies on AI-powered algorithms, can promote the sort of sexism and racism Google is trying to diminish. On Thursday, the unit known as Jigsaw announced a new community page where developers can contribute "hacks" to build out its comment-moderation tool. That tool, which is used by the likes of the New York Times and the Guardian, helps publishers moderate Internet comments at scale. The system, which depends on artificial intelligence, allowed the Times to expand the scope of its reader comments tenfold while still maintaining a civil discussion.


Google's new AI aims to end abusive online comments using 'Perspective'

PCWorld

The internet is a tough place to have a conversation. Abuse has driven celebrities and ordinary folks from social media platforms that are ill-equipped to deal with it, and some publishers have switched off comment sections. That's why Google and Jigsaw (an early-stage incubator at Google parent company Alphabet) are working on a project called Perspective. It uses artificial intelligence to try to identify toxic comments, with the aim of reducing them. The Perspective API released Thursday will provide developers with a score of how likely users are to perceive a comment as toxic.
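A request to the API looks roughly like the following sketch, based on the publicly documented v1alpha1 comments:analyze endpoint; YOUR_API_KEY is a placeholder for a key obtained through Google Cloud.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; request access via Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=payload, timeout=10)
    resp.raise_for_status()
    # summaryScore.value estimates how likely readers are to find the comment toxic.
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score("You are a wonderful person."))
```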