Context Reduces Racial Bias in Hate Speech Detection Algorithms - USC Viterbi


A team of USC researchers has created a hate speech classifier that is more context-sensitive and less likely to mistake a post containing a group identifier for hate speech. Understanding what makes something harmful or offensive can be hard enough for humans, never mind artificial intelligence systems. So perhaps it's no surprise that social media hate speech detection algorithms, designed to stop the spread of hateful speech, can actually amplify racial bias by blocking inoffensive tweets by Black people or other minority group members. In fact, one previous study showed that AI models were 1.5 times more likely to flag tweets written by African Americans as "offensive" (in other words, a false positive) compared with other tweets. Why? Because current automatic detection models miss something vital: context.
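To illustrate the kind of false positive described above, here is a minimal, hypothetical sketch in Python (not the USC team's actual model): a context-blind keyword filter flags any post that merely mentions a group identifier, while a toy context-aware check only flags posts that pair an identifier with hostile language. All word lists, function names, and the example post are invented for illustration.

```python
# Hypothetical illustration of context-blind vs. context-aware flagging.
# Word lists below are placeholders, not real moderation lists.

GROUP_IDENTIFIERS = {"black", "muslim", "jewish", "gay"}  # hypothetical identifiers
HOSTILE_CUES = {"hate", "attack", "destroy"}              # hypothetical hostile words


def tokenize(post: str) -> set[str]:
    """Lowercase the post and strip basic punctuation from each token."""
    return {token.strip(".,!?").lower() for token in post.split()}


def naive_flag(post: str) -> bool:
    """Context-blind: flag any post that merely mentions a group identifier."""
    return bool(tokenize(post) & GROUP_IDENTIFIERS)


def context_aware_flag(post: str) -> bool:
    """Toy stand-in for a context-sensitive model: require hostile language
    in addition to a group identifier before flagging."""
    tokens = tokenize(post)
    return bool(tokens & GROUP_IDENTIFIERS) and bool(tokens & HOSTILE_CUES)


if __name__ == "__main__":
    benign_post = "Proud to celebrate Black history month with my community!"
    print(naive_flag(benign_post))          # True  -> false positive
    print(context_aware_flag(benign_post))  # False -> benign mention is not flagged
```

Real context-sensitive classifiers, of course, rely on learned representations of the surrounding text rather than hand-written cue lists; the sketch only shows why ignoring context inflates false positives for posts that mention group identifiers.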
