Facebook's AI for detecting hate speech is facing its biggest challenge yet

#artificialintelligence 

The single most striking thing about Facebook is how vast it is. But while more than two and a half billion people find value in the service, this scale is also Facebook's biggest liability. Controlling what happens in that vast digital space is nearly impossible, especially for a company that historically has not been very responsible about managing the potential harms inherent in its technology. Only in 2017, 13 years into its history, did Facebook seriously begin facing up to the fact that its platform could be used to deliver toxic speech, propaganda, and misinformation directly to millions of people. Various flavors of toxic content can be found all over Facebook, from bullying and child trafficking to the rumors, hate, and fakery that helped Donald Trump become president in 2016. In the past few years, Facebook has invested heavily in measures to control this kind of toxic content. It has mainly outsourced its content moderation to a small army of reviewers in contract shops around the world. But content moderators cannot begin to weed through all the harmful content, and the traffickers of such material are constantly devising new ways to evade them.
