Why people end up mad when AI flags toxic speech - Futurity


The main problem: there is a huge difference between evaluating more traditional AI tasks, like recognizing spoken language, and the much messier task of identifying hate speech, harassment, or misinformation, especially in today's polarized environment.

"It appears as if the models are getting almost perfect scores, so some people think they can use them as a sort of black box to test for toxicity," says Mitchell Gordon, a PhD candidate in computer science at Stanford University who worked on the project. "They're evaluating these models with approaches that work well when the answers are fairly clear, like recognizing whether 'java' means coffee or the computer language, but these are tasks where the answers are not clear."

Facebook says its artificial intelligence models identified and pulled down 27 million pieces of hate speech in the final three months of 2020.
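A minimal sketch of the evaluation problem described above, not the Stanford team's actual method: when annotators disagree about what counts as toxic, a model can score perfectly against aggregated majority-vote labels while agreeing with individual people far less often. The data, labels, and function names below are hypothetical illustrations.

```python
# Hypothetical annotations: each post labeled toxic (1) or not (0) by 5 people.
annotations = {
    "post_1": [1, 1, 1, 1, 1],   # clear-cut: everyone agrees it's toxic
    "post_2": [0, 0, 0, 0, 0],   # clear-cut: everyone agrees it's fine
    "post_3": [1, 1, 1, 0, 0],   # contested: annotators disagree
    "post_4": [0, 0, 1, 1, 1],   # contested
    "post_5": [1, 0, 1, 0, 1],   # heavily contested
}

def majority_label(votes):
    """Aggregate annotator votes into a single 'ground truth' label."""
    return int(sum(votes) > len(votes) / 2)

# Suppose a model perfectly reproduces the majority vote on every post.
model_predictions = {post: majority_label(votes) for post, votes in annotations.items()}

# Benchmark-style accuracy: compare against the aggregated label only.
benchmark_acc = sum(
    model_predictions[p] == majority_label(v) for p, v in annotations.items()
) / len(annotations)

# Person-level agreement: how often the model matches an individual annotator.
agreements = [
    model_predictions[p] == vote
    for p, votes in annotations.items()
    for vote in votes
]
person_acc = sum(agreements) / len(agreements)

print(f"Accuracy vs. majority labels: {benchmark_acc:.0%}")       # 100%
print(f"Agreement with individual annotators: {person_acc:.0%}")  # 76%
```

On the contested posts, no single label can match every annotator, so the near-perfect benchmark number overstates how well the model reflects what people actually judge to be toxic.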
