Boffins build AI that can detect cyber-abuse – and if you don't believe us, YOU CAN *%**#* *&**%* #** OFF


Can machine learning help clean up abuse on Twitter? A team of computer scientists spanning the globe think so. They've built a neural network that can seemingly classify tweets into four categories: normal, aggressor, spam, and bully. An aggressor tweet is deliberately harmful, derogatory, or offensive; a bully tweet is a belittling or hostile message. The aim is a system that filters out aggressive and bullying tweets, deletes spam, and lets normal tweets through. The boffins admit it's difficult to draw a line between so-called cyber-aggression and cyber-bullying.
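The moderation logic described above can be sketched roughly as follows. This is a hypothetical illustration, not the researchers' code: the neural network classifier is stubbed out, and the `route` function simply maps each of the four predicted labels to the action the article describes.

```python
# Hypothetical sketch of the moderation pipeline described in the article.
# A real system would replace the stub with a trained neural network that
# predicts one of the four labels for each tweet.

LABELS = ("normal", "aggressor", "spam", "bully")

def route(label: str) -> str:
    """Map a predicted label to a moderation action:
    filter aggressive/bullying tweets, delete spam, allow the rest."""
    if label not in LABELS:
        raise ValueError(f"unknown label: {label}")
    if label in ("aggressor", "bully"):
        return "filter"
    if label == "spam":
        return "delete"
    return "allow"

for label in LABELS:
    print(f"{label} -> {route(label)}")
```

The point of separating classification from routing is that the hard part, telling cyber-aggression from cyber-bullying, lives entirely in the classifier; the downstream actions stay trivial.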
