Online content moderation: Can AI help clean up social media?
Dec 20 (Thomson Reuters Foundation) - Two days after it was sued by Rohingya refugees from Myanmar over allegations that it did not take action against hate speech, social media company Meta, formerly known as Facebook, announced a new artificial intelligence system to tackle harmful content.

Machine learning tools have increasingly become the go-to solution for tech firms to police their platforms, but questions have been raised about their accuracy and their potential threat to freedom of speech.

WHY ARE SOCIAL MEDIA FIRMS UNDER FIRE OVER CONTENT MODERATION?

The $150 billion Rohingya class-action lawsuit filed this month came at the end of a tumultuous period for social media giants, which have been criticised for failing to effectively tackle hate speech online and increasing polarisation.

The complaint argues that calls for violence shared on Facebook contributed to real-world violence against the Rohingya community, which suffered a military crackdown in 2017 that refugees said included mass killings and rape.
Dec-20-2021, 18:35:25 GMT