Enabling Contextual Soft Moderation on Social Media through Contrastive Textual Deviation
Pujan Paudel, Mohammad Hammas Saeed, Rebecca Auger, Chris Wells, Gianluca Stringhini
arXiv.org Artificial Intelligence
Automated soft moderation systems are unable to ascertain whether a post supports or refutes a false claim, resulting in a large number of contextual false positives. This limits their effectiveness: for example, they undermine trust in health experts by adding warnings to their posts, or resort to vague warnings instead of granular fact-checks, which desensitizes users. In this paper, we propose to incorporate stance detection into existing automated soft-moderation pipelines, with the goal of ruling out contextual false positives and providing more precise recommendations for which social media content should receive warnings. We develop a textual deviation task called Contrastive Textual Deviation (CTD) and show that it outperforms existing stance detection approaches when applied to soft moderation. We then integrate CTD into Lambretta, the state-of-the-art system for automated soft moderation, showing that our approach can reduce contextual false positives from 20% to 2.1%, providing another important building block towards deploying reliable automated soft moderation tools on social media.
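The core idea of the abstract, attaching a warning only when a post actually supports a debunked claim, can be sketched as a stance-gating step. This is a hypothetical illustration, not the paper's CTD model: `stance_of` is a toy stand-in for any stance detector that labels a post as supporting or refuting a fact-checked false claim.

```python
# Hypothetical sketch of stance-gated soft moderation.
# `stance_of` is a toy keyword heuristic standing in for a real
# stance model (the paper's CTD component is not reproduced here).

def stance_of(post: str, claim: str) -> str:
    """Return 'refute' if the post pushes back on the claim, else 'support'."""
    refute_cues = ("false", "debunked", "not true", "no evidence")
    if any(cue in post.lower() for cue in refute_cues):
        return "refute"
    return "support"

def should_warn(post: str, matched_claim: str) -> bool:
    """Attach a warning only if the post supports the false claim,
    ruling out contextual false positives such as debunking posts."""
    return stance_of(post, matched_claim) == "support"

claim = "5G towers spread the virus"
print(should_warn("5G towers spread the virus, share this!", claim))   # True
print(should_warn("This 5G claim has been debunked, no evidence.", claim))  # False
```

In a real pipeline, the claim-matching step (e.g. Lambretta's retrieval of posts related to a fact-checked claim) would run first, and this gate would filter its candidates before any warning is applied.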
Jul-30-2024
- Genre:
- Research Report > New Finding (1.00)
- Industry:
- Government
- Health & Medicine > Therapeutic Area
- Immunology (0.70)
- Infections and Infectious Diseases (0.48)
- Information Technology
- Security & Privacy (0.93)
- Services (1.00)
- Media > News (0.95)