MALCOM: Generating Malicious Comments to Attack Neural Fake News Detection Models
Le, Thai, Wang, Suhang, Lee, Dongwon
Therefore, to mitigate such problems, researchers have developed state-of-the-art (SOTA) models to auto-detect fake news on social media using sophisticated data science and machine learning techniques. In this work, we then ask "what if adversaries attempt to attack such detection models?" and investigate related issues by (i) proposing a novel attack scenario against fake news detectors, in which adversaries can post malicious comments toward news articles to mislead SOTA fake news detectors, and (ii) developing Malcom, an end-to-end adversarial comment generation framework to achieve such an attack. Through a comprehensive evaluation, we demonstrate that Malcom can successfully mislead five of the latest neural detection models into always outputting targeted real and fake news labels about 94% and 93.5% of the time on average, respectively. Furthermore, Malcom can also fool black-box fake news detectors into always outputting real news labels 90% of the time on average. We also compare our attack model with four baselines across two real-world datasets, not only on attack performance but also on the quality, coherency, transferability, and robustness of generated comments. We release the source code of Malcom at https://github.com/lethaiq/MALCOM

[Figure: attack example — Real Comment: "admitting im not going to read this (...)"; Malcom: "hes a conservative from a few months ago"; Prediction Change: Real News → Fake News]
Sep-27-2020
- Country:
- Asia > Middle East
- Jordan (0.04)
- North America > United States
- Pennsylvania (0.04)
- Genre:
- Research Report (0.82)