Anatomy of an AI-powered malicious social botnet
Kai-Cheng Yang, Filippo Menczer
arXiv.org Artificial Intelligence
Concerns have been raised that large language models (LLMs) such as ChatGPT could be used to produce deceptive fake content at scale, although evidence thus far remains anecdotal. This paper presents a case study of a Twitter botnet that appears to employ ChatGPT to generate human-like content. Using heuristics, we identify 1,140 accounts and validate them via manual annotation. These accounts form a dense cluster of fake personas that exhibit similar behaviors, including posting machine-generated content and stolen images, and engage with each other through replies and retweets. The ChatGPT-generated content promotes suspicious websites and spreads harmful comments. While the accounts in the AI botnet can be detected through their coordination patterns, current state-of-the-art LLM content classifiers fail to discriminate between them and human accounts in the wild. These findings highlight the threats posed by AI-enabled social bots.
Jul-30-2023