ExtremeAIGC: Benchmarking LMM Vulnerability to AI-Generated Extremist Content
Bhavik Chandna, Mariam Aboujenane, Usman Naseem
arXiv.org Artificial Intelligence
Large Multimodal Models (LMMs) are increasingly vulnerable to AI-generated extremist content, including photorealistic images and text, which can be used to bypass safety mechanisms and generate harmful outputs. However, existing datasets for evaluating LMM robustness offer limited exploration of extremist content, often lacking AI-generated images, diverse image generation models, and comprehensive coverage of historical events, which hinders a complete assessment of model vulnerabilities. To fill this gap, we introduce ExtremeAIGC, a benchmark dataset and evaluation framework designed to assess LMM vulnerabilities against such content. ExtremeAIGC simulates real-world events and malicious use cases by curating diverse text- and image-based examples crafted using state-of-the-art image generation techniques. Our study reveals alarming weaknesses in LMMs, demonstrating that even cutting-edge safety measures fail to prevent the generation of extremist material. We systematically quantify the success rates of various attack strategies, exposing critical gaps in current defenses and emphasizing the need for more robust mitigation strategies.
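The per-strategy success-rate quantification mentioned above can be sketched as a simple aggregation. This is a minimal illustration only; the record layout and field names (`strategy`, `harmful`) are hypothetical and not drawn from the ExtremeAIGC framework itself:

```python
from collections import defaultdict

def attack_success_rates(results):
    """Compute per-strategy attack success rate (ASR).

    `results` is a list of dicts with hypothetical fields:
      - "strategy": name of the attack strategy used
      - "harmful":  True if the model produced the targeted harmful output
    Returns {strategy: successes / attempts}.
    """
    attempts = defaultdict(int)
    successes = defaultdict(int)
    for r in results:
        attempts[r["strategy"]] += 1
        successes[r["strategy"]] += int(bool(r["harmful"]))
    return {s: successes[s] / attempts[s] for s in attempts}

# Toy example: two strategies with mixed outcomes.
demo = [
    {"strategy": "direct_prompt", "harmful": False},
    {"strategy": "direct_prompt", "harmful": True},
    {"strategy": "image_jailbreak", "harmful": True},
    {"strategy": "image_jailbreak", "harmful": True},
]
rates = attack_success_rates(demo)
```

Here `rates` maps each strategy to its empirical success fraction (0.5 and 1.0 for the toy data), which is the basic quantity a benchmark like this would report per model and per attack type.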
Mar-12-2025