RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors
Liam Dugan, Alyssa Hwang, Filip Trhlik, Josh Magnus Ludan, Andrew Zhu, Hainiu Xu, Daphne Ippolito, Chris Callison-Burch
arXiv.org Artificial Intelligence
Many commercial and open-source models claim to detect machine-generated text with extremely high accuracy (99% or more). However, very few of these detectors are evaluated on shared benchmark datasets, and even when they are, the datasets used for evaluation are insufficiently challenging: they lack variation in sampling strategy, adversarial attacks, and open-source generative models. In this work we present RAID, the largest and most challenging benchmark dataset for machine-generated text detection. RAID includes over 6 million generations spanning 11 models, 8 domains, 11 adversarial attacks, and 4 decoding strategies. Using RAID, we evaluate the out-of-domain and adversarial robustness of 8 open-source and 4 closed-source detectors and find that current detectors are easily fooled by adversarial attacks, variations in sampling strategy, repetition penalties, and unseen generative models. We release our data along with a leaderboard to encourage future research.
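The evaluation the abstract describes, scoring detector outputs on generations labeled by model, domain, and adversarial attack, can be sketched as follows. This is a minimal illustration, not the RAID API: the records and the per-attack accuracy helper are hypothetical stand-ins for the benchmark's actual data loading and metrics.

```python
from collections import defaultdict

# Hypothetical records: each entry holds the adversarial attack applied
# ("none" for unattacked text), the ground-truth label, and a detector's
# machine-probability score. RAID's real data is far larger and richer.
records = [
    {"attack": "none",       "machine": True,  "score": 0.97},
    {"attack": "none",       "machine": False, "score": 0.05},
    {"attack": "homoglyph",  "machine": True,  "score": 0.32},
    {"attack": "homoglyph",  "machine": False, "score": 0.08},
    {"attack": "whitespace", "machine": True,  "score": 0.55},
    {"attack": "whitespace", "machine": False, "score": 0.11},
]

def accuracy_by_attack(records, threshold=0.5):
    """Group detector scores by attack and compute accuracy at a fixed threshold."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        predicted_machine = r["score"] >= threshold
        correct[r["attack"]] += predicted_machine == r["machine"]
        total[r["attack"]] += 1
    return {attack: correct[attack] / total[attack] for attack in total}

print(accuracy_by_attack(records))
# → {'none': 1.0, 'homoglyph': 0.5, 'whitespace': 1.0}
```

Slicing accuracy this way, per attack, per decoding strategy, or per generative model, is what exposes the robustness gaps the paper reports: a detector that looks near-perfect on unattacked text can degrade sharply on an attacked slice.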
Jun-10-2024
- Genre:
- Research Report > New Finding (0.67)
- Industry:
- Government (1.00)
- Information Technology > Security & Privacy (1.00)
- Leisure & Entertainment (1.00)
- Media > News (1.00)
- Technology:
  - Information Technology
    - Artificial Intelligence
      - Machine Learning > Neural Networks > Deep Learning (1.00)
      - Natural Language
        - Chatbot (0.99)
        - Generation (0.68)
        - Large Language Model (1.00)
      - Representation & Reasoning (1.00)
    - Communications > Social Media (1.00)
    - Security & Privacy (1.00)
    - Software (1.00)