Evaluating LLM-Contaminated Crowdsourcing Data Without Ground Truth
Yichi Zhang, Jinlong Pang, Zhaowei Zhu, Yang Liu
arXiv.org Artificial Intelligence
The recent success of generative AI highlights the crucial role of high-quality human feedback in building trustworthy AI systems. However, the increasing use of large language models (LLMs) by crowdsourcing workers poses a significant challenge: datasets intended to reflect human input may be compromised by LLM-generated responses. Existing LLM detection approaches often rely on high-dimensional training data such as text, making them unsuitable for annotation tasks like multiple-choice labeling. In this work, we investigate the potential of peer prediction -- a mechanism that evaluates the information within workers' responses without using ground truth -- to mitigate LLM-assisted cheating in crowdsourcing with a focus on annotation tasks. Our approach quantifies the correlations between worker answers while conditioning on (a subset of) LLM-generated labels available to the requester. Building on prior research, we propose a training-free scoring mechanism with theoretical guarantees under a crowdsourcing model that accounts for LLM collusion. We establish conditions under which our method is effective and empirically demonstrate its robustness in detecting low-effort cheating on real-world crowdsourcing datasets.
Nov-7-2025
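The scoring idea described in the abstract, rewarding correlation between workers' answers beyond what the requester's LLM labels already explain, can be sketched with a toy correlated-agreement-style peer score. This is an illustrative simplification, not the paper's mechanism: the function name, the data layout, and the step of conditioning by grouping tasks on their LLM label are all assumptions. Each worker earns a bonus for agreeing with peers on the same task and a penalty for "agreeing" across mismatched tasks, computed within groups of tasks that share an LLM label, so a worker who merely copies the LLM label is constant within each group and nets a score near zero.

```python
from collections import defaultdict

def ca_scores_conditioned(worker_labels, llm_labels):
    """Toy correlated-agreement peer scores, conditioned on LLM labels.

    worker_labels: {worker: {task: label}}; llm_labels: {task: label}.
    Within each group of tasks sharing an LLM label, a worker's score is
    (agreement with peers on the same task) minus (agreement on mismatched
    task pairs); scores are averaged over groups. Hypothetical sketch only.
    """
    workers = list(worker_labels)
    groups = defaultdict(list)            # LLM label -> tasks with that label
    for task, label in llm_labels.items():
        groups[label].append(task)

    scores = {}
    for i in workers:
        diffs = []
        for tasks in groups.values():
            agree = pairs = 0             # same-task comparisons
            x_agree = x_pairs = 0         # mismatched-task comparisons
            for j in workers:
                if j == i:
                    continue
                for t in tasks:
                    if t in worker_labels[i] and t in worker_labels[j]:
                        pairs += 1
                        agree += worker_labels[i][t] == worker_labels[j][t]
                    for u in tasks:
                        if u == t:
                            continue
                        if t in worker_labels[i] and u in worker_labels[j]:
                            x_pairs += 1
                            x_agree += worker_labels[i][t] == worker_labels[j][u]
            if pairs and x_pairs:
                diffs.append(agree / pairs - x_agree / x_pairs)
        scores[i] = sum(diffs) / len(diffs) if diffs else 0.0
    return scores
```

In a small example with three honest workers and one worker who copies the LLM labels, the copier's answers are constant within each LLM-label group, so bonus and penalty cancel and the copier scores zero while honest workers score positively.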