Peeking Behind Closed Doors: Risks of LLM Evaluation by Private Data Curators
Hritik Bansal, Pratyush Maini
The rapid advancement in building large language models (LLMs) has intensified competition among big-tech companies and AI startups. In this landscape, model evaluations are critical for product and investment-related decision-making. While open evaluation sets like MMLU initially drove progress, concerns around data contamination and data bias have repeatedly called their reliability into question. This has led to the rise of private data curators, who conduct hidden evaluations using high-quality, self-curated test prompts and their own expert annotators. In this paper, we argue that despite their potential advantages in addressing contamination issues, private evaluations introduce unintended financial and evaluation risks. In particular, a key concern is the potential conflict of interest arising from private data curators' business relationships with their clients (leading LLM firms). In addition, we highlight that the subjective preferences of private expert annotators will introduce an inherent evaluation bias toward models trained on the private curators' data. Overall, this paper lays the foundation for studying the risks of private evaluations, aiming to spur wide-ranging community discussion and policy changes.
arXiv.org Artificial Intelligence
Feb-9-2025
- Country:
- North America > United States (0.15)
- Genre:
- Research Report (0.42)
- Industry:
- Information Technology > Security & Privacy (0.91)