Deprecating Benchmarks: Criteria and Framework
Ayrton San Joaquin, Rokas Gipiškis, Leon Staufer, Ariel Gil
arXiv.org Artificial Intelligence
As frontier artificial intelligence (AI) models rapidly advance, benchmarks are integral to comparing models and measuring their progress across task-specific domains. However, there is little guidance on when and how benchmarks should be deprecated once they cease to serve their purpose effectively. This risks benchmark scores overstating model capabilities or, worse, obscuring capabilities and enabling safety-washing. Based on a review of benchmarking practices, we propose criteria for deciding when to fully or partially deprecate benchmarks, and a framework for carrying out that deprecation. Our work aims to advance the state of benchmarking towards rigorous, high-quality evaluations, especially for frontier models, and our recommendations are intended to benefit benchmark developers, benchmark users, AI governance actors (across governments, academia, and industry panels), and policymakers.
Jul-10-2025