How to build a better AI benchmark

MIT Technology Review 

Developers of these coding agents aren't necessarily doing anything as straightforward as cheating, but they're crafting approaches that are too neatly tailored to the specifics of the benchmark. The initial SWE-Bench test set was limited to programs written in Python, which meant developers could gain an advantage by training their models exclusively on Python code. Soon, Yang noticed that high-scoring models would fail completely when tested on different programming languages--revealing an approach to the test that he describes as "gilded."

"It looks nice and shiny at first glance, but then you try to run it on a different language and the whole thing just kind of falls apart," Yang says. "You're designing to make a SWE-Bench agent, which is much less interesting."

The SWE-Bench issue is a symptom of a more sweeping--and complicated--problem in AI evaluation, one that's increasingly sparking heated debate: The benchmarks the industry uses to guide development are drifting further and further away from evaluating actual capabilities, calling their basic value into question. Making the situation worse, several benchmarks, most notably FrontierMath and Chatbot Arena, have recently come under fire for an alleged lack of transparency.

Nevertheless, benchmarks still play a central role in model development, even if few experts are willing to take their results at face value. OpenAI cofounder Andrej Karpathy recently described the situation as "an evaluation crisis": the industry has fewer trusted methods for measuring capabilities and no clear path to better ones.

"Historically, benchmarks were the way we evaluated AI systems," says Vanessa Parli, director of research at Stanford University's Institute for Human-Centered AI. "Is that the way we want to evaluate systems going forward?"