Look Before You Leap: Estimating LLM Benchmark Scores from Descriptions
Jungsoo Park, Ethan Mendes, Gabriel Stanovsky, Alan Ritter
arXiv.org Artificial Intelligence
Progress in large language models is constrained by an evaluation bottleneck: build a benchmark, run models, then iterate. We ask: can we forecast outcomes before running any experiments, to inform earlier study design? For example, a team building an AI assistant for a given task can estimate whether expected performance is around 50 or closer to 80, evidence that informs whether to proceed to a pilot study, how to scope it, and how to allocate resources. We study text-only performance forecasting, where a model predicts a score from a redacted task description and intended configuration, with no access to dataset instances. To support systematic study, we curate PRECOG, a corpus of redacted description-performance pairs spanning diverse tasks, domains, and metrics. We scrape task and configuration descriptions from arXiv, yielding 2,290 instances covering 1,519 papers, and construct a leakage-free test split using papers published after the knowledge cutoff of the evaluated models. Experiments show the task is challenging but feasible: reasoning models achieve moderate prediction performance with well-calibrated uncertainty, reaching mean absolute error as low as 9.9 at high confidence thresholds. We further test a zero-leakage setting, forecasting on newly released datasets or experiments before their papers are indexed, where GPT-5 with built-in web search still attains nontrivial prediction accuracy. Overall, our corpus and analyses offer an initial step toward open-ended anticipatory evaluation, supporting difficulty estimation and smarter experiment prioritization.
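The abstract reports MAE "at high confidence thresholds," i.e., a selective-prediction evaluation in which forecasts below a confidence cutoff are discarded and error is measured only on the retained subset. A minimal sketch of that protocol, with hypothetical forecast data (the function name, scores, and confidences are illustrative, not from the paper):

```python
# Illustrative sketch (not the authors' code): selective forecasting
# evaluation. The model abstains below a confidence threshold, and MAE
# is computed only over the confident subset, alongside coverage.

def selective_mae(preds, targets, confs, threshold):
    """MAE over forecasts whose confidence meets the threshold,
    plus the fraction of forecasts retained (coverage)."""
    kept = [(p, t) for p, t, c in zip(preds, targets, confs) if c >= threshold]
    if not kept:
        return None, 0.0  # nothing retained at this threshold
    mae = sum(abs(p - t) for p, t in kept) / len(kept)
    coverage = len(kept) / len(preds)
    return mae, coverage

# Hypothetical benchmark-score forecasts on a 0-100 scale.
preds   = [72.0, 55.0, 81.0, 40.0]
targets = [70.0, 48.0, 79.0, 60.0]
confs   = [0.9, 0.4, 0.8, 0.3]

mae, cov = selective_mae(preds, targets, confs, threshold=0.75)
print(mae, cov)
```

Raising the threshold typically lowers MAE at the cost of coverage, which is the trade-off the paper's "MAE as low as 9.9 at high confidence" result reflects.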
Dec-3-2025