Hard2Verify: A Step-Level Verification Benchmark for Open-Ended Frontier Math

Shrey Pandit, Austin Xu, Xuan-Phi Nguyen, Yifei Ming, Caiming Xiong, Shafiq Joty

arXiv.org Artificial Intelligence 

Large language model (LLM)-based reasoning systems have recently achieved gold medal-level performance in the IMO 2025 competition, writing mathematical proofs in which, to receive full credit, each step must be not only correct but also sufficiently supported. To train LLM-based reasoners in such challenging, open-ended settings, strong verifiers capable of catching step-level mistakes are a necessary prerequisite. We introduce Hard2Verify, a human-annotated, step-level verification benchmark produced with over 500 hours of human labor. Hard2Verify is designed to rigorously assess step-level verifiers at the frontier: verifiers must provide step-level annotations or identify the first error in responses generated by frontier LLMs to very recent, challenging, and open-ended math questions. We evaluate 29 generative critics and process reward models, demonstrating that, beyond a few standouts, open-source verifiers lag behind closed-source models. We subsequently analyze what drives poor performance in step-level verification, the impact of scaling verifier compute, and fundamental questions such as self-verification and verification-generation dynamics.

[Figure 1: Comparison of models evaluated on both ProcessBench (Zheng et al., 2024a) and our Hard2Verify benchmark. Past benchmarks do not sufficiently evaluate in the frontier-level math settings that Hard2Verify does; on the same error identification task, Qwen2.5-Math-PRM-72B …]

Mathematical reasoning serves as a gold-standard evaluation setting for benchmarking reasoning progress in large language models (LLMs). Over the past half-decade, benchmarks have been introduced to assess LLMs at the grade-school (Cobbe et al., 2021), high-school (Hendrycks et al., 2021), university (Zhang et al., 2023), and competition math level (MMA, 2025; He et al., 2024a; Gao et al., 2024).
However, the mathematical reasoning ability of LLMs has outpaced benchmark creation, with each subsequent frontier LLM release saturating new benchmarks, most recently with GPT-5 Pro achieving 96.5%+ on AIME 2024. As a result, recent efforts (Glazer et al., 2024; Phan et al., 2025) have written novel, unseen mathematical questions to test LLMs. This paradigm requires training data whose solutions are easily verifiable, i.e., can be checked against a known ground truth by string matching or symbolic checkers. Math benchmarks, for the most part, also adopt this verifiable setup, where a model response is considered correct if its final answer matches the established ground truth.
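The final-answer checking described above can be sketched as follows. This is a minimal illustration, not the paper's evaluation code: the function name and the choice of SymPy as the symbolic checker are assumptions for the example.

```python
# Illustrative sketch of "verifiable" answer checking: a response is
# marked correct if its final answer matches the ground truth, either
# by exact string match or by symbolic equivalence. Names are
# hypothetical, not taken from the paper.
from sympy import simplify, sympify

def is_correct(predicted: str, ground_truth: str) -> bool:
    # Fast path: exact string match.
    if predicted.strip() == ground_truth.strip():
        return True
    # Fallback: symbolic equivalence, e.g. "2/4" vs "1/2", "x + x" vs "2*x".
    try:
        return simplify(sympify(predicted) - sympify(ground_truth)) == 0
    except (SyntaxError, TypeError, ValueError):
        # Unparseable answers (e.g. free-form prose) cannot be auto-verified.
        return False
```

Note that such checkers only judge the final answer; as the abstract argues, they say nothing about whether the intermediate proof steps are correct or sufficiently supported, which is the gap Hard2Verify targets.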