Process Reward Models That Think
Muhammad Khalifa, Rishabh Agarwal, Lajanugen Logeswaran, Jaekyeom Kim, Hao Peng, Moontae Lee, Honglak Lee, Lu Wang
arXiv.org Artificial Intelligence
Step-by-step verifiers -- also known as process reward models (PRMs) -- are a key ingredient for test-time scaling. However, PRMs require step-level supervision, making them expensive to train. This work aims to build data-efficient PRMs as verbalized step-wise reward models that verify every step in the solution by generating a verification chain-of-thought (CoT). We propose ThinkPRM, a long CoT verifier fine-tuned on orders of magnitude fewer process labels than those required by discriminative PRMs. Our approach capitalizes on the inherent reasoning abilities of long CoT models and outperforms LLM-as-a-Judge and discriminative verifiers -- using only 1% of the process labels in PRM800K -- across several challenging benchmarks. Specifically, ThinkPRM beats the baselines on ProcessBench, MATH-500, and AIME '24 under best-of-N selection and reward-guided search. In an out-of-domain evaluation on a subset of GPQA-Diamond and LiveCodeBench, our PRM surpasses discriminative verifiers trained on the full PRM800K by 8% and 4.5%, respectively. Lastly, under the same token budget, ThinkPRM scales up verification compute more effectively than LLM-as-a-Judge, outperforming it by 7.2% on a subset of ProcessBench. Our work highlights the value of generative, long CoT PRMs that can scale test-time compute for verification while requiring minimal supervision for training. Our code, data, and models are released at https://github.com/mukhal/thinkprm.
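The best-of-N setup the abstract describes can be made concrete with a short sketch: sample N candidate solutions, have the generative verifier produce a verification CoT with a per-step judgment for each candidate, and keep the candidate it scores highest. The sketch below is illustrative only; the function names, the "Step k: correct/incorrect" label format, and the fraction-of-correct-steps aggregation are assumptions made for this example, not the released ThinkPRM interface.

```python
# Minimal sketch of best-of-N selection with a generative, CoT-based process
# verifier, in the spirit of ThinkPRM. `generate_solutions` and
# `verify_with_cot` are hypothetical callables standing in for a policy model
# and a long-CoT verifier; they are parameters here, not a real API.
import re
from typing import Callable, List

def score_from_verification_cot(cot: str) -> float:
    """Map a verification chain-of-thought to a scalar solution score.

    Assumes (for illustration) that the verifier emits one judgment per
    solution step, e.g. "Step 3: incorrect". The score is the fraction of
    steps judged correct.
    """
    labels = re.findall(r"step\s+\d+:\s*(correct|incorrect)", cot, re.IGNORECASE)
    if not labels:
        return 0.0  # unparseable verification -> lowest score
    return sum(label.lower() == "correct" for label in labels) / len(labels)

def best_of_n(problem: str,
              generate_solutions: Callable[[str, int], List[str]],
              verify_with_cot: Callable[[str, str], str],
              n: int = 8) -> str:
    """Sample n candidate solutions and return the one the verifier rates highest."""
    candidates = generate_solutions(problem, n)
    scored = [(score_from_verification_cot(verify_with_cot(problem, sol)), sol)
              for sol in candidates]
    return max(scored, key=lambda pair: pair[0])[1]
```

Min-over-steps is the other common aggregation for process reward models; it penalizes a single incorrect step more heavily than the average used in this sketch.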
Dec-9-2025