Smaller Models, Smarter Rewards: A Two-Sided Approach to Process and Outcome Rewards
Groeneveld, Jan Niklas, Qin, Xi, Schaefer, Alexander, Oren, Yaad
–arXiv.org Artificial Intelligence
Generating high-quality code remains a challenge for Large Language Models (LLMs). Reward models, which judge either final outcomes or intermediate steps, are a necessary intermediate step in evolving reasoning models for this task. A decoder-only transformer can be turned into a reward model by adding a regression layer and applying supervised fine-tuning. While reflection capabilities are known to increase with model size, we investigate whether state-of-the-art small language models such as the Phi-4 family can serve as usable reward models that blend process rewards and outcome rewards. To this end, we construct a dataset of code samples with correctness labels derived from the APPS coding challenge benchmark, and we train a value-head model to estimate the success probability of intermediate outputs. Our evaluation shows that small LLMs can serve as effective reward models, or code-evaluation critics, successfully identifying correct solutions among multiple candidates. Using this critic, we achieve over a 20% improvement in selecting the most accurate code among multiple generations.
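The selection mechanism the abstract describes, a value head that scores candidates so the highest-scoring one can be picked best-of-n, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the toy hidden states, and the hand-set weights are all assumptions made for the example.

```python
import math

def value_head(hidden_state, weights, bias):
    # Regression layer on top of the decoder's final hidden state:
    # a dot product plus bias, squashed through a sigmoid to yield
    # an estimated success probability in (0, 1).
    logit = sum(h * w for h, w in zip(hidden_state, weights)) + bias
    return 1.0 / (1.0 + math.exp(-logit))

def best_of_n(candidate_states, weights, bias):
    # Score each candidate solution's hidden state with the value head
    # and return the candidate the critic ranks highest.
    return max(candidate_states, key=lambda h: value_head(h, weights, bias))

# Toy example: hidden states for three generated solutions (illustrative).
candidates = [[0.2, -0.1], [0.9, 0.4], [-0.3, 0.5]]
w, b = [1.0, 1.0], 0.0
print(best_of_n(candidates, w, b))  # picks the candidate with the largest score
```

In practice the hidden state would come from the fine-tuned small language model and the weights from supervised training on the correctness-labeled APPS samples; the reranking step itself is this simple argmax over critic scores.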
Dec-11-2025