Hybrid Reinforcement: When Reward Is Sparse, It's Better to Be Dense
Leitian Tao, Ilia Kulikov, Swarnadeep Saha, Tianlu Wang, Jing Xu, Sharon Li, Jason E. Weston, Ping Yu
Post-training of large language models (LLMs) for reasoning increasingly relies on verifiable rewards: deterministic checkers that provide 0-1 correctness signals. While reliable, such binary feedback is brittle: many tasks admit partially correct or alternative answers that verifiers under-credit, and the resulting all-or-nothing supervision limits learning. Reward models offer richer, continuous feedback, which can serve as a complementary supervisory signal to verifiers. We introduce HERO (Hybrid Ensemble Reward Optimization), a reinforcement learning framework that integrates verifier signals with reward-model scores in a structured way. HERO employs stratified normalization to bound reward-model scores within verifier-defined groups, preserving correctness while refining quality distinctions, and variance-aware weighting to emphasize challenging prompts where dense signals matter most. Across diverse mathematical reasoning benchmarks, HERO consistently outperforms RM-only and verifier-only baselines, with strong gains on both verifiable and hard-to-verify tasks. Our results show that hybrid reward design retains the stability of verifiers while leveraging the nuance of reward models to advance reasoning.
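To make the two mechanisms concrete, here is a minimal Python sketch of what stratified normalization and variance-aware weighting could look like. It is an illustration based only on the abstract, not the paper's implementation: the interval bounds `lo`/`hi`, the min-max normalization within groups, and the variance squashing in `variance_aware_weight` are all assumptions.

```python
import numpy as np

def hero_rewards(rm_scores, verifier_labels,
                 lo=(-1.0, 0.0), hi=(0.0, 1.0), eps=1e-8):
    """Hypothetical sketch of stratified normalization.

    rm_scores: (n,) continuous reward-model scores for n sampled
        responses to one prompt.
    verifier_labels: (n,) 0/1 verifier correctness signals.

    RM scores are min-max normalized separately within the
    verifier-incorrect and verifier-correct groups, then mapped into
    disjoint intervals (lo and hi, chosen here for illustration) so a
    verified-correct answer always outranks an incorrect one while the
    RM still refines quality distinctions within each group.
    """
    rm_scores = np.asarray(rm_scores, dtype=float)
    labels = np.asarray(verifier_labels, dtype=int)
    rewards = np.zeros_like(rm_scores)
    for label, (a, b) in ((0, lo), (1, hi)):
        mask = labels == label
        if not mask.any():
            continue
        s = rm_scores[mask]
        # normalize within the verifier-defined group to [0, 1]
        z = (s - s.min()) / (s.max() - s.min() + eps)
        # map into the group's bounded interval
        rewards[mask] = a + z * (b - a)
    return rewards

def variance_aware_weight(rewards, w_min=0.5, w_max=1.5, v_ref=0.25):
    """Upweight prompts whose sampled responses disagree (high reward
    variance), i.e. the challenging prompts where dense signals matter
    most. The weight range and reference variance are illustrative."""
    v = float(np.var(rewards))
    return w_min + (w_max - w_min) * min(1.0, v / v_ref)
```

In a GRPO-style setup, one plausible use is to compute `hero_rewards` per prompt over its group of sampled responses and scale the resulting advantages by `variance_aware_weight`, so easy prompts (low reward variance) contribute less to the policy update.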
arXiv.org Artificial Intelligence
Oct-20-2025