Advantage Shaping as Surrogate Reward Maximization: Unifying Pass@K Policy Gradients
Christos Thrampoulidis, Sadegh Mahdavi, Wenlong Deng
arXiv.org Artificial Intelligence
This note reconciles two seemingly distinct approaches to policy gradient optimization for the Pass@K objective in reinforcement learning with verifiable rewards: (1) direct REINFORCE-style methods, and (2) advantage-shaping techniques that directly modify GRPO. We show that these are two sides of the same coin. By reverse-engineering existing advantage-shaping algorithms, we reveal that they implicitly optimize surrogate rewards. We specifically interpret practical "hard-example up-weighting" modifications to GRPO as reward-level regularization. Conversely, starting from surrogate reward objectives, we provide a simple recipe for deriving both existing and new advantage-shaping methods. This perspective provides a lens for RLVR policy gradient optimization beyond our original motivation of Pass@K.
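To make the connection concrete, here is a minimal illustrative sketch of how an advantage-shaping modification to GRPO can mirror a Pass@K-style surrogate reward. The function names, the z-score baseline, and the specific shaping factor `k * (1 - p)**(k - 1)` (the derivative of the surrogate `1 - (1 - p)**k` with respect to the per-group success rate `p`) are assumptions chosen for illustration, not the paper's exact recipe:

```python
import numpy as np

def grpo_advantages(rewards):
    # Standard GRPO-style advantage: z-score binary rewards within a
    # group of rollouts for the same prompt.
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def passk_shaped_advantages(rewards, k):
    # Illustrative "hard-example up-weighting": scale the group's
    # advantages by k * (1 - p)**(k - 1), where p is the group success
    # rate. This factor is the gradient of the Pass@K surrogate
    # 1 - (1 - p)**k with respect to p, so groups where success is rare
    # (small p) receive larger updates, and k = 1 recovers plain GRPO.
    r = np.asarray(rewards, dtype=float)
    p = r.mean()
    return grpo_advantages(r) * k * (1.0 - p) ** (k - 1)
```

For example, with rewards `[1, 0, 0, 0]` and `k=1` the shaping factor is 1, so the output matches plain GRPO advantages; larger `k` amplifies updates on prompts the policy rarely solves.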
Nov-17-2025