Bootstrapped Mixed Rewards for RL Post-Training: Injecting Canonical Action Order
Prakhar Gupta, Vaibhav Gupta
arXiv.org Artificial Intelligence
Post-training with reinforcement learning (RL) typically optimizes a single scalar objective and ignores structure in how solutions are produced. We ask whether a scalar hint toward a canonical solver ordering, used only during RL post-training, improves performance even when the model was fine-tuned on randomized solution sequences. On Sudoku, we train a Transformer with standard fine-tuning on randomized solving orders, then post-train it with Group Relative Policy Optimization (GRPO) using two rewards: cell accuracy and an ordering reward that increases when the model's emission order aligns with the solver order. To compare the signals cleanly, we combine them via fixed mixtures and use a simple bootstrapped scaling to equalize component magnitudes at initialization. Mixed rewards generally outperform cell-only optimization: the best mixture yields substantially higher test accuracy than the fine-tuned-only model trained on random-order sequences and approaches the accuracy of the fine-tuned-only model trained on solver-order sequences. These results suggest that coarse ordering signals can steer RL post-training toward solver-order trajectories without modifying supervised data or architecture.
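The reward mixture described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the exact form of the ordering reward, and the interpretation of "bootstrapped scaling" (a scale factor estimated from initial rollouts so both components start with comparable mean magnitude) are assumptions for the sake of the example.

```python
import numpy as np

def cell_accuracy(pred_grid, sol_grid):
    # Fraction of grid cells matching the solution (hypothetical reward component).
    return float(np.mean(np.asarray(pred_grid) == np.asarray(sol_grid)))

def ordering_reward(emission_order, solver_order):
    # Fraction of emission positions agreeing with the canonical solver order;
    # a simple stand-in for the paper's order-alignment signal.
    return float(np.mean(np.asarray(emission_order) == np.asarray(solver_order)))

def bootstrap_scale(acc_samples, order_samples):
    # Scale factor so the ordering component matches the accuracy component's
    # mean magnitude at initialization (assumed form of "bootstrapped scaling"),
    # estimated once from rewards of initial rollouts.
    mean_acc, mean_ord = np.mean(acc_samples), np.mean(order_samples)
    return float(mean_acc / mean_ord) if mean_ord > 0 else 1.0

def mixed_reward(acc, order, w, scale):
    # Fixed mixture with weight w on the (rescaled) ordering reward.
    return (1.0 - w) * acc + w * scale * order
```

In a GRPO loop, `mixed_reward` would score each sampled completion in a group before computing group-relative advantages; sweeping the fixed weight `w` gives the family of mixtures the paper compares against cell-only optimization (`w = 0`).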
Dec-5-2025