Beyond Training Objectives: Interpreting Reward Model Divergence in Large Language Models