ARF-RLHF: Adaptive Reward-Following for RLHF through Emotion-Driven Self-Supervision and Trace-Biased Dynamic Optimization
Current RLHF methods such as PPO and DPO typically reduce human preferences to binary labels, which are costly to obtain and too coarse to reflect individual variation. We observe that expressions of satisfaction and dissatisfaction follow stable linguistic patterns across users, indicating that more informative supervisory signals can be extracted from free-form feedback. Building on this insight, we introduce Adaptive Reward-Following (ARF), which converts natural feedback into continuous preference trajectories and optimizes them with a novel algorithm, TraceBias. Across diverse LLMs and preference domains, ARF consistently outperforms PPO and DPO, improving alignment by up to 7.6%. Our results demonstrate that continuous reward modeling provides a scalable path toward personalized and theoretically grounded RLHF.
arXiv.org Artificial Intelligence
Oct-28-2025
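
The abstract above gives no implementation details, so the following is only a minimal illustrative sketch of the core idea it describes: mapping free-form feedback to a continuous reward signal rather than a binary preference label. The cue lexicon, `score_feedback`, and the EMA-smoothed trajectory are assumptions introduced for illustration; they are not the paper's ARF pipeline or TraceBias algorithm.

```python
# Hypothetical sketch: continuous preference signals from free-form feedback.
# Neither the lexicon nor the smoothing comes from the ARF paper; they only
# contrast a continuous reward with the binary labels used by PPO/DPO.

SATISFACTION_CUES = {
    "great": 1.0, "helpful": 0.9, "thanks": 0.8,
    "okay": 0.5, "confusing": 0.2, "wrong": 0.0,
}

def score_feedback(text: str, default: float = 0.5) -> float:
    """Map a free-form feedback string to a scalar reward in [0, 1]."""
    weights = [w for cue, w in SATISFACTION_CUES.items() if cue in text.lower()]
    return sum(weights) / len(weights) if weights else default

def preference_trajectory(feedback_stream, alpha: float = 0.3):
    """Exponential moving average of per-turn rewards: a continuous
    preference trace over a conversation instead of isolated labels."""
    trace, ema = [], None
    for text in feedback_stream:
        reward = score_feedback(text)
        ema = reward if ema is None else alpha * reward + (1 - alpha) * ema
        trace.append(ema)
    return trace

print(preference_trajectory([
    "thanks, very helpful!",   # strongly positive turn
    "this is wrong",           # negative turn pulls the trace down
    "okay, I guess",           # lukewarm turn
]))  # -> [0.85, 0.595, 0.5665]
```

The EMA here stands in for whatever trajectory construction the paper actually uses; it only shows how successive free-form reactions could yield a smoothly varying supervisory signal.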