Data-Centric Human Preference Optimization with Rationales
Hoang Anh Just, Ming Jin, Anit Sahu, Huy Phan, Ruoxi Jia
arXiv.org Artificial Intelligence
Reinforcement learning from human feedback plays a crucial role in aligning language models with human preferences, which are traditionally represented through comparisons between pairs or sets of responses within a given context. While many studies have enhanced algorithmic techniques to optimize learning from such data, this work shifts the focus to improving preference learning through a data-centric approach. Specifically, we propose enriching existing preference datasets with machine-generated rationales that explain the reasons behind each choice. We develop a simple and principled framework to augment current preference learning methods with rationale information. Our comprehensive analysis highlights how rationales enhance learning efficiency. Extensive experiments reveal that rationale-enriched preference learning offers multiple advantages: it improves data efficiency, accelerates convergence to higher-performing models, and reduces verbosity bias and hallucination. Furthermore, the framework is versatile enough to integrate with various preference optimization algorithms. Overall, our findings highlight the potential of re-imagining data design for preference learning, demonstrating that even freely available machine-generated rationales can significantly boost performance across multiple dimensions. The code repository is available at https://github.com/reds-lab/preference-learning-with-rationales
Aug-3-2024
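To make the framework concrete, below is a minimal sketch of how rationale information might be folded into a DPO-style preference objective: the standard DPO loss is combined with a term that rewards the policy for assigning high likelihood to the machine-generated rationale. The function name `rationale_dpo_loss`, the `gamma` weighting, and the exact form of the rationale term are illustrative assumptions, not the authors' published formulation.

```python
import torch
import torch.nn.functional as F

def rationale_dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log p_theta(chosen | prompt), summed over tokens
    policy_rejected_logps: torch.Tensor,  # log p_theta(rejected | prompt)
    ref_chosen_logps: torch.Tensor,       # same quantities under the frozen reference model
    ref_rejected_logps: torch.Tensor,
    policy_rationale_logps: torch.Tensor, # log p_theta(rationale | prompt, chosen, rejected)
    beta: float = 0.1,                    # DPO temperature
    gamma: float = 1.0,                   # rationale weight (assumed hyperparameter)
) -> torch.Tensor:
    # Standard DPO term: push the policy's implicit reward margin between
    # chosen and rejected responses above the reference model's margin.
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    dpo_loss = -F.logsigmoid(beta * (pi_logratios - ref_logratios))

    # Rationale term: additionally maximize the likelihood of the
    # machine-generated explanation of *why* the chosen response wins.
    rationale_nll = -policy_rationale_logps

    return (dpo_loss + gamma * rationale_nll).mean()
```

Under this reading, each training record would carry a rationale field alongside the usual triple, e.g. `{"prompt": ..., "chosen": ..., "rejected": ..., "rationale": ...}`, and setting `gamma = 0` recovers vanilla DPO, which is consistent with the claim that the augmentation integrates with existing preference optimization algorithms.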