Optimizing Preference Alignment with Differentiable NDCG Ranking
Jiacong Zhou, Xianyun Wang, Jun Yu
arXiv.org Artificial Intelligence
Aligning large language models with human preferences improves interaction quality and safety by ensuring outputs better reflect human values. A promising strategy is Reinforcement Learning from Human Feedback (RLHF), which starts by collecting and ranking responses generated by a supervised fine-tuned model to refine alignment. Current methods such as Direct Preference Optimization (DPO) focus on learning from pairwise preference data, categorizing responses into preferred and less preferred pairs and optimizing by maximizing pairwise margins. Recent studies, however, have uncovered a substantial discrepancy between the theoretical aspirations of preference learning and its real-world results: current preference alignment techniques underperform expectations, with ranking accuracies below 60% on standard datasets. This suggests that existing methods inadequately capture the ideal preference relationships within sequences. To address this challenge, this paper introduces Direct Ranking Preference Optimization (DRPO), a novel method that views human preference alignment as a Learning-to-Rank (LTR) task. DRPO leverages NDCG, a widely used LTR metric, to optimize the ranking of responses within lists based on preference data, thereby improving ranking accuracy. Because NDCG is non-differentiable, we propose the diffNDCG loss, a differentiable approximation of NDCG computed via a sorting network. Furthermore, to improve the quality of generated responses, we propose a novel margin-based Adaptive Rank Policy Score. Extensive experiments show that DRPO outperforms existing baseline methods and enhances the quality of the generated responses.

Large language models (LLMs), trained on extensive and diverse datasets, can be prompted to demonstrate impressive capabilities across a broad range of tasks (Huang et al., 2024; Chiang et al., 2023; OpenAI et al., 2024; Touvron et al., 2023). However, due to the varied nature of their training data, these models sometimes produce content that does not align with human preferences, including fabricated answers, offensive comments, or harmful responses (Bai et al., 2022; Wang et al., 2023). To ensure the development of AI systems that are safe and controllable, this paper investigates learning tasks for LLMs that guide them to generate responses aligned with human preferences.

Human preference alignment has become an active research area. Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022) was the first method proposed in this area. However, the optimization process of RLHF is complex, and its implementation introduces challenges due to unstable and costly training. Recent studies (Hong et al., 2024; Ethayarajh et al., 2024) have therefore started to adopt alternatives to RLHF.
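To make the diffNDCG idea concrete, the following is a minimal PyTorch sketch, not the paper's implementation: it uses a NeuralSort-style relaxed permutation matrix as a stand-in for the sorting network and plugs the softly sorted gains into the standard NDCG formula. The function names, the temperature `tau`, and the exponential gain mapping are illustrative assumptions, and the paper's margin-based Adaptive Rank Policy Score is not shown.

```python
import torch
import torch.nn.functional as F

def neural_sort(scores: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Relaxed descending-sort permutation matrix (NeuralSort-style).

    scores: (..., n) predicted ranking scores.
    Returns P_hat of shape (..., n, n); row i is a soft one-hot vector
    selecting the item placed at rank i.
    """
    n = scores.size(-1)
    # Pairwise absolute differences |s_j - s_k| and their row sums.
    A = (scores.unsqueeze(-1) - scores.unsqueeze(-2)).abs()        # (..., n, n)
    a_sum = A.sum(dim=-1)                                          # (..., n)
    # Coefficients (n + 1 - 2i) for ranks i = 1..n.
    c = n + 1 - 2 * torch.arange(1, n + 1, device=scores.device,
                                 dtype=scores.dtype)               # (n,)
    logits = c.unsqueeze(-1) * scores.unsqueeze(-2) - a_sum.unsqueeze(-2)
    return F.softmax(logits / tau, dim=-1)

def diff_ndcg_loss(scores: torch.Tensor, relevance: torch.Tensor,
                   tau: float = 1.0) -> torch.Tensor:
    """1 - differentiable NDCG: softly sort the gains by the predicted
    scores, apply position discounts, and normalize by the ideal DCG."""
    n = scores.size(-1)
    p_hat = neural_sort(scores, tau)                               # (..., n, n)
    gains = 2.0 ** relevance - 1.0                                 # (..., n)
    soft_sorted_gains = (p_hat * gains.unsqueeze(-2)).sum(dim=-1)  # (..., n)
    discounts = 1.0 / torch.log2(torch.arange(2, n + 2, device=scores.device,
                                              dtype=scores.dtype))
    dcg = (soft_sorted_gains * discounts).sum(dim=-1)
    ideal_gains, _ = torch.sort(gains, dim=-1, descending=True)
    idcg = (ideal_gains * discounts).sum(dim=-1)
    return (1.0 - dcg / idcg.clamp_min(1e-10)).mean()

# Example: 4 responses per prompt with graded preference labels 3 > 2 > 1 > 0.
scores = torch.randn(2, 4, requires_grad=True)   # e.g. policy-derived scores
relevance = torch.tensor([[3., 2., 1., 0.], [3., 2., 1., 0.]])
loss = diff_ndcg_loss(scores, relevance, tau=0.5)
loss.backward()
```

Because the relaxed permutation matrix is a smooth function of the scores, the NDCG surrogate admits gradients with respect to the policy's scores; lowering `tau` sharpens the soft sort toward a hard ranking at the cost of steeper gradients.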
Oct-17-2024