Adaptive Preference Optimization with Uncertainty-aware Utility Anchor
Wang, Xiaobo, Jia, Zixia, Li, Jiaqi, Liu, Qi, Zheng, Zilong
arXiv.org Artificial Intelligence
Offline preference optimization methods are efficient for aligning large language models (LLMs). Direct Preference Optimization (DPO) and its variants, among the most popular approaches, stand out for their efficiency in reward modeling. However, these methods conventionally rely on Bradley-Terry (BT) reward modeling, which rests on several restrictive assumptions, including the requirement for pairwise training data, model distribution shift, and human rationality. To address these limitations, we propose a general framework for offline preference optimization, Adaptive Preference Optimization with Utility Anchor (UAPO), which introduces an anchoring function to estimate the uncertainty arising from preference data annotation. Our method enables training even when the data is unpaired, significantly improving data utilization efficiency. Moreover, the anchor design makes UAPO more robust during training. Experimental results demonstrate that UAPO achieves competitive performance without a strict dependency on data pairing, paving the way for more flexible and effective preference optimization methods.
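To make the contrast concrete, the sketch below shows the standard pairwise DPO loss next to an illustrative anchored variant that scores each response against a fixed utility anchor rather than a paired opposite, which is how an anchoring function can remove the pairwise-data requirement. The exact form of UAPO's uncertainty-aware anchoring function is not given in the abstract; the fixed `anchor` constant and the `anchored_loss` helper here are assumptions for illustration only.

```python
import math


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def dpo_pairwise_loss(logp_w: float, logp_l: float,
                      ref_logp_w: float, ref_logp_l: float,
                      beta: float = 0.1) -> float:
    """Standard DPO loss under BT reward modeling: it requires a
    (chosen, rejected) response pair for the same prompt."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(sigmoid(margin))


def anchored_loss(logp: float, ref_logp: float, label: str,
                  anchor: float = 0.0, beta: float = 0.1) -> float:
    """Illustrative anchored loss (not the paper's formulation): each
    response's implicit reward (scaled log-ratio against a reference
    policy) is compared with a utility anchor, so chosen and rejected
    examples can be trained on independently, without pairing."""
    reward = beta * (logp - ref_logp)
    if label == "chosen":
        return -math.log(sigmoid(reward - anchor))
    return -math.log(sigmoid(anchor - reward))
```

With `anchor = 0`, this reduces to pushing chosen responses above the reference policy and rejected ones below it, one example at a time; a learned, uncertainty-aware anchor would instead adapt this threshold to annotation noise.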
Sep-16-2025