CDR: Customizable Density Ratios of Strong-over-weak LLMs for Preference Annotation

Guangxuan Xu, Kai Xu, Shivchander Sudalairaj, Hao Wang, Akash Srivastava

arXiv.org Artificial Intelligence 

Preference tuning of large language models (LLMs) relies on high-quality human preference data, which is often expensive and time-consuming to gather. While existing methods can use trained reward models or proprietary models as judges for preference annotation, they have notable drawbacks: training reward models remains dependent on initial human data, and using proprietary models imposes license restrictions that inhibit commercial usage. In this paper, we introduce customized density ratio (CDR), a training-free and highly effective method that leverages off-the-shelf LLMs for preference data annotation. Our approach uses the log-density ratio between a better-aligned LLM and a less-aligned LLM as a reward signal. We explore 221 different LLM pairs and empirically demonstrate that increasing the performance gap between paired LLMs correlates with better reward generalization. Furthermore, we show that tailoring the density ratio reward function with specific criteria and preference exemplars enhances performance across domains and within target areas. In our experiment using the density ratio from a pair of Mistral-7B models, CDR achieves a RewardBench score of 82.6, outperforming the best trained reward functions from the same model class and demonstrating competitive performance against SoTA models in the Safety (91.0) and Reasoning (88.0) domains. We use CDR to annotate an on-policy preference dataset with which we preference-tune Llama-3-8B-Instruct with SimPO. Using reward signals from two relatively weak models, our approach pushes Llama-3-8B to achieve a 37.4% (+15.1%).

Preference tuning has advanced the capabilities of large language models (LLMs), but this progress relies on high-quality human preference data, which is both costly and time-consuming to gather. Cutting-edge models (e.g., ChatGPT, GPT-4, Claude-3) are aligned with curated, quality-controlled human preference data, typically provided by specialized companies. AI-feedback solutions are emerging as an alternative, either through a trained reward model (Dong et al., 2024) or a proprietary LLM-as-a-judge (Cui et al., 2023). However, training reward models still relies on costly initial human preference data, and proprietary LLM-as-a-judge approaches introduce licensing restrictions that generally prevent commercial use.
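The abstract describes the reward as the log-density ratio between a better-aligned and a less-aligned LLM. Below is a minimal, hedged sketch of that idea, assuming a reward of the form r(x, y) = log p_strong(y | x) - log p_weak(y | x) computed over a shared tokenizer; the model names, helper functions, and the response-boundary handling are illustrative assumptions, not the authors' exact pipeline (which additionally customizes the strong model's conditioning with criteria and preference exemplars).

```python
# Sketch: density-ratio reward for preference annotation (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def sequence_logprob(model, tokenizer, prompt: str, response: str) -> float:
    """Sum of token log-probabilities of `response` conditioned on `prompt`.

    Boundary tokenization is handled approximately here; a production
    implementation should tokenize prompt and response jointly and track
    the exact response span.
    """
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits  # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    response_start = prompt_ids.shape[1] - 1  # first predicted response token
    return token_lp[:, response_start:].sum().item()


def density_ratio_reward(strong, weak, tokenizer, prompt: str, response: str) -> float:
    """Log-density ratio of a better-aligned model over a less-aligned one."""
    return (sequence_logprob(strong, tokenizer, prompt, response)
            - sequence_logprob(weak, tokenizer, prompt, response))


if __name__ == "__main__":
    # Hypothetical aligned/unaligned pair from the same model family
    # (sharing a tokenizer), in the spirit of the Mistral-7B pair in the paper.
    tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
    strong = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
    weak = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

    prompt = "How do I reset a forgotten password safely?\n"
    candidates = [
        "Use the official account-recovery flow and never share one-time codes.",
        "Just keep guessing common passwords until one works.",
    ]
    rewards = {c: density_ratio_reward(strong, weak, tok, prompt, c) for c in candidates}
    chosen = max(rewards, key=rewards.get)    # annotated as the preferred response
    rejected = min(rewards, key=rewards.get)  # annotated as the rejected response
    print("chosen:", chosen)
    print("rejected:", rejected)
```

In this sketch, the ranked pair (chosen, rejected) is what would be written into an on-policy preference dataset for downstream preference tuning (e.g., with SimPO).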
