
Collaborating Authors

 Chopra, Paras


IPO: Your Language Model is Secretly a Preference Classifier

arXiv.org Artificial Intelligence

Reinforcement learning from human feedback (RLHF) has emerged as the primary method for aligning large language models (LLMs) with human preferences. While it enables LLMs to achieve human-level alignment, it often incurs significant computational and financial costs due to its reliance on training external reward models or human-labeled preferences. In this work, we propose Implicit Preference Optimization (IPO), an alternative approach that leverages generative LLMs as preference classifiers, thereby reducing the dependence on external human feedback or reward models to obtain preferences. We conduct a comprehensive evaluation of the preference classification ability of LLMs using RewardBench, assessing models across different sizes, architectures, and training levels to validate our hypothesis. Furthermore, we investigate the self-improvement capabilities of LLMs by generating multiple responses for a given instruction and employing the model itself as a preference classifier for Direct Preference Optimization (DPO)-based training. Our findings demonstrate that models trained through IPO achieve performance comparable to those utilizing state-of-the-art reward models for obtaining preferences.
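As a rough illustration of the loop described in this abstract, the sketch below samples several responses per instruction and asks the model itself to act as a pairwise judge, emitting (prompt, chosen, rejected) records in the format expected by DPO-style trainers. The `generate_fn` callable and the judge prompt wording are assumptions for illustration, not the paper's exact prompts or pipeline.

```python
# Minimal sketch of an IPO-style data loop: the same model both generates
# candidate responses and judges them pairwise. `generate_fn` is a
# hypothetical stand-in for any LLM call (API or local); the judge prompt
# is illustrative, not the paper's template.
from itertools import combinations
from typing import Callable

JUDGE_PROMPT = (
    "Instruction:\n{instruction}\n\n"
    "Response A:\n{a}\n\n"
    "Response B:\n{b}\n\n"
    "Which response better follows the instruction? Answer with a single letter, A or B."
)

def build_dpo_pairs(
    instruction: str,
    generate_fn: Callable[[str], str],  # hypothetical wrapper around the model being trained
    n_samples: int = 4,
) -> list[dict]:
    """Sample candidate responses, let the model judge each pair, and return
    (prompt, chosen, rejected) records for DPO-based training."""
    responses = [generate_fn(instruction) for _ in range(n_samples)]
    pairs = []
    for a, b in combinations(responses, 2):
        verdict = generate_fn(JUDGE_PROMPT.format(instruction=instruction, a=a, b=b))
        # Treat an answer starting with "A" as preferring response A; otherwise prefer B.
        chosen, rejected = (a, b) if verdict.strip().upper().startswith("A") else (b, a)
        pairs.append({"prompt": instruction, "chosen": chosen, "rejected": rejected})
    return pairs
```

The resulting list can be fed to any DPO trainer that consumes prompt/chosen/rejected triples; the number of samples and the pairing scheme are design choices left open here.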


Do GFlowNets Transfer? Case Study on the Game of 24/42

arXiv.org Artificial Intelligence

Generating diverse solutions is key to human-like reasoning, yet autoregressive language models focus on single accurate responses, limiting creativity. Our case study shows the limited zero-shot transferability of GFlowNet fine-tuning: we fine-tune small and medium-sized large language models on the Game of 24 and test them on Game of 42 datasets. Results revealed that GFlowNets struggle to maintain solution diversity and accuracy, highlighting key limitations in their cross-task generalization and the need for future research on improved transfer learning capabilities. Recent advances have introduced approaches showing significant improvement in LLM reasoning capabilities (Touvron et al., 2023a), including supervised fine-tuning with synthetic datasets (Yu et al.; Yue et al.), modified decoding mechanisms (Holtzman et al.; Nguyen et al., 2024), and enhanced pretraining data quality (Akter et al., 2024; Trinh et al., 2024). While these approaches demonstrate improved accuracy, they rarely account for the diversity of correct solutions, an essential aspect of human-like reasoning and creativity (Yu et al., 2024a; Hu et al.).
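To make the diversity-versus-accuracy evaluation concrete, here is a small assumed sketch (not the paper's evaluation code) of how one might check sampled Game of 24/42 answers for correctness and count distinct correct expressions as a crude diversity measure; the expression format, whitespace normalization, and numeric tolerance are illustrative choices.

```python
# Illustrative scoring for Game of 24/42 samples: an answer is an arithmetic
# expression that must use each given number exactly once and evaluate to the
# target (24 or 42). Distinct correct expressions serve as a rough diversity count.
import re
from collections import Counter

ALLOWED = re.compile(r"^[0-9+\-*/() ]+$")  # whitelist makes eval() safe below

def uses_exactly(expr: str, numbers: list[int]) -> bool:
    """True if `expr` uses each given number exactly once and no others."""
    found = [int(tok) for tok in re.findall(r"\d+", expr)]
    return Counter(found) == Counter(numbers)

def is_valid_solution(expr: str, numbers: list[int], target: int) -> bool:
    """True if `expr` is well formed, uses the given numbers, and hits the target."""
    if not ALLOWED.match(expr) or not uses_exactly(expr, numbers):
        return False
    try:
        value = eval(expr)  # restricted to digits/operators by the whitelist above
    except Exception:
        return False
    return abs(value - target) < 1e-6

def diversity(expressions: list[str], numbers: list[int], target: int) -> int:
    """Number of distinct correct solutions among the sampled expressions."""
    correct = {e.replace(" ", "") for e in expressions if is_valid_solution(e, numbers, target)}
    return len(correct)
```

For example, `diversity(["6*4", "(8-2)*4", "8*3"], [6, 4, 8, 2], 24)` counts only the expressions that both use the given numbers exactly once and evaluate to 24, so accuracy and diversity can be reported together per puzzle.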