Just Say What You Want: Only-prompting Self-rewarding Online Preference Optimization
Xu, Ruijie, Liu, Zhihan, Liu, Yongfei, Yan, Shipeng, Wang, Zhaoran, Zhang, Zhi, He, Xuming
We address the challenge of online Reinforcement Learning from Human Feedback (RLHF) with a focus on self-rewarding alignment methods. In online RLHF, obtaining feedback requires interaction with the environment, which can be costly when using additional reward models or the GPT-4 API. Current self-rewarding approaches rely heavily on the discriminator's judgment capabilities, which are effective for large-scale models but difficult to transfer to smaller ones. To address these limitations, we propose a novel, only-prompting self-rewarding online algorithm that generates preference datasets without relying on judgment capabilities. Additionally, we employ fine-grained arithmetic control over the optimality gap between positive and negative examples, generating more hard negatives in the later stages of training to help the model better capture subtle human preferences. Finally, we conduct extensive experiments on two base models, Mistral-7B and Mistral-Instruct-7B, significantly bootstrapping the performance of the reference model and achieving a 34.5% Length-controlled Win Rate on AlpacaEval 2.0.

Reinforcement Learning from Human Feedback (RLHF) is a prevalent technique for Large Language Model (LLM) alignment: it ensures models adhere to human preferences, produce useful and truthful responses, and avoid harmful ones (Stiennon et al., 2020; Ouyang et al., 2022; Christiano et al., 2017). Current RLHF methods are classified into online and offline approaches (Rafailov et al., 2024; Xiong et al., 2024; Meng et al., 2024).
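The abstract's core idea is to obtain preference pairs purely by prompting, with the gap between the positive and the negative target narrowed arithmetically over training so that later pairs contain harder negatives. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual algorithm: the names `gap_schedule`, `quality_prompt`, and `build_pair` are assumptions, and `generate_fn` stands in for any text-generation backend.

```python
"""Hypothetical sketch: prompt-only preference-pair generation with an
annealed optimality gap. All names and the prompt template are assumptions
for illustration; they are not taken from the paper."""
from typing import Callable, Dict


def gap_schedule(step: int, total_steps: int, max_gap: int = 4, min_gap: int = 1) -> int:
    """Shrink the target-quality gap between chosen and rejected responses
    as training progresses, so later pairs contain harder negatives."""
    frac = step / max(total_steps - 1, 1)
    return round(max_gap - frac * (max_gap - min_gap))


def quality_prompt(instruction: str, level: int, max_level: int = 5) -> str:
    """Ask the model for a response at an explicit target quality level,
    instead of asking it to judge finished responses afterwards."""
    return (
        f"Respond to the instruction below with a quality of {level} "
        f"out of {max_level} (higher means more helpful and accurate).\n\n"
        f"Instruction: {instruction}\nResponse:"
    )


def build_pair(
    instruction: str,
    generate_fn: Callable[[str], str],
    step: int,
    total_steps: int,
    max_level: int = 5,
) -> Dict[str, str]:
    """Create one (chosen, rejected) pair whose target-quality gap follows
    the annealing schedule."""
    gap = gap_schedule(step, total_steps)
    high, low = max_level, max_level - gap
    return {
        "prompt": instruction,
        "chosen": generate_fn(quality_prompt(instruction, high, max_level)),
        "rejected": generate_fn(quality_prompt(instruction, low, max_level)),
    }


if __name__ == "__main__":
    # Toy backend: echoes the first line of the prompt for demonstration.
    demo_generate = lambda p: f"[response for: {p.splitlines()[0]}]"
    pair = build_pair("Explain RLHF in one sentence.", demo_generate, step=0, total_steps=100)
    print(pair)
```

Under these assumptions, early pairs contrast a top-quality target with a clearly weaker one (gap 4), while late pairs contrast targets only one level apart (gap 1), which is one plausible reading of the "fine-grained arithmetic control over the optimality gap" described above.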
arXiv.org Artificial Intelligence
Oct-14-2024