SAIL: Self-Improving Efficient Online Alignment of Large Language Models

Mucong Ding, Souradip Chakraborty, Vibhu Agrawal, Zora Che, Alec Koppel, Mengdi Wang, Amrit Bedi, Furong Huang

arXiv.org Machine Learning 

As artificial intelligence (AI) systems surpass human capabilities in various tasks, ensuring alignment with human values and ethics is crucial. This is especially important for large language models (LLMs), which are trained on diverse datasets that may contain harmful content. Reinforcement Learning from Human Feedback (RLHF) is an effective method for AI alignment, with models like OpenAI's GPT-4, Google's Gemini, and Anthropic's Claude showing safe and aligned behaviors. However, the vast majority of current research in RLHF (Agarwal et al., 2020; Rafailov et al., 2023; Ouyang et al., 2022; Chakraborty et al., 2024; Swamy et al., 2024) focuses on the offline setting, which uses a fixed dataset of responses generated by the supervised fine-tuned (SFT) model and ranked by human experts. Consequently, these methods rely heavily on the quality of the offline data produced by the SFT model, which suffers from drawbacks such as insufficient coverage of response-query pairs, leading to sub-optimal alignment. To address these shortcomings, recent literature (Guo et al., 2024a; Sharma et al., 2024; Lee et al., 2023; Yuan et al., 2024b) has focused on designing online RLHF algorithms. The online RLHF setting transcends the constraints of a static offline dataset and aims to address two critical questions: Q1: How should we generate new responses during fine-tuning?
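To illustrate the contrast the abstract draws between a fixed offline preference dataset and online response generation, the following toy Python sketch shows the general shape of an online preference-collection loop. It is a minimal illustration, not the paper's SAIL algorithm: `generate_response`, `preference_oracle`, and `update_policy` are hypothetical stand-ins for the current LLM policy, a human/AI preference labeler, and an RLHF/DPO-style policy update.

```python
# Hypothetical sketch of an online preference-collection loop (not the paper's algorithm).
# Toy stubs stand in for an LLM policy and a human/AI preference labeler.
import random

def generate_response(temperature: float, prompt: str) -> str:
    """Toy stand-in for sampling a response from the current policy."""
    words = ["helpful", "harmless", "verbose", "terse", "detailed"]
    return f"{prompt}: " + " ".join(random.choices(words, k=random.randint(1, 4)))

def preference_oracle(prompt: str, resp_a: str, resp_b: str) -> int:
    """Toy preference label: prefer the longer (more detailed) response."""
    return 0 if len(resp_a) >= len(resp_b) else 1

def update_policy(temperature: float, preferences: list) -> float:
    """Placeholder update: nudge a single scalar 'policy parameter'."""
    return max(0.1, temperature - 0.05 * len(preferences))

prompts = ["Explain RLHF", "Summarize alignment", "Define reward model"]
temperature = 1.0
for step in range(3):  # online loop: training data depends on the *current* policy
    fresh_pairs = []
    for prompt in prompts:
        a = generate_response(temperature, prompt)  # responses come from the current policy,
        b = generate_response(temperature, prompt)  # not from a fixed SFT-generated dataset
        winner = preference_oracle(prompt, a, b)
        fresh_pairs.append((prompt, a, b, winner))
    temperature = update_policy(temperature, fresh_pairs)
    print(f"step {step}: collected {len(fresh_pairs)} new preference pairs")
```

The key structural point is that each iteration's preference pairs are sampled from the policy being fine-tuned, whereas an offline pipeline would train on a single dataset fixed before fine-tuning begins.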
