Distributionally Robust Reinforcement Learning with Human Feedback
Mandal, Debmalya, Sasnauskas, Paulius, Radanovic, Goran
–arXiv.org Artificial Intelligence
Reinforcement learning from human feedback (RLHF) has become one of the main methods for fine-tuning large language models (LLMs). However, existing RLHF methods are not robust, and their performance deteriorates when the downstream task differs significantly from the preference dataset used in fine-tuning. To mitigate this problem, we introduce a distributionally robust RLHF framework for fine-tuning LLMs. In particular, our goal is to ensure that a fine-tuned model retains its performance even when the distribution of prompts differs significantly from the distribution encountered during fine-tuning. We formulate distributionally robust optimization (DRO) versions of two popular fine-tuning methods: (1) reward-based RLHF and (2) reward-free direct preference optimization (DPO). We propose minibatch gradient descent-based algorithms for both and theoretically prove convergence guarantees for these algorithms. Subsequently, we evaluate our algorithms on an out-of-distribution (OOD) task by first training the model on the Unified-Feedback dataset and then evaluating its performance on two different datasets. The experimental results show that our robust training improves the accuracy of the learned reward models on average, and markedly on some tasks, such as reasoning. Furthermore, we show that the robust versions of the policy optimization methods similarly improve performance on OOD tasks.

1 Introduction

Reinforcement learning with human feedback (RLHF) has emerged as one of the most important tools for aligning large language models (LLMs) with human intentions across a diverse set of tasks. Existing RLHF algorithms work by collecting a preference dataset for a given task and updating a base model using preference-based reinforcement learning. Moreover, the availability of many public preference datasets [SDB23] has led to the adoption of RLHF across a diverse range of downstream tasks. However, real-world deployment of fine-tuned policies faces several challenges.
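The abstract does not spell out the DRO objective itself, so the following is only a minimal sketch of the general idea, assuming a KL-ball ambiguity set over prompts and a standard DPO-style per-example loss; the function names, the radius `rho`, the temperature `tau`, and the DPO scale `beta` are illustrative choices, not the paper's formulation. The robust aggregation replaces the plain minibatch mean with a dual surrogate that upweights high-loss prompts.

```python
# Sketch: distributionally robust aggregation of per-prompt DPO losses.
# Assumes a KL-ball DRO dual:  tau * log E[exp(loss / tau)] + tau * rho,
# which upper-bounds the worst-case expected loss over prompt
# distributions within KL radius rho of the empirical minibatch.
import math
import torch
import torch.nn.functional as F

def dpo_losses(policy_logratios, ref_logratios, beta=0.1):
    """Per-example DPO losses.

    policy_logratios: log pi(y_w|x) - log pi(y_l|x) under the trained policy.
    ref_logratios:    the same quantity under the frozen reference model.
    """
    return -F.logsigmoid(beta * (policy_logratios - ref_logratios))

def dro_minibatch_loss(losses, tau=1.0, rho=0.1):
    """KL-ball DRO dual surrogate for a minibatch of per-prompt losses."""
    n = losses.numel()
    # tau * log( (1/n) * sum_i exp(loss_i / tau) ) + tau * rho
    return tau * (torch.logsumexp(losses / tau, dim=0) - math.log(n)) + tau * rho

# Usage: compute per-prompt losses, then aggregate robustly instead of
# averaging, and backpropagate as usual.
policy_lr = torch.randn(8, requires_grad=True)  # placeholder log-ratios
ref_lr = torch.randn(8)
loss = dro_minibatch_loss(dpo_losses(policy_lr, ref_lr))
loss.backward()
```

As `tau` grows, the surrogate approaches the plain minibatch mean; a small `tau` concentrates weight on the hardest prompts, which is the behavior a distributionally robust objective is meant to capture.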
Mar-1-2025