On Reinforcement Learning and Distribution Matching for Fine-Tuning Language Models with no Catastrophic Forgetting
Neural Information Processing Systems
The availability of large pre-trained models is changing the landscape of Machine Learning research and practice, moving from a "training from scratch" to a "fine-tuning" paradigm. While in some applications the goal is to "nudge" the pre-trained distribution towards preferred outputs, in others it is to steer it towards a different distribution over the sample space. Two main paradigms have emerged to tackle this challenge: Reward Maximization (RM) and, more recently, Distribution Matching (DM). RM applies standard Reinforcement Learning (RL) techniques, such as Policy Gradients, to gradually increase the reward signal. DM instead prescribes first making explicit the target distribution that the model is fine-tuned to approximate.
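To make the Reward Maximization side concrete, the following is a minimal toy sketch (not the paper's method) of a REINFORCE-style policy-gradient update on a 4-token categorical "policy". The reward values, learning rate, and step count are all illustrative assumptions; gradient ascent on the expected reward shifts probability mass onto the rewarded token.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over logits.
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
logits = np.zeros(4)                      # policy parameters: one logit per token
reward = np.array([0.0, 0.0, 0.0, 1.0])   # assumed toy reward: only token 3 pays off
lr = 0.5                                  # assumed learning rate

for _ in range(200):
    p = softmax(logits)
    a = rng.choice(4, p=p)                # sample a token from the current policy
    # REINFORCE gradient of E[R]: (one_hot(a) - p) * R(a)
    grad = (np.eye(4)[a] - p) * reward[a]
    logits += lr * grad                   # gradient ascent on expected reward

p = softmax(logits)
print(p.argmax())                         # mass concentrates on the rewarded token
```

A DM approach would instead define a target distribution explicitly and minimize a divergence (e.g. a KL term) between the policy and that target, rather than following sampled reward signals.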
Dec-24-2025, 09:16:55 GMT