Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model
arXiv.org Artificial Intelligence
While large language models have proven effective across a wide range of downstream applications, they often generate text that is problematic or lacks a desired attribute. In this paper, we introduce Reward-Augmented Decoding (RAD), a text generation procedure that uses a small unidirectional reward model to encourage a language model to generate text with certain properties. Specifically, RAD uses the reward model to score generations as they are produced and rescales sampling probabilities to favor high-reward tokens. Because the reward model is unidirectional, RAD can cache activations from prior generation steps, decreasing computational overhead. Through experiments on generating non-toxic and sentiment-controlled text, we demonstrate that RAD performs best among methods that change only the generation procedure and matches the performance of state-of-the-art methods that involve re-training the language model. We further validate that RAD is effective on very large language models while incurring minimal computational overhead.
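The decoding step the abstract describes (score candidate continuations with the reward model, then rescale the sampling distribution toward high-reward tokens) can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the function name `rad_step`, the `score_candidates` interface, and the `beta` and `top_k` values are assumptions for the sketch, and the reward function here is a random stand-in rather than a trained unidirectional reward model.

```python
import torch

def rad_step(lm_logits, score_candidates, beta=1.0, top_k=20):
    """One reward-augmented decoding step (illustrative sketch).

    lm_logits: (vocab_size,) next-token logits from the language model.
    score_candidates: callable mapping candidate token ids (top_k,)
        to reward scores (top_k,); hypothetical interface standing in
        for the unidirectional reward model.
    """
    # Restrict rescaling to the language model's top-k candidate tokens.
    topk_logits, topk_ids = torch.topk(lm_logits, top_k)
    # Score each candidate continuation with the reward model.
    rewards = score_candidates(topk_ids)
    # Shift each candidate's logit by beta times its reward, renormalize,
    # and sample: higher-reward tokens become more likely.
    probs = torch.softmax(topk_logits + beta * rewards, dim=-1)
    return topk_ids[torch.multinomial(probs, 1)].item()

# Toy usage with a random stand-in reward function (assumption, not the
# paper's trained reward model).
vocab_size = 50257
logits = torch.randn(vocab_size)
fake_reward = lambda ids: torch.rand(ids.shape[0])
next_token = rad_step(logits, fake_reward, beta=5.0)
```

In a full decoder this step would run once per generated token, and the unidirectional reward model would reuse cached activations from earlier steps so that each candidate is scored at the cost of a single-token forward pass.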
Jan-1-2024