A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity
Andrew Lee, Xiaoyan Bai, Itamar Pres, Martin Wattenberg, Jonathan K. Kummerfeld, Rada Mihalcea
– arXiv.org Artificial Intelligence
While alignment algorithms are now commonly used to tune pre-trained language models towards a user's preferences, we lack explanations for the underlying mechanisms by which models become "aligned", which makes it difficult to explain phenomena like jailbreaks. In this work, we study a popular alignment algorithm, direct preference optimization (DPO), and the mechanisms by which it reduces toxicity. Namely, we first study how toxicity is represented and elicited in a pre-trained language model, GPT2-medium. We then apply DPO with a carefully crafted pairwise dataset to reduce toxicity. We examine how the resulting model averts toxic outputs, and find that capabilities learned from pre-training are not removed but rather bypassed. We use this insight to demonstrate a simple method to un-align the model, reverting it to its toxic behavior.
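For context, DPO trains the policy on pairs of preferred and dispreferred continuations (here, non-toxic vs. toxic) against a frozen reference model. The sketch below is a minimal PyTorch implementation of the standard DPO loss (Rafailov et al., 2023), not the paper's own code; the tensor names and the beta value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective on a batch of preference pairs.

    Inputs are per-example summed log-probabilities of each completion
    (e.g. a non-toxic "chosen" vs. a toxic "rejected" continuation),
    shape (batch,). beta controls how far the policy may move from the
    frozen reference model.
    """
    # Implicit rewards: scaled log-ratios of policy vs. reference probabilities.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss on the reward margin: push the policy to prefer "chosen".
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with random log-probabilities for 4 preference pairs.
policy_w, policy_l = torch.randn(4), torch.randn(4)
ref_w, ref_l = torch.randn(4), torch.randn(4)
print(dpo_loss(policy_w, policy_l, ref_w, ref_l))
```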
Jan-3-2024
- Country:
- Asia (0.68)
- North America > United States
- Massachusetts (0.14)
- Michigan (0.14)
- Genre:
- Research Report (0.50)