Reuse Your Rewards: Reward Model Transfer for Zero-Shot Cross-Lingual Alignment
Zhaofeng Wu, Ananth Balashankar, Yoon Kim, Jacob Eisenstein, Ahmad Beirami
Aligning language models (LMs) based on human-annotated preference data is a crucial step in obtaining practical and performant LM-based systems. However, multilingual human preference data are difficult to obtain at scale, making it challenging to extend this framework to diverse languages. In this work, we evaluate a simple approach for zero-shot cross-lingual alignment, where a reward model is trained on preference data in one source language and directly applied to other target languages. On summarization and open-ended dialog generation, we show that this method is consistently successful under comprehensive evaluation settings, including human evaluation: cross-lingually aligned models are preferred by humans over unaligned models in up to more than 70% of evaluation instances. Moreover, we find that a different-language reward model sometimes yields better-aligned models than a same-language reward model. We also identify best practices for when no language-specific data is available even for supervised finetuning, another component of alignment.
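
To make the transfer recipe concrete, below is a minimal sketch of reward model transfer via best-of-n reranking, one common way to apply a reward model at inference time (the paper also considers RL-based alignment). The checkpoint names are hypothetical placeholders; the only assumption carried over from the abstract is that a reward model trained on source-language preference data can score target-language generations directly.

```python
# Minimal sketch: zero-shot cross-lingual reward transfer via best-of-n
# sampling. Checkpoint names and hyperparameters are illustrative
# assumptions, not the paper's exact setup.
import torch
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
)

# Reward model trained on SOURCE-language (e.g., English) preference data.
# With a multilingual backbone, its scores are assumed to transfer to
# target languages without any target-language preference annotations.
RM_NAME = "my-org/multilingual-rm-en-prefs"      # hypothetical checkpoint
POLICY_NAME = "my-org/multilingual-sft-policy"   # hypothetical checkpoint

rm_tok = AutoTokenizer.from_pretrained(RM_NAME)
rm = AutoModelForSequenceClassification.from_pretrained(RM_NAME)

lm_tok = AutoTokenizer.from_pretrained(POLICY_NAME)
lm = AutoModelForCausalLM.from_pretrained(POLICY_NAME)


@torch.no_grad()
def best_of_n(prompt: str, n: int = 8, max_new_tokens: int = 128) -> str:
    """Sample n candidate continuations in the TARGET language and keep
    the one the source-language reward model scores highest."""
    inputs = lm_tok(prompt, return_tensors="pt")
    outputs = lm.generate(
        **inputs,
        do_sample=True,
        top_p=0.9,
        num_return_sequences=n,
        max_new_tokens=max_new_tokens,
    )
    prompt_len = inputs["input_ids"].shape[1]
    candidates = [
        lm_tok.decode(o[prompt_len:], skip_special_tokens=True)
        for o in outputs
    ]
    # Score each (prompt, candidate) pair with the transferred reward model.
    scores = []
    for cand in candidates:
        rm_inputs = rm_tok(prompt, cand, return_tensors="pt", truncation=True)
        scores.append(rm(**rm_inputs).logits.squeeze().item())
    return candidates[max(range(n), key=lambda i: scores[i])]


# Target-language prompt (German), even though the reward model only
# ever saw English preference labels.
print(best_of_n("Fasse den folgenden Artikel zusammen: ..."))
```

The same transferred reward signal could instead drive RLHF-style finetuning; best-of-n is used here only because it isolates the reward model's cross-lingual behavior without any policy training.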
arXiv.org Artificial Intelligence
Apr-18-2024