Self-Alignment with Instruction Backtranslation
Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Luke Zettlemoyer, Omer Levy, Jason Weston, Mike Lewis
arXiv.org Artificial Intelligence
We present a scalable method to build a high-quality instruction-following language model by automatically labelling human-written text with corresponding instructions. Our approach, named instruction backtranslation, starts with a language model finetuned on a small amount of seed data, and a given web corpus. The seed model is used to construct training examples by generating instruction prompts for web documents (self-augmentation), and then selecting high-quality examples from among these candidates (self-curation). This data is then used to finetune a stronger model. Finetuning LLaMa on two iterations of our approach yields a model that outperforms all other LLaMa-based models on the Alpaca leaderboard that do not rely on distillation data, demonstrating highly effective self-alignment.
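The two-step loop the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `generate_instruction` and `score` stand in for calls to the seed model, and the 5-point quality scale with a keep-threshold of 4 is an assumption made for the example.

```python
# Sketch of one instruction-backtranslation iteration.
# generate_instruction and score are hypothetical stand-ins for the
# seed model's generation and quality-rating steps.

def self_augment(documents, generate_instruction):
    """Self-augmentation: backtranslate each web document into a
    candidate (instruction, response) pair, treating the document
    itself as the response."""
    return [(generate_instruction(doc), doc) for doc in documents]


def self_curate(candidates, score, threshold):
    """Self-curation: keep only candidates the seed model rates at or
    above the quality threshold (assumed 5-point scale)."""
    return [pair for pair in candidates if score(pair) >= threshold]


def backtranslation_iteration(documents, generate_instruction, score,
                              threshold=4):
    """One augment-then-curate pass; the curated pairs would be used
    to finetune a stronger model, which can seed the next iteration."""
    candidates = self_augment(documents, generate_instruction)
    return self_curate(candidates, score, threshold)


if __name__ == "__main__":
    docs = [
        "Photosynthesis converts light into chemical energy in plants.",
        "ok",  # low-quality document, should be filtered out
    ]
    # Toy stand-ins: derive an instruction from the first word, and
    # rate longer responses as higher quality.
    gen = lambda doc: "Explain: " + doc.split()[0]
    score = lambda pair: 5 if len(pair[1]) > 10 else 1

    curated = backtranslation_iteration(docs, gen, score)
    print(curated)  # only the photosynthesis pair survives curation
```

In the paper this loop is run iteratively: the curated pairs finetune a stronger model, which then re-scores candidates for the next round.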
Aug-14-2023