ALMA: Alignment with Minimal Annotation
Michihiro Yasunaga, Leonid Shamis, Chunting Zhou, Andrew Cohen, Jason Weston, Luke Zettlemoyer, Marjan Ghazvininejad
Recent approaches to large language model (LLM) alignment typically require millions of human annotations or rely on external aligned models for synthetic data generation. This paper introduces ALMA: Alignment with Minimal Annotation, demonstrating that effective alignment can be achieved using only 9,000 labeled examples, less than 1% of the annotation used by conventional approaches. ALMA generates large amounts of high-quality synthetic alignment data through new techniques: diverse prompt synthesis via few-shot learning, diverse response generation with multiple model checkpoints, and judge (reward model) enhancement through score aggregation and self-distillation. Using only a pretrained Llama3 base model, 5,000 SFT examples, and 4,000 judge annotations, ALMA achieves performance close to Llama3-Instruct across diverse alignment benchmarks (e.g., a 0.1% difference on AlpacaEval 2.0 score). These results come from a multiround, self-bootstrapped data synthesis and training recipe that continues to improve for 10 rounds, surpassing the typical 3-round ceiling of previous methods. The findings suggest that base models already possess sufficient knowledge for effective alignment, and that synthetic data generation methods can expose it.
Figure 1: Starting with only a pretrained base LLM (Llama3 Base) and minimal seed data (9k samples, less than 1% of conventional approaches), we align the model to achieve performance close to Llama3 Instruct (left panel). This is achieved through our new alignment techniques (right panel) that enhance each of the four key components in alignment: prompt synthesis (§3.1), response synthesis (§3.2), judge (§3.3), and model training (§3.4).
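The abstract describes a four-component pipeline (prompt synthesis, response synthesis, judging, training) run as a self-bootstrapped loop for 10 rounds. The Python sketch below illustrates one plausible shape of that loop under stated assumptions: every function body is a hypothetical stub (the names synthesize_prompts, sample_responses, judge_score, and train_on_pairs are invented here, not ALMA's API), and the best-vs-worst preference pairing is an illustration rather than a detail confirmed by the paper.

import random

def synthesize_prompts(seed_prompts, n):
    # Prompt synthesis (§3.1): ALMA few-shot prompts the base model
    # with seed examples to generate diverse new prompts. Stubbed
    # here with string templates.
    return [f"synthetic prompt #{i} seeded by {random.choice(seed_prompts)!r}"
            for i in range(n)]

def sample_responses(checkpoints, prompt, k):
    # Response synthesis (§3.2): sample k responses per prompt from
    # each checkpoint in the pool to increase response diversity.
    return [f"{ckpt} response {j} to {prompt!r}"
            for ckpt in checkpoints for j in range(k)]

def judge_score(response, n_samples=4):
    # Judge (§3.3): aggregate several sampled judge scores to reduce
    # noise (score aggregation); random numbers stand in for a model.
    return sum(random.random() for _ in range(n_samples)) / n_samples

def train_on_pairs(model, pairs):
    # Model training (§3.4): fine-tune on the synthesized preference
    # data; here we only return a new checkpoint identifier.
    return f"{model}>r"

def alma_round(model, checkpoints, seed_prompts, n_prompts=8, k=2):
    pairs = []
    for prompt in synthesize_prompts(seed_prompts, n_prompts):
        ranked = sorted(sample_responses(checkpoints, prompt, k),
                        key=judge_score, reverse=True)
        # Assumed pairing strategy: best vs. worst response per prompt.
        pairs.append((prompt, ranked[0], ranked[-1]))
    return train_on_pairs(model, pairs)

# Self-bootstrapped multiround recipe: each round's model joins the
# checkpoint pool, and the loop runs for the 10 rounds reported above.
model, checkpoints, seeds = "llama3-base", ["llama3-base"], ["Explain RLHF."]
for _ in range(10):
    model = alma_round(model, checkpoints, seeds)
    checkpoints.append(model)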
arXiv.org Artificial Intelligence
Dec-5-2024