Upcycling Instruction Tuning from Dense to Mixture-of-Experts via Parameter Merging
Hui, Tingfeng, Zhang, Zhenyu, Wang, Shuohuan, Sun, Yu, Wu, Hua, Su, Sen
Mixture-of-Experts (MoE) architectures shine in large language models (LLMs) and demonstrate outstanding performance across a wide range of natural language processing tasks. However, existing methods for transforming LLMs from dense to MoE face significant data requirements and typically rely on large-scale post-training. In this paper, we propose Upcycling Instruction Tuning (UpIT), a data-efficient approach for tuning a dense pre-trained model into a MoE instruction model. Specifically, we first point out that intermediate checkpoints saved during instruction tuning of the dense model are naturally suitable as specialized experts, and then propose an expert expansion stage to obtain models with a flexible number of experts, where a genetic algorithm and parameter merging are introduced to ensure sufficient diversity among the newly extended experts. To ensure that each specialized expert in the MoE model works as expected, we select a small amount of seed data that each expert excels at to pre-optimize the router. Extensive experiments with various data scales and upcycling settings demonstrate the outstanding performance and data efficiency of UpIT, as well as stable improvements when scaling experts or data. Further analysis reveals the importance of ensuring expert diversity in upcycling.

Large Language Models (LLMs) have demonstrated remarkable performance on various NLP tasks and are gradually becoming part of our daily lives through chatbot applications such as ChatGPT and Copilot (Ouyang et al., 2022; Touvron et al., 2023; OpenAI, 2024). As LLMs become increasingly prevalent, the high computational cost of the traditional dense architecture in the inference phase poses significant obstacles to downstream deployment. How to improve model performance without proportionally increasing computing resources has become a hot topic in the field (Muennighoff et al., 2024; Xue et al., 2024).
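The expert expansion stage described in the abstract can be pictured as a simple evolutionary loop over checkpoint parameters. The sketch below assumes each intermediate checkpoint is available as a PyTorch state_dict of the dense model's FFN weights; the function names, the linear-interpolation crossover, and the parameter-distance diversity score are illustrative assumptions, not the paper's exact procedure.

```python
# A minimal sketch of expert expansion via parameter merging, assuming each
# intermediate checkpoint is a PyTorch state_dict of FFN weights. The
# selection/crossover details here are illustrative only.
import random
import torch

def merge_checkpoints(parent_a, parent_b, alpha):
    """Linearly interpolate two checkpoints to create a new candidate expert."""
    return {name: alpha * parent_a[name] + (1.0 - alpha) * parent_b[name]
            for name in parent_a}

def diversity(candidate, pool):
    """Score a candidate by its average parameter distance to existing experts."""
    dists = [sum(torch.norm(candidate[n] - expert[n]).item() for n in candidate)
             for expert in pool]
    return sum(dists) / len(dists)

def expand_experts(checkpoints, target_num, trials_per_slot=8):
    """Grow the expert pool from saved checkpoints up to target_num experts."""
    experts = list(checkpoints)
    while len(experts) < target_num:
        candidates = []
        for _ in range(trials_per_slot):
            parent_a, parent_b = random.sample(experts, 2)   # selection
            alpha = random.uniform(0.2, 0.8)                 # crossover weight
            candidates.append(merge_checkpoints(parent_a, parent_b, alpha))
        # Keep the candidate that differs most from the current pool,
        # encouraging diversity among the newly extended experts.
        experts.append(max(candidates, key=lambda c: diversity(c, experts)))
    return experts
```

In this reading, the saved dense checkpoints seed the pool, and merging keeps adding experts until the target expert count is reached, with the diversity score standing in for whatever fitness criterion the genetic algorithm actually uses.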
arXiv.org Artificial Intelligence
Oct-2-2024