Megrez2 Technical Report

Boxun Li, Yadong Li, Zhiyuan Li, Congyi Liu, Weilin Liu, Guowei Niu, Zheyue Tan, Haiyang Xu, Zhuyu Yao, Tao Yuan, Dong Zhou, Yueqing Zhuang, Bo Zhao, Guohao Dai, Yu Wang

We present Megrez2, a lightweight, high-performance language model architecture optimized for device-native deployment. Megrez2 introduces a cross-layer expert sharing mechanism that significantly reduces the total parameter count by reusing expert modules across adjacent transformer layers while preserving most of the model's capacity. It also incorporates pre-gated routing, which enables memory-efficient expert loading and faster inference. We introduce Megrez2-Preview, the first instantiation of this architecture, pre-trained on a 5-trillion-token corpus and further enhanced through supervised fine-tuning and reinforcement learning with verifiable rewards. With only 3B activated and 7.5B stored parameters, Megrez2-Preview delivers performance competitive with or superior to that of larger models across a wide range of tasks, including language understanding, instruction following, mathematical reasoning, and code generation. These results highlight the effectiveness of the Megrez2 architecture in balancing accuracy, efficiency, and deployability, making it a strong candidate for real-world, resource-constrained applications.
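To make the two mechanisms concrete, the sketch below gives one plausible reading in PyTorch: a pool of expert feed-forward networks shared by a group of adjacent layers, each layer keeping its own lightweight router, with routing decisions computed before the expert computation runs so the selected expert weights could be prefetched. This is a minimal illustration under those assumptions, not the released Megrez2 implementation; all names (`ExpertFFN`, `SharedExpertGroup`, `route`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExpertFFN(nn.Module):
    """One feed-forward expert (a plain two-layer MLP for brevity)."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.down = nn.Linear(d_ff, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(F.gelu(self.up(x)))


class SharedExpertGroup(nn.Module):
    """Hypothetical pool of experts shared by `n_layers` adjacent layers.

    Every layer in the group has its own router but indexes into the
    same expert parameters, so stored parameters scale with the pool
    size rather than with the number of layers.
    """

    def __init__(self, d_model: int, d_ff: int, n_experts: int,
                 n_layers: int, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            ExpertFFN(d_model, d_ff) for _ in range(n_experts))
        self.routers = nn.ModuleList(
            nn.Linear(d_model, n_experts, bias=False)
            for _ in range(n_layers))
        self.top_k = top_k

    def route(self, layer_idx: int, x: torch.Tensor):
        """Pre-gated routing (illustrative): gates are computed before the
        expert computation, so a runtime could start loading the selected
        experts (e.g. from flash into RAM) ahead of time."""
        logits = self.routers[layer_idx](x)             # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)  # top-k per token
        return F.softmax(weights, dim=-1), idx

    def forward(self, x: torch.Tensor, weights: torch.Tensor,
                idx: torch.Tensor) -> torch.Tensor:
        """Mix the pre-selected experts' outputs for one layer."""
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in idx[:, k].unique().tolist():
                mask = idx[:, k] == e
                out[mask] += weights[mask, k, None] * self.experts[e](x[mask])
        return out


# Toy usage: two adjacent layers share one four-expert pool; each
# layer's gates are known before its expert computation executes.
group = SharedExpertGroup(d_model=64, d_ff=256, n_experts=4, n_layers=2)
tokens = torch.randn(8, 64)
for layer in range(2):
    w, idx = group.route(layer, tokens)     # gates computed up front
    tokens = tokens + group(tokens, w, idx)  # residual MoE step
```

Under this reading, the stored parameter count is dominated by one expert pool per layer group instead of one per layer, which is consistent with the abstract's 3B-activated / 7.5B-stored figures; the exact sharing granularity and prefetch schedule used by Megrez2 are detailed in the body of the report.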