MiniCPM4: Ultra-Efficient LLMs on End Devices
MiniCPM Team: Chaojun Xiao, Yuxuan Li, Xu Han, Yuzhuo Bai, Jie Cai, Haotian Chen, Wentong Chen, Xin Cong, Ganqu Cui, Ning Ding, Shengda Fan, Yewei Fang, Zixuan Fu, Wenyu Guan, Yitong Guan, Junshao Guo, Yufeng Han, Bingxiang He, Yuxiang Huang, Baoxi Ji, Cunliang Kong, Qiuzuo Li, Siyuan Li, Wenhao Li, Xin Li, Yanghao Li, Yishan Li, Zhen Li, Dan Liu, Biyuan Lin, Yankai Lin, Xiang Long, Quanyu Lu, Yaxi Lu, Peiyan Luo, Hongya Lyu, Litu Ou, Yinxu Pan, Lushi Pu, Zekai Qu, Qundong Shi, Zijun Song, Jiayuan Su, Zhou Su, Ao Sun, Xianghui Sun, Peijun Tang, Fangzheng Wang, Feng Wang, Shuo Wang, Yudong Wang, Zheng Wang, Yesai Wu, Zhenyu Xiao, Jie Xie, Zihao Xie, Xiaoyue Xu, Yukun Yan, Jiarui Yuan, Jinqian Zhang, Kaihuo Zhang, Lei Zhang, Linyue Zhang, Xueren Zhang, Yudi Zhang, Hengyu Zhao, Weilin Zhao, Weilun Zhao, Yuanqian Zhao, Zhi Zheng, Chuyue Zhou, Ge Zhou, Jie Zhou, Wei Zhou, Yanghao Zhou, Zihan Zhou, Zixuan Zhou, Zhiyuan Liu, Guoyang Zeng, Chao Jia, Dahai Li, Maosong Sun
arXiv.org Artificial Intelligence
This paper introduces MiniCPM4, a highly efficient large language model (LLM) designed explicitly for end-side devices. We achieve this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems. In terms of model architecture, we propose InfLLM v2, a trainable sparse attention mechanism that accelerates both the prefilling and decoding phases of long-context processing. Regarding training data, we propose UltraClean, an efficient and accurate pre-training data filtering and generation strategy, and UltraChat v2, a comprehensive supervised fine-tuning dataset. These datasets enable satisfactory model performance to be achieved with just 8 trillion training tokens. Regarding training algorithms, we propose ModelTunnel v2 for efficient pre-training strategy search, and improve existing post-training methods by introducing chunk-wise rollout for load-balanced reinforcement learning and BitCPM, a data-efficient ternary LLM. Regarding inference systems, we propose CPM.cu, which integrates sparse attention, model quantization, and speculative sampling to achieve efficient prefilling and decoding. To meet diverse on-device requirements, MiniCPM4 is available in two versions, with 0.5B and 8B parameters, respectively. Furthermore, we construct a hybrid reasoning model, MiniCPM4.1, which can be used in both deep reasoning mode and non-reasoning mode. Evaluation results demonstrate that MiniCPM4 and MiniCPM4.1 outperform similar-sized open-source models across benchmarks, with the 8B variants showing significant speed improvements on long-sequence understanding and generation.
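To make the abstract's mention of trainable block-level sparse attention concrete, the following is a minimal, hedged sketch of the general idea behind mechanisms such as InfLLM v2: each query attends only to the top-scoring key blocks rather than the full context, cutting the cost of prefilling and decoding. The block summaries, selection rule, and all function names here are illustrative simplifications, not the paper's actual algorithm.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def block_sparse_attention(q, K, V, block_size=4, top_k=2):
    """Attend a single query only to the top_k key blocks whose
    mean-pooled summary scores highest against the query.
    Illustrative sketch only, NOT the InfLLM v2 algorithm."""
    n, d = K.shape
    n_blocks = n // block_size
    # Block summaries: mean-pool the keys within each block.
    block_means = K[: n_blocks * block_size].reshape(n_blocks, block_size, d).mean(axis=1)
    # Score each block summary against the query; keep the top_k blocks.
    block_scores = block_means @ q
    chosen = np.sort(np.argsort(block_scores)[-top_k:])
    # Gather the token indices belonging to the selected blocks.
    idx = np.concatenate([np.arange(b * block_size, (b + 1) * block_size) for b in chosen])
    # Dense attention restricted to the selected tokens only.
    attn = softmax((K[idx] @ q) / np.sqrt(d))
    return attn @ V[idx]

rng = np.random.default_rng(0)
q = rng.normal(size=8)
K = rng.normal(size=(16, 8))
V = rng.normal(size=(16, 8))
out = block_sparse_attention(q, K, V)
print(out.shape)  # (8,)
```

With 16 keys, a block size of 4, and `top_k=2`, only 8 of the 16 tokens enter the attention computation; making the block-selection scorer a learned function (rather than mean pooling) is what makes such sparse attention trainable end to end.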
Sep-5-2025