MobileLLM-Pro Technical Report

Patrick Huber, Ernie Chang, Wei Wen, Igor Fedorov, Tarek Elgamal, Hanxian Huang, Naveen Suda, Chinnadhurai Sankar, Vish Vogeti, Yanghan Wang, Alex Gladkov, Kai Sheng Tai, Abdelrahman Elogeel, Tarek Hefny, Vikas Chandra, Ahmed Aly, Anuj Kumar, Raghuraman Krishnamoorthi, Adithya Sagar

arXiv.org Artificial Intelligence 

Efficient on-device language models with around 1 billion parameters are essential for powering low-latency AI applications on mobile and wearable devices. However, achieving strong performance in this model class while supporting long context windows and practical deployment remains a significant challenge. We introduce MobileLLM-Pro, a 1-billion-parameter language model optimized for on-device deployment. MobileLLM-Pro achieves state-of-the-art results across 11 standard benchmarks, significantly outperforming both Gemma 3-1B and Llama 3.2-1B, while supporting context windows of up to 128,000 tokens and showing only minor performance regressions at 4-bit quantization. These improvements are enabled by four core innovations: (1) implicit positional distillation, a novel technique that effectively instills long-context capabilities through knowledge distillation; (2) a specialist model merging framework that fuses multiple domain experts into a compact model without parameter growth; (3) simulation-driven data mixing using utility estimation; and (4) 4-bit quantization-aware training with self-distillation. We release our model weights and code to support future research in efficient on-device language models.
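The abstract names the techniques but does not describe their implementation, so the sketches below only illustrate the general ideas in their standard form; every function name, hyperparameter, and formula is an illustrative assumption rather than the authors' method.

A minimal sketch of specialist model merging as plain weighted parameter averaging: several domain-expert checkpoints that share one architecture are fused into a single set of weights, so the merged model has no more parameters than any individual expert. How MobileLLM-Pro actually selects or weights experts is not specified in the abstract.

```python
import torch

def merge_experts(expert_state_dicts, weights):
    """Fuse same-architecture expert checkpoints by weighted parameter averaging.

    expert_state_dicts: list of state dicts (one per domain expert).
    weights: list of mixing coefficients that sum to 1.
    """
    assert abs(sum(weights) - 1.0) < 1e-6, "mixing weights should sum to 1"
    merged = {}
    for name in expert_state_dicts[0]:
        merged[name] = sum(
            w * sd[name].float() for w, sd in zip(weights, expert_state_dicts)
        )
    return merged
```

A minimal sketch of 4-bit quantization-aware training with self-distillation, assuming the common recipe of a fake-quantized student trained against a full-precision teacher via a temperature-scaled KL loss; the straight-through estimator lets gradients flow through the rounding step.

```python
import torch
import torch.nn.functional as F

def fake_quantize(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Symmetric per-tensor fake quantization with a straight-through estimator."""
    qmax = 2 ** (bits - 1) - 1                      # 7 for signed 4-bit
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_q = torch.round(w / scale).clamp(-qmax - 1, qmax) * scale
    # Forward uses quantized weights; backward passes gradients through unchanged.
    return w + (w_q - w).detach()

def self_distillation_loss(student_logits, teacher_logits, temperature: float = 2.0):
    """KL divergence between a full-precision teacher and the fake-quantized student."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)
```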
