Uni-MoE-2.0-Omni: Scaling Language-Centric Omnimodal Large Model with Advanced MoE, Training and Data
Li, Yunxin, Chen, Xinyu, Jiang, Shenyuan, Shi, Haoyuan, Liu, Zhenyu, Zhang, Xuanyu, Deng, Nanhao, Xu, Zhenran, Ma, Yicheng, Zhang, Meishan, Hu, Baotian, Zhang, Min
–arXiv.org Artificial Intelligence
We present Uni-MoE 2.0 from the Lychee family. As a fully open-source omnimodal large model (OLM), it substantially advances Lychee's Uni-MoE series in language-centric multimodal understanding, reasoning, and generation. Building on a dense LLM, we construct Uni-MoE-2.0-Omni from scratch through three core contributions: a dynamic-capacity Mixture-of-Experts (MoE) design, a progressive training strategy enhanced with an iterative reinforcement strategy, and a carefully curated multimodal data-matching technique. The model is capable of omnimodal understanding as well as generating images, text, and speech. Architecturally, our new MoE framework balances computational efficiency and capability for 10 cross-modal inputs using shared, routed, and null experts, while our Omni-Modality 3D RoPE ensures spatio-temporal cross-modal alignment in the self-attention layer. For training, following cross-modal pretraining, we apply a progressive supervised fine-tuning strategy that activates modality-specific experts, enhanced by balanced data composition and an iterative GSPO-DPO method that stabilises RL training and improves reasoning. Data-wise, the base model, trained on approximately 75B tokens of open-source multimodal data, is equipped with special speech- and image-generation tokens, allowing it to learn these generative tasks by conditioning its outputs on linguistic cues. Extensive evaluation across 85 benchmarks shows that our model achieves SOTA or highly competitive performance against leading OLMs, surpassing Qwen2.5-Omni (trained on 1.2T tokens) on over 50 of 76 benchmarks. Key strengths include video understanding (+7% avg. over 8 benchmarks), omnimodality understanding (+7% avg. over 4), and audiovisual reasoning (+4%). It also advances long-form speech processing (reducing WER by 4.2%) and leads in low-level image processing and controllable generation across 5 metrics.
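The abstract's shared/routed/null expert split can be illustrated with a minimal sketch. This is a hypothetical PyTorch implementation, not the paper's released code: shared experts process every token, a router selects top-k slots per token from routed experts plus "null" slots, and a token whose top-k choice lands on a null slot simply receives no routed contribution, giving the dynamic per-token capacity the abstract describes. All class and parameter names here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def ffn(d_model):
    """A standard feed-forward expert block."""
    return nn.Sequential(
        nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
    )


class DynamicCapacityMoE(nn.Module):
    """Sketch of an MoE layer with shared, routed, and null experts.

    Shared experts run on every token; routed experts are chosen per token
    by a learned router; null slots are parameter-free no-ops, so routing a
    token to a null slot spends less compute on it (dynamic capacity).
    """

    def __init__(self, d_model, n_routed=8, n_null=2, n_shared=1, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.n_routed = n_routed
        self.shared = nn.ModuleList(ffn(d_model) for _ in range(n_shared))
        self.routed = nn.ModuleList(ffn(d_model) for _ in range(n_routed))
        # Router scores real routed experts plus null slots in one softmax.
        self.router = nn.Linear(d_model, n_routed + n_null)

    def forward(self, x):  # x: (batch, seq, d_model)
        out = sum(e(x) for e in self.shared)               # always-on path
        probs = F.softmax(self.router(x), dim=-1)          # (B, S, routed + null)
        topv, topi = probs.topk(self.top_k, dim=-1)        # per-token top-k slots
        for k in range(self.top_k):
            idx, w = topi[..., k], topv[..., k : k + 1]
            # Slots with id >= n_routed are null experts: they contribute nothing.
            for e_id in range(self.n_routed):
                mask = (idx == e_id).unsqueeze(-1)
                if mask.any():
                    out = out + mask * w * self.routed[e_id](x)
        return out
```

In a production layer the routed experts would be dispatched sparsely (only on their assigned tokens) rather than evaluated on the full batch as this dense sketch does for clarity.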
Nov-25-2025