LongCat-Flash-Omni Technical Report
Meituan LongCat Team: Bairui Wang, Bayan, Bin Xiao, Bo Zhang, Bolin Rong, Borun Chen, Chang Wan, Chao Zhang, Chen Huang, Chen Chen, Chen Chen, Chengxu Yang, Chengzuo Yang, Cong Han, Dandan Peng, Delian Ruan, Detai Xin, Disong Wang, Dongchao Yang, Fanfan Liu, Fengjiao Chen, Fengyu Yang, Gan Dong, Gang Huang, Gang Xu, Guanglu Wan, Guoqiang Tan, Guoqiao Yu, Haibo Qiu, Hao Lu, Hongbo Liu, Hongyu Xiang, Jiaheng Wu, Jian Yang, Jiaxing Liu, Jing Huang, Jingang Wang, Jinrui Ding, Juchao Jiang, Jun Kuang, Jun Wang, Junhui Mei, Ke Ding, Kefeng Zhang, Lei Chen, Liang Shi, Limeng Qiao, Liming Zheng, Lin Ma, Liuyang Guo, Liya Ma, Luying Sun, Man Gao, Mengshen Zhu, Miao Cao, Minliang Lin, Nuo Xu, Peng Shi, Qi Zhang, Qian Fang, Qian Wang, Qian Yang, Quanxiu Wang, Rongxiang Weng, Rongxin Guo, Ruoxuan Liang, Senbin Yang, Shanbo Xu, Shanglin Lei, Shengze Ye, Shimin Chen, Shuaiqi Chen, Shujie Hu, Shuo Li, Siqi Yang, Siyu Xu, Siyu Ren, Song Li, Songxiang Liu, Tianhao Bai, Tianye Dai, Wei Hong, Wei Wang, Weixiao Zhao, Wengang Cao, Wenlong Zhu, Wenlong He, Xi Su, Xi Nan, Xiaohan Zhao, Xiaohao Wang, Xiaoyu Zhao, Xiaoyu Wang, Xiaoyu Li, Xin Pan, Xin Chen, Xiusong Sun, Xu Xiang, Xudong Xing, Xuezhi Cao, Xunliang Cai, Yang Yang, Yanli Tan, Yao Yao, Yerui Sun, Yi Chen, Yifan Lu, Yin Gong, Yining Zhang, Yitian Chen, Yiyang Gan, Yuchen Tang, Yuchen Xie, Yueqian Wang, Yuewen Zheng, Yufei Zhang, Yufeng Zhong, Yulei Qian, Yuqi Peng, Yuqian Li, Yuwei Jiang, Zeyang Hu, Zheng Zhang, Zhengkun Tian, Zhiqing Hong, Zhixiong Zeng, Zhuqi Mi, Ziran Li, Ziwen Wang, Ziyi Zhao, Ziyuan Zhuang, Zizhe Zhao
arXiv.org Artificial Intelligence
We introduce LongCat-Flash-Omni, a state-of-the-art open-source omni-modal model with 560 billion parameters, excelling at real-time audio-visual interaction. By adopting a curriculum-inspired progressive training strategy that transitions from simpler to increasingly complex modality sequence modeling tasks, LongCat-Flash-Omni attains comprehensive multimodal capabilities while maintaining strong unimodal capabilities. Building upon LongCat-Flash, which adopts a high-performance Shortcut-connected Mixture-of-Experts (MoE) architecture with zero-computation experts, LongCat-Flash-Omni integrates efficient multimodal perception and speech reconstruction modules. Despite its scale, with only 27B of its 560B parameters activated per token, LongCat-Flash-Omni achieves low-latency real-time audio-visual interaction. For training infrastructure, we developed a modality-decoupled parallelism scheme specifically designed to manage the data and model heterogeneity inherent in large-scale multimodal training; this scheme sustains over 90% of the throughput achieved by text-only training. Extensive evaluations show that LongCat-Flash-Omni achieves state-of-the-art performance on omni-modal benchmarks among open-source models. Furthermore, it delivers highly competitive results across a wide range of modality-specific tasks, including text, image, and video understanding, as well as audio understanding and generation. We provide a comprehensive overview of the model architecture design, training procedures, and data strategies, and open-source the model to foster future research and development in the community.
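The abstract's mention of "zero-computation experts" can be illustrated with a toy sketch: alongside the real FFN experts, the router is given extra identity slots, and any token routed to one of them passes through unchanged, spending no expert FLOPs. This is a minimal NumPy sketch under that assumption; the function and variable names are illustrative and are not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_layer(x, experts, router_w, top_k=2, n_zero=1):
    """Toy top-k MoE forward pass with `n_zero` 'zero-computation' experts.

    The last `n_zero` router slots are identity experts: a token routed
    there is copied through (scaled by its gate weight) with no FFN work.
    """
    n_tokens, d = x.shape
    n_real = len(experts)
    logits = x @ router_w                            # (n_tokens, n_real + n_zero)
    top = np.argsort(logits, axis=1)[:, -top_k:]     # top-k expert ids per token
    sel = np.take_along_axis(logits, top, axis=1)    # softmax over selected logits
    gate = np.exp(sel - sel.max(axis=1, keepdims=True))
    gate /= gate.sum(axis=1, keepdims=True)

    out = np.zeros_like(x)
    for t in range(n_tokens):
        for k in range(top_k):
            e = top[t, k]
            if e >= n_real:                          # zero-computation: identity
                out[t] += gate[t, k] * x[t]
            else:                                    # real FFN expert
                out[t] += gate[t, k] * experts[e](x[t])
    return out

d, n_real, n_zero = 8, 4, 2
experts = [lambda v, W=rng.standard_normal((d, d)) / np.sqrt(d): np.tanh(v @ W)
           for _ in range(n_real)]
router_w = rng.standard_normal((d, n_real + n_zero))
x = rng.standard_normal((5, d))
y = moe_layer(x, experts, router_w)
print(y.shape)  # (5, 8)
```

Because tokens sent to identity slots skip the expert matmuls entirely, the average activated compute per token drops below what the total parameter count suggests, which is the efficiency property the abstract attributes to this expert design.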
December 1, 2025