Uni-MoE: Scaling Unified Multimodal LLMs with Mixture of Experts

Yunxin Li, Shenyuan Jiang, Baotian Hu, Longyue Wang, Wanqi Zhong, Wenhan Luo, Lin Ma, Min Zhang

arXiv.org Artificial Intelligence 

Abstract--Recent advancements in Multimodal Large Language Models (MLLMs) underscore the significance of scalable models and data to boost performance, yet this often incurs substantial computational costs. Although the Mixture of Experts (MoE) architecture has been employed to efficiently scale large language and image-text models, these efforts typically involve fewer experts and limited modalities. To address this, our work presents the pioneering attempt to develop a unified MLLM with the MoE architecture, named Uni-MoE, which can handle a wide array of modalities. Specifically, it features modality-specific encoders with connectors for a unified multimodal representation. We also implement a sparse MoE architecture within the LLMs to enable efficient training and inference through modality-level data parallelism and expert-level model parallelism. To enhance multi-expert collaboration and generalization, we present a progressive training strategy: 1) cross-modality alignment using various connectors with different cross-modality data, 2) training modality-specific experts with cross-modality instruction data to activate experts' preferences, and 3) tuning the Uni-MoE framework utilizing Low-Rank Adaptation (LoRA) on mixed multimodal instruction data. We evaluate the instruction-tuned Uni-MoE on a comprehensive set of multimodal datasets. The extensive experimental results demonstrate Uni-MoE's principal advantage of significantly reducing performance bias in handling mixed multimodal datasets, alongside improved multi-expert collaboration and generalization.

Additionally, there is a growing trend [6], [7], [8], [9] toward building a unified MLLM that can comprehend more modalities such as video, audio, and speech, moving beyond the traditional image-text paradigm. To catch up with superior closed-source MLLMs like GPT-4V [10] and Gemini [11], the main efforts of the open-source community include enlarging model sizes [12], as seen in the expansion of vision foundation models to 6 billion parameters [12] and their integration with 70B Large Language Models (LLMs) [13], [14], and enhancing instruction tuning with diverse multimodal datasets [3], [15], [16]. These developments underscore the increasing ability of MLLMs to process and reason across multiple modalities, highlighting the importance of both model scalability and the expansion of multimodal instructional data.
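To make the sparse MoE idea above concrete, the following is a minimal PyTorch sketch of a top-k routed mixture-of-experts feed-forward layer of the kind placed inside an LLM block. The expert count, hidden sizes, top-k value, and the class name SparseMoEFFN are illustrative assumptions for this sketch, not the paper's actual configuration or code.

```python
# Minimal sketch of a sparse top-k MoE feed-forward layer (illustrative only;
# dimensions, expert count, and top-k are assumptions, not Uni-MoE's settings).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoEFFN(nn.Module):
    def __init__(self, d_model=1024, d_ff=4096, num_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router scores each token against all experts.
        self.router = nn.Linear(d_model, num_experts, bias=False)
        # Each expert is an independent feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        logits = self.router(x)                           # (B, S, num_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)              # normalize over selected experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            idx = indices[..., slot]                      # chosen expert per token
            w = weights[..., slot].unsqueeze(-1)          # its routing weight
            for e, expert in enumerate(self.experts):
                mask = (idx == e).unsqueeze(-1)           # tokens routed to expert e
                if mask.any():
                    out = out + mask * w * expert(x)
        return out


# Usage: route a dummy batch of token embeddings through the sparse MoE layer.
tokens = torch.randn(2, 16, 1024)
print(SparseMoEFFN()(tokens).shape)  # torch.Size([2, 16, 1024])
```

In the paper's setting the experts are additionally trained on modality-specific instruction data so that the router activates modality-level preferences; the sketch above only shows the generic token-level top-k routing mechanism.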
