Mitigating Intra- and Inter-modal Forgetting in Continual Learning of Unified Multimodal Models
Xiwen Wei, Mustafa Munir, Radu Marculescu
arXiv.org Artificial Intelligence
Unified Multimodal Generative Models (UMGMs) unify visual understanding and image generation within a single autoregressive framework. However, their ability to continually learn new tasks is severely hindered by catastrophic forgetting, both within a modality (intra-modal) and across modalities (inter-modal). While intra-modal forgetting has been studied in prior continual learning (CL) work, inter-modal forgetting remains largely unexplored. In this paper, we identify and empirically validate this phenomenon in UMGMs and provide a theoretical explanation rooted in gradient conflict between modalities. To address both intra- and inter-modal forgetting, we propose Modality-Decoupled Experts (MoDE), a lightweight and scalable architecture that isolates modality-specific updates to mitigate gradient conflict and leverages knowledge distillation to prevent catastrophic forgetting and preserve pre-trained capabilities. Unlike previous CL methods, which remain modality-coupled and suffer from modality gradient conflict, MoDE explicitly decouples modalities to prevent interference. Experiments across diverse benchmarks demonstrate that MoDE significantly mitigates both inter- and intra-modal forgetting, outperforming prior CL baselines in unified multimodal generation settings. Code will be publicly available: https://github.com/Christina200/MoDE-official.git
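The core decoupling idea in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy, not the authors' implementation: the class name, shapes, and the use of a full per-modality delta (rather than a lightweight adapter) are assumptions. It shows why routing each modality's tokens through its own expert, on top of a frozen shared weight, prevents an update for one modality from perturbing the other's path:

```python
import numpy as np

class ModalityDecoupledLayer:
    """Toy sketch of modality-decoupled experts (assumed structure, not MoDE's code)."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        # frozen pre-trained projection, shared by all modalities
        self.shared = rng.standard_normal((dim, dim)) * 0.02
        # one trainable expert per modality (a full delta here, for brevity;
        # the paper describes these as lightweight)
        self.experts = {"text": np.zeros((dim, dim)), "image": np.zeros((dim, dim))}

    def forward(self, x, modality):
        # tokens of a given modality only pass through that modality's expert,
        # so its gradient cannot conflict with the other modality's expert
        return x @ (self.shared + self.experts[modality])

    def update_expert(self, modality, grad, lr=0.1):
        # modality-isolated update: only the selected expert changes
        self.experts[modality] -= lr * grad

layer = ModalityDecoupledLayer(dim=4)
x = np.ones((2, 4))
before = layer.forward(x, "image").copy()
layer.update_expert("text", grad=np.ones((4, 4)))  # continual-learning step on text
after = layer.forward(x, "image")
print(np.allclose(before, after))  # → True: the image path is untouched
```

A modality-coupled baseline would instead apply that gradient to `self.shared`, changing the image path as well; the isolation above is precisely what removes the inter-modal gradient conflict the abstract describes.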
Dec-4-2025