Flexible Multimodal Neuroimaging Fusion for Alzheimer's Disease Progression Prediction

Burns, Benjamin, Xue, Yuan, Scharre, Douglas W., Ning, Xia

arXiv.org Artificial Intelligence

Alzheimer's disease (AD) is a progressive neurodegenerative disease with high inter-patient variance in the rate of cognitive decline. AD progression prediction aims to forecast patient cognitive decline and benefits from incorporating multiple neuroimaging modalities. However, existing multimodal models fail to make accurate predictions when many modalities are missing during inference, as is often the case in clinical settings. To increase multimodal model flexibility under high modality missingness, we introduce PerM-MoE, a novel sparse mixture-of-experts method that uses independent routers for each modality in place of the conventional single router. Using T1-weighted MRI, FLAIR, amyloid beta PET, and tau PET neuroimaging data from the Alzheimer's Disease Neuroimaging Initiative (ADNI), we evaluate PerM-MoE, state-of-the-art Flex-MoE, and unimodal neuroimaging models on predicting two-year change in Clinical Dementia Rating-Sum of Boxes (CDR-SB) scores under varying levels of modality missingness. PerM-MoE outperforms the state of the art in most variations of modality missingness and makes more effective use of experts than Flex-MoE.

Keywords: Alzheimer's disease · Neuroimaging · Multimodal fusion · Mixture of experts · Disease progression prediction
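The core idea — one independent router per modality instead of a single shared router — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the dimensions, the linear experts, and the `per_modality_moe` function are illustrative assumptions. The key property shown is that a missing modality's router is simply never invoked, so inference degrades gracefully under modality missingness.

```python
import numpy as np

rng = np.random.default_rng(0)

D, N_EXPERTS = 8, 4  # feature dim and expert count (hypothetical)
MODALITIES = ["mri", "flair", "abeta_pet", "tau_pet"]

# Shared pool of experts, shown here as simple linear maps.
experts = [rng.standard_normal((D, D)) for _ in range(N_EXPERTS)]

# One independent routing matrix PER modality, replacing the
# conventional single router over all tokens.
routers = {m: rng.standard_normal((D, N_EXPERTS)) for m in MODALITIES}

def per_modality_moe(tokens: dict) -> dict:
    """Route each modality's token through its own router.

    `tokens` maps modality name -> feature vector. Missing modalities
    are simply absent, so their routers are never consulted.
    """
    outputs = {}
    for name, x in tokens.items():
        logits = x @ routers[name]
        # Top-1 sparse gating: dispatch the token to its best expert.
        top = int(np.argmax(logits))
        gate = np.exp(logits[top]) / np.exp(logits).sum()
        outputs[name] = gate * (x @ experts[top])
    return outputs

# Inference with two of the four modalities missing still works:
sample = {"mri": rng.standard_normal(D), "tau_pet": rng.standard_normal(D)}
fused = per_modality_moe(sample)
print(sorted(fused))  # ['mri', 'tau_pet']
```

Because routing decisions are made per modality, each modality can develop its own expert-usage pattern, rather than all tokens competing through one shared gate.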


Flex-MoE: Modeling Arbitrary Modality Combination via the Flexible Mixture-of-Experts

Yun, Sukwon, Choi, Inyoung, Peng, Jie, Wu, Yangfan, Bao, Jingxuan, Zhang, Qiyiwen, Xin, Jiayi, Long, Qi, Chen, Tianlong

arXiv.org Artificial Intelligence

Multimodal learning has gained increasing importance across various fields, offering the ability to integrate data from diverse sources such as images, text, and personalized records, which are frequently observed in medical domains. However, in scenarios where some modalities are missing, many existing frameworks struggle to accommodate arbitrary modality combinations, often relying heavily on a single modality or complete data. This oversight of potential modality combinations limits their applicability in real-world situations. To address this challenge, we propose Flex-MoE (Flexible Mixture-of-Experts), a new framework designed to flexibly incorporate arbitrary modality combinations while maintaining robustness to missing data. The core idea of Flex-MoE is to first address missing modalities using a new missing modality bank that integrates observed modality combinations with the corresponding missing ones. This is followed by a uniquely designed Sparse MoE framework. Specifically, Flex-MoE first trains experts using samples with all modalities to inject generalized knowledge through the generalized router ($\mathcal{G}$-Router). The $\mathcal{S}$-Router then specializes in handling fewer modality combinations by assigning the top-1 gate to the expert corresponding to the observed modality combination. We evaluate Flex-MoE on the ADNI dataset, which encompasses four modalities in the Alzheimer's Disease domain, as well as on the MIMIC-IV dataset. The results demonstrate the effectiveness of Flex-MoE, highlighting its ability to model arbitrary modality combinations in diverse missing modality scenarios. Code is available at https://github.com/UNITES-Lab/flex-moe.
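The two Flex-MoE ingredients described above — a missing-modality bank keyed by observed combinations, and an $\mathcal{S}$-Router that assigns the top-1 gate to the expert matching the observed combination — can be sketched as follows. This is a simplified illustration under stated assumptions, not the released implementation (see the linked repository for that): the bank entries are shown as lazily created random vectors standing in for learned embeddings, the mean-pooling fusion is a toy stand-in, and `fill_missing` / `s_route` are hypothetical names.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)
D = 8  # feature dim (hypothetical)
MODALITIES = ["mri", "flair", "abeta_pet", "tau_pet"]

# Missing-modality bank: one embedding per (missing modality,
# observed combination) pair, created lazily here in place of
# learned parameters.
bank = {}

def fill_missing(sample: dict) -> dict:
    """Complete a sample by drawing bank embeddings for absent modalities."""
    observed = frozenset(sample)
    full = dict(sample)
    for m in MODALITIES:
        if m not in full:
            key = (m, observed)
            if key not in bank:
                bank[key] = rng.standard_normal(D)
            full[m] = bank[key]
    return full

# One expert per non-empty modality combination (2^4 - 1 = 15 here).
experts = {}
for r in range(1, len(MODALITIES) + 1):
    for combo in combinations(MODALITIES, r):
        experts[frozenset(combo)] = rng.standard_normal((D, D))

def s_route(sample: dict) -> np.ndarray:
    """S-Router style dispatch: top-1 gate goes to the expert
    indexed by the observed modality combination."""
    observed = frozenset(sample)
    full = fill_missing(sample)
    x = np.mean([full[m] for m in MODALITIES], axis=0)  # toy fusion
    return x @ experts[observed]

# A sample observed with only MRI and tau PET is routed to the
# expert specialized for exactly that combination.
sample = {"mri": rng.standard_normal(D), "tau_pet": rng.standard_normal(D)}
out = s_route(sample)
print(out.shape)  # (8,)
```

The design choice worth noting is the indexing scheme: because experts are keyed by modality combination, specialization is deterministic for partially observed samples, while the $\mathcal{G}$-Router (not sketched here) handles fully observed samples with learned, soft routing.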