Multi-modal Graph Fusion for Inductive Disease Classification in Incomplete Datasets
Gerome Vivar, Hendrik Burwinkel, Anees Kazi, Andreas Zwergal, Nassir Navab, Seyed-Ahmad Ahmadi
Clinical diagnostic decision making and population-based studies often rely on multi-modal data, which is typically noisy and incomplete. Recently, several works have proposed geometric deep learning approaches for disease classification that model patients as nodes in a graph and apply graph signal processing to multi-modal features. Many of these approaches are limited by the assumption of modality- and feature-completeness, and by transductive inference, which requires re-training the entire model for each new test sample. In this work, we propose a novel inductive graph-based approach that generalizes to out-of-sample patients even when entire modalities are missing for individual patients. Our multi-modal graph fusion is trained end-to-end for node-level classification. We demonstrate the fundamental working principle of the method on a simplified MNIST toy dataset. In experiments on medical data, our method outperforms a single static graph approach in multi-modal disease classification.
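The abstract does not specify the architecture in detail. As an illustration only, the following is a minimal PyTorch sketch of one plausible reading of inductive multi-modal graph fusion: a per-modality neighbour-aggregation branch, masked fusion over the modalities observed for each patient, and a shared node-level classifier. All class names, the mean-aggregation scheme, and the masking strategy are assumptions made for this sketch, not the authors' actual model.

```python
import torch
import torch.nn as nn


def mean_aggregate(x, adj):
    """Inductive mean aggregation of neighbour features.

    x:   (N, F) node features for one modality
    adj: (N, N) binary adjacency; can be built on the fly for unseen patients,
         which is what makes the scheme inductive rather than transductive.
    """
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    return adj @ x / deg


class ModalityBranch(nn.Module):
    """One graph branch per modality; its output can be masked out when the
    modality is missing for a patient."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        return torch.relu(self.lin(mean_aggregate(x, adj)))


class MultiModalGraphFusion(nn.Module):
    """Hypothetical end-to-end fusion of per-modality graph branches followed
    by node-level (per-patient) classification."""

    def __init__(self, in_dims, hid_dim, n_classes):
        super().__init__()
        self.branches = nn.ModuleList(ModalityBranch(d, hid_dim) for d in in_dims)
        self.classifier = nn.Linear(hid_dim, n_classes)

    def forward(self, xs, adjs, masks):
        # xs:    list of (N, F_m) feature matrices, one per modality
        # adjs:  list of (N, N) adjacency matrices, one per modality
        # masks: (N, M) indicator of which modalities are observed per patient
        fused = 0.0
        for m, (branch, x, adj) in enumerate(zip(self.branches, xs, adjs)):
            fused = fused + masks[:, m:m + 1] * branch(x, adj)
        fused = fused / masks.sum(dim=1, keepdim=True).clamp(min=1.0)
        return self.classifier(fused)


# Toy usage: 6 patients, two modalities with 4 and 3 features each.
if __name__ == "__main__":
    N = 6
    xs = [torch.randn(N, 4), torch.randn(N, 3)]
    adjs = [torch.eye(N) for _ in range(2)]   # placeholder patient graphs
    masks = torch.ones(N, 2)
    masks[0, 1] = 0.0                         # patient 0 misses modality 2
    model = MultiModalGraphFusion([4, 3], hid_dim=16, n_classes=3)
    print(model(xs, adjs, masks).shape)       # torch.Size([6, 3])
```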
May-8-2019