Multi Layered-Parallel Graph Convolutional Network (ML-PGCN) for Disease Prediction

Kazi, Anees, Albarqouni, Shadi, Kortuem, Karsten, Navab, Nassir

arXiv.org Machine Learning 

Structural data (age, gender, weight) from Electronic Health Records (EHRs) are exploited by Computer Aided Diagnosis Systems (CADS) as complementary information for disease prediction. Such systems, however, fail to weight the structural data according to its relevance to the disease at hand. A model is therefore required that evaluates the significance of each element of the structural data and performs the prediction task on a selectively weighted basis; such a scheme would enable more semantic automatic disease prediction.

Recently, multi-modal data has been processed with deep learning methods such as Convolutional Neural Networks (CNNs) [9], autoencoders [6], and a modified Restricted Boltzmann Machine [8]. These methods provide a richer, more discriminative feature space that helps exploit the global complementary information across the available modalities, but they fail to address the problem of unequal relevance. Structural data gives statistical information about the population as a whole. More recent work takes this into consideration using graphs, providing a more semantic way of using multi-modal data [7, 4]. These methods model the association between subjects with respect to one of the modalities and then solve tasks such as disease prediction using features from the other modalities.
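To make the graph-based idea concrete, the following is a minimal sketch of one common pattern in this line of work: subjects become nodes of a population graph whose edges are derived from structural data (here, gender match and age proximity), while features from another modality are propagated with a single symmetric-normalized graph-convolution step in the style of Kipf and Welling. All variable names, the edge-construction rule, and the thresholds are illustrative assumptions, not the ML-PGCN method itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_features = 6, 4

# Structural data per subject (assumed, for illustration only)
age = rng.integers(40, 80, size=n_subjects)
gender = rng.integers(0, 2, size=n_subjects)
# Features from another modality, e.g. imaging-derived descriptors
features = rng.normal(size=(n_subjects, n_features))

# Build the population graph: connect subjects with the same gender
# whose ages differ by at most 5 years (a hypothetical rule).
A = np.zeros((n_subjects, n_subjects))
for i in range(n_subjects):
    for j in range(n_subjects):
        if i != j and gender[i] == gender[j] \
                and abs(int(age[i]) - int(age[j])) <= 5:
            A[i, j] = 1.0

# One graph-convolution step: X' = D^{-1/2} (A + I) D^{-1/2} X W
A_hat = A + np.eye(n_subjects)          # add self-loops
d = A_hat.sum(axis=1)                   # node degrees
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
W = rng.normal(size=(n_features, 2))    # 2 classes, e.g. diseased/healthy
logits = D_inv_sqrt @ A_hat @ D_inv_sqrt @ features @ W
print(logits.shape)                     # one score vector per subject
```

In such a setup, changing which structural element defines the edges (age, gender, weight) changes which neighbours a subject aggregates information from, which is exactly where an unweighted graph construction ignores the relative relevance of those elements.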
