

Supplementary Materials of ICAM: Interpretable Classification via Disentangled Representations and Feature Attribution Mapping

Neural Information Processing Systems

Our encoder-decoder architecture for a 3D input is shown in Fig. A.1. The architecture for a 2D input is the same, but uses 2D convolutions and a 2D attribute space. Our generator takes the content and attribute latent spaces as input. In addition (not shown in Fig. A.1), our domain discriminator contains 6 convolutional layers. Imaging phenotype variability is common in many neurological and psychiatric disorders, and is an important feature for diagnosis; this type of variation was simulated in Baumgartner et al.
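The excerpt names the components but not their wiring. As a rough reference point, below is a minimal 2D sketch of an architecture of this shape: separate content and attribute encoders, a generator conditioned on both latent spaces, and a six-convolution domain discriminator. This is not the authors' implementation (their code is at https://github.com/CherBass/ICAM); all channel widths, kernel sizes, and latent dimensions here are illustrative assumptions.

```python
# Minimal sketch only: a 2D stand-in for the architecture described above.
# Channel widths, kernel sizes, and latent dimensions are assumptions.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Maps an image to the class-irrelevant content space (a feature map)."""
    def __init__(self, in_ch=1, content_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, content_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class AttributeEncoder(nn.Module):
    """Maps an image to a low-dimensional class-relevant attribute code."""
    def __init__(self, in_ch=1, attr_dim=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, attr_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class Generator(nn.Module):
    """Decodes a (content, attribute) pair back into image space."""
    def __init__(self, content_ch=64, attr_dim=8, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(content_ch + attr_dim, 64, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, content, attr):
        # Broadcast the attribute vector over the spatial grid of the content.
        a = attr[:, :, None, None].expand(-1, -1, *content.shape[2:])
        return self.net(torch.cat([content, a], dim=1))

class DomainDiscriminator(nn.Module):
    """Six convolutional layers, per the description above; widths are assumed."""
    def __init__(self, in_ch=1):
        super().__init__()
        chs = [in_ch, 32, 64, 128, 256, 256]
        layers = []
        for i in range(5):
            layers += [nn.Conv2d(chs[i], chs[i + 1], 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2)]
        layers.append(nn.Conv2d(chs[-1], 1, 1))  # 6th conv: real/fake logit map
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```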






56f9f88906aebf4ad985aaec7fa01313-AuthorFeedback.pdf

Neural Information Processing Systems

We would correct all these points in any camera-ready copy of the manuscript. R2 is correct in suggesting that the ultimate goal of machine learning for healthcare should be explainable models; however, interpretability and explainability need not be mutually exclusive. Accordingly, we ran two experiments: 1) we applied ICAM on Alzheimer's (unseen) [...]. We find R2's request for reporting image generation quality reasonable, although we stress that the objectives of [...]. We agree with R1 that more thorough details of the training process should go in the supplement. We also appreciate R1's literature suggestions and request for more benchmarking. [...] NCC(+), it is still worse than VA-GAN and ICAM (see Table 3 in the paper).


Review for NeurIPS paper: ICAM: Interpretable Classification via Disentangled Representations and Feature Attribution Mapping

Neural Information Processing Systems

This paper proposes a model for simultaneous classification and feature attribution in the context of medical image classification. The model uses a GAN to learn two representations from pairs (x, y) of input images of different classes. One representation is class-relevant (z_a, a for attribute) and the other is class-irrelevant (z_c, c for content). The class-relevant representation is used for classification. Both representations are fed to a generator G to synthesize images so as to achieve domain translation.
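To make the data flow the review describes concrete, here is a hypothetical sketch: classify from z_a alone, then swap attribute codes across an image pair to translate between domains, with the translated-minus-original difference serving as a feature attribution map. It reuses modules shaped like those sketched above; every name and shape is an assumption, not the paper's code.

```python
# Hypothetical forward pass: classification from z_a, plus attribute-swap
# domain translation. Assumes ContentEncoder/AttributeEncoder/Generator
# modules like those sketched earlier.
import torch
import torch.nn as nn

class LatentClassifier(nn.Module):
    """Predicts the class label from the class-relevant code z_a alone."""
    def __init__(self, attr_dim=8, n_classes=2):
        super().__init__()
        self.fc = nn.Linear(attr_dim, n_classes)

    def forward(self, z_a):
        return self.fc(z_a)

@torch.no_grad()
def translate(x_src, x_tgt, enc_c, enc_a, gen):
    """Combine the content of x_src with the attribute code of x_tgt,
    i.e. re-render the source image as a member of the target class."""
    z_c = enc_c(x_src)        # class-irrelevant content of the source
    z_a = enc_a(x_tgt)        # class-relevant code of the target
    x_trans = gen(z_c, z_a)   # translated image
    fa_map = x_trans - x_src  # the pixels that had to change: the FA map
    return x_trans, fa_map
```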


ICAM: Interpretable Classification via Disentangled Representations and Feature Attribution Mapping

Neural Information Processing Systems

Feature attribution (FA), or the assignment of class-relevance to different locations in an image, is important for many classification problems but is particularly crucial within the neuroscience domain, where accurate mechanistic models of behaviours, or disease, require knowledge of all features discriminative of a trait. At the same time, predicting class relevance from brain images is challenging as phenotypes are typically heterogeneous, and changes occur against a background of significant natural variation. Here, we present a novel framework for creating class-specific FA maps through image-to-image translation. We propose the use of a VAE-GAN to explicitly disentangle class relevance from background features for improved interpretability properties, which results in meaningful FA maps. We show that FA maps generated by our method outperform baseline FA methods when validated against ground truth. More significantly, our approach is the first to use latent space sampling to support exploration of phenotype variation.
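The abstract does not spell out the training objective, but a VAE-GAN of this kind typically combines a reconstruction term, a KL prior on the latents, an adversarial term, and a classification term on the attribute code. A heavily hedged sketch of that loss structure follows; the weights and the exact set of terms are assumptions, not ICAM's published configuration.

```python
# Sketch of a generic VAE-GAN objective consistent with the abstract.
# Weights (w_kl, w_adv, w_cls) and the term list are assumptions.
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    """Standard VAE reparameterization: z = mu + sigma * eps."""
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def generator_loss(x, x_recon, mu, logvar, d_fake, logits, labels,
                   w_kl=0.01, w_adv=1.0, w_cls=1.0):
    recon = F.l1_loss(x_recon, x)  # pixel-wise reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # VAE prior
    # Fool the discriminator into calling translated images real.
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    cls = F.cross_entropy(logits, labels)  # predict the class from z_a
    return recon + w_kl * kl + w_adv * adv + w_cls * cls
```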


ICAM: Interpretable Classification via Disentangled Representations and Feature Attribution Mapping

Bass, Cher, da Silva, Mariana, Sudre, Carole, Tudosiu, Petru-Daniel, Smith, Stephen M., Robinson, Emma C.

arXiv.org Machine Learning

Feature attribution (FA), or the assignment of class-relevance to different locations in an image, is important for many classification problems but is particularly crucial within the neuroscience domain, where accurate mechanistic models of behaviours, or disease, require knowledge of all features discriminative of a trait. At the same time, predicting class relevance from brain images is challenging as phenotypes are typically heterogeneous, and changes occur against a background of significant natural variation. Here, we present a novel framework for creating class-specific FA maps through image-to-image translation. We propose the use of a VAE-GAN to explicitly disentangle class relevance from background features for improved interpretability properties, which results in meaningful FA maps. We validate our method on 2D and 3D brain image datasets of dementia (ADNI dataset), ageing (UK Biobank), and (simulated) lesion detection. We show that FA maps generated by our method outperform baseline FA methods when validated against ground truth. More significantly, our approach is the first to use latent space sampling to support exploration of phenotype variation. Our code will be available online at https://github.com/CherBass/ICAM.
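The closing claim, latent space sampling to support exploration of phenotype variation, has a direct reading: hold one subject's content code fixed and draw attribute codes from the prior. An illustrative sketch follows, again assuming encoder and generator modules like those above rather than the repository's actual API.

```python
# Illustrative only: explore phenotype variation by sampling the attribute
# prior while keeping one subject's content code fixed. Module names and
# attr_dim are assumptions carried over from the earlier sketches.
import torch

@torch.no_grad()
def explore_phenotype_variation(x, enc_c, gen, attr_dim=8, n_samples=5):
    z_c = enc_c(x)  # fixed, class-irrelevant anatomy
    variants = []
    for _ in range(n_samples):
        z_a = torch.randn(x.size(0), attr_dim)  # sample from the N(0, I) prior
        variants.append(gen(z_c, z_a))          # one plausible variant per draw
    return torch.stack(variants, dim=1)  # (batch, n_samples, C, H, W)
```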