Supplementary Materials of ICAM: Interpretable Classification via Disentangled Representations and Feature Attribution Mapping
Our encoder-decoder architecture for a 3D input is shown in Fig. A.1. The architecture for a 2D input is the same, except that it uses 2D convolutions and a 2D attribute latent space. Our generator takes the content and attribute latent spaces as input. In addition (not shown in Fig. A.1), our domain discriminator contains 6 convolutional layers.

Imaging phenotype variability is common in many neurological and psychiatric disorders and is an important feature for diagnosis. This type of variation was simulated in Baumgartner et al.
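One common way for a generator to condition on both a spatial content code and a low-dimensional attribute code is to broadcast the attribute vector over the spatial grid and concatenate it channel-wise before decoding. The sketch below illustrates only this fusion step; all shapes and names are hypothetical, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a spatial content code and a global attribute code.
content = rng.standard_normal((8, 8, 64))  # z_c: spatial content features
attribute = rng.standard_normal(16)        # z_a: class-relevant attribute code

# Broadcast the attribute code over the spatial grid, then concatenate
# channel-wise so a decoder can consume both codes together.
attr_map = np.broadcast_to(attribute, (8, 8, 16))
fused = np.concatenate([content, attr_map], axis=-1)

print(fused.shape)  # (8, 8, 80): 64 content + 16 attribute channels
```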
56f9f88906aebf4ad985aaec7fa01313-AuthorFeedback.pdf
We would correct all of these points in any camera-ready copy of the manuscript. R2 is correct in suggesting that the ultimate goal of machine learning for healthcare should be explainable models. However, interpretability and explainability need not be mutually exclusive. Accordingly, we ran two experiments: 1) we applied ICAM to (unseen) Alzheimer's data. We find R2's request for reporting image generation quality reasonable, although we stress that the objectives of … We agree with R1 that more thorough details of the training process should go in the supplement. We also appreciate R1's literature suggestions and request for more benchmarking. NCC(+), it is still worse than VA-GAN and ICAM (see Table 3 in the paper).
Review for NeurIPS paper: ICAM: Interpretable Classification via Disentangled Representations and Feature Attribution Mapping
This paper proposes a model for simultaneous classification and feature attribution in the context of medical image classification. The model uses a GAN to learn two representations from pairs (x, y) of input images from different classes: one representation is class-relevant (z_a, "a" for attribution) and the other is class-irrelevant (z_c, "c" for content). The class-relevant representation is used for classification, and both representations are fed to a generator G to synthesize images so as to achieve domain translation.
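The encode, classify, and translate loop described above can be sketched with toy linear maps standing in for the real networks. Everything here (dimensions, weight matrices, function names) is a hypothetical stand-in, meant only to show how z_a drives classification while swapping z_a between images yields domain translation.

```python
import numpy as np

rng = np.random.default_rng(1)
D_IMG, D_C, D_A = 100, 32, 8  # hypothetical flattened-image and latent sizes

# Toy stand-ins for the content encoder, attribute encoder, and generator G.
W_c = rng.standard_normal((D_C, D_IMG)) * 0.1
W_a = rng.standard_normal((D_A, D_IMG)) * 0.1
W_g = rng.standard_normal((D_IMG, D_C + D_A)) * 0.1
w_cls = rng.standard_normal(D_A)

def encode(x):
    return W_c @ x, W_a @ x  # (z_c, z_a)

def classify(z_a):
    # Class probability computed from the class-relevant code only.
    return 1.0 / (1.0 + np.exp(-(w_cls @ z_a)))

def generate(z_c, z_a):
    return W_g @ np.concatenate([z_c, z_a])

# Domain translation: keep x1's content but inject x2's attribute code.
x1, x2 = rng.standard_normal(D_IMG), rng.standard_normal(D_IMG)
z_c1, z_a1 = encode(x1)
_, z_a2 = encode(x2)
x1_translated = generate(z_c1, z_a2)
print(x1_translated.shape)
```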
ICAM: Interpretable Classification via Disentangled Representations and Feature Attribution Mapping
Bass, Cher, da Silva, Mariana, Sudre, Carole, Tudosiu, Petru-Daniel, Smith, Stephen M., Robinson, Emma C.
Feature attribution (FA), or the assignment of class-relevance to different locations in an image, is important for many classification problems but is particularly crucial within the neuroscience domain, where accurate mechanistic models of behaviours, or disease, require knowledge of all features discriminative of a trait. At the same time, predicting class relevance from brain images is challenging as phenotypes are typically heterogeneous, and changes occur against a background of significant natural variation. Here, we present a novel framework for creating class specific FA maps through image-to-image translation. We propose the use of a VAE-GAN to explicitly disentangle class relevance from background features for improved interpretability properties, which results in meaningful FA maps. We validate our method on 2D and 3D brain image datasets of dementia (ADNI dataset), ageing (UK Biobank), and (simulated) lesion detection. We show that FA maps generated by our method outperform baseline FA methods when validated against ground truth. More significantly, our approach is the first to use latent space sampling to support exploration of phenotype variation. Our code will be available online at https://github.com/CherBass/ICAM.