Exploring layers in deep learning models: interview with Mara Graziani
Mara Graziani and colleagues Laura O'Mahony, An-Phi Nguyen, Henning Müller and Vincent Andrearczyk are researching deep learning models and the representations they learn. In this interview, Mara tells us about the team's proposed framework for concept discovery.

"Our paper, Uncovering Unique Concept Vectors through Latent Space Decomposition, focuses on understanding how representations are organized by the intermediate layers of complex deep learning models. The latent space of a layer can be interpreted as a vector space spanned by individual neuron directions. In our work, we identify a new basis that aligns with the variance of the training data."
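The core idea lends itself to a short illustration. Below is a minimal sketch of extracting a layer's activations and decomposing them into variance-aligned directions via SVD (plain PCA); the model, layer choice, and data loader are hypothetical stand-ins, and the paper's actual decomposition may differ in its details.

```python
# Minimal sketch: decompose a layer's latent space into variance-aligned
# directions with PCA/SVD. Model, layer, and data are placeholder choices.
import numpy as np
import torch
import torchvision.models as models

model = models.resnet50(weights=None).eval()  # random weights, for the sketch

activations = []

def hook(module, inputs, output):
    # Global-average-pool the spatial maps to get one vector per image.
    activations.append(output.mean(dim=(2, 3)).detach().cpu().numpy())

handle = model.layer3.register_forward_hook(hook)

with torch.no_grad():
    for _ in range(8):                      # stand-in for a real data loader
        batch = torch.randn(16, 3, 224, 224)
        model(batch)
handle.remove()

acts = np.concatenate(activations)          # (n_images, n_channels)
acts -= acts.mean(axis=0)                   # centre before decomposing

# Rows of Vt form an orthonormal basis ordered by explained variance:
# candidate concept directions in the layer's latent space.
U, S, Vt = np.linalg.svd(acts, full_matrices=False)
explained = S**2 / (S**2).sum()
print(f"top 5 directions explain {explained[:5].sum():.1%} of the variance")
```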
Attention-based Interpretable Regression of Gene Expression in Histology
Mara Graziani, Niccolò Marini, Nicolas Deutschmann, Nikita Janakarajan, Henning Müller, María Rodríguez Martínez
Interpretability of deep learning is widely used to evaluate the reliability of medical imaging models and reduce the risks of inaccurate patient recommendations. For models exceeding human performance, e.g. predicting RNA structure from microscopy images, interpretable modelling can be further used to uncover highly non-trivial patterns which are otherwise imperceptible to the human eye. We show that interpretability can reveal connections between the microscopic appearance of cancer tissue and its gene expression profile. While exhaustive profiling of all genes from histology images remains challenging, we estimate the expression values of a well-known subset of genes that is indicative of cancer molecular subtype, survival, and treatment response in colorectal cancer. Our approach successfully identifies meaningful information in the image slides, highlighting hotspots of high gene expression. Our method can help characterise how gene expression shapes tissue morphology, which may be beneficial for patient stratification in the pathology unit. The code is available on GitHub.
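To make the approach concrete, here is a hedged sketch of attention-based regression over histology patch embeddings: learned gated attention pools the patches into a slide-level embedding, a linear head regresses per-gene expression values, and the attention weights point to candidate hotspots. The dimensions, the gated-attention variant, and the gene count are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: attention-based multiple-instance regression of gene
# expression from patch embeddings. All hyperparameters are illustrative.
import torch
import torch.nn as nn

class AttentionRegressor(nn.Module):
    def __init__(self, embed_dim=512, attn_dim=128, n_genes=50):
        super().__init__()
        self.attn_v = nn.Linear(embed_dim, attn_dim)   # tanh branch
        self.attn_u = nn.Linear(embed_dim, attn_dim)   # sigmoid gate
        self.attn_w = nn.Linear(attn_dim, 1)
        self.head = nn.Linear(embed_dim, n_genes)      # per-gene regression

    def forward(self, patches):                 # patches: (n_patches, embed_dim)
        gate = torch.tanh(self.attn_v(patches)) * torch.sigmoid(self.attn_u(patches))
        scores = self.attn_w(gate)               # (n_patches, 1)
        weights = torch.softmax(scores, dim=0)   # attention over the slide
        slide = (weights * patches).sum(dim=0)   # weighted slide embedding
        return self.head(slide), weights.squeeze(-1)

# Toy usage: 1000 patch embeddings from one slide.
model = AttentionRegressor()
expr, attn = model(torch.randn(1000, 512))
print(expr.shape, attn.shape)   # torch.Size([50]) torch.Size([1000])
# Patches with high attention mark candidate hotspots of expression.
```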
Regression Concept Vectors for Bidirectional Explanations in Histopathology
Mara Graziani, Vincent Andrearczyk, Henning Müller
Explanations of deep neural network predictions in terms of domain-related concepts can be valuable in medical applications, where justifications are important for confidence in decision-making. In this work, we propose a methodology to exploit continuous concept measures as Regression Concept Vectors (RCVs) in the activation space of a layer. The directional derivative of the decision function along an RCV represents the network's sensitivity to increasing values of the corresponding concept measure. When applied to breast cancer grading, nuclei texture emerges as a relevant concept for the detection of tumor tissue in breast lymph node samples. We evaluate the robustness and consistency of the scores by statistical analysis.
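A short sketch of the RCV computation may help: a linear regression of the concept measure on a layer's activations yields the concept direction, and the directional derivative of the class score along that direction gives the sensitivity. The feature dimension, the toy head, and the helper names below are illustrative assumptions, not the authors' reference implementation.

```python
# Hedged sketch of Regression Concept Vectors: regress a continuous concept
# measure (e.g., a nuclei-texture score) on layer activations; the unit
# coefficient vector is the RCV, and the directional derivative of the class
# score along it measures concept sensitivity. Shapes are illustrative.
import numpy as np
import torch
from sklearn.linear_model import LinearRegression

def fit_rcv(acts, concept_scores):
    """acts: (n, d) layer activations; concept_scores: (n,) concept measure."""
    reg = LinearRegression().fit(acts, concept_scores)
    rcv = reg.coef_ / np.linalg.norm(reg.coef_)    # unit concept direction
    return torch.tensor(rcv, dtype=torch.float32)

def concept_sensitivity(head, act, rcv, class_idx):
    """Directional derivative of the class score along the RCV at `act`.

    head: the rest of the network, mapping the activation to class logits.
    """
    act = act.clone().requires_grad_(True)
    logit = head(act)[class_idx]
    (grad,) = torch.autograd.grad(logit, act)
    return torch.dot(grad, rcv).item()             # >0: concept raises the score

# Toy demo with a random linear head over a 512-d activation space.
head = torch.nn.Linear(512, 2)
acts = np.random.randn(200, 512)
scores = acts @ np.random.randn(512) + 0.1 * np.random.randn(200)
rcv = fit_rcv(acts, scores)
s = concept_sensitivity(head, torch.randn(512), rcv, class_idx=1)
print(f"sensitivity to the concept: {s:+.4f}")
```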