
This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition

Image recognition with prototypes is considered an interpretable alternative to black-box deep learning models. Classification depends on the extent to which a test image "looks like" a prototype. However, perceptual similarity for humans can differ from the similarity learnt by the model. A user is unaware of the underlying classification strategy and does not know which image characteristic (e.g., color or shape) dominates the decision. We address this ambiguity and argue that prototypes should be explained. Visualizing prototypes alone can be insufficient for understanding what a prototype exactly represents, and why a prototype and an image are considered similar. We improve interpretability by automatically enhancing prototypes with extra information about the visual characteristics the model considers important. Specifically, our method quantifies the influence of color hue, shape, texture, contrast and saturation in a prototype. We apply our method to the existing Prototypical Part Network (ProtoPNet) and show that our explanations clarify the meaning of a prototype that might otherwise have been interpreted incorrectly. We also reveal that visually similar prototypes can have the same explanations, indicating redundancy. Because of the generality of our approach, it can improve the interpretability of any similarity-based method for prototypical image recognition.
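To make the idea concrete, here is a minimal sketch of how such characteristic importance could be estimated: perturb one visual characteristic at a time and measure how much the prototype's similarity score drops. The `model.similarity` interface and the perturbation strengths are assumptions for illustration, not the authors' implementation; shape is omitted because it has no simple one-line perturbation.

```python
import torch
import torchvision.transforms.functional as TF

def characteristic_importance(model, image, prototype_idx):
    """Perturb one visual characteristic at a time and measure the drop
    in the prototype's similarity score (hypothetical interface)."""
    perturbations = {
        "hue":        lambda x: TF.adjust_hue(x, hue_factor=0.3),
        "saturation": lambda x: TF.adjust_saturation(x, saturation_factor=0.2),
        "contrast":   lambda x: TF.adjust_contrast(x, contrast_factor=0.3),
        "texture":    lambda x: TF.gaussian_blur(x, kernel_size=9),  # blurring suppresses texture
    }
    with torch.no_grad():
        baseline = model.similarity(image, prototype_idx)  # assumed method name
        drops = {}
        for name, perturb in perturbations.items():
            drops[name] = float(baseline - model.similarity(perturb(image), prototype_idx))
    return drops  # a larger drop suggests the characteristic matters more to this prototype
```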

Neural Prototype Trees for Interpretable Fine-grained Image Recognition

Interpretable machine learning addresses the black-box nature of deep neural networks. Visual prototypes have been suggested for intrinsically interpretable image recognition, instead of generating post-hoc explanations that approximate a trained model. However, a large number of prototypes can be overwhelming. To reduce explanation size and improve interpretability, we propose the Neural Prototype Tree (ProtoTree), a deep learning method that includes prototypes in an interpretable decision tree to faithfully visualize the entire model. In addition to global interpretability, a path in the tree explains a single prediction. Each node in our binary tree contains a trainable prototypical part. The presence or absence of this prototype in an image determines the routing through a node. Decision making is therefore similar to human reasoning: Does the bird have a red throat? And an elongated beak? Then it's a hummingbird! We tune the accuracy-interpretability trade-off using ensembling and pruning. We apply pruning without sacrificing accuracy, resulting in a small tree with only 8 prototypes along a path to classify a bird from 200 species. An ensemble of 5 ProtoTrees achieves competitive accuracy on the CUB-200-2011 and Stanford Cars datasets. Code is available at
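The routing rule described above can be sketched in a few lines. The snippet below is a simplified, assumed rendition of a soft ProtoTree node, not the released implementation: prototype presence is scored as e^(-d) for the closest latent patch, and each node mixes the predictions of its two subtrees by that score.

```python
import torch
import torch.nn as nn

class Node(nn.Module):
    """Internal node of a simplified ProtoTree-style soft decision tree."""
    def __init__(self, left, right, dim):
        super().__init__()
        self.prototype = nn.Parameter(torch.randn(dim))  # trainable prototypical part
        self.left, self.right = left, right

    def forward(self, z):  # z: (batch, n_patches, dim) latent patches
        d = ((z - self.prototype) ** 2).sum(-1)      # squared distance per patch
        presence = torch.exp(-d.min(dim=1).values)   # closest patch decides presence
        return (presence.unsqueeze(1) * self.right(z)
                + (1 - presence).unsqueeze(1) * self.left(z))

class Leaf(nn.Module):
    """Leaf holding a learnable class distribution."""
    def __init__(self, n_classes):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_classes))

    def forward(self, z):
        return self.logits.softmax(-1).expand(z.size(0), -1)

# A depth-1 tree over 200 bird classes; deeper trees nest Nodes recursively.
tree = Node(Leaf(200), Leaf(200), dim=256)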

Interpretable Image Classification with Differentiable Prototypes Assignment

We introduce ProtoPool, an interpretable image classification model with a pool of prototypes shared across classes. Training is more straightforward than in existing methods because it does not require a separate pruning stage; this is achieved by introducing a fully differentiable assignment of prototypes to particular classes. Moreover, we introduce a novel focal similarity function that focuses the model on rare foreground features. We show that ProtoPool obtains state-of-the-art accuracy on the CUB-200-2011 and Stanford Cars datasets while substantially reducing the number of prototypes. We provide a theoretical analysis of the method and a user study showing that our prototypes are more distinctive than those obtained with competing methods.
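As a rough illustration of the two ingredients, the sketch below draws a hard-but-differentiable prototype-to-class assignment from a shared pool via the Gumbel-Softmax trick, and computes a focal similarity as the peak activation minus the mean activation. All names and shapes are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProtoPoolSketch(nn.Module):
    """Shared prototype pool with a differentiable assignment to class slots."""
    def __init__(self, n_pool, n_slots, dim):
        super().__init__()
        self.pool = nn.Parameter(torch.randn(n_pool, dim))              # shared prototypes
        self.assign_logits = nn.Parameter(torch.zeros(n_slots, n_pool))

    def forward(self, z, tau=0.5):  # z: (batch, n_patches, dim)
        # Hard one-hot assignment in the forward pass, soft gradients in the backward pass.
        q = F.gumbel_softmax(self.assign_logits, tau=tau, hard=True)    # (n_slots, n_pool)
        protos = q @ self.pool                                          # (n_slots, dim)
        dist = torch.cdist(z, protos.unsqueeze(0).expand(z.size(0), -1, -1))
        sim = -dist                                                     # (batch, n_patches, n_slots)
        # Focal similarity: reward prototypes that fire on a few salient patches only.
        return sim.max(dim=1).values - sim.mean(dim=1)                  # (batch, n_slots)
```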

Deformable ProtoPNet: An Interpretable Image Classifier Using Deformable Prototypes

Machine learning has been widely adopted in many domains, including high-stakes applications such as healthcare, finance, and criminal justice. To address concerns of fairness, accountability and transparency, predictions made by machine learning models in these critical domains must be interpretable. One line of work approaches this challenge by integrating the power of deep neural networks and the interpretability of case-based reasoning to produce accurate yet interpretable image classification models. These models generally classify input images by comparing them with prototypes learned during training, yielding explanations in the form of "this looks like that." However, methods from this line of work use spatially rigid prototypes, which cannot explicitly account for pose variations. In this paper, we address this shortcoming by proposing a case-based interpretable neural network that provides spatially flexible prototypes, called a deformable prototypical part network (Deformable ProtoPNet). In a Deformable ProtoPNet, each prototype is made up of several prototypical parts that adaptively change their relative spatial positions depending on the input image. This enables each prototype to detect object features with a higher tolerance to spatial transformations, as the parts within a prototype are allowed to move. Consequently, a Deformable ProtoPNet can explicitly capture pose variations, improving both model accuracy and the richness of explanations provided. Compared to other case-based interpretable models using prototypes, our approach achieves competitive accuracy, gives an explanation with greater context, and is easier to train, thus enabling wider use of interpretable models for computer vision.
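A minimal sketch of the deformation idea follows: each prototypical part samples the feature map at an input-dependent offset, so the parts move with the object's pose. The offset network, the fixed base layout, and the cosine scoring here are simplifying assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformablePrototypeSketch(nn.Module):
    """One prototype made of K parts whose sampling positions shift per input."""
    def __init__(self, k_parts, dim):
        super().__init__()
        self.parts = nn.Parameter(torch.randn(k_parts, dim))
        self.offset_net = nn.Conv2d(dim, 2 * k_parts, kernel_size=3, padding=1)
        # Fixed relative layout of the parts around each center, in [-1, 1] coordinates.
        self.register_buffer("base", torch.linspace(-0.1, 0.1, k_parts))

    def forward(self, feat):  # feat: (B, dim, H, W) latent feature map
        B, C, H, W = feat.shape
        offsets = self.offset_net(feat).tanh() * 0.2  # small input-dependent shifts
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, H, device=feat.device),
            torch.linspace(-1, 1, W, device=feat.device), indexing="ij")
        sims = []
        for k, part in enumerate(self.parts):
            dx, dy = offsets[:, 2 * k], offsets[:, 2 * k + 1]
            grid = torch.stack([xs + self.base[k] + dx, ys + dy], dim=-1)  # (B, H, W, 2)
            sampled = F.grid_sample(feat, grid, align_corners=True)        # bilinear sampling
            sims.append(F.cosine_similarity(sampled, part.view(1, C, 1, 1), dim=1))
        return torch.stack(sims).sum(0)  # (B, H, W): evidence map for this prototype
```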

ProtoPShare: Prototype Sharing for Interpretable Image Classification and Similarity Discovery

In this paper, we introduce ProtoPShare, a self-explaining method that incorporates the paradigm of prototypical parts to explain its predictions. The main novelty of ProtoPShare is its ability to efficiently share prototypical parts between classes thanks to our data-dependent merge-pruning. Moreover, the prototypes are more consistent, and the model is more robust to image perturbations, than the state-of-the-art ProtoPNet. We verify our findings on two datasets, CUB-200-2011 and Stanford Cars.
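To illustrate what data-dependent merge-pruning could look like, the sketch below merges prototypes whose activation patterns over a dataset are nearly identical, letting the surviving prototype inherit the class connections of the removed one, so classes end up sharing prototypes. The threshold, shapes, and greedy loop are assumptions for illustration, not the authors' algorithm.

```python
import torch
import torch.nn.functional as F

def merge_prune(prototypes, activations, weights, threshold=0.9):
    """Merge prototypes with near-identical activation patterns on real data.

    prototypes:  (n_protos, dim)      latent prototype vectors
    activations: (n_protos, n_images) similarity of each prototype to each image
    weights:     (n_classes, n_protos) class-connection weights
    """
    a = F.normalize(activations, dim=1)
    sim = a @ a.t()  # pairwise similarity of activation patterns, not of latent vectors
    keep = torch.ones(len(prototypes), dtype=torch.bool)
    for i in range(len(prototypes)):
        if not keep[i]:
            continue
        for j in range(i + 1, len(prototypes)):
            if keep[j] and sim[i, j] > threshold:
                weights[:, i] += weights[:, j]  # classes now share prototype i
                keep[j] = False
    return prototypes[keep], weights[:, keep]
```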