The Co-12 Recipe for Evaluating Interpretable Part-Prototype Image Classifiers
Meike Nauta, Christin Seifert
arXiv.org Artificial Intelligence
Interpretable part-prototype models are computer vision models that are explainable by design. They learn prototypical parts and recognise these components in an image, thereby combining classification and explanation. Despite the recent attention to intrinsically interpretable models, there is no comprehensive overview of how to evaluate the explanation quality of interpretable part-prototype models. Based on the Co-12 properties for explanation quality introduced in [42] (e.g., correctness, completeness, compactness), we review existing work that evaluates part-prototype models, reveal research gaps, and outline future approaches for evaluating the explanation quality of part-prototype models. This paper therefore contributes to the progression and maturity of this relatively new research field on interpretable part-prototype models. We additionally provide a "Co-12 cheat sheet" that serves as a concise summary of our findings on evaluating part-prototype models.
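To make the idea concrete, here is a minimal sketch of a ProtoPNet-style part-prototype forward pass. It is not code from the paper: the shapes, the similarity function, and all values are illustrative assumptions. Each learned prototype is compared against every patch of a convolutional feature map; the best-matching patch both drives the classification (via a linear layer over similarity scores) and localises where each prototypical part was recognised, which is the model's built-in explanation.

```python
# Hypothetical sketch of a part-prototype classifier (ProtoPNet-style).
# Shapes are assumptions: a 7x7 grid of 64-d features, 10 prototypes, 5 classes.
import numpy as np

rng = np.random.default_rng(0)
features = rng.standard_normal((7, 7, 64))    # backbone feature map (one image)
prototypes = rng.standard_normal((10, 64))    # learned prototypical parts
class_weights = rng.standard_normal((10, 5))  # prototype-to-class linear layer

def part_prototype_logits(features, prototypes, class_weights):
    """Return class logits and, per prototype, the index of its best-matching patch."""
    h, w, d = features.shape
    patches = features.reshape(h * w, d)
    # Squared L2 distance between every patch and every prototype: (h*w, n_protos).
    dists = ((patches[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    # Similarity is large when distance is small (log-ratio form used by ProtoPNet).
    sims = np.log((dists + 1.0) / (dists + 1e-4))
    best_patch = sims.argmax(axis=0)  # explanation: where each part fired
    scores = sims.max(axis=0)         # one similarity score per prototype
    logits = scores @ class_weights   # classification from part evidence
    return logits, best_patch

logits, best_patch = part_prototype_logits(features, prototypes, class_weights)
print(logits.shape, best_patch.shape)  # (5,) (10,)
```

The coupling visible here, where the same similarity scores produce both the prediction and the "this part looked like that prototype" explanation, is exactly what makes evaluating explanation quality (the Co-12 properties) central for this model family.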
Jul-26-2023