Classification Metrics for Image Explanations: Towards Building Reliable XAI-Evaluations
Fresz, Benjamin, Lörcher, Lena, Huber, Marco
–arXiv.org Artificial Intelligence
The decision processes of computer vision models, especially deep neural networks, are inherently opaque: their decisions cannot be understood by humans. Consequently, many methods for providing human-understandable explanations have been proposed in recent years. For image classification, the most common group are saliency methods, which provide (super-)pixelwise feature attribution scores for input images. Their evaluation remains problematic, however, as their results cannot simply be compared against an unknown ground truth. To overcome this, a variety of proxy metrics have been defined, which, like the explainability methods themselves, are often built on intuition and are therefore potentially unreliable. In this paper, new evaluation metrics for saliency methods are developed and common saliency methods are benchmarked on ImageNet. In addition, a scheme for evaluating the reliability of such metrics is proposed, based on concepts from psychometric testing. The code used can be found at https://github.com/lelo204/ClassificationMetricsForImageExplanations .
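To make the two ingredients of the abstract concrete, the following minimal sketch (an illustrative assumption, not the metrics or code proposed in the paper) shows a plain gradient-based saliency map and a simple deletion-style proxy evaluation in PyTorch; the model choice, function names, and deletion scheme are placeholders.

```python
# Illustrative sketch only, assuming PyTorch/torchvision are available; this is
# NOT the paper's proposed classification metrics, just a generic example of a
# pixelwise saliency map and a deletion-style proxy evaluation.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def gradient_saliency(image, target_class):
    """Pixelwise attribution: absolute input gradient w.r.t. the target logit."""
    image = image.clone().requires_grad_(True)
    logits = model(image.unsqueeze(0))
    logits[0, target_class].backward()
    return image.grad.abs().max(dim=0).values  # collapse colour channels -> HxW map

def deletion_score(image, saliency, target_class, steps=10):
    """Proxy metric: remove the most salient pixels first and track the drop in
    the target-class probability (a faster drop suggests a more faithful map)."""
    order = saliency.flatten().argsort(descending=True)
    per_step = order.numel() // steps
    perturbed = image.clone()
    probs = []
    for s in range(steps):
        idx = order[s * per_step:(s + 1) * per_step]
        perturbed.view(3, -1)[:, idx] = 0.0  # "delete" pixels by zeroing them
        with torch.no_grad():
            p = torch.softmax(model(perturbed.unsqueeze(0)), dim=1)[0, target_class]
        probs.append(p.item())
    return sum(probs) / len(probs)

# Usage (hypothetical): `image` is a normalised 3xHxW ImageNet tensor.
# sal = gradient_saliency(image, target_class=207)
# score = deletion_score(image, sal, target_class=207)
```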
Jun-7-2024