Human-Centered Evaluation of XAI Methods
Dawoud, Karam, Samek, Wojciech, Eisert, Peter, Lapuschkin, Sebastian, Bosse, Sebastian
In the ever-evolving field of Artificial Intelligence, a critical challenge has been to decipher the decision-making processes within deep learning's so-called "black boxes". In recent years, a plethora of methods has emerged dedicated to explaining model decisions across diverse tasks. In tasks like image classification, these methods typically identify and highlight the pixels that most influence a classifier's prediction. Interestingly, this approach mirrors human behavior: when asked to justify how we classified an image, we often point to its most salient features. Capitalizing on this parallel, we conducted a user-centric study to objectively measure the interpretability of three leading explanation methods: (1) Prototypical Part Network, (2) Occlusion, and (3) Layer-wise Relevance Propagation. Our results show that while the regions these methods highlight can vary widely, they all give humans a nearly equivalent depth of understanding. This enables users to discern and categorize images efficiently, reinforcing the value of these methods in enhancing AI transparency.
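Of the three methods compared, Occlusion is the simplest to illustrate: mask a region of the input and measure how much the classifier's score for the target class drops. The sketch below is a minimal, hypothetical NumPy implementation, not the study's code; the function name `occlusion_saliency`, its parameters, and the `toy_model` stand-in classifier are all illustrative assumptions.

```python
import numpy as np

def occlusion_saliency(model, image, target_class, patch=8, stride=8, fill=0.0):
    """Occlusion sensitivity: slide a patch over the image and record how much
    the target-class score drops; larger drops mark more important regions."""
    base = model(image[None])[0, target_class]  # unoccluded reference score
    H, W = image.shape[-2:]
    rows = (H - patch) // stride + 1
    cols = (W - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, H - patch + 1, stride)):
        for j, x in enumerate(range(0, W - patch + 1, stride)):
            occluded = image.copy()
            occluded[..., y:y + patch, x:x + patch] = fill  # mask one region
            heat[i, j] = base - model(occluded[None])[0, target_class]
    return heat

if __name__ == "__main__":
    # Toy "classifier" (illustrative only): mean brightness of each image
    # quadrant serves as the score for four hypothetical classes.
    def toy_model(batch):
        n, c, h, w = batch.shape
        quadrants = [batch[:, :, :h//2, :w//2], batch[:, :, :h//2, w//2:],
                     batch[:, :, h//2:, :w//2], batch[:, :, h//2:, w//2:]]
        return np.stack([q.mean(axis=(1, 2, 3)) for q in quadrants], axis=1)

    img = np.random.rand(3, 32, 32).astype(np.float32)
    print(occlusion_saliency(toy_model, img, target_class=0).round(3))
```

Smaller `patch` and `stride` values yield finer-grained heatmaps at the cost of more forward passes through the model.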
arXiv.org Artificial Intelligence
Oct-16-2023
- Country:
- Europe > Germany (0.15)
- North America > United States (0.14)
- Genre:
- Research Report > New Finding (0.48)
- Industry:
- Government > Regional Government (0.46)
- Health & Medicine (0.68)
- Information Technology > Security & Privacy (0.68)
- Transportation (0.67)