Rashomon in the Streets: Explanation Ambiguity in Scene Understanding
Helge Spieker, Jørn Eirik Betten, Arnaud Gotlieb, Nadjib Lazaar, Nassim Belmecheri
arXiv.org Artificial Intelligence
Explainable AI (XAI) is essential for validating and trusting models in safety-critical applications such as autonomous driving. However, the reliability of XAI is challenged by the Rashomon effect: multiple, equally accurate models can offer divergent explanations for the same prediction. This paper provides the first empirical quantification of this effect for action prediction in real-world driving scenes. Using Qualitative Explainable Graphs (QXGs) as a symbolic scene representation, we train Rashomon sets of two distinct model classes: interpretable, pair-based gradient boosting models and complex, graph-based Graph Neural Networks (GNNs). Using feature attribution methods, we measure the agreement of explanations both within and between these classes. Our results reveal significant disagreement between explanations, suggesting that explanation ambiguity is an inherent property of the problem rather than a modeling artifact.
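The abstract's core measurement can be illustrated in a few lines. The sketch below is not the paper's pipeline (no QXGs or GNN explainers): it builds a Rashomon set of near-equally-accurate models, extracts a feature-attribution vector from each, and scores pairwise explanation agreement with Kendall's tau. The synthetic data, the GradientBoostingClassifier models, the impurity-based importances, and the 1% accuracy tolerance are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): quantify the
# Rashomon effect on explanations by comparing feature attributions
# across equally accurate models.
import numpy as np
from itertools import combinations
from scipy.stats import kendalltau
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train candidate models that differ only by random seed; subsampling
# makes the seed matter so the candidates are genuinely distinct.
candidates = [
    GradientBoostingClassifier(subsample=0.7, random_state=s).fit(X_tr, y_tr)
    for s in range(10)
]
accs = [m.score(X_te, y_te) for m in candidates]

# Rashomon set: all models within an epsilon band of the best accuracy.
eps = 0.01  # illustrative 1% tolerance
rashomon = [m for m, a in zip(candidates, accs) if a >= max(accs) - eps]

# One attribution vector per model; here impurity-based importances,
# but any per-feature attribution method would slot in the same way.
attributions = [m.feature_importances_ for m in rashomon]

# Pairwise Kendall's tau over the attribution rankings: low values mean
# equally accurate models "explain" the task differently.
taus = [kendalltau(a, b)[0] for a, b in combinations(attributions, 2)]
print(f"{len(rashomon)} models in the Rashomon set; "
      f"mean pairwise Kendall tau = {np.mean(taus):.2f}")
```

A low mean tau across a set of models with indistinguishable accuracy is precisely the explanation disagreement the abstract reports.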
Sep-4-2025
- Country:
- Europe
- Finland > North Karelia > Joensuu (0.04)
- France (0.04)
- Norway > Eastern Norway > Oslo (0.04)
- North America > United States > New Mexico > Bernalillo County > Albuquerque (0.04)
- Genre:
- Research Report > New Finding (1.00)
- Industry:
- Automobiles & Trucks (0.50)
- Information Technology > Robotics & Automation (0.36)
- Transportation > Ground > Road (0.36)