Towards Trustworthy Automated Driving through Qualitative Scene Understanding and Explanations

Belmecheri, Nassim, Gotlieb, Arnaud, Lazaar, Nadjib, Spieker, Helge

arXiv.org Artificial Intelligence 

Artificial Intelligence (AI) methods are nowadays at the center of automated driving and connected mobility, including perception and scene understanding [1, 2, 3]. However, handing control to an AI-based system and trusting its decisions requires the ability to request explanations for those decisions [4]. Societal acceptance of automated driving depends significantly on the trustworthiness, transparency, and reliability of these AI models [5]. This remains an open challenge, as many state-of-the-art machine learning (ML) models are opaque and not inherently explainable [6]. In recent years, several explainable AI methods focused on automated driving have been proposed. Following [6], they fall into three main categories: a) vision-based explainable AI, which highlights the regions of an image that steer a perception model towards a certain output [4]; b) feature-based explainable AI, which computes importance scores quantifying the influence of each input feature on the model output; and c) textual explainable AI, which formulates explanations as intelligible arguments using natural language processing [7]. Unfortunately, automated support for multi-sensor and video-based scene explanation is still restricted to quantitative analysis, e.g., saliency heatmaps [4]. In this work, we exploit qualitative methods for scene understanding via Qualitative Explainable Graphs (QXG) and, based on this representation, propose a method for action explanation through simple classification models.
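
To make the two ingredients concrete, the sketch below shows, in simplified form, how a QXG-style graph could be built from per-frame object positions (one qualitative relation chain per object pair) and how such chains could feed a simple, interpretable classifier for action explanation. All names, thresholds, relation labels, and training data here are illustrative assumptions, not the qualitative calculi or models actually used in the paper.

```python
# Minimal sketch of a QXG-style representation and action classification.
# Relation labels and thresholds are hypothetical placeholders.
from dataclasses import dataclass, field
from itertools import combinations

from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier


@dataclass
class Obj:
    """A tracked object's 2D position in one frame (toy stand-in for a bounding box)."""
    x: float
    y: float


def qual_distance(a: Obj, b: Obj) -> str:
    """Map a metric distance to a qualitative symbol (hypothetical thresholds)."""
    d = ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5
    if d < 2.0:
        return "very_close"
    if d < 10.0:
        return "close"
    return "far"


@dataclass
class QXG:
    """Graph storing one relation chain per object pair, one symbol per frame."""
    edges: dict = field(default_factory=dict)  # (id_i, id_j) -> [relation, ...]

    def add_frame(self, objects: dict) -> None:
        """objects maps object id -> Obj for a single video frame."""
        for i, j in combinations(sorted(objects), 2):
            rel = qual_distance(objects[i], objects[j])
            self.edges.setdefault((i, j), []).append(rel)


# Build a tiny graph: a pedestrian approaching the ego vehicle over 3 frames.
g = QXG()
g.add_frame({"ego": Obj(0, 0), "ped": Obj(12, 0)})
g.add_frame({"ego": Obj(0, 0), "ped": Obj(6, 0)})
g.add_frame({"ego": Obj(0, 0), "ped": Obj(1, 0)})
chain = g.edges[("ego", "ped")]  # ["far", "close", "very_close"]

# Action explanation as simple classification: relation chains -> action label.
# The training examples below are fabricated purely to show the plumbing.
X_chains = [
    ["far", "close", "very_close"],
    ["far", "far", "far"],
    ["close", "close", "far"],
]
y_actions = ["brake", "keep_speed", "keep_speed"]

enc = OneHotEncoder(handle_unknown="ignore")
X = enc.fit_transform(X_chains)
clf = DecisionTreeClassifier(max_depth=3).fit(X, y_actions)

print(clf.predict(enc.transform([chain])))  # e.g. ["brake"]
```

Because the classifier operates on symbolic relation chains rather than raw pixels, a predicted action can be traced back to human-readable relations (e.g., "the pedestrian moved from far to very_close"), which is the kind of intelligibility the QXG representation is designed to enable.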
