Conceptualizing Uncertainty

Isaac Roberts, Alexander Schulz, Sarah Schroeder, Fabian Hinder, Barbara Hammer

arXiv.org Artificial Intelligence 

While advances in deep learning in recent years have led to impressive performance in many domains, such models are not always reliable, particularly when it comes to generalizing to new environments or withstanding adversarial attacks. To address this, numerous methods have been developed in the field of explainable artificial intelligence (xAI) [5] to provide insights into model behavior and facilitate actionable modifications. However, the majority of these methods focus on explaining model predictions, which can help understand misclassifications but do not explicitly address predictive uncertainty (see Figure 1). Understanding uncertainty is crucial for detecting potential model weaknesses, particularly in dynamic environments. Since uncertainty quantification is useful in various applications, including active learning [20], classification with a reject option [17], adversarial example detection [26], and reinforcement learning [24], a significant body of work aims to improve the quantification of predictive uncertainty using Bayesian deep learning (BDL) and approximations thereof [15,9,14]. In contrast, the literature on understanding the sources of uncertainty for a given model via explanations is limited, focusing on methods for feature attribution [28,27] (see Section 2.4 for more related work).
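To make the notion of predictive uncertainty quantification concrete, the following is a minimal sketch (not the paper's method) of one common approximation to Bayesian deep learning, Monte Carlo dropout: dropout is kept active at prediction time, softmax outputs from several stochastic forward passes are averaged, and the entropy of the averaged distribution serves as an uncertainty score. The model architecture, layer sizes, and the choice of PyTorch are illustrative assumptions only.

```python
# Sketch: predictive uncertainty via Monte Carlo dropout (illustrative, not from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallClassifier(nn.Module):
    """Toy classifier; names and sizes are placeholders."""
    def __init__(self, in_dim=20, n_classes=3, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_uncertainty(model, x, n_samples=50):
    """Return mean class probabilities and predictive entropy per input."""
    model.train()  # keep dropout active while sampling stochastic predictions
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )  # shape: (n_samples, batch, n_classes)
    mean_probs = probs.mean(dim=0)
    # Entropy of the averaged predictive distribution as an uncertainty score.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

if __name__ == "__main__":
    model = SmallClassifier()
    x = torch.randn(8, 20)
    _, h = mc_dropout_uncertainty(model, x)
    print(h)  # higher entropy -> higher predictive uncertainty
```

Such a score quantifies how uncertain the model is, but by itself it does not explain where that uncertainty comes from, which is the gap the explanation-oriented work discussed above targets.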