Robustness of Explainable Artificial Intelligence in Industrial Process Modelling

Kantz, Benedikt, Staudinger, Clemens, Feilmayr, Christoph, Wachlmayr, Johannes, Haberl, Alexander, Schuster, Stefan, Pernkopf, Franz

arXiv.org Artificial Intelligence 

eXplainable Artificial Intelligence (XAI) aims at providing understandable explanations of black box models. In this paper, we evaluate current XAI methods by scoring them based on ground truth simulations and sensitivity analysis. To this end, we used an Electric Arc Furnace (EAF) model to better understand the limits and robustness characteristics of XAI methods such as SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), as well as Averaged Local Effects (ALE) or Smooth Gradients (SG) in a highly topical setting. These XAI methods were applied to various types of black-box models and then scored based on their correctness compared to the ground-truth sensitivity of the data-generating processes using a novel …

In recent years, there has been an effort to provide explanations for ML model predictions using XAI (Lundberg & Lee, 2017; Ribeiro et al., 2018; Alvarez-Melis & Jaakkola, 2018; Shrikumar et al., 2017). Most of these works, even if they focus on the robustness and trustworthiness of the XAI method, have the shortcoming that they can only be evaluated through surrogate measures (Crabbé & van der Schaar, 2023), and the ground truth sensitivity of the evaluated datasets cannot be properly calculated (Alvarez-Melis & Jaakkola, 2018). Some existing approaches rather use data augmentation (Sun et al., 2020) or create measures estimating the importance of the features (Yeh et al., 2019); further related work is provided in Section A.3. None of these systems, to the best of our knowledge, consider the ground truth sensitivity, or gradient, of the data-generating process that created the dataset.
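
The scoring idea described above, comparing an explanation against the ground-truth sensitivity (gradient) of the data-generating process, can be illustrated with a minimal sketch. The toy process f, the gradient-boosted black-box model, the finite-difference Smooth-Gradients-style attribution, and the cosine-similarity score below are illustrative assumptions for a synthetic setting, not the paper's actual EAF model or scoring methodology.

# Minimal sketch: score a gradient-style explanation against the known
# ground-truth sensitivity of a synthetic data-generating process.
# The process f, the noise level, and the cosine-similarity score are
# illustrative assumptions, not the paper's exact setup.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def f(X):
    # toy data-generating process with a known analytic gradient
    return 3.0 * X[:, 0] ** 2 + np.sin(X[:, 1]) + 0.5 * X[:, 2]

def true_grad(X):
    # ground-truth sensitivity of f at each sample
    return np.stack([6.0 * X[:, 0], np.cos(X[:, 1]), np.full(len(X), 0.5)], axis=1)

# sample training data with additive observation noise
X = rng.uniform(-2.0, 2.0, size=(2000, 3))
y = f(X) + rng.normal(0.0, 0.1, size=len(X))

model = GradientBoostingRegressor().fit(X, y)

def smooth_grad(model, X, sigma=0.1, n=25, eps=1e-3):
    # Smooth-Gradients-style attribution: central finite-difference gradients
    # of the black-box model, averaged over Gaussian input perturbations
    grads = np.zeros_like(X)
    for _ in range(n):
        Xp = X + rng.normal(0.0, sigma, size=X.shape)
        for j in range(X.shape[1]):
            step = np.zeros_like(X)
            step[:, j] = eps
            grads[:, j] += (model.predict(Xp + step) - model.predict(Xp - step)) / (2 * eps)
    return grads / n

X_test = rng.uniform(-2.0, 2.0, size=(200, 3))
explained = smooth_grad(model, X_test)
reference = true_grad(X_test)

# score: mean cosine similarity between the explanation and the ground-truth sensitivity
cos = np.sum(explained * reference, axis=1) / (
    np.linalg.norm(explained, axis=1) * np.linalg.norm(reference, axis=1) + 1e-12
)
print(f"mean cosine similarity to ground-truth sensitivity: {cos.mean():.3f}")

Other attribution methods (e.g., SHAP or LIME values) could be substituted for smooth_grad and scored against the same ground-truth gradients, which is the kind of comparison the paper performs across black-box model types.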
