Explainability is NOT a Game

Marques-Silva, João; Huang, Xuanxiang

arXiv.org Artificial Intelligence 

Explainable artificial intelligence (XAI) aims to help human decision-makers in understanding complex machine learning (ML) models. One of the hallmarks of XAI is measures of relative feature importance, which are theoretically justified through the use of Shapley values. This paper builds on recent work and offers a simple argument for why Shapley values can provide misleading measures of relative feature importance, by assigning more importance to features that are irrelevant for a prediction, and assigning less importance to features that are relevant for a prediction. The significance of these results is that they effectively challenge the many proposed uses of measures of relative feature importance in …

Among the existing informal approaches to XAI, the use of Shapley values as a mechanism for feature attribution is arguably the best-known. Shapley values [Shapley 1953] were originally proposed in the context of game theory, but have found a wealth of application domains [Roth 1988]. More importantly, for more than two decades Shapley values have been proposed in the context of explaining the decisions of complex ML models [Lipovetsky and Conklin 2001; Lundberg and Lee 2017; Strumbelj and Kononenko 2010, 2014]. The importance of Shapley values for explainability is illustrated by the massive impact of tools like SHAP [Lundberg and Lee 2017], including many recent uses that have a direct influence on human beings (see [Huang and Marques-Silva 2023] for some recent references).
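For reference, the feature-attribution scores discussed above instantiate the classical game-theoretic Shapley value. A minimal sketch of the standard definition follows; the notation (feature set N, characteristic function v) is the conventional one and is assumed here, not taken from this excerpt:

\[
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr)
\]

Here v(S) denotes the worth of a coalition S of features, e.g. in SHAP-style attribution the expected model output when only the features in S are fixed to their observed values. The paper's argument concerns how scores of this form can rank features that are irrelevant for a prediction above features that are relevant for it.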
