Alternative Methods to SHAP Derived from Properties of Kernels: A Note on Theoretical Analysis
Kazuhiro Hiraki, Shinichi Ishihara, Junnosuke Shino
arXiv.org Artificial Intelligence
In the field of machine learning, Explainable Artificial Intelligence (XAI) refers to techniques and methods that make the decisions and predictions of machine learning models easier to understand. Among them, Additive Feature Attribution (AFA) is a class of methods that decompose a model's prediction into the contributions of individual features. Notably, SHAP (SHapley Additive exPlanations), proposed by [5] and based on the Shapley value [8] from cooperative game theory, is well known in this context, and research on SHAP has been expanding rapidly ([4]). To reduce the computational cost of SHAP, various methods such as Tree-SHAP [5] and FastSHAP [3] have been proposed and applied to real data (for example, [2]). As an alternative to SHAP, [1] considers ES (Equal Surplus) and FESP (Fair Efficient Symmetric Perturbation), both of which are based on solution concepts in cooperative game theory. In this study, we investigate the relationship between AFA and the kernel in LIME (Local Interpretable Model-agnostic Explanations) as proposed by [6].
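The link between LIME's kernel and SHAP mentioned above can be illustrated concretely: Kernel SHAP [5] shows that LIME's locally weighted linear regression recovers the Shapley values when a specific weighting kernel is used. Below is a minimal sketch of that standard Shapley kernel weight; the function name and the caller's handling of infinite weights are illustrative choices, not taken from the paper.

```python
from math import comb

def shapley_kernel_weight(M: int, s: int) -> float:
    """Shapley kernel weight for a coalition of size s out of M features.

    This is the weighting kernel under which LIME's weighted linear
    regression yields the Shapley values (Kernel SHAP, Lundberg & Lee).
    The empty and full coalitions receive infinite weight in theory;
    here we return float('inf') and leave enforcement to the caller.
    """
    if s == 0 or s == M:
        return float("inf")
    return (M - 1) / (comb(M, s) * s * (M - s))

# Example with M = 4 features: the smallest and largest proper
# coalitions get the largest finite weights, mid-sized the smallest.
weights = {s: shapley_kernel_weight(4, s) for s in range(5)}
```

Alternative AFA methods such as ES and FESP correspond, in this framing, to replacing this kernel (or the underlying solution concept) with a different one.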
Jun-4-2024