Refutation of Shapley Values for XAI -- Additional Evidence
Huang, Xuanxiang; Marques-Silva, Joao
arXiv.org Artificial Intelligence
Recent work demonstrated the inadequacy of Shapley values for explainable artificial intelligence (XAI). Although a single counterexample suffices to disprove a theory, a possible criticism of earlier work is that it focused solely on Boolean classifiers. To address this possible criticism, this paper demonstrates the inadequacy of Shapley values not only for families of classifiers whose features are not Boolean, but also for families of classifiers that can pick among multiple classes. Furthermore, the paper shows that the features changed in any minimal $\ell_0$-distance adversarial example do not include irrelevant features, thereby offering further evidence of the inadequacy of Shapley values for XAI.
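The arguments above rest on comparing exact Shapley values against formal feature (ir)relevance. The sketch below illustrates how exact Shapley values are typically computed in this setting: the characteristic function $v(S)$ is the expected classifier output when the features in $S$ are fixed to the instance's values and the rest vary uniformly over the Boolean domain. This is a minimal, brute-force illustration of that standard construction, not the paper's own tooling; the function names and the uniform-distribution assumption are illustrative.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, n):
    """Exact Shapley values for instance x of an n-feature Boolean
    classifier f, using the characteristic function
    v(S) = E[f(z) | z_S = x_S] under a uniform distribution on {0,1}^n
    (a common setup in game-theoretic XAI; brute force, so only
    suitable for small n)."""
    def v(S):
        S = set(S)
        free = [i for i in range(n) if i not in S]
        total = 0
        # Average f over all assignments to the features outside S.
        for bits in range(2 ** len(free)):
            z = list(x)
            for j, i in enumerate(free):
                z[i] = (bits >> j) & 1
            total += f(z)
        return total / (2 ** len(free))

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi

# Example: f(x) = x0 OR x1 at instance (1, 1); by symmetry both
# features receive the same value, and by efficiency the values
# sum to f(x) - E[f] = 1 - 3/4 = 1/4.
f = lambda z: 1 if (z[0] or z[1]) else 0
print(shapley_values(f, [1, 1], 2))  # → [0.125, 0.125]
```

The paper's counterexamples contrast values computed this way with instance-level feature relevance (membership in some abductive explanation): a feature can be irrelevant in that formal sense and still receive a nonzero Shapley value, or relevant and receive zero.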
Sep-30-2023
- Country:
- Asia > Middle East
- Jordan (0.04)
- Europe
- France > Occitanie
- Haute-Garonne > Toulouse (0.06)
- Spain > Catalonia
- Barcelona Province > Barcelona (0.04)
- United Kingdom > England
- Cambridgeshire > Cambridge (0.04)
- North America > United States
- California > Alameda County
- Berkeley (0.04)
- New York > New York County
- New York City (0.04)
- Genre:
- Research Report (1.00)
- Technology:
- Information Technology
- Artificial Intelligence
- Issues > Social & Ethical Issues (0.86)
- Machine Learning (1.00)
- Natural Language (0.66)
- Representation & Reasoning (1.00)
- Game Theory (1.00)