Generating Counterfactual and Contrastive Explanations using SHAP

arXiv.org Artificial Intelligence

With the advent of GDPR, the domain of explainable AI and model interpretability has gained added impetus. Methods to extract and communicate visibility into decision-making models have become a legal requirement. Two specific types of explanations, contrastive and counterfactual, have been identified as suitable for human understanding. In this paper, we propose a model-agnostic method and its systemic implementation to generate these explanations using Shapley additive explanations (SHAP).

GDPR's Right to Explanation: the General Data Protection Regulation (GDPR) is a regulation focused on data protection and on algorithmic decision-making, and is binding on companies operating in the European Union. One of the controversial provisions of this directive is the 'Right to Explanation', which allows those significantly (socially) impacted by the decision of an algorithm to demand an explanation or rationale behind the decision (e.g., being denied a loan application).
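As a concrete illustration of the idea, the sketch below (our own illustrative code, not the authors' implementation) uses the model-agnostic KernelSHAP explainer to rank features by their contribution to the current prediction, then greedily perturbs the most influential ones toward a reference point until the predicted class flips. The scikit-learn dataset, the 50-sample background set, and the mean-value reference are placeholder assumptions.

```python
# Minimal sketch: SHAP attributions driving a contrastive/counterfactual search.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

x = X[0].copy()                                   # instance to explain
background = X[np.random.choice(len(X), 50, replace=False)]

# Model-agnostic attribution via KernelSHAP (any predict_proba-like function works).
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1], background)
phi = explainer.shap_values(x, nsamples=200)      # one attribution per feature

original_class = model.predict(x.reshape(1, -1))[0]
reference = X.mean(axis=0)                        # simple "typical" reference point

# Greedily move the features that push hardest toward the current prediction.
for n_changed, idx in enumerate(np.argsort(-np.abs(phi)), start=1):
    x[idx] = reference[idx]
    if model.predict(x.reshape(1, -1))[0] != original_class:
        print(f"Prediction flipped after changing {n_changed} feature(s)")
        break
```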


Efficient Search for Diverse Coherent Explanations

arXiv.org Machine Learning

This paper proposes new search algorithms for counterfactual explanations based upon mixed integer programming. We are concerned with complex data in which variables may take any value from a contiguous range or an additional set of discrete states. We propose a novel set of constraints that we refer to as a "mixed polytope" and show how this can be used with an integer programming solver to efficiently find coherent counterfactual explanations, i.e. solutions that are guaranteed to map back onto the underlying data structure, while avoiding the need for brute-force enumeration. We also look at the problem of diverse explanations and show how these can be generated within our framework.
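The sketch below shows the general flavor of such a formulation, though it is not the paper's mixed-polytope encoding: it uses the PuLP MIP solver to find the minimal L1 change to a factual instance that pushes a linear classifier's score across the decision boundary, with a mix of continuous and binary features. The weights, bounds and margin are placeholder assumptions.

```python
# Minimal sketch: counterfactual search as a mixed integer program.
import pulp

w = [0.8, -1.2, 0.5, 2.0]       # learned weights (placeholder values)
b = -0.3                         # bias
x = [0.4, 0.7, 0.1, 0.0]         # factual instance; last feature is binary
binary = [False, False, False, True]
margin = 0.01                    # require w.x' + b >= margin (flip to positive class)

prob = pulp.LpProblem("counterfactual", pulp.LpMinimize)
xp, d = [], []                   # new feature values and absolute-change variables
for i in range(len(x)):
    if binary[i]:
        xp.append(pulp.LpVariable(f"x{i}", cat="Binary"))
    else:
        xp.append(pulp.LpVariable(f"x{i}", lowBound=0, upBound=1))
    d.append(pulp.LpVariable(f"d{i}", lowBound=0))
    prob += d[i] >= xp[i] - x[i]                  # linearized |x'_i - x_i|
    prob += d[i] >= x[i] - xp[i]

prob += pulp.lpSum(d)                             # objective: minimal total change
prob += pulp.lpSum(w[i] * xp[i] for i in range(len(x))) + b >= margin  # class flip

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("counterfactual:", [pulp.value(v) for v in xp])
```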


Counterfactual Explanations for Machine Learning: Challenges Revisited

arXiv.org Artificial Intelligence

Counterfactual explanations (CFEs) are an emerging technique under the umbrella of interpretability of machine learning (ML) models. They provide "what if" feedback of the form "if an input datapoint were x' instead of x, then an ML model's output would be y' instead of y." Counterfactual explainability for ML models has yet to see widespread adoption in industry. In this short paper, we posit reasons for this slow uptake. Leveraging recent work outlining desirable properties of CFEs and our experience running the ML wing of a model monitoring startup, we identify outstanding obstacles hindering CFE deployment in industry.
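For readers unfamiliar with the form such feedback takes, here is a toy illustration (not a deployable CFE method): the counterfactual x' is simply the closest training point that the model classifies differently from the query x. The iris dataset and logistic-regression model are placeholders.

```python
# Toy illustration of the "what if" form of a counterfactual explanation.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]
y_pred = model.predict(x.reshape(1, -1))[0]

preds = model.predict(X)
candidates = X[preds != y_pred]                   # points with a different outcome
x_cf = candidates[np.argmin(np.linalg.norm(candidates - x, axis=1))]

print("if the input were", x_cf, "the model would predict",
      model.predict(x_cf.reshape(1, -1))[0], "instead of", y_pred)
```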


The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations

arXiv.org Artificial Intelligence

Post-hoc interpretability approaches have proven to be powerful tools for generating explanations of the predictions made by a trained black-box model. However, they create the risk of producing explanations that reflect artifacts learned by the model rather than actual knowledge from the data. This paper focuses on the case of counterfactual explanations and asks whether the generated instances can be justified, i.e. continuously connected to some ground-truth data. We evaluate the risk of generating unjustified counterfactual examples by investigating the local neighborhoods of instances whose predictions are to be explained, and show that this risk is quite high for several datasets. Furthermore, we show that most state-of-the-art approaches do not differentiate justified from unjustified counterfactual examples, leading to less useful explanations.
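The following sketch is a rough, assumption-laden approximation of such a justification check, not the paper's exact procedure: a candidate counterfactual is treated as justified if it can be linked, through short hops between training points that the model assigns to the counterfactual's class, to an instance whose ground-truth label is also that class. The epsilon threshold and the restriction of chain links to training points are simplifying assumptions.

```python
# Rough sketch: epsilon-chain check for "justified" counterfactuals.
import numpy as np

def is_justified(x_cf, cf_class, X_train, y_train, model, eps=0.5):
    """Approximate check: can x_cf be linked, by steps shorter than eps through
    training points the model assigns to cf_class, to a point whose true label
    is also cf_class (i.e. ground-truth support for that region)?"""
    in_region = model.predict(X_train) == cf_class
    pool, labels = X_train[in_region], y_train[in_region]
    if len(pool) == 0:
        return False

    reached = np.linalg.norm(pool - x_cf, axis=1) < eps     # first hop from x_cf
    frontier = reached.copy()
    while frontier.any():
        # Expand the reachable set by one hop of length < eps.
        dists = np.linalg.norm(pool[:, None, :] - pool[frontier][None, :, :], axis=2)
        new = (dists < eps).any(axis=1) & ~reached
        reached |= new
        frontier = new

    # Justified if the chain reaches a ground-truth instance of the same class.
    return bool((labels[reached] == cf_class).any())
```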


A Few Good Counterfactuals: Generating Interpretable, Plausible and Diverse Counterfactual Explanations

arXiv.org Artificial Intelligence

Counterfactual explanations provide a potentially significant solution to the Explainable AI (XAI) problem, but good, native counterfactuals have been shown to occur rarely in most datasets. Hence, the most popular methods generate synthetic counterfactuals using blind perturbation. However, such methods have several shortcomings: the resulting counterfactuals (i) may not be valid data-points (they often use feature values that do not naturally occur), (ii) may lack the sparsity of good counterfactuals (if they modify too many features), and (iii) may lack diversity (if the generated counterfactuals are minimal variants of one another). We describe a method designed to overcome these problems, one that adapts native counterfactuals in the original dataset to generate sparse, diverse synthetic counterfactuals from naturally occurring features. A series of experiments is reported that systematically explores parametric variations of this novel method on common datasets to establish the conditions for optimal performance.
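A simplified sketch of the adaptation idea is given below (placeholder names, not the paper's exact algorithm, and it ignores the diversity aspect): pick the native counterfactual from the training data that differs from the query in the fewest features, then copy those differing, naturally occurring feature values into the query one at a time until the predicted class flips, keeping the result sparse and plausible.

```python
# Simplified sketch: adapting a native counterfactual from the training data.
import numpy as np

def adapt_native_counterfactual(x, model, X_train, tol=1e-6):
    y_pred = model.predict(x.reshape(1, -1))[0]
    others = X_train[model.predict(X_train) != y_pred]   # natives across the boundary
    if len(others) == 0:
        return None

    # Pick the native counterfactual that differs from x in the fewest features.
    diffs = np.abs(others - x) > tol
    native = others[np.argmin(diffs.sum(axis=1))]

    # Copy differing feature values over one at a time until the class flips,
    # so the result stays sparse and uses only naturally occurring values.
    x_cf = x.copy()
    for idx in np.where(np.abs(native - x) > tol)[0]:
        x_cf[idx] = native[idx]
        if model.predict(x_cf.reshape(1, -1))[0] != y_pred:
            return x_cf
    return None
```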