Collaborating Authors: Bouadi, Tassadit


Generating robust counterfactual explanations

arXiv.org Artificial Intelligence

Counterfactual explanations have become a mainstay of the XAI field. This particularly intuitive type of explanation lets the user understand what small but necessary changes to a given situation would alter a model's prediction. The quality of a counterfactual depends on several criteria: realism, actionability, validity, robustness, etc. In this paper, we are interested in the notion of robustness of a counterfactual. More precisely, we focus on robustness to changes in the counterfactual input. This form of robustness is particularly challenging because it involves a trade-off between the robustness of the counterfactual and its proximity to the example to explain. We propose a new framework, CROCO, that generates robust counterfactuals while effectively managing this trade-off, and that guarantees the user a minimal level of robustness. An empirical evaluation on tabular datasets confirms the relevance and effectiveness of our approach.
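The robustness criterion discussed above can be made concrete with a small sketch: given a classifier and a candidate counterfactual, estimate how often the counterfactual keeps its target class when its features are slightly perturbed. This is a hypothetical illustration of the criterion, not the CROCO algorithm; the function name `robustness_score`, the toy model, and all parameter values are assumptions for the example.

```python
import numpy as np

def robustness_score(predict, x_cf, target, eps=0.1, n_samples=100, seed=0):
    """Estimate how often a counterfactual x_cf keeps its target class
    when perturbed by uniform noise in [-eps, eps] per feature.
    Illustrative sketch only, not the CROCO method."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-eps, eps, size=(n_samples, x_cf.shape[0]))
    preds = np.array([predict(x) for x in x_cf + noise])
    return float(np.mean(preds == target))

# Toy classifier: class 1 iff the feature sum exceeds 1
predict = lambda x: int(x.sum() > 1.0)

# A counterfactual close to the decision boundary is less robust
x_cf = np.array([0.5, 0.55])
print(robustness_score(predict, x_cf, target=1))  # prints a validity rate in [0, 1]
```

A counterfactual far from the boundary scores 1.0, while one hugging the boundary loses validity under many perturbations, which is exactly the robustness/proximity trade-off the abstract describes.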


VCNet: A self-explaining model for realistic counterfactual generation

arXiv.org Artificial Intelligence

Improvements in machine learning techniques for decision systems have led to a rise of applications in domains such as healthcare, credit, and justice. The sensitivity of such domains, as well as the black-box nature of the algorithms, has motivated the need for methods that explain why a prediction was made. For example, if a person's loan is rejected as a result of a model decision, the bank must be able to explain why. In such a context, it can be useful to explain what that person should change to influence the model's decision. As suggested by Wachter et al. [27], one way to build this type of explanation is through counterfactual explanations. A counterfactual is defined as the smallest modification of feature values that changes the prediction of a model to a given output. The explanation also provides important feedback to the user: in the context of a denied credit, a counterfactual is a similar individual whose credit is accepted, and the feature modifications it suggests act as recourse for the user. For privacy reasons, or simply because no similar individual with an opposite decision exists, we aim to generate synthetic individuals as counterfactuals.
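The Wachter et al. formulation referenced above, which trades off flipping the prediction against staying close to the original example, can be sketched by gradient descent for a simple logistic model. This is an illustrative sketch of that formulation, not VCNet itself (which instead trains a self-explaining generative model); `wachter_counterfactual`, the toy weights, and the hyper-parameters are assumptions for the example.

```python
import numpy as np

def wachter_counterfactual(w, b, x, target=1.0, lam=10.0, lr=0.05, steps=500):
    """Wachter-style counterfactual for a logistic model
    p(y=1|x) = sigmoid(w.x + b): minimize
    lam * (p(x') - target)**2 + ||x' - x||**2 by gradient descent.
    Illustrative sketch only, not the VCNet method."""
    x_cf = x.astype(float).copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ x_cf + b)))
        # gradient of lam*(p - target)^2 + ||x_cf - x||^2 w.r.t. x_cf
        grad = lam * 2.0 * (p - target) * p * (1.0 - p) * w + 2.0 * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

w, b = np.array([1.0, 1.0]), -2.0   # toy model: boundary at x1 + x2 = 2
x = np.array([0.5, 0.5])            # rejected example (p < 0.5)
x_cf = wachter_counterfactual(w, b, x)
```

Here `x_cf` is a synthetic individual: it need not match any real person in the data, which is exactly why the abstract argues for generating counterfactuals rather than retrieving nearest neighbors with the opposite decision.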