One-for-many Counterfactual Explanations by Column Generation
Andrea Lodi, Jasone Ramírez-Ayerbe
arXiv.org Artificial Intelligence
In recent years, machine learning algorithms have been used in high-stakes decision-making settings, such as healthcare, loan approval, or parole decisions (Baesens et al., 2003; Zeng et al., 2022, 2017). Consequently, there is growing interest in, and need for, their explainability and interpretability (Du et al., 2019; Jung et al., 2020; Molnar et al., 2020; Rudin et al., 2022; Zhang et al., 2019). Once a supervised classification model has been trained, one may wish to know what changes must be made to the features of an instance in order to change the prediction made by the classifier. These changes are the so-called counterfactual explanations (Martens and Provost, 2014; Wachter et al., 2017). There is a growing literature on algorithms for generating counterfactual explanations; see Artelt and Hammer (2019); Guidotti (2022); Karimi et al. (2022); Sokol and Flach (2019); Stepin et al. (2021); Verma et al. (2022) for recent surveys of counterfactual analysis. Nevertheless, most of this work focuses on the single-instance, single-counterfactual case, in which a single counterfactual is provided for one specific instance (Wachter et al., 2017; Parmentier and Vidal, 2021).
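To make the notion concrete, here is a minimal sketch (not the paper's column-generation method) of a counterfactual explanation for a linear classifier: the smallest L2 change to an instance that flips the prediction is a projection onto the decision boundary, nudged slightly past it. All names here are illustrative.

```python
import numpy as np

def counterfactual_linear(x, w, b, eps=1e-6):
    """Smallest L2 perturbation of x that flips sign(w @ x + b).

    Illustrative only: real counterfactual methods also handle
    nonlinear models, sparsity, and actionability constraints.
    """
    score = w @ x + b
    # Project onto the hyperplane w @ x + b = 0 ...
    delta = -(score / (w @ w)) * w
    # ... then step a tiny bit further so the prediction actually changes.
    return x + delta * (1 + eps)

w = np.array([1.0, -2.0])
b = 0.5
x = np.array([2.0, 0.0])              # score = 2.5 -> predicted class +1
x_cf = counterfactual_linear(x, w, b)
print(np.sign(w @ x + b), np.sign(w @ x_cf + b))  # → 1.0 -1.0
```

The "one-for-many" setting studied in the paper generalizes this: instead of computing one such perturbation per instance, a small set of shared counterfactuals must cover a whole group of instances.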
Feb-12-2024