Counterfactual Explanations for Linear Optimization

Kurtz, Jannis, Birbil, Ş. İlker, Hertog, Dick den

arXiv.org Artificial Intelligence 

As artificial intelligence (AI) continues to influence our daily lives, the need for interpretability and transparency increases. This need for comprehensive explanations has been accelerated partly by legislative initiatives such as the General Data Protection Regulation, the European Union AI Act, and the US Blueprint for an AI Bill of Rights (EUR-Lex, 2016, 2021; OSTP, 2022). These regulations emphasize the necessity of providing clear and understandable explanations for automated systems, echoing society's demand for trustworthy AI and aligning with the right-to-explanation principle. These developments have attracted the attention of researchers in machine learning, who have started to develop algorithms that pave the way for explainable AI (XAI) (Biran and Cotton, 2017). Among these efforts, the concept of counterfactual explanations (CEs) has emerged as one of the key approaches in XAI to understanding the inner workings of complex AI models (Wachter et al., 2018; Maragno et al., 2022). CEs aim to identify the (smallest) change in personal data that would lead to a desired model outcome.
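To make the CE definition concrete, here is a minimal sketch (not the paper's method) for the simplest case: a linear scoring model. We seek the smallest change to an input, measured in squared Euclidean norm, that lifts the model's score to a desired threshold. For a linear model f(x) = w·x + b this has a closed form: project x onto the hyperplane w·x + b = t. All names (`w`, `b`, `x`, `t`) are illustrative assumptions.

```python
def counterfactual(w, b, x, t):
    """Smallest L2 change to x so that the linear score w.x + b equals t.

    Closed form: delta = (t - (w.x + b)) / ||w||^2 * w, i.e. the
    orthogonal projection of x onto the hyperplane w.x + b = t.
    """
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    gap = t - score
    norm_sq = sum(wi * wi for wi in w)
    delta = [gap * wi / norm_sq for wi in w]
    return [xi + di for xi, di in zip(x, delta)]


# Hypothetical example: an applicant whose score falls below the
# acceptance threshold t = 0; the counterfactual gives the minimal
# feature change that would lead to the desired outcome.
w = [2.0, -1.0]
b = -3.0
x = [1.0, 1.0]                      # score = 2 - 1 - 3 = -2 (rejected)
x_cf = counterfactual(w, b, x, t=0.0)
new_score = sum(wi * xi for wi, xi in zip(w, x_cf)) + b
print(x_cf, round(new_score, 6))    # x_cf reaches the threshold exactly
```

For nonlinear models, or for the linear-optimization setting the paper studies, no such closed form exists in general and the CE is itself found by solving an optimization problem.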
