Unifying Perspectives: Plausible Counterfactual Explanations on Global, Group-wise, and Local Levels
Wielopolski, Patryk, Furman, Oleksii, Stefanowski, Jerzy, Zięba, Maciej
–arXiv.org Artificial Intelligence
Growing regulatory and societal pressures demand increased transparency in AI, particularly in understanding the decisions made by complex machine learning models. Counterfactual Explanations (CFs) have emerged as a promising technique within Explainable AI (XAI), offering insights into individual model predictions. However, to understand the systemic biases and disparate impacts of AI models, it is crucial to move beyond local CFs and embrace global explanations, which offer a holistic view across diverse scenarios and populations. Unfortunately, generating Global Counterfactual Explanations (GCEs) faces challenges in computational complexity, in defining the scope of "global," and in ensuring the explanations are both globally representative and locally plausible. To address these challenges, we introduce a novel unified approach for generating Local, Group-wise, and Global Counterfactual Explanations for differentiable classification models via gradient-based optimization. This framework aims to bridge the gap between individual and systemic insights, enabling a deeper understanding of model decisions and their potential impact on diverse populations.
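To illustrate the general idea of gradient-based counterfactual search that the abstract refers to (this is a generic sketch, not the authors' method), the snippet below finds a local counterfactual for a fixed logistic classifier by minimizing a prediction loss plus a proximity penalty with plain gradient descent. The weights `w`, `b` and all hyperparameters are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def counterfactual(x, w, b, y_target=1.0, lam=0.1, lr=0.5, steps=500):
    """Minimize (f(x') - y_target)^2 + lam * ||x' - x||^2 by gradient descent,
    where f is a fixed logistic classifier. Illustrative sketch only."""
    x_cf = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_cf + b)
        # gradient of the squared prediction loss: 2(p - y) * p(1 - p) * w
        grad_pred = 2.0 * (p - y_target) * p * (1.0 - p) * w
        # gradient of the proximity (plausibility-style) penalty
        grad_prox = 2.0 * lam * (x_cf - x)
        x_cf -= lr * (grad_pred + grad_prox)
    return x_cf

# hypothetical classifier and input classified as class 0
w = np.array([1.5, -2.0])
b = -0.5
x = np.array([-1.0, 1.0])
x_cf = counterfactual(x, w, b)  # nudged across the decision boundary
```

The proximity term `lam * ||x' - x||^2` is a minimal stand-in for the plausibility constraints discussed in the abstract; group-wise or global variants would share structure across many inputs rather than optimizing each counterfactual independently.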
May-27-2024