FGCE: Feasible Group Counterfactual Explanations for Auditing Fairness
Christos Fragkathoulas, Vasiliki Papanikou, Evaggelia Pitoura, Evimaria Terzi
arXiv.org Artificial Intelligence
This paper introduces the first graph-based framework for generating group counterfactual explanations to audit model fairness, a crucial aspect of trustworthy machine learning. Counterfactual explanations are instrumental in understanding and mitigating unfairness by revealing how inputs should change to achieve a desired outcome. Our framework, named Feasible Group Counterfactual Explanations (FGCEs), captures real-world feasibility constraints and constructs subgroups with similar counterfactuals, setting it apart from existing methods. It also addresses key trade-offs in counterfactual generation, including the balance between the number of counterfactuals, their associated costs, and the breadth of coverage achieved. To evaluate these trade-offs and assess fairness, we propose measures tailored to group counterfactual generation. Our experimental results on benchmark datasets demonstrate the effectiveness of our approach in managing feasibility constraints and trade-offs, as well as the potential of our proposed metrics in identifying and quantifying fairness issues.
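To make the core idea concrete, the sketch below shows a toy counterfactual search: instances are nodes in a feasibility graph whose edges are actionable changes, and a breadth-first search returns a shortest sequence of feasible steps that flips a model's decision. All names, the loan-approval rule, and the step sizes are hypothetical illustrations, not the paper's actual FGCE algorithm, which additionally groups instances with similar counterfactuals and balances cost against coverage.

```python
from collections import deque

# Hypothetical black-box decision rule: approve if income >= 50 and debt <= 20.
def model(x):
    income, debt = x
    return income >= 50 and debt <= 20

# Feasibility graph: each edge is an actionable change. Here an applicant can
# raise income by 10 (up to an assumed cap) or pay down debt by 5 per step.
def feasible_neighbors(x):
    income, debt = x
    if income < 100:
        yield (income + 10, debt)
    if debt > 0:
        yield (income, max(debt - 5, 0))

# BFS over the feasibility graph: the first accepted node reached gives a
# shortest sequence of feasible changes, i.e. a minimal-cost counterfactual.
def counterfactual(x):
    seen, queue = {x}, deque([(x, [x])])
    while queue:
        node, path = queue.popleft()
        if model(node):
            return path
        for nb in feasible_neighbors(node):
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, path + [nb]))
    return None

# A rejected applicant with income 30 and debt 30 needs two income raises
# and two debt reductions to reach the nearest approved state (50, 20).
path = counterfactual((30, 30))
```

Grouping many such per-instance paths by shared endpoints is, roughly, what turns individual counterfactuals into the group counterfactuals the paper uses for fairness auditing.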
Nov-15-2024