DeltaXplainer
Dynamic Interpretability for Model Comparison via Decision Rules
Adam Rida, Marie-Jeanne Lesot, Xavier Renard, Christophe Marsala
Explainable AI (XAI) methods have mostly been built to investigate and shed light on single machine learning models and are not designed to capture and explain differences between multiple models effectively. This paper addresses the challenge of understanding and explaining differences between machine learning models, which is crucial for model selection, monitoring and lifecycle management in real-world applications. We propose DeltaXplainer, a model-agnostic method for generating rule-based explanations describing the differences between two binary classifiers. To assess the effectiveness of DeltaXplainer, we conduct experiments on synthetic and real-world datasets, covering various model comparison scenarios involving different types of concept drift.
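To make the idea concrete, here is a minimal sketch of the kind of approach the abstract describes, assuming the simplest realization: label each sample by whether two classifiers disagree, then fit a shallow decision tree to that "delta" label and read off rules describing where the models differ. The dataset, models, and surrogate depth here are illustrative choices, not the authors' actual implementation.

```python
# Hedged sketch (not the authors' code): explain the *difference* between
# two binary classifiers with rules extracted from a surrogate tree.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# Two binary classifiers to compare (e.g. a model before and after retraining).
model_a = LogisticRegression(max_iter=1000).fit(X, y)
model_b = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Delta label: 1 where the predictions differ, 0 where they agree.
delta = (model_a.predict(X) != model_b.predict(X)).astype(int)

# A shallow tree serves as a rule-based surrogate for the disagreement region;
# each root-to-leaf path is a human-readable rule.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, delta)
rules = export_text(surrogate, feature_names=[f"x{i}" for i in range(5)])
print(rules)
```

The printed rules localize the regions of the feature space where the two models disagree, which is the kind of output a model-agnostic difference explainer would produce.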