Trade-off Between Efficiency and Consistency for Removal-based Explanations
Zhang, Yifan, He, Haowei, Tan, Zhiquan, Yuan, Yang
Most predominant explanation methods, such as SHAP and LIME, employ removal-based techniques: they evaluate the impact of individual features by simulating scenarios in which specific features are omitted. However, these methods primarily emphasize efficiency in the original context, which in general comes at the cost of consistency. In this paper, we demonstrate that such inconsistency is an inherent aspect of these approaches by establishing the Impossible Trinity Theorem, which states that interpretability, efficiency, and consistency cannot hold simultaneously. Since an ideal explanation is therefore unattainable, we propose interpretation error as a metric for gauging inefficiency and inconsistency. We then present two novel algorithms, built on the standard polynomial basis, that minimize interpretation error. Empirically, the proposed methods reduce interpretation error by up to 31.8 times compared with alternative techniques. Code is available at https://github.com/trusty-ai/efficient-consistent-explanations.
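As a concrete illustration of the removal-based setup the abstract refers to, the sketch below scores each feature by occluding it with a baseline value and then measures how far the scores are from satisfying efficiency (attributions summing to f(x) minus f(baseline)). This is a minimal single-feature occlusion sketch, not the paper's two algorithms; the function names, the zero baseline, and the residual used as a crude proxy for interpretation error are all illustrative assumptions.

```python
import numpy as np

def occlusion_attributions(f, x, baseline):
    """Removal-based attribution sketch: score each feature by the change
    in the model output when that feature is replaced with a baseline."""
    attributions = np.zeros_like(x, dtype=float)
    fx = f(x)
    for i in range(len(x)):
        x_removed = x.copy()
        x_removed[i] = baseline[i]  # "remove" feature i
        attributions[i] = fx - f(x_removed)
    return attributions

def efficiency_gap(f, x, baseline, attributions):
    """Efficiency asks that attributions sum to f(x) - f(baseline);
    the residual below is one crude proxy for interpretation error."""
    return abs(attributions.sum() - (f(x) - f(baseline)))

# Toy usage: for a model with feature interactions, single-feature
# occlusion scores cannot satisfy efficiency exactly, so the gap is > 0.
f = lambda z: z[0] * z[1] + z[2]
x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
attr = occlusion_attributions(f, x, baseline)
print(attr, efficiency_gap(f, x, baseline, attr))  # gap is 2.0 here
```

The nonzero gap on even this tiny interacting model hints at the trade-off the paper formalizes: forcing the scores to be efficient would require redistributing the interaction term, which is where inconsistency enters.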
Oct-20-2023
- Country:
  - Asia > China (0.14)
  - Europe (0.28)
  - North America > United States (0.14)
- Genre:
  - Research Report (0.81)
- Industry:
  - Leisure & Entertainment (0.46)
- Technology:
  - Information Technology
    - Artificial Intelligence
      - Machine Learning
        - Neural Networks > Deep Learning (0.93)
        - Statistical Learning (1.00)
      - Natural Language (1.00)
      - Representation & Reasoning (1.00)
      - Vision (0.93)
    - Data Science (0.93)
    - Information Management (0.92)