Improving Perturbation-based Explanations by Understanding the Role of Uncertainty Calibration
Thomas Decker, Volker Tresp, Florian Buettner
arXiv.org Artificial Intelligence
Perturbation-based explanations are widely utilized to enhance the transparency of machine-learning models in practice. However, their reliability is often compromised by the unknown model behavior under the specific perturbations used. This paper investigates the relationship between uncertainty calibration (the alignment of model confidence with actual accuracy) and perturbation-based explanations. We show that models systematically produce unreliable probability estimates when subjected to explainability-specific perturbations and theoretically prove that this directly undermines global and local explanation quality. To address this, we introduce ReCalX, a novel approach to recalibrate models for improved explanations while preserving their original predictions. Empirical evaluations across diverse models and datasets demonstrate that ReCalX consistently reduces perturbation-specific miscalibration most effectively while enhancing explanation robustness and the identification of globally important input features.
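The miscalibration the abstract refers to can be quantified with the Expected Calibration Error (ECE): predictions are binned by confidence, and per-bin gaps between average confidence and accuracy are averaged. The sketch below is a minimal, self-contained illustration (not the authors' ReCalX method); the simulated "perturbed" predictions are a hypothetical stand-in for a real model evaluated on explainability-specific perturbations such as feature masking.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: fraction-weighted mean |accuracy - confidence| over confidence bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()    # empirical accuracy in this bin
            conf = confidences[mask].mean()  # mean confidence in this bin
            ece += mask.mean() * abs(acc - conf)
    return ece

# Hypothetical setup: a model that is well calibrated on clean inputs
# but overconfident on perturbed ones (simulated, not real model outputs).
rng = np.random.default_rng(0)
n = 5000
conf_clean = rng.uniform(0.5, 1.0, n)
correct_clean = rng.random(n) < conf_clean           # accuracy tracks confidence
conf_pert = rng.uniform(0.5, 1.0, n)
correct_pert = rng.random(n) < (conf_pert - 0.2)     # accuracy lags confidence

print(expected_calibration_error(conf_clean, correct_clean))  # near 0
print(expected_calibration_error(conf_pert, correct_pert))    # near 0.2
```

A large gap between the two ECE values is the kind of perturbation-specific miscalibration that, per the paper, degrades explanation quality and that recalibration aims to remove.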
Nov-14-2025