Assessing reliability of explanations in unbalanced datasets: a use-case on the occurrence of frost events
Vascotto, Ilaria, Blasone, Valentina, Rodriguez, Alex, Bonaita, Alessandro, Bortolussi, Luca
arXiv.org Artificial Intelligence
The use of eXplainable Artificial Intelligence (XAI) methods has become essential in practical applications, given the increasing deployment of Artificial Intelligence (AI) models and the legislative requirements put forward in recent years. A fundamental but often underestimated aspect of explanations is their robustness, a key property that should be satisfied in order to trust them. In this study, we provide preliminary insights on evaluating the reliability of explanations in the specific case of unbalanced datasets, which are very frequent in high-risk use-cases but at the same time considerably challenging for both AI models and XAI methods. We propose a simple evaluation focused on the minority class (i.e. the less frequent one) that leverages on-manifold generation of neighbours, explanation aggregation, and a metric to test explanation consistency. We present a use-case based on a tabular dataset with numerical features, focusing on the occurrence of frost events.
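The pipeline named in the abstract (generate neighbours of a minority-class instance, aggregate their explanations, and score consistency) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the attribution function, the Gaussian neighbour generator (a stand-in for true on-manifold sampling), and the cosine-similarity consistency score are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def explain(x):
    # Toy attribution method: signed, normalised feature magnitudes.
    # A real pipeline would use an XAI method such as feature-importance
    # or gradient-based attributions (assumption, not the paper's method).
    return -x / (np.abs(x).sum() + 1e-9)

def generate_neighbours(x, n=50, scale=0.05):
    # Stand-in for on-manifold neighbour generation: small Gaussian
    # perturbations around x. A faithful implementation would sample
    # from the learned data manifold instead (assumption).
    return x + rng.normal(0.0, scale, size=(n, x.size))

def consistency(x, n_neighbours=50):
    # Aggregate the neighbours' explanations (here: a simple mean) and
    # compare with the original explanation via cosine similarity;
    # values near 1 suggest a robust, consistent explanation.
    e_x = explain(x)
    neigh = generate_neighbours(x, n_neighbours)
    e_agg = np.mean([explain(z) for z in neigh], axis=0)
    return float(np.dot(e_x, e_agg) /
                 (np.linalg.norm(e_x) * np.linalg.norm(e_agg) + 1e-12))

# Hypothetical minority-class (frost) instance with 4 numerical features.
x_frost = np.array([-2.0, 0.3, 1.1, -0.5])
score = consistency(x_frost)
```

With small perturbations the aggregated explanation stays close to the original, so `score` lands near 1; larger perturbation scales or an unstable explainer would drive it down, which is the behaviour such a consistency metric is meant to expose.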
Oct-14-2025