CaBRNet, an open-source library for developing and evaluating Case-Based Reasoning Models

Xu-Darme, Romain, Varasse, Aymeric, Grastien, Alban, Girard, Julien, Chihani, Zakaria

arXiv.org Artificial Intelligence 

As a reflection of the social and ethical concerns related to the increasing use of AI-based systems in modern society, the field of explainable AI (XAI) has gained tremendous momentum in recent years. XAI mainly consists of two complementary avenues of research that aim to shed light on the inner workings of complex ML models. On the one hand, post-hoc explanation methods apply to existing models that have often been trained with the sole purpose of accomplishing a given task as efficiently as possible (e.g., maximising accuracy in a classification task). On the other hand, self-explainable models are designed and trained to produce their own explanations alongside their decisions. The appeal of self-explainable models resides in the idea that, rather than using an approximation (i.e., a post-hoc explanation method) to understand a complex model, it is better to directly enforce a simpler (and more understandable) decision-making process during the design and training of the ML model, provided that the resulting model exhibits an acceptable level of performance.