Investigating the Duality of Interpretability and Explainability in Machine Learning

Garouani, Moncef, Mothe, Josiane, Barhrhouj, Ayah, Aligon, Julien

arXiv.org Artificial Intelligence 

The rapid evolution of machine learning (ML) has led to the widespread adoption of complex "black box" models, such as deep neural networks and ensemble methods. However, their inherently opaque nature raises concerns about transparency and interpretability, making them untrustworthy as decision support systems. To lower this barrier to high-stakes adoption, the research community has focused on developing methods to explain black box models rather than on building models that are inherently interpretable. Designing inherently interpretable models from the outset, however, can pave the path towards responsible and beneficial applications of ML. In this position paper, we clarify the chasm between explaining black boxes and adopting inherently interpretable models. We emphasize the imperative need for model interpretability and, with the aim of attaining better (i.e., more effective or efficient w.r.t. predictive performance) and more trustworthy predictors, provide an experimental evaluation of recent hybrid learning methods that integrate symbolic knowledge into neural network predictors. We demonstrate how interpretable hybrid models could potentially supplant black box ones in different domains.

In the rapidly evolving field of artificial intelligence, machine learning techniques (e.g., artificial neural networks) are among the most widespread tools for high-stakes decision-making across diverse domains of society [1]. The learning process consists of tuning the model's internal parameters in order to mine the useful information buried in the domain data and to maximize its predictive capability [2].
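The learning process described above can be illustrated with a minimal sketch: a model's internal parameters are adjusted iteratively to fit patterns in the data and maximize predictive performance. The toy dataset and the plain logistic-regression learner below are illustrative assumptions, not taken from the paper.

```python
import math

def sigmoid(z):
    """Map a real-valued logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.1, epochs=2000):
    """Tune internal parameters (weight w, bias b) by gradient descent on the log loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            grad = p - y          # derivative of the log loss w.r.t. the logit
            w -= lr * grad * x    # update parameters to reduce prediction error
            b -= lr * grad
    return w, b

# Hypothetical toy data: inputs below 0 labelled 0, above 0 labelled 1.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
accuracy = sum((sigmoid(w * x + b) > 0.5) == bool(y)
               for x, y in zip(xs, ys)) / len(xs)
```

After training, the learned weight and bias separate the two classes on this toy data; note that the fitted parameters of such a simple model remain directly inspectable, in contrast to the opaque parameter sets of deep "black box" predictors.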
