Evaluating Explanatory Capabilities of Machine Learning Models in Medical Diagnostics: A Human-in-the-Loop Approach

Bobes-Bascarán, José, Mosqueira-Rey, Eduardo, Fernández-Leal, Ángel, Hernández-Pereira, Elena, Alonso-Ríos, David, Moret-Bonillo, Vicente, Figueirido-Arnoso, Israel, Vidal-Ínsua, Yolanda

arXiv.org Artificial Intelligence 

Explainable AI (XAI) [1] is a research field focused on making Artificial Intelligence (AI) systems in general, and Machine Learning (ML) systems in particular, more understandable to humans. Explainable AI offers several advantages, to name a few: it fosters confidence in a model's predictions by making the decision-making process more transparent, promotes responsible AI development, aids in debugging and identifying issues, and allows auditing AI models to check whether they adhere to regulatory standards. The inherent explainability of AI systems has not remained static but has changed considerably as a result of technological progress. In fact, explainability has become an increasingly difficult issue to tackle, as the internal functioning of AI systems has become less intelligible as they have become more complex [2]. Initially, symbolic AI models were explainable per se: rule-based expert systems, for example, could easily show their users which rules they had followed to reach a given decision, even though those rules could incorporate measures of uncertainty and imprecision, as in fuzzy systems. These types of AI models are considered transparent, which means that the model itself is understandable [3], understandability being the characteristic of a model that allows a human to understand its function without any explanation of its internal structure or of the algorithmic means by which the model processes data internally [4].
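The transparency of rule-based expert systems described above can be illustrated with a minimal sketch (not taken from the paper; the rules, patient attributes, and thresholds below are invented for illustration): because every decision is the result of explicitly fired rules, the system can report its own reasoning trace alongside its conclusion.

```python
# Toy rule-based diagnostic system that is transparent by construction:
# it returns the names of the rules that fired, i.e., its decision path.
# All rules and thresholds here are illustrative assumptions, not medical advice.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]  # predicate over patient findings
    conclusion: str

# Hypothetical rule base.
RULES = [
    Rule("R1", lambda p: p["fever"] and p["cough"], "suspect respiratory infection"),
    Rule("R2", lambda p: p["glucose"] > 126, "suspect diabetes"),
]

def diagnose(patient: dict) -> tuple[list[str], list[str]]:
    """Return (conclusions, fired rule names) so the decision is auditable."""
    fired = [r for r in RULES if r.condition(patient)]
    return [r.conclusion for r in fired], [r.name for r in fired]

conclusions, trace = diagnose({"fever": True, "cough": True, "glucose": 101})
print(conclusions)  # ['suspect respiratory infection']
print(trace)        # ['R1'] -- the explanation is the list of fired rules
```

The explanation here requires no post-hoc technique: the trace of fired rules is the model's own internal functioning, which is what makes such systems understandable in the sense of [3, 4].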
