Beyond Known Reality: Exploiting Counterfactual Explanations for Medical Research

Toygar Tanyel, Serkan Ayvaz, Bilgin Keserci

arXiv.org Artificial Intelligence 

As we incorporate automated decision-making systems into the real world, questions of explainability and accountability become increasingly important [1]. In fields such as medicine and healthcare, failing to address these challenges can seriously limit the adoption of computer-based systems that rely on machine learning (ML) and computational intelligence methods for data analysis in real-world applications [2-4].

Previous research in eXplainable Artificial Intelligence (XAI) has primarily focused on developing techniques to interpret the decisions of black-box ML models. For instance, widely used approaches such as Local Interpretable Model-agnostic Explanations (LIME) [5] and SHapley Additive exPlanations (SHAP) [6] offer attribution-based explanations of ML models. These methods can help computer scientists and ML experts understand the reasoning behind an AI model's predictions. However, end-users, including clinicians and patients, may be more interested in the practical implications of a model's predictions for themselves than in how the model arrived at them. A patient's primary concern, for example, is not only obtaining information about their illness but also receiving guidance on how to regain their health; the decision-making process of the doctor or the ML model matters less to them. Counterfactual explanations [7, 8] are a model-agnostic interpretation technique that identifies the minimal changes to the input features needed to yield a different output, aligned with a specific desired outcome.
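To make this concrete, the sketch below implements a simple gradient-based counterfactual search in the spirit of Wachter et al. [7] for a binary logistic-regression classifier: gradient descent on a prediction loss toward the desired class, plus an L1 distance penalty that keeps the counterfactual close to the original instance. The dataset, the penalty weight `lam`, and the step sizes are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of a Wachter-style counterfactual search (assumed setup,
# not the paper's method): minimize BCE(f(x'), target) + lam * ||x' - x||_1.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf = LogisticRegression(max_iter=1000).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]  # logistic regression is differentiable

def counterfactual(x, target, lam=0.1, lr=0.05, steps=2000):
    """Gradient descent on the counterfactual objective for one instance."""
    x_cf = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ x_cf + b)))  # model probability of class 1
        grad_pred = (p - target) * w               # gradient of BCE w.r.t. x_cf
        grad_dist = lam * np.sign(x_cf - x)        # gradient of L1 penalty
        x_cf -= lr * (grad_pred + grad_dist)
    return x_cf

x = X[0]                                   # a factual instance
target = 1 - clf.predict([x])[0]           # flip to the opposite class
x_cf = counterfactual(x, target)
changed = np.argsort(-np.abs(x_cf - x))[:5]
print("prediction:", clf.predict([x])[0], "->", clf.predict([x_cf])[0])
print("most-changed feature indices:", changed)
```

The most-changed features can then be read as actionable suggestions ("what would have to differ for the model to decide otherwise"), which is the end-user-facing question counterfactual explanations target, in contrast to the attribution scores produced by LIME or SHAP.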
