Explainable AI for clinical risk prediction: a survey of concepts, methods, and modalities
Munib Mesinovic, Peter Watkinson, Tingting Zhu
arXiv.org Artificial Intelligence
Recent advancements in AI applications to healthcare have shown incredible promise in surpassing human performance in diagnosis and disease prognosis. With the increasing complexity of AI models, however, concerns have grown regarding their opacity, their potential for bias, and the need for interpretability. To ensure trust and reliability in AI systems, especially in clinical risk prediction models, explainability becomes crucial. Explainability usually refers to an AI system's ability to provide a robust interpretation of its decision-making logic, or of the decisions themselves, to human stakeholders. In clinical risk prediction, other aspects of explainability, such as fairness, bias, trust, and transparency, also represent important concepts beyond interpretability alone. In this review, we address the relationships between these concepts, as they are often used together or interchangeably. The review also discusses recent progress in developing explainable models for clinical risk prediction, highlighting the importance of quantitative and clinical evaluation and validation across the modalities most common in clinical practice. It emphasizes the need for external validation and for combining diverse interpretability methods to enhance trust and fairness. Rigorous testing, such as evaluation on synthetic datasets with known generative factors, can further improve the reliability of explainability methods. Open-access and code-sharing resources are essential for transparency and reproducibility, enabling the growth and trustworthiness of explainable AI research. While challenges remain, an end-to-end approach to explainability in clinical risk prediction, incorporating stakeholders from clinicians to developers, is essential for success.
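The abstract's recommendation to stress-test explainability methods on synthetic data with known generative factors can be made concrete with a short sketch. The following is a minimal illustration, not taken from the paper, assuming scikit-learn is available; the feature count, coefficients, and clinical framing are hypothetical. It fits a classifier to synthetic data in which only two features drive the outcome, then checks that a post-hoc attribution method (here, permutation importance) recovers those features.

```python
# Minimal sketch (illustrative, not the paper's method): validate a post-hoc
# explanation against a synthetic dataset whose generative factors are known.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "clinical" data: 1000 patients, 10 features, but only features 0 and 1
# (hypothetical risk factors) actually determine the outcome.
X = rng.normal(size=(1000, 10))
logits = 1.5 * X[:, 0] + 2.0 * X[:, 1]
y = (rng.random(1000) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post-hoc explanation: permutation importance computed on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

# Because the generative factors are known, the explanation can be checked against
# ground truth: the truly informative features should outrank the noise features.
ranking = np.argsort(result.importances_mean)[::-1]
print("Feature ranking by importance:", ranking)
assert set(ranking[:2]) == {0, 1}, "explanation failed the synthetic ground-truth check"
```

The same pattern extends to other attribution methods: because the data-generating process is controlled, any method whose attributions disagree with the known factors can be flagged before it is trusted on real clinical data.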
Aug-16-2023
- Country:
  - Europe > United Kingdom > England > Oxfordshire > Oxford (0.28)
- Genre:
  - Overview (1.00)
  - Research Report
    - Experimental Study (0.92)
    - New Finding (0.67)
- Industry:
  - Government > Regional Government (1.00)
  - Health & Medicine
    - Consumer Health (1.00)
    - Diagnostic Medicine > Imaging (1.00)
    - Epidemiology (1.00)
    - Government Relations & Public Policy (0.92)
    - Health Care Providers & Services (1.00)
    - Health Care Technology > Medical Record (0.68)
    - Pharmaceuticals & Biotechnology (1.00)
    - Therapeutic Area
      - Cardiology/Vascular Diseases (1.00)
      - Immunology (0.68)
      - Infections and Infectious Diseases (1.00)
      - Neurology (1.00)
      - Oncology (1.00)
      - Ophthalmology/Optometry (0.67)
      - Pulmonary/Respiratory Diseases (1.00)
  - Information Technology > Security & Privacy (1.00)
- Technology:
  - Information Technology
    - Artificial Intelligence
      - Applied AI (1.00)
      - Cognitive Science (1.00)
      - Issues > Social & Ethical Issues (1.00)
      - Machine Learning
        - Neural Networks > Deep Learning (1.00)
        - Statistical Learning (1.00)
      - Natural Language > Explanation & Argumentation (1.00)
      - Representation & Reasoning
        - Expert Systems (1.00)
        - Rule-Based Reasoning (0.92)
        - Uncertainty > Fuzzy Logic (0.93)
    - Biomedical Informatics > Clinical Informatics (0.92)
    - Data Science > Data Mining (1.00)
    - Sensing and Signal Processing > Image Processing (1.00)