Interpretable vs Explainable Machine Learning

#artificialintelligence

From medical diagnoses to credit underwriting, machine learning models are being used to make increasingly important decisions. To trust the systems powered by these models, we need to know how they make predictions. This is why the difference between an interpretable and an explainable model is important. How we understand our models, and the degree to which we can truly understand them, depends on whether they are interpretable or explainable. Put briefly, an interpretable model can be understood by a human without any additional aids or techniques.
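
To make the distinction concrete, here is a minimal sketch on toy data (scikit-learn; not from the article): a linear model is interpretable because its fitted coefficients can be read directly, while a gradient-boosted black box needs a separate post-hoc technique, such as permutation importance, before a human can say which features drive its predictions.

```python
# Minimal sketch: interpretable vs. explainable, on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Interpretable: the fitted coefficients ARE the explanation.
linear = LinearRegression().fit(X, y)
print("coefficients:", dict(zip(["x0", "x1", "x2"], linear.coef_.round(2))))

# Explainable: the black box is opaque on its own; a post-hoc aid
# (permutation importance) is needed to see which features matter.
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
print("permutation importances:", result.importances_mean.round(2))
```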


Bridging the Gap Between Explainable AI and Uncertainty Quantification to Enhance Trustability

arXiv.org Artificial Intelligence

After the tremendous advances of deep learning and other AI methods, more attention is turning to further properties of modern approaches, such as interpretability and fairness, combined in frameworks like Responsible AI. Two research directions, namely Explainable AI and Uncertainty Quantification, are becoming more and more important, but so far they have never been combined and jointly explored. In this paper, I show how the two research areas have potential for combination, why more research should be done in this direction, and how this would lead to an increase in the trustability of AI systems.
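
As one illustration of what such a combination might look like (my own sketch under simple assumptions, not the paper's method), an ensemble model can supply both sides at once: its aggregate feature importances act as a crude explanation, while disagreement among its members acts as a crude uncertainty estimate for an individual prediction.

```python
# Sketch: pairing an explanation with an uncertainty estimate (toy data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

x = X[:1]  # one instance to explain

# Explanation side: global feature importances (a stand-in for LIME/SHAP).
attribution = forest.feature_importances_.round(2)

# Uncertainty side: spread of the per-tree votes quantifies how sure the
# ensemble is about this particular prediction.
votes = np.array([tree.predict(x)[0] for tree in forest.estimators_])
print(f"attribution={attribution}")
print(f"P(class 1)={votes.mean():.2f}, ensemble std={votes.std():.2f}")
```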


A Beginner's Guide to Four Principles of Explainable Artificial Intelligence

#artificialintelligence

Artificial Intelligence is creating cutting-edge technologies for more efficient workflows in multiple industries across the world in this tech-driven era. Many machine learning and deep learning algorithms are too complicated for anyone other than AI engineers or related specialists to understand. In response, self-explaining algorithms have been developed so that stakeholders and partners can comprehend the entire process of transforming enormous, complex sets of real-time data into meaningful, in-depth insights. This is known as Explainable Artificial Intelligence, or XAI, in which the results of these solutions can be easily understood by humans. It helps AI designers explain how AI machines have generated a specific insight or outcome, so that businesses can thrive in the market. Multiple online courses and platforms are available for a better understanding of Explainable AI and for designing interpretable and inclusive Artificial Intelligence.


Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain

#artificialintelligence

In this paper we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. By applying three types of explainable methods to the same medical image data set, our aim was to improve the comprehensibility of the decisions provided by a Convolutional Neural Network (CNN). The visual explanations were provided on in-vivo gastric images obtained from video capsule endoscopy (VCE), with the goal of increasing health professionals' trust in the black-box predictions. We implemented two post-hoc interpretable machine learning methods, LIME and SHAP, and the alternative explanation approach of Contextual Importance and Utility (CIU). The produced explanations were assessed in a human evaluation study.
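
As a rough illustration of the post-hoc setup described in the abstract (the authors' CNN and VCE data are not available here, so the pretrained model and the input image below are placeholders), LIME's image explainer can be wrapped around any classifier that maps a batch of images to class probabilities:

```python
# Sketch: LIME on an image classifier; model and image are stand-ins.
import numpy as np
import tensorflow as tf
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Placeholder model; the paper's gastric-image CNN would be used instead.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

def classifier_fn(images):
    # LIME passes batches of perturbed copies; return class probabilities.
    x = tf.keras.applications.mobilenet_v2.preprocess_input(
        images.astype("float32"))
    return model.predict(x, verbose=0)

image = np.random.randint(0, 255, (224, 224, 3), np.uint8)  # stand-in frame

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=1, hide_color=0, num_samples=1000)

# Overlay the superpixels that most support the top predicted class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)
overlay = mark_boundaries(img / 255.0, mask)
```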