This paper demonstrates the potential of Explainable Artificial Intelligence (XAI) methods for decision support in medical image analysis. By applying three types of explanation methods to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by a Convolutional Neural Network (CNN). The visual explanations were provided on in-vivo gastric images obtained from video capsule endoscopy (VCE), with the goal of increasing health professionals' trust in the black-box predictions. We implemented two post-hoc interpretable machine learning methods, LIME and SHAP, and an alternative explanation approach, Contextual Importance and Utility (CIU). The produced explanations were assessed in a human evaluation study.
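To illustrate the idea behind a post-hoc method like LIME on images, here is a minimal, self-contained sketch of the core procedure: perturb interpretable regions of the image, query the black box, and fit a local linear surrogate. It uses a toy grid segmentation in place of real superpixels, and `predict_fn` is a hypothetical stand-in for the CNN's scoring function; a real analysis would use the `lime` package itself.

```python
import numpy as np

def lime_image_sketch(image, predict_fn, grid=4, num_samples=200, seed=0):
    """Toy LIME-style explanation: mask grid cells ("superpixels"),
    query the black box on each perturbed image, and fit a weighted
    linear surrogate whose coefficients rank each cell's importance."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    gh, gw = h // grid, w // grid
    n_cells = grid * grid

    masks = rng.integers(0, 2, size=(num_samples, n_cells))  # 1 = cell kept
    preds = np.empty(num_samples)
    for i, m in enumerate(masks):
        perturbed = image.copy()
        for c in range(n_cells):
            if m[c] == 0:  # black out the masked cell
                r, col = divmod(c, grid)
                perturbed[r*gh:(r+1)*gh, col*gw:(col+1)*gw] = 0
        preds[i] = predict_fn(perturbed)

    # weight samples by similarity to the unperturbed image
    weights = np.exp(-(n_cells - masks.sum(axis=1)) / n_cells)
    X = masks * np.sqrt(weights)[:, None]
    y = preds * np.sqrt(weights)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef.reshape(grid, grid)  # per-cell importance map
```

For a toy predictor that only looks at the top-left quadrant of the image, the returned importance map concentrates on the corresponding cells, which is exactly the behaviour one wants from a visual explanation.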
Banks use AI to determine whether to extend credit, and how much, to customers. Radiology departments deploy AI to help distinguish between healthy tissue and tumors. And HR teams employ it to work out which of hundreds of resumes should be sent on to recruiters. These are just a few examples of how AI is being adopted across industries. And with so much at stake, businesses and governments adopting AI and machine learning are increasingly being pressed to lift the veil on how their AI models make decisions.
Explainable AI (XAI) describes an AI model, its expected impact and any potential biases, and helps you understand the steps an AI technique takes to arrive at a decision. In this article, we take a detailed look at XAI and explore how you can implement it in your organisation. "About half (46%) of South African companies indicate that they are already implementing AI within their organisations." Why is explainable AI important for your business?
Engineering Application of Data Science can be defined as using Artificial Intelligence and Machine Learning to model physical phenomena purely from facts (field measurements, data). The main objective of this technology is the complete avoidance of assumptions, simplifications, preconceived notions, and biases. One of its major characteristics is the incorporation of Explainable Artificial Intelligence (XAI). While using actual field measurements as the main building blocks for modeling physical phenomena, Engineering Application of Data Science incorporates several types of machine learning algorithms, including artificial neural networks, fuzzy set theory, and evolutionary computing. Its data-driven predictive models are not unexplainable "black boxes"; they are reasonably explainable.
As mentioned earlier in this article, Type Curves generated from mathematical equations are very "well-behaved" (continuous, non-linear, with a characteristic shape that changes in a similar fashion from curve to curve). Figure 16 demonstrates a few more examples of Type Curves that have been generated in reservoir engineering. The question is: what is the main characteristic of a model that is capable of generating a series of well-behaved Type Curves? The immediate, simple answer would be: "a model capable of generating a series of well-behaved Type Curves is a physics-based model developed from one or more mathematical equations. The well-behaved Type Curves that clearly explain the behavior of the physics-based model are generated through the solutions of those mathematical equations."
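As a concrete illustration of such a well-behaved, equation-generated family, here is a short sketch using the Arps decline equations, a standard type-curve family in reservoir engineering (chosen for illustration; the specific equations behind Figure 16 are not given here). Varying the b-factor produces a series of smooth, continuous curves that change shape in a similar fashion from curve to curve:

```python
import math

def arps_rate(qi, di, b, t):
    """Arps decline-curve rate q(t): hyperbolic for b > 0,
    exponential in the limit b = 0."""
    if b == 0:
        return qi * math.exp(-di * t)
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def type_curves(qi=1000.0, di=0.1, bs=(0.0, 0.3, 0.6, 0.9), t_max=60):
    """A family of type curves: same initial rate qi and initial
    decline di, varying only the hyperbolic exponent b."""
    return {b: [arps_rate(qi, di, b, t) for t in range(t_max)] for b in bs}
```

Every curve in the family starts at the same initial rate and declines monotonically; higher b-factors decline more slowly at late times, exactly the kind of regular, explainable behavior the text attributes to physics-based models.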
The field of XAI has been growing rapidly in the past few years, especially due to increased awareness of, and urgency around, the need for transparency in AI models. The status quo in AI research is the existence of a "black box" in machine learning; essentially, AI models intake so much data and create such complicated neural networks to reach their outcomes that the researchers themselves do not know exactly what transpires in the collection and analysis of Big Data by AI algorithms. Outcries against racial bias and a lack of data privacy have mounted as AI has become more ingrained in our lives, and thus far, there have been few solutions.
The problem we now face with artificial intelligence (AI) is that many methods simply work, and we take whatever happens under the hood on faith, without elaborating on the details. Yet it is important to understand how a prediction is made, not only to understand the architecture of the method. That is why explainable AI (XAI) is becoming a hot topic today. There are many goals for an XAI model to fulfill; above all, domain experts using a model must be able to trust it.
Model explainability has become a basic part of the machine learning pipeline; keeping a machine learning model as a "black box" is no longer an option. Luckily, the tools for this are evolving rapidly and becoming more popular. This is a practical guide to XAI analysis with the open-source SHAP Python package for a regression problem. SHAP (SHapley Additive exPlanations), introduced by Lundberg and Lee (2017), is a method to explain individual predictions based on the game-theoretically optimal Shapley values.
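The Shapley values underlying SHAP can be computed exactly for a small number of features by enumerating all coalitions. Below is a minimal sketch of that brute-force computation, not the SHAP package's optimized estimators; it uses a baseline vector to represent "absent" features, which is one of several possible conventions:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one prediction by enumerating all
    feature coalitions; features outside a coalition are replaced by
    a baseline value. Feasible only for a handful of features, since
    the cost grows as 2^n."""
    n = len(x)

    def value(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return predict(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # classic Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(S) | {i}) - value(set(S)))
        phi.append(total)
    return phi
```

For a linear model the Shapley value of feature i reduces to its coefficient times the feature's deviation from the baseline, which gives a quick sanity check: with `predict(z) = 3*z[0] + 2*z[1]`, `x = [1, 2]`, and a zero baseline, the values come out to 3 and 4, and they sum to the gap between the prediction and the baseline prediction.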
Explainable AI is all the hype! And depending on the use case, the AI may have to be explainable. Imagine if your loan broker rejected you without a proper reason and you had to move out of your house, or if your insurance premium were set by a black box with no real way to know what affects the resulting amount. So what is a system that provides explainable AI? It is a system that supports its decisions with compelling arguments.
"So, the machine has high accuracy and explains its decisions, but we still don't have engagement with our users?" I asked seeking clarification on a rather perplexing situation. Aware of my prior work in Explainable AI (XAI) around rationale generation, a prominent tech company had just hired me to solve a unique problem. They invested significant resources to build an AI-powered cybersecurity system that aims to help analysts manage firewall configurations, especially "bloat" that happens when people forget to close open ports. Over time, these open ports accumulate and create security vulnerability. Not only did this system have commendable accuracy, it also tried to explain its decision via technical (or algorithmic) transparency. But, there was almost zero to no traction amongst its users. I think we just need better models…we need to build better rationales [natural language explanations] … guess that's why we brought you in!" the team's director chuckled as we continued the ...