Explanation & Argumentation


5 Explainable Machine Learning Models You Should Understand

#artificialintelligence

As we know, Machine Learning is ubiquitous in our day-to-day lives. From product recommendations on Amazon, targeted advertising, and suggestions of what to watch, to funny Instagram filters. If something goes wrong with these, it probably won't ruin your life. Maybe you won't get that perfect selfie, or maybe companies will have to spend more on advertising. But in higher-stakes applications the calculus changes: we need to be able to dissect, understand, and explain a model before it goes anywhere near a production system.


Explainable AI, human-like comprehension and Knowledge discovery

#artificialintelligence

NLP, or natural language processing, human-like comprehension, and explainable AI all sound like buzzwords, but they are all incredibly important. Luca Scagliarini is chief product officer at Expert.AI and has been looking into NLP since long before it was cool. Watch the video for the full interview and to learn more about Luca Scagliarini's wealth of knowledge on various AI technologies and how they are being applied.


Papers with Code - Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain

#artificialintelligence

In the present paper we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. By applying three types of explainable methods to the same medical image data set, our aim was to improve the comprehensibility of the decisions provided by the Convolutional Neural Network (CNN). The visual explanations were provided on in-vivo gastral images obtained from video capsule endoscopy (VCE), with the goal of increasing health professionals' trust in the black-box predictions. We implemented two post-hoc interpretable machine learning methods, LIME and SHAP, and the alternative explanation approach CIU (Contextual Importance and Utility). The produced explanations were assessed through human evaluation.
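
The paper's CNN and its VCE image set are not reproduced here, but the post-hoc approach it describes is easy to sketch. Below is a minimal, hedged example of applying LIME to an image classifier; `cnn` and `img` are hypothetical placeholders standing in for the authors' trained network and a preprocessed endoscopy frame.

```python
# Minimal sketch: post-hoc LIME explanation of a CNN image prediction.
# `cnn` (a trained Keras-style model) and `img` (an H x W x 3 float image)
# are hypothetical placeholders, not the paper's model or data.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(images):
    # LIME passes a batch of perturbed copies; return class probabilities.
    return cnn.predict(np.asarray(images), verbose=0)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img,                 # image to explain (hypothetical placeholder)
    classifier_fn,       # black-box prediction function
    top_labels=1,        # explain only the most likely class
    num_samples=1000,    # perturbations used to fit the local surrogate
)

# Highlight the superpixels that most support the top prediction.
label = explanation.top_labels[0]
masked_img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(masked_img, mask)
```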


NVIDIA Blog: What is Explainable AI?

#artificialintelligence

Banks use AI to determine whether to extend credit, and how much, to customers. Radiology departments deploy AI to help distinguish between healthy tissue and tumors. And HR teams employ it to work out which of hundreds of resumes should be sent on to recruiters. These are just a few examples of how AI is being adopted across industries. And with so much at stake, businesses and governments adopting AI and machine learning are increasingly being pressed to lift the veil on how their AI models make decisions.


How explainable AI can help uplift modern businesses

#artificialintelligence

Explainable AI (XAI) fully describes an AI model, its expected impact and any potential biases. It helps you understand the steps taken by an AI technique to arrive at a decision. In this article, we will take a look at XAI in detail and explore how you can implement it in your organisation. "About half (46%) of South African companies indicate that they are already implementing AI within their organisations." Why is explainable AI important for your business?


Explainable Artificial Intelligence (XAI)

#artificialintelligence

Engineering Application of Data Science can be defined as using Artificial Intelligence and Machine Learning to model physical phenomena purely on the basis of facts (field measurements, data). The main objective of this technology is the complete avoidance of assumptions, simplifications, preconceived notions, and biases. One of its major characteristics is its incorporation of Explainable Artificial Intelligence (XAI). While using actual field measurements as the main building blocks for modeling physical phenomena, Engineering Application of Data Science incorporates several types of machine learning algorithms, including artificial neural networks, fuzzy set theory, and evolutionary computing. Its data-driven predictive models are not represented through an unexplainable "black box"; they are reasonably explainable.
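
As a toy illustration of the purely data-driven idea (not the authors' workflow), the sketch below fits a small neural network, one of the algorithm families named above, to synthetic stand-ins for field measurements, with no governing equation assumed.

```python
# Toy sketch of a data-driven predictive model: a small neural network
# fit purely to (synthetic) field measurements. All names and data here
# are hypothetical; no physics-based equation is assumed by the model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(500, 2))   # e.g. two measured inputs
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + rng.normal(0, 0.05, 500)  # measured response

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out measurements:", model.score(X_test, y_test))
```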


Explainable Artificial Intelligence (XAI)

#artificialintelligence

As was mentioned earlier in this article, Type Curves that are generated using mathematical equations are very "well-behaved" (continuous, non-linear, with a certain shape that changes in a similar fashion from curve to curve). Figure 16 demonstrates a few more examples of Type Curves that have been generated in reservoir engineering. The question is: what is the main characteristic of a model that is capable of generating a series of well-behaved Type Curves? The immediate, simple answer is that such a model is a physics-based model developed from one or more mathematical equations. The well-behaved Type Curves that clearly explain the behavior of the physics-based model are generated through the solutions of those mathematical equations.
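
Figure 16 itself is not reproduced here, but the notion of a well-behaved family of curves generated by a single equation is easy to illustrate. The sketch below uses the Arps exponential decline model q(t) = q_i * exp(-D * t) from reservoir engineering as a stand-in; the initial rate and decline constants are illustrative, not taken from the article.

```python
# Sketch: a family of "well-behaved" type curves from one equation.
# Arps exponential decline, q(t) = q_i * exp(-D * t), serves as a
# stand-in for the curves in Figure 16; all constants are illustrative.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 200)        # time, years
q_i = 1000.0                       # initial production rate (illustrative)
for D in (0.1, 0.2, 0.4, 0.8):     # decline constant varies curve to curve
    plt.plot(t, q_i * np.exp(-D * t), label=f"D = {D}")

plt.xlabel("Time (years)")
plt.ylabel("Production rate")
plt.title("Exponential decline type curves")
plt.legend()
plt.show()
```

Each curve is continuous and non-linear, and the shape changes smoothly from curve to curve as the single parameter D varies, which is exactly the "well-behaved" character the article attributes to equation-generated Type Curves.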


What is Explainable AI, and How Does it Apply to Data Ethics? - AI for Good Foundation

#artificialintelligence

The field of XAI has been growing rapidly in the past few years, especially due to increased awareness of, and urgency around, the need for transparency in AI models. The status quo in AI research is the existence of a "black box" in machine learning: AI models take in so much data and build such complicated neural networks to reach their outcomes that the researchers themselves do not know exactly what transpires in the collection and analysis of Big Data by AI algorithms. Outcries against racial bias and a lack of data privacy have mounted as AI has become more ingrained in our lives, and thus far there have been few solutions.


Explainable Artificial Intelligence (xAI) Explained

#artificialintelligence

The problem we now face with artificial intelligence (AI) is that many methods work, and we simply take on faith whatever is done under the hood without elaborating on the details. Yet it is very important to understand how a prediction is made, not only to understand the architecture of the method. That is why explainable AI (xAI) is becoming a hot topic today. There are many goals for an xAI model to fulfill; above all, it is important that the domain experts using a model can trust it.


Explainable AI (XAI) with SHAP - regression problem

#artificialintelligence

Model explainability has become a basic part of the machine learning pipeline; keeping a machine learning model as a "black box" is no longer an option. Luckily, tools for explainability are evolving rapidly and becoming more popular. This is a practical guide to XAI analysis with the SHAP open-source Python package for a regression problem. SHAP (SHapley Additive exPlanations), introduced by Lundberg and Lee (2017), is a method to explain individual predictions based on the game-theoretically optimal Shapley values.
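
The guide's own data set is not shown in this teaser, so the following is a minimal sketch of the SHAP workflow for a regression problem, using the bundled scikit-learn diabetes data as a stand-in: train a tree ensemble, compute Shapley values, then inspect them globally and for a single prediction.

```python
# Minimal sketch of SHAP on a regression problem. The diabetes data set
# is a stand-in for the guide's data, chosen because it ships with
# scikit-learn and needs no download.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # fast, exact for tree ensembles
shap_values = explainer(X)              # one additive explanation per row

# Global view: which features drive predictions across the data set.
shap.plots.beeswarm(shap_values)
# Local view: additive breakdown of a single prediction into
# per-feature Shapley contributions that sum to the model output.
shap.plots.waterfall(shap_values[0])
```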