explanation


6 Python Libraries to Interpret Machine Learning Models and Build Trust - Analytics Vidhya

#artificialintelligence

The 'SHapley Additive exPlanations' Python library, better known as the SHAP library, is one of the most popular libraries for machine learning interpretability. The SHAP library uses Shapley values at its core and is aimed at explaining individual predictions. But wait – what are Shapley values? Simply put, Shapley values are derived from game theory: each feature in our data is a player, and the final reward is the prediction. Shapley values tell us how to distribute this reward fairly among the players, based on each player's contribution. We won't cover the technique in detail here, but you can refer to this excellent article explaining how Shapley values work: A Unique Method for Machine Learning Interpretability: Game Theory & Shapley Values! The best part about SHAP is that it offers a special module for tree-based models. Considering how popular tree-based models are in hackathons and in industry, this module is especially valuable: it computes Shapley values quickly, even when features are dependent.
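
For a sense of how this looks in practice, here is a minimal sketch of explaining a tree-based model with SHAP's TreeExplainer (the dataset and model below are illustrative choices of mine, not taken from the article):

```python
# Minimal sketch: Shapley-value explanations for a tree ensemble with SHAP.
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, n_jobs=-1).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])  # explain a subset for speed

# Each row attributes one prediction to the individual features.
shap.summary_plot(shap_values, X.iloc[:200])
```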


Basic Theory Neural Style Transfer #2

#artificialintelligence

Timeline:
00:00 - intro & NST series overview
02:25 - what I want this series to be
03:30 - defining the task of NST
04:01 - 2 types of style transfer
04:43 - a glimpse of image style transfer history
06:55 - explanation of the content representation
10:10 - explanation of the style representation
14:12 - putting it all together (animation)
----------------
The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and, in general, a stronger focus on geometric and visual intuition rather than algebraic and numerical intuition.


Completely Free Machine Learning Reading List

#artificialintelligence

It includes detailed explanations of the fundamental concepts in machine learning, data processing, model evaluation, and the typical machine learning workflow. It also provides many code examples using scikit-learn.
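
As a rough illustration of the kind of scikit-learn workflow such material walks through (the dataset and estimator here are my own placeholder choices, not taken from the reading list):

```python
# Minimal sketch of a typical scikit-learn workflow: split, fit, evaluate.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A pipeline bundles preprocessing and the model so they are fit together.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

print("CV accuracy:  ", cross_val_score(clf, X_train, y_train, cv=5).mean())
print("Test accuracy:", clf.score(X_test, y_test))
```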


Will XAI become the key factor to future Artificial Intelligence adoption?

#artificialintelligence

Explainable Artificial Intelligence (XAI) seems to be a hot topic nowadays. It is a topic I came across recently in a number of instances: workshops organized by the European Defence Agency (EDA), posts from technology partners such as Expert System (here), and internal discussions with SDL's Research team. The straightforward definition of XAI comes from Wikipedia: "Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by human experts. It contrasts with the concept of the 'black box' in machine learning where even their designers cannot explain why the AI arrived at a specific decision. XAI is an implementation of the social right to explanation."


Artificial Intelligence Explanation In Hindi Future Of Artificial Intelligence BH Expo

#artificialintelligence

AI: friend or enemy? AI Explanation In Hindi Future Of Artificial Intelligence BH Expo. With this title, I want to explain how AI will behave towards humans in the future, what kinds of changes Artificial Intelligence will bring, and what we should do to keep AI under control. There are two types of Artificial Intelligence: 1) Strong AI and 2) Weak AI; both types are explained in this video. Will AI become our friend or enemy? So guys, show your support with a like, share, subscribe, and comment.


Explaining machine learning models to the business

#artificialintelligence

Explainable machine learning is a sub-discipline of artificial intelligence (AI) and machine learning that attempts to summarize how machine learning systems make decisions. Summarizing how machine learning systems make decisions can be helpful for a lot of reasons: finding data-driven insights, uncovering problems in machine learning systems, facilitating regulatory compliance, and enabling users to appeal -- or operators to override -- inevitable wrong decisions. Of course, all of that sounds great, but explainable machine learning is not yet a perfect science.

Figure 1: Explanations created by H2O Driverless AI. These explanations are probably better suited for data scientists than for business users.
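
H2O Driverless AI is a commercial tool, so as a generic, hedged illustration of what "summarizing how a model makes decisions" can look like, here is a small permutation-importance sketch with scikit-learn (the data and model are placeholders of my choosing, not what the article used):

```python
# Sketch: summarize a model's decisions via permutation importance --
# shuffle one feature at a time and measure how much the held-out score drops.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the features whose shuffling hurts held-out accuracy the most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]:<25} {result.importances_mean[i]:.3f}")
```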


On Relating Explanations and Adversarial Examples

Neural Information Processing Systems

The importance of explanations (XP's) of machine learning (ML) model predictions and of adversarial examples (AE's) cannot be overstated, with both arguably being essential for the practical success of ML in different settings. There has been recent work on understanding and assessing the relationship between XP's and AE's. However, such work has been mostly experimental and a sound theoretical relationship has been elusive. This paper demonstrates that explanations and adversarial examples are related by a generalized form of hitting set duality, which extends earlier work on hitting set duality observed in model-based diagnosis and knowledge compilation. Furthermore, the paper proposes algorithms, which enable computing adversarial examples from explanations and vice-versa.
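
Roughly, and in assumed notation rather than the paper's own formalism, the duality can be sketched as follows:

```latex
% Rough sketch of the duality (notation is a paraphrase, not the paper's verbatim).
% Fix a model and an instance. Let $\mathcal{E}$ be the set of subset-minimal
% explanations (minimal sets of feature literals that entail the prediction), and
% let $\mathcal{A}$ be the set of minimal adversarial "breaks" (minimal sets of
% features whose change can flip the prediction). Then:
\forall E \in \mathcal{E}:\; E \text{ is a minimal hitting set of } \mathcal{A},
\qquad
\forall A \in \mathcal{A}:\; A \text{ is a minimal hitting set of } \mathcal{E}.
```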


Explanations can be manipulated and geometry is to blame

Neural Information Processing Systems

Explanation methods aim to make neural networks more trustworthy and interpretable. In this paper, we demonstrate a property of explanation methods which is disconcerting for both of these purposes. Namely, we show that explanations can be manipulated arbitrarily by applying visually hardly perceptible perturbations to the input that keep the network's output approximately constant. We establish theoretically that this phenomenon can be related to certain geometrical properties of neural networks. This allows us to derive an upper bound on the susceptibility of explanations to manipulations.
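
A plausible way to write down such a manipulation, in assumed notation rather than the paper's verbatim formulation, is as a target-matching objective that keeps the network output approximately fixed:

```latex
% Sketch of a manipulation objective (symbols are assumptions, not the paper's exact notation).
% g(x): network output, h(x): explanation map, h^t: attacker-chosen target explanation.
\min_{x_{\mathrm{adv}}}\;
  \big\| h(x_{\mathrm{adv}}) - h^{t} \big\|^{2}
  \;+\; \gamma \, \big\| g(x_{\mathrm{adv}}) - g(x) \big\|^{2},
\qquad \text{subject to } x_{\mathrm{adv}} \approx x .
```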


On the (In)fidelity and Sensitivity of Explanations

Neural Information Processing Systems

We consider objective evaluation measures of saliency explanations for complex black-box machine learning models. We propose simple robust variants of two notions that have been considered in recent literature: (in)fidelity, and sensitivity. We analyze optimal explanations with respect to both these measures, and while the optimal explanation for sensitivity is a vacuous constant explanation, the optimal explanation for infidelity is a novel combination of two popular explanation methods. By varying the perturbation distribution that defines infidelity, we obtain novel explanations by optimizing infidelity, which we show to out-perform existing explanations in both quantitative and qualitative measurements. Another salient question is how to modify any given explanation to achieve better values with respect to these measures.
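
For orientation, the two measures can be sketched roughly as follows (a paraphrase in assumed notation; $\Phi$ is the explanation, $f$ the model, $I$ a random perturbation, and $r$ a perturbation radius -- consult the paper for the exact definitions):

```latex
% Rough sketch of the two measures (notation is a paraphrase, not verbatim from the paper).
\mathrm{INFD}(\Phi, f, x) =
  \mathbb{E}_{I}\!\left[ \Big( I^{\top}\Phi(f,x) - \big(f(x) - f(x - I)\big) \Big)^{2} \right],
\qquad
\mathrm{SENS}_{\max}(\Phi, f, x, r) =
  \max_{\|y - x\| \le r} \big\| \Phi(f, y) - \Phi(f, x) \big\| .
```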


Towards Automatic Concept-based Explanations

Neural Information Processing Systems

Interpretability has become an important topic of research as more machine learning (ML) models are deployed and widely used to make important decisions. Most of the current explanation methods provide explanations through feature importance scores, which identify features that are important for each individual input. However, how to systematically summarize and interpret such per-sample feature importance scores is itself challenging. In this work, we propose principles and desiderata for concept-based explanation, which goes beyond per-sample features to identify higher-level, human-understandable concepts that apply across the entire dataset. We develop a new algorithm, ACE, to automatically extract visual concepts.
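
Very roughly, ACE segments images of a class, embeds each segment with the network's activations, and clusters the segments into candidate concepts, which are then scored for importance. The toy sketch below illustrates only the clustering step, with synthetic vectors standing in for real CNN activations of image segments:

```python
# Toy sketch of ACE-style concept discovery (all data here is synthetic;
# in the real algorithm the vectors are CNN activations of image segments).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Pretend we embedded 300 image segments into a 64-d activation space,
# drawn from three latent "concepts" with different means.
centers = rng.normal(size=(3, 64)) * 5
segment_activations = np.vstack(
    [rng.normal(loc=c, scale=1.0, size=(100, 64)) for c in centers]
)

# Cluster segments in activation space; each cluster is a candidate concept.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(segment_activations)
for concept_id in range(3):
    members = np.where(kmeans.labels_ == concept_id)[0]
    print(f"concept {concept_id}: {len(members)} segments, e.g. {members[:5]}")
```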