Explainable Machine Learning

#artificialintelligence

Daily life is increasingly governed by decisions made by algorithms, driven by the growing availability of big data sets. Many machine learning algorithms, and neural networks in particular, are black-box models: they give no insight into how they reach their outcomes, which prevents users from trusting the model. If we cannot understand the reasons for their decisions, how can we be sure that the decisions are correct? What if they are wrong, discriminatory, or unethical? This project aims to create new machine learning methods that can explain their decision-making process, so that users can understand the reasons behind a prediction.


Drive smarter decision-making with explainable machine learning

#artificialintelligence

This article was contributed by Berk Birand, CEO of Fero Labs. Is the hype around AI finally cooling? That's what some recent surveys would suggest. Most executives now say the technology is more hype than reality, and 65% report zero value from their AI and machine learning investments.


Explainable Machine Learning for Liver Transplantation

arXiv.org Artificial Intelligence

In this work, we present a flexible method for explaining, in human-readable terms, the predictions made by decision trees used as decision support in liver transplantation. The decision trees have been obtained through machine learning applied to a dataset collected at the liver transplantation unit at the Coruña University Hospital Center and are used to predict long-term (five-year) survival after transplantation. The method we propose is based on the representation of the decision tree as a set of rules in a logic program (LP) that is further annotated with text messages. This logic program is then processed using the tool xclingo (based on Answer Set Programming), which builds compound explanations from the annotation text and the rules effectively fired when a given input is provided. We explore two alternative LP encodings: one in which rules respect the tree structure (more convenient for reflecting the learning process) and one where each rule corresponds to a (previously simplified) tree path (more readable for decision making).
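
The paper's actual encodings are annotated logic programs processed by xclingo, which are not reproduced here. As a rough illustration of the second encoding (one rule per simplified tree path), the Python sketch below flattens a toy scikit-learn decision tree into readable if-then rules; the features, data, and thresholds are invented placeholders, not the transplantation dataset.

```python
# Hypothetical sketch: one readable rule per root-to-leaf path of a fitted
# decision tree (the paper instead encodes such rules as an annotated logic
# program for xclingo; this only shows the general idea).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.array([[55, 1], [62, 0], [48, 1], [70, 0]])  # toy data: age, donor match
y = np.array([1, 0, 1, 0])                          # 1 = five-year survival
feature_names = ["recipient_age", "donor_match"]

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
tree = clf.tree_

def paths(node=0, conds=()):
    """Yield (conditions, predicted class) for every root-to-leaf path."""
    if tree.children_left[node] == -1:              # leaf node
        yield conds, int(np.argmax(tree.value[node]))
        return
    name, thr = feature_names[tree.feature[node]], tree.threshold[node]
    yield from paths(tree.children_left[node], conds + (f"{name} <= {thr:.1f}",))
    yield from paths(tree.children_right[node], conds + (f"{name} > {thr:.1f}",))

for conds, label in paths():
    print(f"IF {' AND '.join(conds)} THEN survival = {label}")
```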


Interpretable vs Explainable Machine Learning

#artificialintelligence

From medical diagnoses to credit underwriting, machine learning models are being used to make increasingly important decisions. To trust the systems powered by these models, we need to know how they make predictions. This is why the difference between an interpretable and an explainable model is important. How we understand our models, and the degree to which we can truly understand them, depends on whether they are interpretable or explainable. Put briefly, an interpretable model can be understood by a human without any other aids or techniques.
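
As a minimal (invented) illustration of the distinction, a logistic regression is interpretable in this sense because its learned coefficients can be read directly, whereas a black-box model on the same data would need a post-hoc explanation technique; the feature names below are placeholders.

```python
# Minimal sketch with synthetic data: an interpretable model's coefficients
# can be inspected directly, with no extra explanation technique.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(["income", "debt_ratio", "age"], model.coef_[0]):
    print(f"{name:>10s}: {coef:+.2f}")  # sign and magnitude show each feature's effect
# A random forest fit to the same data would need a post-hoc method
# (e.g. LIME, SHAP, permutation importance) to answer the same question.
```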


Explainable Machine Learning with LIME and H2O in R

#artificialintelligence

Welcome to this hands-on, guided introduction to Explainable Machine Learning with LIME and H2O in R. By the end of this project, you will be able to use the LIME and H2O packages in R for automatic and interpretable machine learning, build classification models quickly with H2O AutoML, and explain and interpret model predictions using LIME. Machine learning (ML) models such as random forests, gradient boosted machines, neural networks, and stacked ensembles are often considered black boxes. However, their flexibility makes them more accurate at predicting non-linear phenomena. Experts agree that this higher accuracy often comes at the price of interpretability, which is critical to business adoption, trust, and regulatory oversight (e.g., the GDPR's right to explanation). As more industries, from healthcare to banking, adopt ML models, their predictions are being used to justify healthcare costs and loan approvals or denials.
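
The guided project itself uses the R lime and h2o packages; purely as a hedged illustration of the same LIME workflow, the sketch below uses the Python lime package with a scikit-learn model instead (the dataset is a stand-in, not the project's data).

```python
# Rough sketch of the LIME workflow (the guided project uses R's `lime` and
# `h2o`; the idea of explaining a single prediction is the same).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
# Explain one prediction using the top 5 locally important features.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(exp.as_list())  # [(feature condition, weight), ...]
```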


Explainable Machine Learning for Fraud Detection

arXiv.org Artificial Intelligence

The application of machine learning to support the processing of large datasets holds promise in many industries, including financial services. However, practical obstacles to the full adoption of machine learning remain, chief among them understanding and explaining the decisions and predictions made by complex models. In this paper, we explore explainability methods in the domain of real-time fraud detection by investigating the selection of appropriate background datasets and runtime trade-offs on both supervised and unsupervised models.
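
The abstract does not name a specific library, but "background datasets" is the terminology used by SHAP-style explainers; as an assumption-labelled sketch, this is how the background choice and its runtime cost show up with shap.KernelExplainer on synthetic transaction data.

```python
# Hedged sketch: how a background dataset enters a SHAP-style explainer.
# (The paper studies this choice in general; shap.KernelExplainer and the
# synthetic "transaction" data here are assumptions for illustration.)
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))             # synthetic transaction features
y = (X[:, 0] + X[:, 3] > 1.5).astype(int)  # synthetic fraud label
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Runtime/fidelity trade-off: a larger background set gives more stable
# attributions but costs more per explanation; a k-means summary is cheaper.
background_sampled = shap.sample(X, 50)
background_kmeans = shap.kmeans(X, 10)

explainer = shap.KernelExplainer(model.predict_proba, background_kmeans)
shap_values = explainer.shap_values(X[:1])  # explain a single transaction
print(np.shape(shap_values))                # per-class feature contributions
```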


Hardware Acceleration of Explainable Machine Learning using Tensor Processing Units

arXiv.org Artificial Intelligence

Machine learning (ML) is successful in achieving human-level performance in various fields. However, it lacks the ability to explain an outcome due to its black-box nature. While existing explainable ML is promising, almost all of these methods focus on formulating interpretability as an optimization problem. Such a mapping leads to numerous iterations of time-consuming, complex computations, which limits their applicability in real-time applications. In this paper, we propose a novel framework for accelerating explainable ML using Tensor Processing Units (TPUs). The proposed framework exploits the synergy between matrix convolution and the Fourier transform, and takes full advantage of the TPU's natural ability to accelerate matrix computations. Specifically, this paper makes three important contributions. (1) To the best of our knowledge, our proposed work is the first attempt at enabling hardware acceleration of explainable ML using TPUs. (2) Our proposed approach is applicable across a wide variety of ML algorithms, and effective utilization of TPU-based acceleration can lead to real-time outcome interpretation. (3) Extensive experimental results demonstrate that our proposed approach can provide an order-of-magnitude speedup in both classification time (25x on average) and interpretation time (13x on average) compared to state-of-the-art techniques.
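
The "synergy between matrix convolution and the Fourier transform" evokes the standard convolution theorem (the paper's exact formulation is not reproduced here): a convolution becomes an element-wise product after a Fourier transform, so it can be evaluated with the dense matrix arithmetic that TPUs accelerate.

```latex
% General identity only; the paper's own derivation may differ.
\[
  \mathcal{F}\{x * h\} = \mathcal{F}\{x\} \cdot \mathcal{F}\{h\}
  \quad\Longrightarrow\quad
  x * h = \mathcal{F}^{-1}\!\bigl\{\mathcal{F}\{x\} \cdot \mathcal{F}\{h\}\bigr\}
\]
```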


A taxonomy of explainable AI (XAI) models

#artificialintelligence

This post discusses a paper by Vaishak Belle (University of Edinburgh & Alan Turing Institute) and Ioannis Papantonis (University of Edinburgh) which presents a taxonomy of explainable AI (XAI). XAI is a complex subject and, as far as I can tell, a taxonomy of XAI has not been published before. Model-agnostic explainability approaches are designed to be flexible and do not depend on the intrinsic architecture of a model (such as a random forest). These approaches solely relate the inputs to the outputs. Model-agnostic approaches can take the form of explanation by simplification, explanation by feature relevance, or explanation by visualization.
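
As one concrete example of model-agnostic explanation by feature relevance (chosen here for illustration, not taken from the paper), permutation importance needs only the model's inputs and outputs:

```python
# Hedged example of model-agnostic "explanation by feature relevance":
# permutation importance ignores the model's internals entirely.
# Data and feature names are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 2 * X[:, 2] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["f0", "f1", "f2", "f3"], result.importances_mean):
    print(f"{name}: {imp:.3f}")  # accuracy drop when that feature is shuffled
```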


Explainable Machine Learning for Public Policy: Use Cases, Gaps, and Research Directions

arXiv.org Artificial Intelligence

For Machine Learning (ML) models used to support decisions in high-stakes domains such as public policy, explainability is crucial for adoption and effectiveness. While the field of explainable ML has expanded in recent years, much of this work does not take real-world needs into account. A majority of proposed methods use benchmark ML problems with generic explainability goals, without clear use-cases or intended end-users. As a result, the effectiveness of this large body of theoretical and methodological work on real-world applications is unclear. This paper focuses on filling this void for the domain of public policy. First, we develop a taxonomy of explainability use-cases within public policy problems; second, for each use-case, we define the end-users of explanations and the specific goals explainability has to fulfill; third, we map existing work to these use-cases, identify gaps, and propose research directions to fill those gaps in order to have practical policy impact through ML.