Explainable AI and the Future of Machine Learning

#artificialintelligence

As the 'AI era' of increasingly complex, smart, autonomous, big-data-driven technology arrives, the algorithms that fuel it are coming under more and more scrutiny. Whether you're a data scientist or not, it is becoming obvious that the inner workings of machine learning, deep learning, and black-box neural networks are not exactly transparent. In the wake of high-profile news reports about user data breaches, leaks, violations, and biased algorithms, that opacity is rapidly becoming one of the biggest -- if not the biggest -- obstacles to mass AI integration in both the public and private sectors. Here's where the push for better AI interpretability and explainability takes root. Already a focal point of machine learning consulting and a notable topic in 2019's AI discussions, it is only likely to accelerate and become one of the central conversations of 2020 on both the security and the ethics of artificial intelligence.


Explainable Artificial Intelligence and Machine Learning: A reality rooted perspective

arXiv.org Artificial Intelligence

Frank Emmert-Streib (Predictive Society and Data Analytics Lab, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland; Institute of Biosciences and Medical Technology, Tampere University of Technology, Tampere, Finland), Olli Yli-Harja (Institute of Biosciences and Medical Technology, Tampere University of Technology, Tampere, Finland), and Matthias Dehmer (Institute for Intelligent Production, Faculty for Management, University of Applied Sciences Upper Austria, Steyr Campus, 4040 Steyr, Austria). January 26, 2020.

Abstract: We are used to the availability of big data generated in nearly all fields of science as a consequence of technological progress. However, the analysis of such data poses vast challenges. One of these relates to the explainability of artificial intelligence (AI) or machine learning methods. Currently, many such methods are non-transparent with respect to their working mechanism and are therefore called black-box models; this holds most notably for deep learning methods. It has been realized that this poses severe problems for a number of fields, including the health sciences and criminal justice, and arguments have been brought forward in favor of an explainable AI. In this paper, we do not assume the usual perspective, presenting explainable AI as it should be, but rather discuss what explainable AI can be. The difference is that we do not present wishful thinking but reality-grounded properties in relation to a scientific theory beyond physics.

From the paper's introduction: Artificial intelligence (AI) and machine learning (ML) have achieved great successes in a number of different learning tasks, including image recognition and speech processing [1-3].


IBM Research Launches Explainable AI Toolkit

#artificialintelligence

Explainability or interpretability of AI is a huge deal these days, especially given the growing number of enterprises that depend on decisions made by machine learning and deep learning models. Naturally, stakeholders want a level of transparency into how the algorithms came up with their recommendations. The so-called "black box" of AI is rapidly being questioned. For this reason, I was encouraged to learn of IBM's recent efforts in this area. The company's research arm just launched a new open-source AI toolkit, "AI Explainability 360," consisting of state-of-the-art algorithms that support the interpretability and explainability of machine learning models.
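
To make the idea concrete, here is a minimal sketch of a local, post-hoc explanation of a single prediction. It deliberately uses the standalone lime package and a synthetic scikit-learn model as stand-ins rather than AI Explainability 360 itself; the toolkit bundles a broader set of algorithms, and its exact class names and import paths should be checked against IBM's documentation.

```python
# Minimal sketch: a local, post-hoc explanation of one prediction.
# Uses scikit-learn for a stand-in "black-box" model and the standalone
# `lime` package; illustrative only, not the AIX360 API itself.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Synthetic tabular data standing in for a real enterprise dataset.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# A black-box model whose individual decisions we want to explain.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Build a tabular explainer and explain a single prediction.
explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["negative", "positive"], mode="classification"
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)

# Each entry is (feature condition, weight): the local contribution of
# that feature to this one prediction.
for condition, weight in explanation.as_list():
    print(f"{condition}: {weight:+.3f}")
```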


The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies

arXiv.org Artificial Intelligence

Artificial intelligence (AI) has huge potential to improve the health and well-being of people, but adoption in clinical practice is still limited. Lack of transparency is identified as one of the main barriers to implementation, as clinicians need to be confident that the AI system can be trusted. Explainable AI has the potential to overcome this issue and can be a step towards trustworthy AI. In this paper we review the recent literature to provide guidance to researchers and practitioners on the design of explainable AI systems for the health-care domain and to contribute to the formalization of the field of explainable AI. We argue that the reason for demanding explainability determines what should be explained, which in turn determines the relative importance of the properties of explainability (i.e. interpretability and fidelity). Based on this, we give concrete recommendations for choosing between classes of explainable AI methods (explainable modelling versus post-hoc explanation; model-based, attribution-based, or example-based explanations; global and local explanations). Furthermore, we find that quantitative evaluation metrics, which are important for objective standardized evaluation, are still lacking for some properties (e.g. clarity) and types of explanators (e.g. example-based methods). We conclude that explainable modelling can contribute to trustworthy AI, but recognize that complementary measures might be needed to create trustworthy AI (e.g. reporting data quality, performing extensive (external) validation, and regulation).
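
To illustrate the "explainable modelling" side of that design choice, here is a minimal sketch that fits a deliberately shallow decision tree and prints its learned rules as a global, model-based explanation. The dataset, depth limit, and library calls are illustrative assumptions, not recommendations from the survey.

```python
# Minimal sketch of explainable modelling: an intrinsically interpretable
# model (a shallow decision tree) whose learned rules serve as a global,
# model-based explanation. Illustrative only; not taken from the survey.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# A public clinical-style dataset used purely as a placeholder.
data = load_breast_cancer()
X, y = data.data, data.target

# Restricting depth trades some accuracy for interpretability.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The exported rules are the explanation: every prediction can be traced
# to an explicit path of threshold tests on named features.
print(export_text(tree, feature_names=list(data.feature_names)))
```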


Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning

arXiv.org Machine Learning

There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, to identify potential bias or problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems are neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we provide our definition of explainability and show how it can be used to classify the existing literature. We discuss why current approaches to explanatory methods, especially for deep neural networks, are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.
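
As a concrete instance of one widely used family of explanatory methods for neural networks, the sketch below computes a gradient-based attribution (saliency) for a single logistic unit, where the input gradient can be written down analytically. It is a stand-in for the same computation done with automatic differentiation on a real deep network; the weights and inputs are placeholder values.

```python
# Minimal sketch of gradient-based attribution (saliency) for a single
# logistic unit; on a real deep network the same gradient would come from
# automatic differentiation. All values below are illustrative placeholders.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pretend these weights were learned; x is one input we want to explain.
w = np.array([1.5, -2.0, 0.3, 0.0])
b = 0.1
x = np.array([0.8, 0.5, -1.2, 2.0])

p = sigmoid(w @ x + b)          # model output (predicted probability)
saliency = p * (1.0 - p) * w    # dp/dx: the input-gradient attribution

for i, s in enumerate(saliency):
    print(f"feature_{i}: {s:+.4f}")
print(f"prediction: {p:.4f}")
```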