If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques

arXiv.org Artificial Intelligence

In recent years, there has been an explosion of AI research on counterfactual explanations as a solution to the problem of eXplainable AI (XAI). These explanations seem to offer technical, psychological and legal benefits over other explanation techniques. We survey 100 distinct counterfactual explanation methods reported in the literature. This survey addresses the extent to which these methods have been adequately evaluated, both psychologically and computationally, and quantifies the shortfalls that occur. For instance, only 21% of these methods have been user tested. Five key deficits in the evaluation of these methods are detailed, and a roadmap with standardised benchmark evaluations is proposed to resolve them; these are issues that currently block scientific progress in this field.
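
To make the object of study concrete, the sketch below shows the simplest kind of counterfactual method this survey covers: given a trained classifier and a query instance, search nearby candidates for the smallest change that flips the prediction. The logistic-regression model, the random candidate sampling, and the L1 distance are illustrative choices made here, not any specific method from the survey.

```python
# Illustrative sketch only: a naive counterfactual search of the kind surveyed.
# The classifier, candidate sampling, and distance metric are hypothetical choices.
import numpy as np
from sklearn.linear_model import LogisticRegression

def naive_counterfactual(model, x, candidates):
    """Return the candidate closest to x (L1 distance) whose predicted class differs."""
    original_class = model.predict(x.reshape(1, -1))[0]
    best, best_dist = None, np.inf
    for c in candidates:
        if model.predict(c.reshape(1, -1))[0] != original_class:
            d = np.abs(c - x).sum()  # sparsity-oriented L1 distance
            if d < best_dist:
                best, best_dist = c, d
    return best, best_dist

# Toy usage: 2-D data, random candidates sampled around the query point.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

x_query = np.array([-0.4, -0.2])
candidates = x_query + rng.normal(scale=1.0, size=(500, 2))
cf, dist = naive_counterfactual(model, x_query, candidates)
print("query:", x_query, "counterfactual:", cf, "L1 change:", dist)
```

The evaluation deficits discussed in the paper concern exactly such methods: how close, sparse and plausible the returned counterfactual is, and whether users actually find it helpful.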


Explaining artificial intelligence in human-centred terms – Martin Schüßler

#artificialintelligence

Since AI involves interactions between machines and humans, rather than just the former replacing the latter, 'explainable AI' is a new challenge. Intelligent systems, based on machine learning, are penetrating many aspects of our society. They span a large variety of applications, from the seemingly harmless automation of micro-tasks, such as the suggestion of synonymous phrases in text editors, to more contestable uses, such as jail-or-release decisions, anticipating child-services interventions, predictive policing and many others. Researchers have shown that for some tasks, such as lung-cancer screening, intelligent systems are capable of outperforming humans. In many other cases, however, they have not lived up to exaggerated expectations.


Progressive Explanation Generation for Human-robot Teaming

arXiv.org Artificial Intelligence

Generating explanations of its own behavior is an essential capability for a robotic teammate. Explanations help human partners better understand the situation and maintain trust in their teammates. Prior work on robots generating explanations focuses on providing the reasoning behind the robot's decision making. These approaches, however, fail to heed the cognitive requirements of understanding an explanation. In other words, while they provide the right explanations from the explainer's perspective, the explainee's side of the equation is ignored. In this work, we address an important aspect of this direction that contributes to a better understanding of a given explanation, which we refer to as the progressiveness of explanations. A progressive explanation improves understanding by limiting the cognitive effort required at each step of making the explanation. As a result, such explanations are expected to be smoother and hence easier to understand. A general formulation of progressive explanation is presented. Algorithms are provided based on several alternative quantifications of the cognitive effort incurred as an explanation is being made, and are evaluated in a standard planning competition domain.
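
As a rough illustration of this idea, the sketch below orders explanation units so that each step introduces as little new information as possible. Treating cognitive effort as the count of previously unseen facts per step is an assumption made here for illustration; the paper itself evaluates several alternative quantifications of effort.

```python
# Illustrative sketch: order explanation units so each step adds as little new
# information as possible. The "new facts per step" effort proxy is an assumed
# stand-in for the paper's alternative quantifications of cognitive effort.
def progressive_order(units):
    """Greedily order explanation units (each a set of facts)
    so that every step introduces the fewest previously unseen facts."""
    remaining = list(units)
    known, ordered = set(), []
    while remaining:
        # Pick the unit whose unseen content is smallest at this step.
        nxt = min(remaining, key=lambda u: len(u - known))
        ordered.append(nxt)
        known |= nxt
        remaining.remove(nxt)
    return ordered

# Toy usage: each explanation unit is a set of facts the human must absorb.
units = [
    {"door_locked", "key_in_room_B"},
    {"key_in_room_B"},
    {"door_locked", "key_in_room_B", "robot_detour_via_B"},
]
for step, u in enumerate(progressive_order(units), 1):
    print(f"step {step}: {sorted(u)}")
```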


Evaluation of Explanations Extracted from Textual Reports

AAAI Conferences

Explanations play an important role in AI systems in general and case-based reasoning (CBR) in particular. They can be used for reasoning by the system itself or presented to the user to explain solutions proposed by the system. In our work, we investigate an approach in which causal explanations are automatically extracted from textual incident reports and reused in a CBR system for incident analysis. The focus of this paper is the evaluation of such explanations. We propose an automatic evaluation measure based on the ability of an explanation to provide an explicit connection between the problem-description and solution parts of a case.
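
A minimal sketch of a connectivity-style score in this spirit is shown below. Treating the "explicit connection" as lexical overlap between the explanation and both the problem and solution texts is an assumption made here for illustration, not the paper's actual measure.

```python
# Illustrative sketch: score an extracted explanation by how much of it overlaps
# with both the problem description and the solution text of a case. The lexical
# overlap proxy is an assumption, not the evaluation measure from the paper.
import re

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def connection_score(explanation, problem, solution):
    """Average of the fraction of explanation terms found in the problem text
    and the fraction found in the solution text."""
    e, p, s = tokens(explanation), tokens(problem), tokens(solution)
    if not e:
        return 0.0
    return 0.5 * (len(e & p) / len(e) + len(e & s) / len(e))

# Toy usage with a made-up incident case.
problem = "Operator ignored low oil pressure warning during startup."
solution = "Add a startup checklist item requiring oil pressure verification."
explanation = "Low oil pressure was not verified before startup."
print(round(connection_score(explanation, problem, solution), 2))
```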