Hoffman, Robert R.


Principles of Explanation in Human-AI Systems

arXiv.org Artificial Intelligence

Explainable Artificial Intelligence (XAI) has re-emerged in response to the development of modern AI and ML systems. These systems are complex and sometimes biased, but they nevertheless make decisions that impact our lives. XAI systems are frequently algorithm-focused: they start and end with an algorithm that implements a basic, untested idea about explainability. These systems are often not tested to determine whether the algorithm helps users accomplish any goals, so their explainability remains unproven. We propose an alternative: to start with human-focused principles for the design, testing, and implementation of XAI systems, and to implement algorithms in service of those principles. In this paper, we review some of the basic concepts that have been used in user-centered XAI research over the past 40 years. Based on these, we describe the "Self-Explanation Scorecard," which can help developers understand how they can empower users by enabling self-explanation. Finally, we present a set of empirically grounded, user-centered design principles that may guide developers to create successful explainable systems.


Explaining AI as an Exploratory Process: The Peircean Abduction Model

arXiv.org Artificial Intelligence

Current discussions of "Explainable AI" (XAI) give little consideration to the role of abduction in explanatory reasoning (see Mueller et al., 2018). It might be worthwhile to pursue this, to develop intelligent systems that allow for the observation and analysis of abductive reasoning, and for its assessment as a learnable skill. Abductive inference has been defined in many ways; for example, as the achievement of insight. Most often, abduction is taken as a single, punctuated act of syllogistic reasoning, like making a deductive or inductive inference from given premises. In contrast, the originator of the concept of abduction---the American scientist/philosopher Charles Sanders Peirce---regarded abduction as an exploratory activity. In this regard, Peirce's insights about reasoning align with conclusions from modern psychological research. Since abduction is often defined as "inferring the best explanation," the challenge of implementing abductive reasoning and the challenge of automating the explanation process are closely linked. We explore these linkages in this report. This analysis provides a theoretical framework for understanding what XAI researchers are already doing, explains why some XAI projects are succeeding (or might succeed), and leads to design advice.


Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI

arXiv.org Artificial Intelligence

This is an integrative review that addresses the question, "What makes for a good explanation?" with reference to AI systems. The pertinent literatures are vast, so this review is necessarily selective; that said, most of the key concepts and issues are expressed in this Report. The Report encapsulates the history of computer science efforts to create systems that explain and instruct (intelligent tutoring systems and expert systems). It expresses the explainability issues and challenges in modern AI, and presents capsule views of the leading psychological theories of explanation. Certain articles stand out by virtue of their particular relevance to XAI, and their methods, results, and key points are highlighted. It is recommended that AI/XAI researchers be encouraged to include in their research reports fuller details on their empirical or experimental methods, in the fashion of experimental psychology research reports: details on Participants, Instructions, Procedures, Tasks, Dependent Variables (operational definitions of the measures and metrics), Independent Variables (conditions), and Control Conditions.


Metrics for Explainable AI: Challenges and Prospects

arXiv.org Artificial Intelligence

The question addressed in this paper is: If we present to a user an AI system that explains how it works, how do we know whether the explanation works and the user has achieved a pragmatic understanding of the AI? In other words, how do we know that an explainable AI (XAI) system is any good? Our focus is on the key concepts of measurement. We discuss specific methods for evaluating: (1) the goodness of explanations, (2) whether users are satisfied by explanations, (3) how well users understand the AI systems, (4) how curiosity motivates the search for explanations, (5) whether the user's trust and reliance on the AI are appropriate, and finally, (6) how the human-XAI work system performs. The recommendations we present derive from our integration of extensive research literatures and our own psychometric evaluations.


Monster Analogies

AI Magazine

Analogy has a rich history in Western civilization. Over the centuries, it has become reified, in that analogical reasoning has sometimes been regarded as a fundamental cognitive process. In addition, it has become identified with a particular expressive format. The limitations of the modern view are illustrated by monster analogies, which show that analogy need not be regarded as something having a single form, format, or semantics. Analogy clearly does depend on the human ability to create and use well-defined or analytic formats for laying out propositions that express or imply meanings and perceptions. Beyond this dependence, research in cognitive science suggests that analogy relies on a number of genuinely fundamental cognitive capabilities, including semantic flexibility, the perception of resemblances and of distinctions, imagination, and metaphor. Extant symbolic models of analogical reasoning have various sorts of limitations, yet each model presents some important insights and plausible mechanisms. I argue that future efforts could be aimed at integration. This aim would include the incorporation of contextual information, the construction of semantic bases that are dynamic and knowledge rich, and the incorporation of multiple approaches to the problems of inference constraint.


The 1994 Florida AI Research Symposium

AI Magazine

The 1994 Florida AI Research Symposium was held 5-7 May at Pensacola Beach, Florida. This symposium brought together researchers and practitioners in AI, cognitive science, and allied disciplines to discuss timely topics, cutting-edge research, and system development efforts in areas spanning the entire AI field. Symposium highlights included Pat Hayes's comparison of the history of AI to the history of powered flight and Clark Glymour's discussion of the prehistory of AI.


Expertise in Context: Report on the Third International Workshop on Human and Machine Cognition

AI Magazine

The Third International Workshop on Human and Machine Cognition was held in Seaside, Florida, on 13-15 May 1993. Each paper session included presentations on cognitive research, educational research, AI theory and logic, and particular knowledge engineering projects. This mixture encouraged the participants from diverse disciplines to listen and respond to one another. These international workshops are held to allow leading scientists, scholars, and practitioners to discuss current issues and research in particular topics in AI and cognitive science.


The 1994 Florida AI Research Symposium

AI Magazine

The atmosphere was definitely international--270 participants from around the world. FLAIRS-94 was notable for the mix of industry, government, and academic participants, who discussed ways in which AI could be seen as an enabling technology. Everyone came away with a stack of business cards and some things to do right away.

