Explanation & Argumentation


Turing-Completeness of Dynamics in Abstract Persuasion Argumentation

arXiv.org Artificial Intelligence

Abstract Persuasion Argumentation (APA) is a dynamic argumentation formalism that extends Dung argumentation with persuasion relations. In this work, we show, through an encoding of two-counter Minsky machines, that APA dynamics is Turing-complete.
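As a point of reference for the reduction, the sketch below is a minimal two-counter (Minsky) machine interpreter in Python. It illustrates only the machine model that the paper encodes into APA dynamics; the instruction format and the example program are illustrative and are not taken from the paper.

```python
# A minimal two-counter (Minsky) machine interpreter: a sketch of the target
# model of computation, not of the APA encoding itself.
# A program is a list of instructions over counters 0 and 1:
#   ("inc", c, j)      -> increment counter c, then go to instruction j
#   ("jzdec", c, j, k) -> if counter c is zero go to j, else decrement it and go to k
#   ("halt",)          -> stop

def run_minsky(program, c0=0, c1=0, max_steps=100_000):
    counters = [c0, c1]
    pc = 0
    for _ in range(max_steps):
        instr = program[pc]
        if instr[0] == "halt":
            return counters
        if instr[0] == "inc":
            _, c, j = instr
            counters[c] += 1
            pc = j
        else:  # "jzdec"
            _, c, j, k = instr
            if counters[c] == 0:
                pc = j
            else:
                counters[c] -= 1
                pc = k
    raise RuntimeError("step budget exceeded (the machine may not halt)")

# Example program: move the contents of counter 0 into counter 1.
move = [
    ("jzdec", 0, 2, 1),  # 0: if c0 == 0 halt, else decrement c0 and go to 1
    ("inc", 1, 0),       # 1: increment c1, loop back to 0
    ("halt",),           # 2: done
]
print(run_minsky(move, c0=3))  # [0, 3]
```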


Online Explanation Generation for Human-Robot Teaming

arXiv.org Artificial Intelligence

As Artificial Intelligence (AI) becomes an integral part of our lives, the development of explainable AI, embodied in the decision-making process of an AI or robotic agent, becomes imperative. For a robotic teammate, the ability to generate explanations of its behavior is one of the key requirements of an explainable agency. Prior work on explanation generation focuses on supporting the reasoning behind the robot's behavior. These approaches, however, fail to consider the cognitive effort needed to understand the received explanation. In particular, the human teammate is expected to understand any explanation provided before task execution, no matter how much information it contains. In this work, we argue that explanations, especially complex ones, should be made in an online fashion during execution, which helps spread out the information to be explained and thus reduces the cognitive load on the human. A challenge here is that the different parts of an explanation may depend on each other, and these dependencies must be taken into account when generating online explanations. To this end, a general formulation of online explanation generation is presented. We base our explanation generation method on a model reconciliation setting introduced in our prior work. Our approach is evaluated both with human subjects in a standard planning competition (IPC) domain, using the NASA Task Load Index (TLX), and in simulation on four different problems.
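To make the dependency constraint concrete, here is a hypothetical sketch of scheduling explanation parts during execution: each part is released no later than the plan step where it is first needed, and only after its prerequisite parts. The data structures and the greedy release rule are assumptions for illustration, not the paper's formulation of model reconciliation.

```python
# A hypothetical sketch of online explanation delivery: explanation parts are
# released during execution rather than all up front, and a part is only shown
# once its prerequisite parts have been shown. Names and structures are
# illustrative, not the paper's formulation. Assumes acyclic dependencies.
from collections import defaultdict

def schedule_explanations(parts, deps, needed_at):
    """parts: iterable of part ids; deps: {part: prerequisite parts};
    needed_at: {part: index of the plan step where the part is first needed}.
    Returns {step: [parts to present before executing that step]}."""
    released = set()
    schedule = defaultdict(list)

    def release(part, step):
        if part in released:
            return
        for pre in deps.get(part, ()):   # prerequisites are released first
            release(pre, step)
        released.add(part)
        schedule[step].append(part)

    for part in sorted(parts, key=lambda p: needed_at[p]):
        release(part, needed_at[part])
    return dict(schedule)

# Example: part "c" depends on "a"; "a" is needed at step 0, "b" at 2, "c" at 4.
print(schedule_explanations(
    ["a", "b", "c"],
    deps={"c": {"a"}},
    needed_at={"a": 0, "b": 2, "c": 4},
))  # {0: ['a'], 2: ['b'], 4: ['c']}
```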


Natural Language Interaction with Explainable AI Models

arXiv.org Artificial Intelligence

This paper presents an explainable AI (XAI) system that provides explanations for its predictions. The system consists of two key components: the prediction And-Or graph (AOG) model for recognizing and localizing concepts of interest in input data, and the XAI model for providing explanations to the user about the AOG's predictions. In this work, we focus on the XAI model, which is specified to interact with the user in natural language, whereas the AOG's predictions are considered given and are represented by the corresponding parse graphs (pg's) of the AOG. Our XAI model takes pg's as input and provides answers to the user's questions using the following types of reasoning: direct evidence (e.g., detection scores), part-based inference (e.g., detected parts provide evidence for the concept asked about), and other evidence from spatiotemporal context (e.g., constraints from the spatiotemporal surround). Consider, for example, the two frames (scenes) of a video shown in Figure 1 of the paper: the first scene shows two persons sitting at a reception while others enter an auditorium, and the second shows people running out of the auditorium. An action detection model might predict that the two people in the first scene are in a sitting posture. The user might want more details about this prediction, such as: Why does the model think the people are in a sitting posture? Why not standing instead of sitting? Why are two persons sitting instead of one? We identify several correlations between the user's questions and the XAI answers using the Youtube Action dataset.
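As a toy illustration of answering such questions from a parse graph, the sketch below stores a detection score and detected parts per concept and phrases an answer from that direct and part-based evidence. The node structure and field names are invented for the example and are not the paper's AOG representation.

```python
# An illustrative sketch (not the paper's implementation) of answering "why"
# questions from a parse graph: each detected concept carries a detection
# score and its detected parts, and answers cite that evidence.

parse_graph = {
    "person_1_sitting": {"score": 0.91,
                         "parts": {"bent_knees": 0.88, "chair_contact": 0.93}},
    "person_2_sitting": {"score": 0.87,
                         "parts": {"bent_knees": 0.81, "chair_contact": 0.90}},
}

def why(concept, pg):
    node = pg.get(concept)
    if node is None:
        return f"No evidence for '{concept}' in this scene."
    part_evidence = ", ".join(f"{p} ({s:.2f})" for p, s in node["parts"].items())
    return (f"'{concept}' was predicted with detection score {node['score']:.2f}; "
            f"supporting parts: {part_evidence}.")

print(why("person_1_sitting", parse_graph))
```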


In Search of Explainable Artificial Intelligence - Geopolitical Monitor

#artificialintelligence

Today, if a new entrepreneur wants to understand why a bank rejected the loan application for his start-up, or if a young graduate wants to know why the large corporation for which she was hoping to work did not invite her for an interview, they will not be able to discover the reasons that led to these decisions. Both the bank and the corporation used artificial intelligence (AI) algorithms to determine the outcome of the loan or job application. In practice, this means that if your loan application or your CV is rejected, no explanation can be provided. This produces an embarrassing scenario, which tends to relegate AI technologies to suggesting solutions that must then be validated by human beings. Explaining how these technologies work remains one of the great challenges that researchers and adopters must resolve in order to allow humans to become less suspicious, and more accepting, of AI.


Why 'Explainable AI' Is the Next Frontier in Financial Crime Fighting

#artificialintelligence

With new technologies like faster payments taking hold, the explosion of readily available data, and the ever-changing regulatory landscape, staying ahead of financial crime and compliance risk has become more complex and expensive than ever before. As these trends show no sign of abating, the compliance operations and monitoring staff of a financial institution often find themselves treated as a major cost center. Financial institutions (FIs) must manage compliance budgets without losing sight of primary functions and quality control. In response, many have moved to automate time-intensive, rote tasks like data gathering and sorting through alerts, adopting technologies such as AI and machine learning to free up time-strapped analysts for more informed and precise decision-making. As FIs often benchmark themselves against their competitors, they are increasingly interested in how these technologies are performing, and are asking how to leverage artificial intelligence and machine learning to increase insight, reduce false positives and decrease compliance spend.


A Grounded Interaction Protocol for Explainable Artificial Intelligence

arXiv.org Artificial Intelligence

Explainable Artificial Intelligence (XAI) systems need to include an explanation model to communicate their internal decisions, behaviours and actions to the humans they interact with. Successful explanation involves both cognitive and social processes. In this paper we focus on the challenge of meaningful interaction between an explainer and an explainee, and investigate the structural aspects of an interactive explanation in order to propose an interaction protocol. We follow a bottom-up approach to derive the model by analysing transcripts of different explanation dialogue types, covering 398 explanation dialogues. We use grounded theory to code and identify key components of an explanation dialogue. We formalize the model using the agent dialogue framework (ADF) as a new dialogue type and then evaluate it in a human-agent interaction study with 101 dialogues from 14 participants. Our results show that the proposed model can closely follow the explanation dialogues of human-agent conversations.
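The derived protocol itself is not reproduced in this summary, so the following sketch only shows the general shape of such a protocol: an explanation dialogue as a small state machine whose legal moves depend on the current state. The states and move names are placeholders, not those identified in the study.

```python
# A generic sketch of an explanation-dialogue protocol as a state machine.
# The states and moves below are illustrative placeholders, not the protocol
# derived in the paper.

PROTOCOL = {
    "start":      {"question": "explaining"},
    "explaining": {"explanation": "assess"},
    "assess":     {"follow_up": "explaining",   # explainee asks for more detail
                   "acknowledge": "closed"},    # explainee signals understanding
    "closed":     {},
}

def step(state, move):
    """Advance the dialogue if the move is legal in the current state."""
    legal = PROTOCOL[state]
    if move not in legal:
        raise ValueError(f"move '{move}' is not legal in state '{state}'")
    return legal[move]

state = "start"
for move in ["question", "explanation", "follow_up", "explanation", "acknowledge"]:
    state = step(state, move)
print(state)  # closed
```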


Dealing with Qualitative and Quantitative Features in Legal Domains

arXiv.org Artificial Intelligence

In this work, we enrich a formalism for argumentation with a formal characterization of features of the underlying knowledge, in order to capture proper reasoning in legal domains. We add meta-data to the arguments in the form of labels representing quantitative and qualitative information about them. These labels are propagated through an argumentative graph according to the relations of support, conflict, and aggregation between arguments.
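A minimal sketch of the label-propagation idea is given below, assuming an acyclic graph, numeric labels in [0, 1], weakest-link aggregation over supporters and a multiplicative discount for attackers. These combination rules are illustrative assumptions, not the functions defined in the paper.

```python
# A minimal sketch of propagating quantitative labels through an argument graph.
# The aggregation rules (min over supporters, discount under conflict) are
# illustrative choices, not the combination functions defined in the paper.

def propagate(base, supports, conflicts):
    """base: {arg: initial label in [0, 1]};
    supports: {arg: supporting args}; conflicts: {arg: attacking args}.
    Assumes the support/conflict graph is acyclic."""
    memo = {}

    def label(a):
        if a in memo:
            return memo[a]
        value = base[a]
        if supports.get(a):
            value = min(value, min(label(s) for s in supports[a]))  # weakest-link support
        for c in conflicts.get(a, ()):
            value = value * (1 - label(c))                          # discount by attackers
        memo[a] = value
        return value

    return {a: label(a) for a in base}

labels = propagate(
    base={"A": 0.9, "B": 0.8, "C": 0.6},
    supports={"A": ["B"]},      # B supports A
    conflicts={"A": ["C"]},     # C attacks A
)
print(labels)  # A gets min(0.9, 0.8) * (1 - 0.6) = 0.32
```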


Complexity Results and Algorithms for Bipolar Argumentation

arXiv.org Artificial Intelligence

Bipolar Argumentation Frameworks (BAFs) admit several interpretations of the support relation and diverging definitions of semantics. Recently, several classes of BAFs have been captured as instances of bipolar Assumption-Based Argumentation, a class of Assumption-Based Argumentation (ABA). In this paper, we establish the complexity of bipolar ABA, and consequently of several classes of BAFs. In addition to the five standard complexity problems, we also analyse the rarely addressed problem of extension enumeration. We further advance backtracking-driven algorithms for enumerating extensions of bipolar ABA frameworks, and consequently of BAFs under several interpretations. We prove soundness and completeness of our algorithms, describe their implementation and provide a scalability evaluation. We thus contribute to the study of the as yet uninvestigated complexity problems of (variously interpreted) BAFs as well as of bipolar ABA, and provide previously lacking implementations.
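The paper's algorithms target bipolar ABA, which is not reproduced here; as a simpler stand-in, the sketch below enumerates stable extensions of a plain Dung framework by backtracking over include/exclude choices with conflict-based pruning.

```python
# A sketch of backtracking enumeration of stable extensions for a plain Dung
# framework (a stand-in, not the paper's bipolar ABA algorithms): a stable
# extension is a conflict-free set that attacks every argument outside it.

def stable_extensions(args, attacks):
    """args: list of arguments; attacks: set of (attacker, target) pairs."""
    attacks = set(attacks)
    results = []

    def conflict_free(s):
        return all((a, b) not in attacks for a in s for b in s)

    def backtrack(i, chosen):
        if not conflict_free(chosen):              # prune branches with conflicts
            return
        if i == len(args):
            outside = set(args) - chosen
            if all(any((a, o) in attacks for a in chosen) for o in outside):
                results.append(frozenset(chosen))  # chosen attacks all outsiders
            return
        backtrack(i + 1, chosen | {args[i]})       # include args[i]
        backtrack(i + 1, chosen)                   # exclude args[i]

    backtrack(0, set())
    return results

# a <-> b and b -> c: the stable extensions are {a, c} and {b}.
print(stable_extensions(["a", "b", "c"], {("a", "b"), ("b", "a"), ("b", "c")}))
```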


Bipolar in Temporal Argumentation Framework

arXiv.org Artificial Intelligence

A Timed Argumentation Framework (TAF) is a formalism in which arguments are only valid for consideration during given periods of time, called availability intervals, which are defined for every individual argument. The original proposal is based on a single, abstract notion of attack between arguments that remains static and permanent in time; even so, when identifying the set of acceptable arguments, the outcome associated with a TAF will in general vary over time. In this work we introduce an extension of TAF that adds the capability of modeling a support relation between arguments. The resulting framework provides a suitable model for various time-dependent issues. The main contribution is an enhanced framework for modeling positive (support) and negative (attack) interactions that vary over time, which are relevant in many real-world situations. This leads to a Timed Bipolar Argumentation Framework (T-BAF), in which classical argument extensions can be defined. The proposal aims at advancing the integration of temporal argumentation in different application domains.
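A small sketch of the time-dependent reading is given below: arguments carry availability intervals, and an attack or support is only active at times when both endpoints are available. The acceptability check (available and currently unattacked) is a deliberate simplification for illustration, not the T-BAF semantics.

```python
# A small sketch of time-dependent availability in the spirit of TAF/T-BAF:
# each argument is available only inside its intervals, and an attack or
# support only has effect at times when both of its endpoints are available.

def available(arg, t, intervals):
    return any(lo <= t <= hi for lo, hi in intervals[arg])

def active_at(relation, t, intervals):
    return {(x, y) for (x, y) in relation
            if available(x, t, intervals) and available(y, t, intervals)}

intervals = {"A": [(0, 10)], "B": [(5, 15)], "C": [(0, 20)]}
attacks   = {("B", "A")}   # B attacks A
supports  = {("C", "A")}   # C supports A

for t in (2, 7, 12):
    live_attacks  = active_at(attacks, t, intervals)
    live_supports = active_at(supports, t, intervals)
    unattacked = {a for a in intervals
                  if available(a, t, intervals)
                  and not any(y == a for (_, y) in live_attacks)}
    print(t, sorted(unattacked), sorted(live_supports))
# t=2:  B is not yet available, so A is unattacked and supported by C
# t=7:  all three are available, B's attack is active, A is no longer unattacked
# t=12: A's interval has ended; only B and C remain
```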


Technical report of "Empirical Study on Human Evaluation of Complex Argumentation Frameworks"

arXiv.org Artificial Intelligence

In abstract argumentation, multiple argumentation semantics have been proposed that allow one to select sets of jointly acceptable arguments from a given argumentation framework, i.e., based only on the attack relation between arguments. The existence of multiple argumentation semantics raises the question of which of these semantics best predicts how humans evaluate arguments. Previous empirical cognitive studies that tested how humans evaluate sets of arguments depending on the attack relation between them were limited to a small set of very simple argumentation frameworks, so that some semantics studied in the literature could not be meaningfully distinguished. In this paper we report on an empirical cognitive study that overcomes these limitations by considering twelve argumentation frameworks of three to eight arguments each, mostly more complex than those considered in previous studies. All twelve argumentation frameworks were systematically instantiated with natural language arguments based on a fictional scenario, and participants were shown both the natural language arguments and a graphical depiction of the attack relation between them. Our data show that grounded and CF2 semantics were the best predictors of human argument evaluation. A detailed analysis revealed that some participants chose a cognitively simpler strategy that is predicted very well by grounded semantics, while others chose a cognitively more demanding strategy that is mostly well predicted by CF2 semantics.
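Of the two semantics highlighted in the results, the grounded semantics has a simple fixpoint characterization that is easy to sketch: starting from the empty set, repeatedly add every argument whose attackers are all counter-attacked by the current set. The example framework below is invented for illustration.

```python
# A sketch of the grounded extension as the least fixpoint of the defence
# function: start from the empty set and repeatedly add every argument all of
# whose attackers are counter-attacked by the current set.

def grounded_extension(args, attacks):
    attacks = set(attacks)
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    extension = set()
    while True:
        defended = {a for a in args
                    if all(any((d, b) in attacks for d in extension)
                           for b in attackers[a])}
        if defended == extension:
            return extension
        extension = defended

# a -> b -> c: a is unattacked and defends c against b.
print(grounded_extension(["a", "b", "c"], {("a", "b"), ("b", "c")}))  # {'a', 'c'}
```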