Explanation & Argumentation


Hands-on Machine Learning Model Interpretation – Towards Data Science

#artificialintelligence

Interpreting machine learning models is no longer a luxury but a necessity, given the rapid adoption of AI in industry. This article is a continuation of my series on 'Explainable Artificial Intelligence (XAI)'. The idea here is to cut through the hype and equip you with the tools and techniques needed to start interpreting any black-box machine learning model. The previous articles in the series are linked in case you want to give them a quick skim (they are not mandatory for this article). In this article we provide hands-on guides that showcase various ways to explain black-box machine learning models in a model-agnostic way.
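The article's own walkthrough is not reproduced here, but as a minimal sketch of one such model-agnostic technique, permutation importance shuffles each feature and measures the drop in held-out accuracy without ever looking inside the model. The dataset and model below are illustrative assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any black-box model works here; the technique never inspects its internals.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature and measure the drop in test accuracy:
# a large drop means the model relied on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} {result.importances_mean[idx]:.4f}")
```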


Explainable AI Human-Machine Collaboration Accenture

#artificialintelligence

No. 1 – Detecting abnormal travel expenses. Most existing systems for reporting travel expenses apply pre-defined views, such as time period, service, or employee group. While these systems aim to detect abnormal expenses systematically, they usually fail to explain why the claims they single out are judged abnormal. To address this lack of visibility into the context of abnormal travel-expense claims, Accenture Labs designed and built a travel-expenses system incorporating explainable AI. By combining knowledge-graph and machine-learning technologies, the system delivers insight explaining any abnormal claim in real time (a minimal sketch of this flag-then-explain pattern follows below).

No. 2 – Project risk management. Most large companies manage hundreds, if not thousands, of projects every year across multiple vendors, clients and partners.
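Accenture's actual system is not public here; as a loose illustration of the flag-then-explain pattern it describes, the sketch below uses an isolation forest to flag outlying claims and a naive per-feature deviation score as the "explanation". The feature names, data, and thresholds are assumptions, not Accenture's design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
features = ["amount_eur", "trip_days", "claims_this_month"]
claims = rng.normal(loc=[120, 3, 2], scale=[40, 1, 1], size=(500, 3))
claims[:3] = [[950, 2, 9], [140, 21, 3], [100, 3, 12]]  # planted anomalies

detector = IsolationForest(random_state=0).fit(claims)
flags = detector.predict(claims)  # -1 = abnormal, 1 = normal

# Naive "explanation": report the feature that deviates most (z-score).
mu, sigma = claims.mean(axis=0), claims.std(axis=0)
for i in np.where(flags == -1)[0][:3]:
    z = (claims[i] - mu) / sigma
    worst = np.abs(z).argmax()
    print(f"claim {i}: abnormal, mainly {features[worst]} (z={z[worst]:+.1f})")
```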


Why I agree with Geoff Hinton: I believe that Explainable AI is over-hyped by media

#artificialintelligence

Geoffrey Hinton dismissed the need for explainable AI, and a range of experts have explained why he is wrong. I actually tend to agree with Geoff. Explainable AI is overrated and hyped by the media, and a whole industry has sprung up with a business model of scaring everyone about AI not being explainable.


A Case For Explainable AI & Machine Learning

#artificialintelligence

Yes, it is the Holy Grail of AI, and for the right reason: whether it is about losing a high-value customer to a wrong churn prediction or losing dollars to the incorrect classification of a financial transaction. In reality, customers are less bothered about the accuracy of an AI model; their concern is the data scientist's inability to answer "How do I trust its decision making?" Data scientists building credit-risk models in the consumer space have faced the transparency requirement for probably as long as the field has existed, due to the regulatory compliance that governs consumer risk. Marketers have likewise been bound by rules that disallow protected attributes such as gender or race from entering the models. These regulations were created in the US to protect consumers.



Trichotomic Argumentation Representation

arXiv.org Artificial Intelligence

The Aristotelian trichotomy distinguishes three aspects of argumentation: Logos, Ethos, and Pathos. Even rich argumentation representations like the Argument Interchange Format (AIF) are only concerned with capturing the Logos aspect. Inference Anchoring Theory (IAT) adds the possibility to represent ethical requirements on the illocutionary force edges linking locutions to illocutions, making it possible to capture some aspects of Ethos. With the recent extensions AIF+ and the Social Argument Interchange Format (S-AIF), which embed dialogue and speakers into the AIF argumentation representation, the basis for representing all three aspects identified by Aristotle was formed. In the present work, we develop the Trichotomic Argument Interchange Format (T-AIF), building on the idea from S-AIF of adding the speakers to the argumentation graph. We capture Logos in the usual way known from AIF+, Ethos in the form of weighted edges between actors representing trust, and Pathos via weighted edges from actors to illocutions representing their level of commitment to the propositions. This extended structured argumentation representation opens up new possibilities for defining semantic properties on this rich graph in order to characterize and profile the reasoning patterns of the participating actors.
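As a concrete picture of the data structure being proposed, here is a toy encoding of the three aspects as a weighted directed graph. The node names and attribute choices are illustrative assumptions, not the formal T-AIF definition.

```python
import networkx as nx

G = nx.DiGraph()
G.add_nodes_from(["Alice", "Bob"], kind="actor")
G.add_nodes_from(["I1", "I2"], kind="illocution")
G.add_nodes_from(["P1", "P2"], kind="proposition")

# Logos: inference structure between propositions (as in AIF/AIF+).
G.add_edge("P1", "P2", kind="logos", relation="supports")
# Ethos: weighted trust edges between actors.
G.add_edge("Alice", "Bob", kind="ethos", trust=0.8)
# Pathos: weighted commitment edges from actors to illocutions.
G.add_edge("Alice", "I1", kind="pathos", commitment=0.9)
G.add_edge("Bob", "I2", kind="pathos", commitment=0.4)

pathos = [(u, v, d) for u, v, d in G.edges(data=True) if d["kind"] == "pathos"]
print(pathos)
```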


Representation, Justification and Explanation in a Value Driven Agent: An Argumentation-Based Approach

arXiv.org Artificial Intelligence

For an autonomous system, the ability to justify and explain its decision making is crucial to improving its transparency and trustworthiness. This paper proposes an argumentation-based approach to represent, justify, and explain the decision making of a value-driven agent (VDA). Using a newly defined formal language, some implicit knowledge of a VDA is made explicit. The selection of an action in each situation is justified by constructing and comparing arguments supporting different actions. Given a constructed argumentation framework and its extensions, the reasons explaining an action are defined in terms of the arguments for or against the action, exploiting their defeat relation as well as their premises and conclusions.
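The paper's machinery (values, premises, conclusions) is richer than what fits here, but the abstract core it builds on can be sketched: given arguments and a defeat relation, compute which arguments survive and can therefore serve as reasons for an action. The grounded semantics used below and the argument names are illustrative assumptions.

```python
def grounded_extension(args, defeats):
    """Iteratively accept arguments all of whose defeaters are themselves
    defeated by already-accepted arguments (the grounded-semantics fixpoint)."""
    accepted = set()
    changed = True
    while changed:
        changed = False
        for a in args:
            if a in accepted:
                continue
            defeaters = {b for (b, c) in defeats if c == a}
            # a is defended if every defeater is defeated by an accepted argument
            if all(any((d, b) in defeats for d in accepted) for b in defeaters):
                accepted.add(a)
                changed = True
    return accepted

args = {"do_A", "do_B", "objection_to_A"}
defeats = {("objection_to_A", "do_A"), ("do_B", "objection_to_A")}
print(grounded_extension(args, defeats))  # {'do_A', 'do_B'}
```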


A Tutorial for Weighted Bipolar Argumentation with Continuous Dynamical Systems and the Java Library Attractor

arXiv.org Artificial Intelligence

Weighted bipolar argumentation frameworks allow modeling decision problems and online discussions by defining arguments and their relationships. The strength of arguments can be computed based on an initial weight and the strength of attacking and supporting arguments. While previous approaches assumed an acyclic argumentation graph and successively set arguments' strength based on the strength of their parents, continuous dynamical systems have recently been proposed as an alternative. Continuous models update arguments' strength simultaneously and continuously. While there are currently no analytical guarantees for convergence in general graphs, experiments show that continuous models can converge quickly in large cyclic graphs with thousands of arguments. Here, we focus on the high-level ideas of this approach and explain key results and applications. We also introduce Attractor, a Java library that can be used to solve weighted bipolar argumentation problems. Attractor contains implementations of several discrete and continuous models and numerical algorithms to compute solutions. It also provides base classes that can be used to implement, evaluate, and compare continuous models easily.
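Attractor itself is a Java library and its concrete models are not reproduced here; the Python sketch below illustrates the general idea of a continuous model with an assumed, simplified update rule, integrated by explicit Euler steps. Strengths drift toward an aggregate of the base weight plus supporter strengths minus attacker strengths, clipped to [0, 1].

```python
import numpy as np

def simulate(weights, attacks, supports, step=0.01, iters=5000):
    """weights: base scores in [0,1]; attacks/supports: lists of (src, dst).
    All strengths are updated simultaneously, even on cyclic graphs."""
    s = np.array(weights, dtype=float)
    for _ in range(iters):
        target = np.array(weights, dtype=float)
        for src, dst in supports:
            target[dst] += s[src]          # supporters push strength up
        for src, dst in attacks:
            target[dst] -= s[src]          # attackers push strength down
        target = np.clip(target, 0.0, 1.0)
        s += step * (target - s)           # ds/dt = target(s) - s
    return s

# Tiny cyclic example: 0 and 1 attack each other, 2 supports 0.
print(simulate(weights=[0.5, 0.5, 0.8],
               attacks=[(0, 1), (1, 0)],
               supports=[(2, 0)]))
```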


A Polynomial-time Fragment of Epistemic Probabilistic Argumentation (Technical Report)

arXiv.org Artificial Intelligence

Probabilistic argumentation allows reasoning about argumentation problems in a way that is well founded by probability theory. However, in practice, this approach can be severely limited by the fact that probabilities are defined by adding an exponential number of terms. We show that this exponential blowup can be avoided in an interesting fragment of epistemic probabilistic argumentation and that some computational problems that have been considered intractable can be solved in polynomial time. We give efficient convex programming formulations for these problems and explore how far our fragment can be extended without losing tractability.
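As a hedged illustration of how such problems become convex programs: with linear constraints over per-argument probabilities (for instance the standard coherence constraint P(a) <= 1 - P(b) when b attacks a), bounding a query probability is a linear program. The constraints and numbers below are illustrative, not the paper's exact fragment.

```python
from scipy.optimize import linprog

# Variables: p = (P(a), P(b), P(c)); b attacks a, c attacks b.
# Coherence: P(a) + P(b) <= 1 and P(b) + P(c) <= 1.
A_ub = [[1, 1, 0],
        [0, 1, 1]]
b_ub = [1, 1]
# Suppose we additionally believe b with probability at least 0.6.
bounds = [(0, 1), (0.6, 1), (0, 1)]

# Minimize and maximize P(a) subject to the constraints.
lo = linprog(c=[1, 0, 0], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
hi = linprog(c=[-1, 0, 0], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(f"P(a) is bounded within [{lo.fun:.2f}, {-hi.fun:.2f}]")  # [0.00, 0.40]
```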


Counting Complexity for Reasoning in Abstract Argumentation

arXiv.org Artificial Intelligence

In this paper, we consider counting and projected model counting of extensions in abstract argumentation for various semantics. When asking for projected counts, we are interested in counting the number of extensions of a given argumentation framework, where multiple extensions that are identical when restricted to the projected arguments count as only one projected extension. We establish classical complexity results and parameterized complexity results when the problems are parameterized by the treewidth of the undirected argumentation graph. To obtain upper bounds for counting projected extensions, we introduce novel algorithms that exploit small treewidth of the undirected argumentation graph of the input instance by dynamic programming (DP). Our algorithms run in time double or triple exponential in the treewidth, depending on the considered semantics. Finally, we take the exponential time hypothesis (ETH) into account and establish lower bounds for bounded-treewidth algorithms for counting extensions and projected extensions.
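To make the counted objects concrete, here is a naive exponential-time baseline for stable semantics; the paper's contribution is precisely to replace such enumeration with dynamic programming over a tree decomposition. The example framework is an illustrative assumption.

```python
from itertools import combinations

def stable_extensions(args, attacks):
    """Enumerate stable extensions: conflict-free sets that attack
    every argument outside the set (brute force over all subsets)."""
    exts = []
    for r in range(len(args) + 1):
        for S in combinations(sorted(args), r):
            S = set(S)
            conflict_free = not any((a, b) in attacks for a in S for b in S)
            attacks_rest = all(any((a, b) in attacks for a in S)
                               for b in args - S)
            if conflict_free and attacks_rest:
                exts.append(frozenset(S))
    return exts

args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("a", "c"), ("b", "c")}
exts = stable_extensions(args, attacks)
print(len(exts))                       # 2 extensions: {a} and {b}
# Projected counting onto {"c"}: identical projections count once.
print(len({e & {"c"} for e in exts}))  # 1 projected extension
```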