Explanation & Argumentation


Why it's vital that AI is able to explain the decisions it makes

#artificialintelligence

Currently, our algorithm is able to consider a human plan for solving the Rubik's Cube, suggest improvements to the plan, recognize plans that do not work and find alternatives that do. In doing so, it gives feedback that leads to a step-by-step plan for solving the Rubik's Cube that a person can understand. Our team's next step is to build an intuitive interface that will allow our algorithm to teach people how to solve the Rubik's Cube. Our hope is to generalize this approach to a wide range of pathfinding problems.
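The article does not spell out the algorithm, but the kind of plan critique it describes can be illustrated on a generic pathfinding problem: follow the proposed plan as far as it remains legal, report whether it reaches the goal, and repair or shorten it with a search from the last reachable state. The sketch below is a hypothetical, minimal illustration on a toy state graph; the encoding and the functions `critique_plan` and `bfs_path` are illustrative, not the team's method.

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search returning a shortest move sequence, or None."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for move, nxt in graph.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [move]))
    return None

def critique_plan(graph, start, goal, plan):
    """Check a human-proposed plan and suggest an improvement or a repair.

    graph: dict mapping state -> {move_name: next_state}  (toy encoding)
    plan:  list of move names proposed by the person.
    Returns (verdict, suggested_plan).
    """
    # Follow the plan as far as it stays legal.
    state, executed = start, []
    for move in plan:
        if move not in graph.get(state, {}):
            break                      # illegal move: the plan breaks down here
        state = graph[state][move]
        executed.append(move)

    if state == goal and len(executed) == len(plan):
        # Plan works; see whether a shorter route exists from the start.
        shorter = bfs_path(graph, start, goal)
        if shorter is not None and len(shorter) < len(plan):
            return "works, but can be shortened", shorter
        return "works", plan

    # Plan fails; repair it by searching from the last reachable state.
    repair = bfs_path(graph, state, goal)
    if repair is None:
        return "unrecoverable from this plan", bfs_path(graph, start, goal)
    return "fails partway; repaired", executed + repair

# Toy example: a 3-state puzzle where "U" and "D" move between states.
graph = {"s0": {"U": "s1"}, "s1": {"U": "s2", "D": "s0"}, "s2": {"D": "s1"}}
print(critique_plan(graph, "s0", "s2", ["U", "D", "U", "U"]))
# ('works, but can be shortened', ['U', 'U'])
```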


How explainable artificial intelligence can help humans innovate

#artificialintelligence

The field of artificial intelligence (AI) has created computers that can drive cars, synthesize chemical compounds, fold proteins and detect high-energy particles at a superhuman level. However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation. Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. If we can do this, I believe that AI will be able to uncover and teach people new facts about the world that have not yet been discovered, leading to new innovations.


L.A. students must get COVID-19 vaccine to return to campus, Beutner says

Los Angeles Times

Once COVID-19 vaccines are available to children, Los Angeles students will have to be immunized before they can return to campus, Supt. Beutner said. He did not, however, suggest that campuses remain closed until the vaccines are available. Instead, he said, the state should set the standards for reopening schools, explain the reasoning behind the standards, and then require campuses to open when these standards are achieved. A COVID-19 vaccine requirement would be "no different than students who are vaccinated for measles or mumps," Beutner said in a pre-recorded briefing. He also compared students, staff and others getting a COVID-19 vaccine to those who "are tested for tuberculosis before they come on campus. That's the best way we know to keep all on a campus safe."


Explaining explainable AI

ZDNet

In 2020, one message in the artificial intelligence (AI) market came through loud and clear: AI's got some explaining to do! Explainable AI (XAI) has long been a fringe discipline in the broader world of AI and machine learning. It exists because many machine-learning models are either opaque or so convoluted that they defy human understanding. But why is it such a hot topic today? AI systems making inexplicable decisions are your governance, regulatory, and compliance colleagues' worst nightmare. But aside from this, there are other compelling reasons for shining a light into the inner workings of AI.
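The article stays at the level of motivation, but one widely used way of "shining a light into the inner workings" of an opaque model is a global surrogate: fit a small, interpretable model to the opaque model's own predictions and read off its rules. The sketch below uses scikit-learn on synthetic data purely as an illustration of that pattern; it is not tied to any system the article discusses.

```python
# A minimal global-surrogate sketch: approximate an opaque model with a small
# decision tree trained on the opaque model's own predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# The surrogate learns to mimic the black box, not the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity to the black box: {fidelity:.2%}")
print(export_text(surrogate))  # human-readable rules approximating the model
```

The key design point is that the surrogate is trained on the black box's predictions rather than the original labels, so its rules describe the model's behaviour (to the reported fidelity), not the ground truth.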


XAI-P-T: A Brief Review of Explainable Artificial Intelligence from Practice to Theory

arXiv.org Artificial Intelligence

In this work, we review the practical and theoretical aspects of Explainable AI (XAI) identified in some of the field's fundamental literature. Although there is a vast body of work surveying the background of XAI, most of it pursues a single line of thought; a review that covers the literature on practice and theory together is still missing. Such a connection matters because it eases the learning process for early-stage XAI researchers and gives experienced XAI scholars a clearer vantage point. Accordingly, we first focus on the categories of black-box explanation and give a practical example. We then discuss how explanation has been grounded theoretically across multiple disciplines. Finally, we present some directions for future work.


Explanation from Specification

arXiv.org Machine Learning

Explainable components in XAI algorithms often come from a familiar set of models, such as linear models or decision trees. We formulate an approach in which the type of explanation produced is guided by a specification. Specifications are elicited from the user, possibly through interaction with the user and with input from other areas; forensic, medical, and scientific applications are among the areas where a specification could be obtained. Providing a menu of possible types of specifications in an area is an exploratory knowledge representation and reasoning task for the algorithm designer, aimed at understanding the possibilities and limitations of efficiently computable modes of explanation. Two examples are discussed: explanations for Bayesian networks using the theory of argumentation, and explanations for graph neural networks. The latter case illustrates the possibility of giving the user a representation formalism for specifying the type of explanation requested, for example a chemical query language for classifying molecules. The approach is motivated by a theory of explanation in the philosophy of science, and it is related to current questions in the philosophy of science about the role of machine learning.
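As a rough illustration of the central idea, namely that the user's specification selects the explanation mode rather than the mode being hard-wired into the explainer, one can picture a small "menu" that dispatches a request to different explanation procedures. The sketch below is hypothetical: the `ExplanationSpec` fields and the stubbed explainers are illustrative placeholders, not the paper's formalism.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ExplanationSpec:
    domain: str        # e.g. "bayesian_network", "graph_neural_network"
    form: str          # e.g. "argument_graph", "chemical_query"
    audience: str      # e.g. "forensic", "medical", "scientific"

def explain_bn_with_arguments(model, instance):
    return f"argumentation-based explanation of {instance} under {model}"

def explain_gnn_with_query(model, instance):
    return f"chemical-query explanation of why {model} classified {instance}"

# The 'menu' of efficiently computable explanation modes offered to the user.
MENU: Dict[tuple, Callable] = {
    ("bayesian_network", "argument_graph"): explain_bn_with_arguments,
    ("graph_neural_network", "chemical_query"): explain_gnn_with_query,
}

def explain(spec: ExplanationSpec, model, instance):
    explainer = MENU.get((spec.domain, spec.form))
    if explainer is None:
        raise ValueError(f"no explanation mode matches specification {spec}")
    return explainer(model, instance)

print(explain(ExplanationSpec("graph_neural_network", "chemical_query", "scientific"),
              "toy-gnn", "molecule-42"))
```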


How Explainable AI (XAI) for Health Care Helps Build User Trust -- Even During Life-and-Death…

#artificialintelligence

Picture this: You're using an AI model when it recommends a course of action that doesn't seem to make sense. However, because the model can't explain itself, you've got no insight into the reasoning behind the recommendation. Your only options are to trust it or not -- but without any context. It's a frustrating yet familiar experience for many who work with artificial intelligence (AI) systems, which in many cases function as so-called "black boxes" that sometimes can't even be explained by their own creators. For some applications, black box-style AI systems are completely suitable (or even preferred by those who would rather not explain their proprietary AI).


Strong Admissibility for Abstract Dialectical Frameworks

arXiv.org Artificial Intelligence

Abstract dialectical frameworks (ADFs) have been introduced as a formalism for modeling and evaluating argumentation allowing general logical satisfaction conditions. Different criteria used to settle the acceptance of arguments are called semantics. Semantics of ADFs have so far mainly been defined based on the concept of admissibility. However, the notion of strongly admissible semantics studied for abstract argumentation frameworks has not yet been introduced for ADFs. In the current work we present the concept of strong admissibility of interpretations for ADFs. Further, we show that the strongly admissible interpretations of an ADF form a lattice with the grounded interpretation as the top element.
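For readers unfamiliar with ADF semantics, the sketch below checks ordinary admissibility, the notion that the paper's strongly admissible semantics refines, assuming the standard three-valued definition via the characteristic operator (an interpretation is admissible when every statement it decides is confirmed by the operator). The toy ADF and the function names are illustrative only.

```python
from itertools import product

# Three truth values: True, False, and None for "undecided".

def gamma(statements, parents, conditions, v):
    """Characteristic operator: consensus of each acceptance condition over
    all two-valued completions of the interpretation v."""
    result = {}
    for s in statements:
        undecided = [p for p in parents[s] if v[p] is None]
        values = set()
        for combo in product([True, False], repeat=len(undecided)):
            w = dict(v)
            w.update(zip(undecided, combo))
            values.add(conditions[s](w))
        result[s] = values.pop() if len(values) == 1 else None
    return result

def is_admissible(statements, parents, conditions, v):
    """v is admissible iff every statement it decides is confirmed by gamma(v)."""
    g = gamma(statements, parents, conditions, v)
    return all(v[s] is None or g[s] == v[s] for s in statements)

# Toy ADF: a is always acceptable, b is acceptable iff a is, c attacks itself.
statements = ["a", "b", "c"]
parents = {"a": [], "b": ["a"], "c": ["c"]}
conditions = {
    "a": lambda w: True,
    "b": lambda w: w["a"],
    "c": lambda w: not w["c"],
}

print(is_admissible(statements, parents, conditions, {"a": True, "b": True, "c": None}))   # True
print(is_admissible(statements, parents, conditions, {"a": None, "b": True, "c": None}))   # False
```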


DAX: Deep Argumentative eXplanation for Neural Networks

arXiv.org Artificial Intelligence

Despite the rapid growth in attention on eXplainable AI (XAI) of late, explanations in the literature provide little insight into the actual functioning of Neural Networks (NNs), significantly limiting their transparency. We propose a methodology for explaining NNs, providing transparency about their inner workings, by utilising computational argumentation (a form of symbolic AI offering reasoning abstractions for a variety of settings where opinions matter) as the scaffolding underpinning Deep Argumentative eXplanations (DAXs). We define three DAX instantiations (for various neural architectures and tasks) and evaluate them empirically in terms of stability, computational cost, and importance of depth. We also conduct human experiments with DAXs for text classification models, indicating that they are comprehensible to humans and align with their judgement, while also being competitive, in terms of user acceptance, with existing approaches to XAI that also have an argumentative spirit.


Interpreting Neural Networks as Gradual Argumentation Frameworks (Including Proof Appendix)

arXiv.org Artificial Intelligence

We show that an interesting class of feed-forward neural networks can be understood as quantitative argumentation frameworks. This connection creates a bridge between research in Formal Argumentation and Machine Learning. We generalize the semantics of feed-forward neural networks to acyclic graphs and study the resulting computational and semantic properties in argumentation graphs. As it turns out, the semantics gives stronger guarantees than existing semantics that have been tailor-made for the argumentation setting. From a machine-learning perspective, the connection does not seem immediately helpful. While it gives intuitive meaning to some feed-forward neural networks, they remain difficult to understand due to their size and density. However, the connection does seem helpful for combining background knowledge, in the form of sparse argumentation networks, with dense neural networks that have been trained for complementary purposes, and for learning the parameters of quantitative argumentation frameworks in an end-to-end fashion from data.
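A minimal sketch of the correspondence described above, under the usual reading in which neurons are arguments, positive weights are supports, negative weights are attacks, biases act as base scores, and the activation function plays the role of the influence function; the tiny network and all names below are hypothetical.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Arguments (neurons) of a tiny acyclic graph: two inputs, one hidden, one output.
base_score = {"h": -1.0, "out": 0.5}            # biases read as base scores
edges = {                                        # (source, target): weight
    ("x1", "h"): 2.0,    # support (positive weight)
    ("x2", "h"): -3.0,   # attack  (negative weight)
    ("h", "out"): 1.5,   # support
}

def strengths(inputs, order=("h", "out")):
    """Forward pass = gradual-semantics evaluation over the argumentation graph."""
    s = dict(inputs)                             # input arguments keep their given strength
    for arg in order:                            # topological order of the acyclic graph
        aggregate = base_score[arg] + sum(
            w * s[src] for (src, tgt), w in edges.items() if tgt == arg
        )
        s[arg] = sigmoid(aggregate)              # influence function = activation
    return s

print(strengths({"x1": 1.0, "x2": 0.0}))
```

Evaluating the arguments in topological order reproduces the network's forward pass, which is what makes the two readings interchangeable on acyclic graphs.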