Explanation & Argumentation


From local explanations to global understanding with explainable AI for trees

#artificialintelligence

Tree-based machine learning models such as random forests, decision trees and gradient boosted trees are popular nonlinear predictive models, yet comparatively little attention has been paid to explaining their predictions. Here we improve the interpretability of tree-based models through three main contributions. We apply these tools to three medical machine learning problems and show how combining many high-quality local explanations allows us to represent global structure while retaining local faithfulness to the original model. These tools enable us to (1) identify high-magnitude but low-frequency nonlinear mortality risk factors in the US population, (2) highlight distinct population subgroups with shared risk characteristics, (3) identify nonlinear interaction effects among risk factors for chronic kidney disease and (4) monitor a machine learning model deployed in a hospital by identifying which features are degrading the model's performance over time. Given the popularity of tree-based machine learning models, these improvements to their interpretability have implications across a broad set of domains.
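The abstract does not name a specific toolkit, but the workflow it describes (fast local attributions for tree ensembles, aggregated into a global picture) can be sketched with the open-source shap package; the model and synthetic data below are illustrative stand-ins, not the study's medical datasets.

```python
# Minimal sketch (not the authors' exact tooling): compute per-prediction
# local attributions for a tree ensemble with the `shap` package, then
# aggregate them into a global importance ranking.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Toy data and tree-based model; stand-ins for a real medical dataset.
X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Local explanations: one additive attribution vector per prediction.
explainer = shap.TreeExplainer(model)
local_attributions = explainer.shap_values(X)      # shape: (n_samples, n_features)

# Global structure from many local explanations: mean absolute attribution
# per feature gives a ranking that stays consistent with the local values.
global_importance = np.abs(local_attributions).mean(axis=0)
print(np.argsort(global_importance)[::-1])         # feature indices, most important first
```

The key idea is that every prediction gets its own additive explanation, and averaging their magnitudes over a dataset yields a global view that remains faithful to the local explanations it is built from.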



Global Big Data Conference

#artificialintelligence

In Part II of our year-ahead outlook, we explore the sleeper issues that will drive data management and the mainstreaming of AI in analytics. In the year ahead, we see the cloud, AI, and data management as the megaforces of the data and analytics agenda. And so, picking up where Big on Data bro Andrew Brust left off last week, we're looking at some of the underlying issues that are shaping adoption. In the world of data and analytics, you can't start a conversation today without bringing in cloud and AI. Yesterday, in Part I, we hit the cloud checkbox: we explored how the coming generational change in enterprise applications will in turn shift how enterprises evaluate cloud deployments.


XAI--Explainable artificial intelligence

#artificialintelligence

High-level patterns are the basis for describing big plans in big steps. Automating the discovery of abstractions has long been a challenge, and understanding the discovery and sharing of abstractions in learning and explanation is at the frontier of XAI research today.


This is how people like machines to explain themselves -- Sonder Scheme

#artificialintelligence

Core to human-centered AI is explainability. If a machine cannot explain its reasoning in a way that humans understand, and on human terms, the AI isn't working for people. Researchers from Georgia Institute of Technology, Cornell University and the University of Kentucky recently published the results of teaching a machine to generate conversational explanations of its model's internal state and action data representations in real time. They tested whether people like the machine to tell them how it made decisions, and which characteristics of explanations drove people's perceptions of explainability. Relatability is key to understandability: when an AI uses natural language to explain itself, people put themselves in the AI's shoes and evaluate understandability based on whether the AI gives the same reasons they would.


Abstract Argumentation and the Rational Man

arXiv.org Artificial Intelligence

Abstract argumentation has emerged as a method for non-monotonic reasoning that has gained tremendous traction in the symbolic artificial intelligence community. In the literature, the different approaches to abstract argumentation that have been refined over the years are typically evaluated from a logics perspective; an analysis based on models of ideal, rational decision-making does not exist. In this paper, we close this gap by analyzing abstract argumentation from the perspective of the rational man paradigm in microeconomic theory. To assess under which conditions abstract argumentation-based choice functions can be considered economically rational, we define a new argumentation principle that ensures compliance with the rational man's reference independence property, which stipulates that a rational agent's preferences over two choice options should not be influenced by the absence or presence of additional options. We show that the argumentation semantics proposed in Dung's classical paper, as well as all the other semantics we evaluate, do not fulfill this newly introduced principle. Consequently, we investigate how structural properties of argumentation frameworks affect the reference independence principle, and propose a restriction on argumentation expansions that allows all of the evaluated semantics to fulfill the requirements for economically rational argumentation-based choice. For this purpose, we define the rational man's expansion as a normal and non-cyclic expansion. Finally, we put reference independence into the context of preference-based argumentation and show that for this argumentation variant, which explicitly models preferences, the rational man's expansion cannot ensure reference independence.
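As a toy illustration of the reference independence property at stake (a sketch of standard Dung grounded semantics, not the paper's formalism or its proposed expansion), the snippet below computes grounded extensions and shows how adding a third argument can flip which of the two original options is accepted.

```python
# Toy illustration: grounded semantics for a Dung-style abstract argumentation
# framework, showing that expanding the framework with a new argument can flip
# acceptance of the original two options (a reference independence violation).

def grounded_extension(arguments, attacks):
    """Least fixed point of the characteristic function."""
    extension = set()
    while True:
        # An argument is acceptable if every attacker is attacked by the extension.
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for (b, t) in attacks if t == a)
        }
        if defended == extension:
            return extension
        extension = defended

# Two options: a attacks b, so a is accepted and b is rejected.
print(grounded_extension({"a", "b"}, {("a", "b")}))                  # {'a'}

# Add a third option c that attacks a: now b is accepted instead,
# so the relative standing of a and b depends on the extra option.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("c", "a")}))  # {'b', 'c'}
```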


Why explainable AI is indispensable to Zillow's business

#artificialintelligence

Zillow, an online marketplace that facilitates the buying, selling, renting, financing, and remodeling of homes, employs lots of AI technologies to do things like estimate home prices. But the output of AI systems like these can be opaque, creating a "black box" problem where practitioners and customers can't audit the systems properly. Without transparency, serious problems like algorithmic bias can persist undetected, and trust in the models becomes impossible. For obvious ethical reasons, this is why explainable AI (XAI) is so crucial to the creation and deployment of AI systems, but pragmatically, it's also key to the success of AI-powered products and services from companies like Zillow. David Fagnan, director of applied science on the Zillow Offers team, discussed with VentureBeat how and why XAI is indispensable for the company.


Formal Verification of Debates in Argumentation Theory

arXiv.org Artificial Intelligence

Humans engage in informal debates on a daily basis. By expressing their opinions and ideas in an argumentative fashion, they are able to gain a deeper understanding of a given problem and, in some cases, find the best possible course of action towards resolving it. In this paper, we develop a methodology to verify debates formalised as abstract argumentation frameworks. We first present a translation from debates to transition systems. Such transition systems can model debates and represent their evolution over time using a finite set of states. We then formalise relevant debate properties using temporal and strategy logics. These formalisations, along with a debate transition system, allow us to verify whether a given debate satisfies certain properties. The verification process can be automated using model checkers. We therefore also measure their performance when verifying debates, and use the results to discuss the feasibility of model checking debates.
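A rough sketch of the pipeline the abstract outlines, under stated assumptions: the paper encodes debates as transition systems and checks temporal and strategy-logic properties with off-the-shelf model checkers, whereas the toy code below hand-rolls a transition system over hypothetical debate states and checks a single reachability property (a CTL-style "eventually argument b is accepted") with breadth-first search.

```python
# Illustrative sketch only: a hand-rolled debate transition system and a BFS
# check for one reachability property, standing in for real model checking.
from collections import deque

# States are frozensets of currently accepted arguments (hypothetical debate).
transitions = {
    frozenset():           {frozenset({"a"})},                        # proponent asserts a
    frozenset({"a"}):      {frozenset({"a", "c"}), frozenset({"b"})}, # opponent's possible moves
    frozenset({"b"}):      set(),                                     # debate ends
    frozenset({"a", "c"}): set(),
}

def reachable(initial, goal_predicate):
    """Breadth-first search: is some state satisfying the property reachable?"""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if goal_predicate(state):
            return True
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# "EF (b is accepted)" in CTL terms: reachability of a b-accepting state.
print(reachable(frozenset(), lambda s: "b" in s))   # True
```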


Secure Trust Bank deploys Jaywing's 'explainable' AI for application scoring

#artificialintelligence

Secure Trust Bank has become the first UK lender to complete a live deployment of credit application models built using Archetype, the proprietary 'explainable' AI-driven software from credit scoring and AI specialists Jaywing.


Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers

arXiv.org Artificial Intelligence

Explaining the output of a complex machine learning (ML) model often requires approximation using a simpler model. To construct interpretable explanations that are also consistent with the original ML model, counterfactual examples, which show how the model's output changes with small perturbations to the input, have been proposed. This paper extends the work on counterfactual explanations by addressing the challenge of the feasibility of such examples. For explanations of ML models in critical domains such as healthcare and finance, counterfactual examples are useful to an end-user only to the extent that perturbation of feature inputs is feasible in the real world. We formulate the problem of feasibility as preserving causal relationships among input features and present a method that uses (partial) structural causal models to generate actionable counterfactuals. When feasibility constraints cannot be easily expressed, we propose an alternative method that optimizes for feasibility as people interact with its output and provide oracle-like feedback. Our experiments on a Bayesian network and the widely used "Adult" dataset show that our proposed methods can generate counterfactual explanations that satisfy feasibility constraints.
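A minimal sketch of the general idea, not the authors' method: the paper uses (partial) structural causal models or interactive feedback to encode feasibility, whereas the toy code below simply enumerates small perturbations of a hypothetical loan applicant, keeps those that flip a stand-in classifier's decision, and filters them through a hand-written causal constraint (education years cannot decrease, and increasing education cannot coincide with decreasing age). The features, model, and constraint are all illustrative assumptions.

```python
# Toy sketch: counterfactual search with a simple causal feasibility filter
# over two hypothetical features, "age" and "education_years".
import numpy as np
from itertools import product

def counterfactuals(model, x, deltas, is_feasible):
    """Enumerate small perturbations; keep those that flip the prediction
    and satisfy the causal feasibility check."""
    original = model(x)
    return [x + d for d in deltas
            if model(x + d) != original and is_feasible(x, x + d)]

# Hypothetical classifier: approve (1) if 0.03*age + 0.2*education_years > 2.5.
model = lambda v: int(0.03 * v[0] + 0.2 * v[1] > 2.5)

# Causal constraint: education may not decrease, and if it increases,
# age must not decrease (gaining education takes time).
def is_feasible(x, x_cf):
    age, edu = x
    age_cf, edu_cf = x_cf
    if edu_cf < edu:
        return False
    if edu_cf > edu and age_cf < age:
        return False
    return True

x = np.array([30.0, 8.0])                                   # rejected applicant
deltas = [np.array(d) for d in product(range(-2, 3), repeat=2)]
for cf in counterfactuals(model, x, deltas, is_feasible):
    print(cf, "->", model(cf))                              # feasible flips only
```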