AAAI Conferences

Broad application of answer set programming (ASP) for declarative problem solving requires the development of tools supporting the coding process. Program debugging is one of the crucial activities within this process. Modern ASP debugging approaches allow efficient computation of possible explanations of a fault. However, even for a small program a debugger might return a large number of possible explanations, and the correct one must then be selected manually. In this paper we present an interactive query-based ASP debugging method which extends previous approaches and finds the preferred explanation by means of observations. The system automatically generates a sequence of queries to the programmer, asking whether a set of ground atoms must be true in all (cautiously) or some (bravely) answer sets of the program. Since some queries can be more informative than others, we discuss query selection strategies which, given a user's preferences for an explanation, can find the most informative query, reducing the overall number of queries required to identify the preferred explanation.
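The abstract leaves the selection strategy abstract. One standard way to pick "the most informative query" is to weight each candidate explanation by the user's preference and ask the query whose predicted yes/no answers split that weight most evenly (maximum entropy). The sketch below illustrates this idea only; the explanation names, weights, and predicted answers are hypothetical, not taken from the paper.

```python
from math import log2

# Hypothetical candidate fault explanations with preference weights
# (summing to 1), and the answer each explanation predicts for every
# candidate query ("must these ground atoms hold in all/some answer sets?").
explanations = {"E1": 0.5, "E2": 0.3, "E3": 0.2}
predicted = {
    "q1": {"E1": "yes", "E2": "no", "E3": "no"},
    "q2": {"E1": "yes", "E2": "yes", "E3": "no"},
}

def entropy(query):
    """Expected information gained by asking `query`: the entropy of
    the yes/no split, weighted by the explanations' preference weights."""
    p_yes = sum(w for e, w in explanations.items()
                if predicted[query][e] == "yes")
    h = 0.0
    for p in (p_yes, 1.0 - p_yes):
        if p > 0:
            h -= p * log2(p)
    return h

# q1 splits the weight 0.5/0.5 (entropy 1 bit), q2 splits it 0.8/0.2,
# so an entropy-based strategy asks q1 first.
best = max(predicted, key=entropy)
```

Whatever the user answers to `q1`, half of the preference mass is eliminated, which is why balanced splits minimize the expected number of follow-up queries.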

Self-Explanatory Simulators for Middle-School Science Education: A Progress Report
Kenneth D. Forbus

AAAI Conferences

Creating new kinds of educational software has been one motivation for qualitative physics. Our research has brought us to the stage where we are now creating such software, and focusing some of our efforts on investigating how its educational benefits can be optimized. This essay describes one of the three architectures that we are developing: the incorporation of self-explanatory simulators into active illustrations, systems that provide an environment for guided experimentation. We start by examining why qualitative physics is useful for science education, and then describe the active illustrations architecture. We then discuss some of the issues that have arisen in moving our software from laboratory to classroom, and our plans for deployment.

Explanations can be manipulated and geometry is to blame

Machine Learning

Explanation methods aim to make neural networks more trustworthy and interpretable. In this paper, we demonstrate a property of explanation methods which is disconcerting for both of these purposes. Namely, we show that explanations can be manipulated arbitrarily by applying perturbations to the input that are hardly perceptible visually and keep the network's output approximately constant. We establish theoretically that this phenomenon can be related to certain geometric properties of neural networks. This allows us to derive an upper bound on the susceptibility of explanations to manipulation. Based on this result, we propose effective mechanisms to enhance the robustness of explanations.
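The geometric mechanism can be illustrated with a toy piecewise-linear (ReLU) network: near a ReLU region boundary, a tiny input perturbation barely moves the output but can jump the gradient-based explanation to a very different value. The sketch below is a deliberately contrived two-unit example, not the paper's construction or bound.

```python
import numpy as np

def net(x, W, v):
    """Toy one-hidden-layer ReLU network: v . relu(W x)."""
    return v @ np.maximum(W @ x, 0.0)

def saliency(x, W, v):
    """Gradient explanation d net / d x (constant within a ReLU region)."""
    active = (W @ x > 0).astype(float)
    return (v * active) @ W

W = np.array([[1.0,  0.0],
              [10.0, -10.0]])   # second unit has large weights...
v = np.array([1.0, 1.0])

x     = np.array([0.5, 0.5001])  # ...but sits just below its ReLU boundary
x_adv = np.array([0.5, 0.4999])  # tiny perturbation crosses the boundary

# Outputs are nearly identical (0.5 vs 0.501), yet the gradient
# explanation jumps from [1, 0] to [11, -10].
out_gap  = abs(net(x, W, v) - net(x_adv, W, v))
expl_gap = np.linalg.norm(saliency(x, W, v) - saliency(x_adv, W, v))
```

The large explanation change for a negligible output change is exactly the kind of high-curvature behaviour of the output manifold that the paper ties its susceptibility bound to.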

Generating User-friendly Explanations for Loan Denials using GANs

Machine Learning

Financial decisions impact our lives, and thus everyone from the regulator to the consumer is interested in fair, sound, and explainable decisions. There is increasing competitive desire and regulatory incentive to deploy AI mindfully within financial services. An important mechanism towards that end is explaining AI decisions to various stakeholders. State-of-the-art explainable AI systems mostly serve AI engineers and offer little to no value to business decision makers, customers, and other stakeholders. Towards addressing this gap, in this work we consider the scenario of explaining loan denials. We build a first-of-its-kind dataset of loan-applicant-friendly explanations. We design a novel Generative Adversarial Network (GAN) that can accommodate smaller datasets to generate user-friendly textual explanations. We demonstrate how our system can also generate explanations serving different purposes: those that help educate the loan applicants, or help them take appropriate action towards a future approval. We hope that our contributions will aid the deployment of AI in financial services by serving the needs of the wider community of users seeking explanations.
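For readers unfamiliar with conditional GANs, the interface implied by the abstract looks roughly as follows: a generator maps an encoding of the denial reason plus noise to a token sequence, and a discriminator scores a (sequence, condition) pair for realism. The sketch below shows only these shapes with random untrained weights; every dimension, name, and design choice is hypothetical and not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, SEQ_LEN, NOISE, COND = 50, 12, 16, 8   # toy sizes, all hypothetical

# Generator: (denial-reason encoding, noise) -> one token per position.
G_w = rng.normal(0.0, 0.1, size=(COND + NOISE, SEQ_LEN * VOCAB))

def generate(cond, noise):
    logits = np.concatenate([cond, noise]) @ G_w
    return logits.reshape(SEQ_LEN, VOCAB).argmax(axis=1)  # greedy decode

# Discriminator: (token sequence, condition) -> realism score in (0, 1).
D_w = rng.normal(0.0, 0.1, size=(SEQ_LEN * VOCAB + COND,))

def discriminate(tokens, cond):
    one_hot = np.eye(VOCAB)[tokens].reshape(-1)
    score = np.concatenate([one_hot, cond]) @ D_w
    return 1.0 / (1.0 + np.exp(-score))  # sigmoid

cond = rng.normal(size=COND)                  # encodes the denial reason
fake = generate(cond, rng.normal(size=NOISE)) # a 12-token "explanation"
realism = discriminate(fake, cond)
```

In actual adversarial training the two sets of weights would be optimized against each other; conditioning both networks on the denial reason is what lets a single model produce explanations tailored to different denial scenarios.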