Most Relevant Explanation in Bayesian Networks

AAAI Conferences

A major inference task in Bayesian networks is explaining why some variables are observed in their particular states in terms of a set of target variables. Existing methods for solving this problem often generate explanations that are either too simple (underspecified) or too complex (overspecified). In this paper, we introduce a method called Most Relevant Explanation (MRE), which finds a partial instantiation of the target variables that maximizes the generalized Bayes factor (GBF) as the best explanation for the given evidence. Our study shows that GBF has several theoretical properties that enable MRE to automatically identify the most relevant target variables in forming its explanation. In particular, the conditional Bayes factor (CBF), defined as the GBF of a new explanation conditioned on an existing explanation, provides a soft measure of the degree of relevance of the variables in the new explanation in explaining the evidence given the existing explanation. As a result, MRE is able to automatically prune less relevant variables from its explanation. We also show that CBF captures well the explaining-away phenomenon that is often represented in Bayesian networks. Moreover, we define two dominance relations between the candidate solutions and use the relations to generalize MRE to find a set of top explanations that is both diverse and representative. Case studies on several benchmark diagnostic Bayesian networks show that MRE is often able to find explanatory hypotheses that are not only precise but also concise.
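For concreteness, the quantities named above can be written out; the following is a brief sketch in our own notation (the precise definitions are given in the paper). For a partial instantiation x of the target variables and evidence e,

\mathrm{GBF}(x; e) = \frac{P(e \mid x)}{P(e \mid \bar{x})}, \qquad \mathrm{CBF}(y; e \mid x) = \frac{P(e \mid y, x)}{P(e \mid \bar{y}, x)},

where \bar{x} denotes the event that the variables in x take any alternative instantiation; MRE then returns \arg\max_{x} \mathrm{GBF}(x; e) over all non-empty partial instantiations x of the target variables.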


A General Framework for Generating Multivariate Explanations in Bayesian Networks

AAAI Conferences

Many existing explanation methods in Bayesian networks, such as Maximum a Posteriori (MAP) assignment and Most Probable Explanation (MPE), generate complete assignments for the target variables. In practice, the set of target variables is often large, but only a few of them may be most relevant in explaining the given evidence. Generating explanations over all the target variables is hence not always desirable. This paper addresses the problem by proposing a new framework called Most Relevant Explanation (MRE), which aims to automatically identify the most relevant target variables. We also discuss in detail a specific instance of the framework that uses the generalized Bayes factor as the relevance measure. Finally, we propose an approximate algorithm based on Reversible Jump MCMC and simulated annealing for solving MRE. Empirical results show that the new approach typically finds much more concise explanations than existing methods.
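To make the optimization concrete, below is a minimal brute-force sketch of MRE over a toy joint distribution. It is illustrative only: the toy model, the variable names, and helpers such as prob and gbf are assumptions of ours, and the paper's actual algorithm uses Reversible Jump MCMC with simulated annealing rather than exhaustive enumeration.

# Illustrative brute-force MRE search over a toy joint distribution.
# The model and names are assumptions for illustration, not the paper's code.
from itertools import combinations, product

VARS = ["A", "B", "E"]   # two target variables A, B and one evidence variable E
JOINT = {}               # full joint table P(A, B, E)
p_a, p_b, leak = 0.3, 0.2, 0.05
for a, b, e in product([0, 1], repeat=3):
    # Noisy-OR-style conditional P(E = 1 | A = a, B = b).
    p_e1 = 1 - (1 - 0.8 * a) * (1 - 0.6 * b) * (1 - leak)
    JOINT[(a, b, e)] = ((p_a if a else 1 - p_a)
                        * (p_b if b else 1 - p_b)
                        * (p_e1 if e else 1 - p_e1))

def prob(assignment):
    """Marginal probability of a partial assignment such as {'A': 1, 'E': 1}."""
    return sum(p for values, p in JOINT.items()
               if all(dict(zip(VARS, values))[v] == s for v, s in assignment.items()))

def gbf(x, e):
    """Generalized Bayes factor GBF(x; e) = P(e | x) / P(e | x_bar)."""
    p_x, p_e, p_xe = prob(x), prob(e), prob({**x, **e})
    return (p_xe / p_x) / ((p_e - p_xe) / (1 - p_x))

def mre(targets, e):
    """Partial instantiation of the targets that maximizes GBF (exhaustive search)."""
    best, best_score = None, float("-inf")
    for k in range(1, len(targets) + 1):
        for subset in combinations(targets, k):
            for values in product([0, 1], repeat=k):
                x = dict(zip(subset, values))
                score = gbf(x, e)
                if score > best_score:
                    best, best_score = x, score
    return best, best_score

print(mre(["A", "B"], {"E": 1}))

On this toy model the single-variable explanation {'A': 1} scores higher than any joint assignment to both A and B, which illustrates the conciseness behavior described in the abstract.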


Defining Explanation in Probabilistic Systems

arXiv.org Artificial Intelligence

As probabilistic systems gain popularity and come into wider use, the need for a mechanism that explains the system's findings and recommendations becomes more critical. The system will also need a mechanism for ordering competing explanations. We examine two representative approaches to explanation in the literature - one due to Gärdenfors and one due to Pearl - and show that both suffer from significant problems. We propose an approach to defining a notion of "better explanation" that combines some of the features of both together with more recent work by Pearl and others on causality.


Most Relevant Explanation: Properties, Algorithms, and Evaluations

arXiv.org Artificial Intelligence

Most Relevant Explanation (MRE) is a method for finding multivariate explanations for given evidence in Bayesian networks [12]. This paper studies the theoretical properties of MRE and develops an algorithm for finding multiple top MRE solutions. Our study shows that MRE relies on an implicit soft relevance measure in automatically identifying the most relevant target variables and pruning less relevant variables from an explanation. The soft measure also enables MRE to capture the intuitive phenomenon of explaining away encoded in Bayesian networks. Furthermore, our study shows that the solution space of MRE has a special lattice structure which yields interesting dominance relations among the solutions. A K-MRE algorithm based on these dominance relations is developed for generating a set of top solutions that are more representative. Our empirical results show that MRE methods are promising approaches for explanation in Bayesian networks.
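As a rough illustration of how dominance relations can be used to diversify a top-K list, the sketch below keeps a candidate only if no strictly smaller sub-explanation achieves an equal or higher relevance score. This is just one plausible reading of a conciseness-based dominance check, offered as an assumption for illustration; the paper's exact definitions differ in detail, and the function names here are hypothetical.

def is_subexplanation(x, y):
    """True if partial instantiation x is strictly contained in y."""
    return len(x) < len(y) and all(y.get(v) == s for v, s in x.items())

def top_k_explanations(candidates, score, k):
    """Rank candidate partial instantiations (dicts) by a relevance score such as
    GBF, and drop any candidate that is dominated, under the assumed relation,
    by one of its own sub-explanations with an equal or higher score."""
    ranked = sorted(candidates, key=score, reverse=True)
    kept = []
    for x in ranked:
        dominated = any(is_subexplanation(y, x) and score(y) >= score(x)
                        for y in ranked)
        if not dominated:
            kept.append(x)
        if len(kept) == k:
            break
    return kept

Calling top_k_explanations(candidates, score=lambda x: gbf(x, evidence), k=5), with gbf as in the earlier sketch, would then return a shorter and more diverse list than ranking candidates by GBF alone.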


Explaining Predictions in Bayesian Networks and Influence Diagrams

AAAI Conferences

As Bayesian Networks and Influence Diagrams are being used ever more widely, the importance of an efficient explanation mechanism becomes more apparent. We focus on predictive explanations, those designed to explain the predictions and recommendations of probabilistic systems. We analyze the issues involved in defining, computing, and evaluating such explanations and present an algorithm to compute them.

As knowledge-based reasoning systems begin addressing real-world problems, they are often designed to be used not by experts but by people unfamiliar with the domain. Such people are unlikely to accept a system's prediction or advice without some explanation. In addition, the systems' ever-increasing size makes their computations more and more difficult to follow, even for their creators. This situation makes an explanation mechanism critical for making these systems useful and widely accepted. Probabilistic systems, such as Bayesian Networks (Pearl 1988) and Influence Diagrams (Howard and Matheson 1984), need such a mechanism even more than others. Human judgment under uncertainty differs considerably from the idealized rationality of probability and decision theories.