Most Relevant Explanation in Bayesian Networks

AAAI Conferences

A major inference task in Bayesian networks is explaining why some variables are observed in their particular states using a set of target variables. Existing methods for solving this problem often generate explanations that are either too simple (underspecified) or too complex (overspecified). In this paper, we introduce a method called Most Relevant Explanation (MRE), which finds a partial instantiation of the target variables that maximizes the generalized Bayes factor (GBF) as the best explanation for the given evidence. Our study shows that GBF has several theoretical properties that enable MRE to automatically identify the most relevant target variables in forming its explanation. In particular, the conditional Bayes factor (CBF), defined as the GBF of a new explanation conditioned on an existing explanation, provides a soft measure of how relevant the variables in the new explanation are to explaining the evidence, given the existing explanation. As a result, MRE is able to automatically prune less relevant variables from its explanation. We also show that CBF captures well the explaining-away phenomenon often represented in Bayesian networks. Moreover, we define two dominance relations between candidate solutions and use them to generalize MRE to find a set of top explanations that is both diverse and representative. Case studies on several benchmark diagnostic Bayesian networks show that MRE is often able to find explanatory hypotheses that are not only precise but also concise.
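As a concrete illustration of the search this abstract describes, here is a minimal brute-force sketch in Python, assuming the usual reading of the generalized Bayes factor, GBF(x; e) = P(e | x) / P(e | not-x), and a toy joint distribution over binary variables given as an explicit table. The names joint, target_vars, and evidence, as well as the exhaustive enumeration itself, are illustrative assumptions rather than the paper's algorithm.

```python
from itertools import combinations, product

def marginal(joint, assignment):
    """P(assignment), where the joint is {full_world_tuple: probability}
    and assignment maps variable index -> value."""
    return sum(p for world, p in joint.items()
               if all(world[v] == val for v, val in assignment.items()))

def gbf(joint, explanation, evidence):
    """GBF(x; e) = P(e | x) / P(e | not-x), with not-x the complement of x."""
    p_x = marginal(joint, explanation)
    if p_x <= 0.0 or p_x >= 1.0:
        return float("-inf")          # degenerate candidate, skip it
    p_e = marginal(joint, evidence)
    p_xe = marginal(joint, {**explanation, **evidence})
    p_e_given_x = p_xe / p_x
    p_e_given_not_x = (p_e - p_xe) / (1.0 - p_x)
    if p_e_given_not_x == 0.0:
        return float("inf")
    return p_e_given_x / p_e_given_not_x

def mre(joint, target_vars, evidence):
    """Exhaustively score every non-empty partial instantiation of the
    target variables by GBF and return the best one."""
    best, best_score = None, float("-inf")
    for k in range(1, len(target_vars) + 1):
        for subset in combinations(target_vars, k):
            for values in product([0, 1], repeat=k):
                x = dict(zip(subset, values))
                score = gbf(joint, x, evidence)
                if score > best_score:
                    best, best_score = x, score
    return best, best_score
```

With three binary variables, for example, mre(joint, target_vars=[0, 1], evidence={2: 1}) would score every non-empty partial instantiation of the first two variables and return the highest-GBF one; real MRE solvers avoid this exhaustive enumeration.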


Most Relevant Explanation in Bayesian Networks

Journal of Artificial Intelligence Research

A major inference task in Bayesian networks is explaining why some variables are observed in their particular states using a set of target variables. Existing methods for solving this problem often generate explanations that are either too simple (underspecified) or too complex (overspecified). In this paper, we introduce a method called Most Relevant Explanation (MRE), which finds a partial instantiation of the target variables that maximizes the generalized Bayes factor (GBF) as the best explanation for the given evidence. Our study shows that GBF has several theoretical properties that enable MRE to automatically identify the most relevant target variables in forming its explanation. In particular, the conditional Bayes factor (CBF), defined as the GBF of a new explanation conditioned on an existing explanation, provides a soft measure of how relevant the variables in the new explanation are to explaining the evidence, given the existing explanation. As a result, MRE is able to automatically prune less relevant variables from its explanation. We also show that CBF captures well the explaining-away phenomenon often represented in Bayesian networks. Moreover, we define two dominance relations between candidate solutions and use them to generalize MRE to find a set of top explanations that is both diverse and representative. Case studies on several benchmark diagnostic Bayesian networks show that MRE is often able to find explanatory hypotheses that are not only precise but also concise.
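To complement the GBF sketch above, the following toy computation illustrates one plausible reading of the conditional Bayes factor described here: the Bayes factor of a new partial explanation y for the evidence e, evaluated after conditioning on an existing explanation x. The helper and variable names are illustrative, and the exact formulation in the paper may differ.

```python
def marginal(joint, assignment):
    """P(assignment) under a toy joint given as {full_world_tuple: probability}."""
    return sum(p for world, p in joint.items()
               if all(world[v] == val for v, val in assignment.items()))

def cbf(joint, new_expl, old_expl, evidence):
    """CBF(y; e | x), read here as P(e | y, x) / P(e | not-y, x).
    No guards against zero-probability corner cases; illustration only."""
    p_x   = marginal(joint, old_expl)
    p_xy  = marginal(joint, {**old_expl, **new_expl})
    p_xe  = marginal(joint, {**old_expl, **evidence})
    p_xye = marginal(joint, {**old_expl, **new_expl, **evidence})
    p_e_given_xy     = p_xye / p_xy
    p_e_given_noty_x = (p_xe - p_xye) / (p_x - p_xy)   # worlds with x but not y
    return p_e_given_xy / p_e_given_noty_x
```

A CBF close to 1 means the variables added by y contribute little beyond x, which is the kind of signal the abstract says MRE uses to prune less relevant variables.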


Explaining Predictions in Bayesian Networks and Influence Diagrams

AAAI Conferences

As Bayesian Networks and Influence Diagrams are being used more and more widely, the importance of an efficient explanation mechanism becomes more apparent. We focus on predictive explanations, the ones designed to explain predictions and recommendations of probabilistic systems. We analyze the issues involved in defining, computing and evaluating such explanations and present an algorithm to compute them.

Introduction: As knowledge-based reasoning systems begin addressing real-world problems, they are often designed to be used not by experts but by people unfamiliar with the domain. Such people are unlikely to accept a system's prediction or advice without some explanation. In addition, the systems' ever-increasing size makes their computations more and more difficult to follow, even for their creators. This situation makes an explanation mechanism critical for making these systems useful and widely accepted. Probabilistic systems, such as Bayesian Networks (Pearl 1988) and Influence Diagrams (Howard and Matheson 1984), need such a mechanism even more than others. Human judgment under uncertainty differs considerably from the idealized rationality of probability and decision theories.


Craig Boutilier and Verónica Becher

AAAI Conferences

We propose a natural model of abduction based on the revision of the epistemic state of an agent. We require that explanations be sufficient to induce belief in an observation in a manner that adequately accounts for factual and hypothetical observations. Our model will generate explanations that nonmonotonically predict an observation, thus generalizing most current accounts, which require some deductive relationship between explanation and observation. It also provides a natural preference ordering on explanations, defined in terms of normality or plausibility. We reconstruct the Theorist system in our framework, and show how it can be extended to accommodate our predictive explanations and semantic preferences on explanations.
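The following toy sketch conveys the flavor of this proposal, assuming a ranked (Spohn-style) epistemic state as a stand-in for the paper's revision model: an explanation "predicts" an observation if the observation holds in all most-plausible worlds satisfying the explanation, and explanations are preferred according to the plausibility of those worlds. This is an illustration of the general idea only, not the authors' construction.

```python
def most_plausible(worlds, ranks, condition):
    """Worlds satisfying `condition`, restricted to the lowest (best) rank among them."""
    satisfying = [w for w in worlds if condition(w)]
    if not satisfying:
        return []
    best = min(ranks[w] for w in satisfying)
    return [w for w in satisfying if ranks[w] == best]

def predicts(worlds, ranks, explanation, observation):
    """Does revising by `explanation` induce belief in `observation`?"""
    revised = most_plausible(worlds, ranks, explanation)
    return bool(revised) and all(observation(w) for w in revised)

def preferred_explanations(worlds, ranks, candidates, observation):
    """Among candidates that predict the observation, keep the most plausible ones."""
    good = [c for c in candidates if predicts(worlds, ranks, c, observation)]
    if not good:
        return []
    def plaus(c):   # plausibility of an explanation = rank of its best worlds
        return min(ranks[w] for w in worlds if c(w))
    best = min(plaus(c) for c in good)
    return [c for c in good if plaus(c) == best]
```

Here worlds can be any hashable objects (e.g., tuples of truth values), ranks a dict from world to a non-negative integer (lower means more normal), and candidate explanations and observations plain Python predicates over worlds.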


Explanation, Irrelevance and Independence

AAAI Conferences

We evaluate current explanation schemes and find that they are either insufficiently general or suffer from other serious drawbacks. We propose a domain-independent explanation system that is based on ignoring irrelevant variables in a probabilistic setting. We then prove important properties of some specific irrelevance-based schemes and discuss how to implement them.
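As a rough illustration of the "ignoring irrelevant variables" idea (not the paper's specific schemes), the sketch below greedily drops from a candidate explanation any variable whose assignment leaves the probability of the evidence essentially unchanged. The joint-table representation and the tolerance parameter are illustrative assumptions.

```python
def marginal(joint, assignment):
    """P(assignment) under a toy joint {full_world_tuple: probability}."""
    return sum(p for world, p in joint.items()
               if all(world[v] == val for v, val in assignment.items()))

def prune_irrelevant(joint, explanation, evidence, tol=1e-9):
    """Greedily remove variables whose removal leaves P(evidence | explanation)
    essentially unchanged, i.e. variables irrelevant to the evidence here."""
    expl = dict(explanation)
    changed = True
    while changed:
        changed = False
        for var in list(expl):
            reduced = {v: val for v, val in expl.items() if v != var}
            p_full = marginal(joint, {**expl, **evidence}) / marginal(joint, expl)
            p_red = (marginal(joint, {**reduced, **evidence})
                     / marginal(joint, reduced))
            if abs(p_full - p_red) <= tol:
                expl = reduced        # dropping var changed nothing; prune it
                changed = True
                break
    return expl
```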