Directed Networks



Probabilistic Horn abduction and Bayesian networks

Classics

This paper presents a simple framework for Horn-clause abduction, with probabilities associated with hypotheses. The framework incorporates assumptions about the rule base and independence assumptions amongst hypotheses. It is shown how any probabilistic knowledge representable in a discrete Bayesian belief network can be represented in this framework. The main contribution is in finding a relationship between logical and probabilistic notions of evidential reasoning. This provides a useful representation language in its own right, offering a compromise between heuristic and epistemic adequacy. It also shows how Bayesian networks can be extended beyond a propositional language.
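
As a concrete illustration of the encoding (a minimal sketch, not the paper's own code; the fire/alarm network and all names are hypothetical): each row of a node's conditional probability table becomes an independent assumable hypothesis, an explanation of an observation is a set of hypotheses, and conditional probabilities fall out as ratios of sums of explanation probabilities.

```python
# Minimal sketch of probabilistic Horn abduction over a two-node network
# (fire -> alarm); the network and all names here are hypothetical.
# Rules would be e.g. "alarm(V) <- fire(F) & ca(V, F)", where the ca(...)
# atoms are assumable hypotheses carrying the conditional probabilities.
priors = {
    ("fire", "t"): 0.01, ("fire", "f"): 0.99,          # prior on fire
    ("ca", "t", "t"): 0.95, ("ca", "f", "t"): 0.05,    # P(alarm | fire=t)
    ("ca", "t", "f"): 0.02, ("ca", "f", "f"): 0.98,    # P(alarm | fire=f)
}

def explanation_prob(hyps):
    # Hypotheses are assumed independent, so an explanation's probability
    # is the product of its hypotheses' priors.
    p = 1.0
    for h in hyps:
        p *= priors[h]
    return p

# The two explanations of the observation alarm(t), one per value of fire.
explanations = [{("fire", f), ("ca", "t", f)} for f in ("t", "f")]
p_alarm = sum(explanation_prob(e) for e in explanations)
p_fire_given_alarm = explanation_prob(explanations[0]) / p_alarm
print(p_alarm, p_fire_given_alarm)   # 0.0293, ~0.324
```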


Approximating probabilistic inference in Bayesian belief networks is NP-hard

Classics

It is known that exact computation of conditional probabilities in belief networks is NP-hard. Many investigators in the AI community have tacitly assumed that algorithms for performing approximate inference with belief networks are of polynomial complexity. Indeed, special cases of approximate inference can be performed in time polynomial in the input size. However, we have discovered that the general problem of approximating conditional probabilities with belief networks is, like exact inference, NP-hard. We develop a complexity analysis to elucidate the difficulty of approximate probabilistic inference.
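
The intuition behind the hardness of approximation can be seen in the simplest sampling scheme. The sketch below (illustrative only, not from the paper; the two-node fire/alarm network is hypothetical) estimates a conditional probability by rejection sampling: when the conditioning evidence is improbable, nearly all samples are rejected, so the number of samples needed for a given accuracy blows up instead of staying polynomial.

```python
# Rejection sampling for P(fire | alarm) in a hypothetical two-node network.
# As P(alarm) shrinks, almost every sample is thrown away; this is the
# intuition for why approximation is hard when evidence is unlikely.
import random

def sample_once():
    fire = random.random() < 0.01                        # P(fire) = 0.01
    alarm = random.random() < (0.95 if fire else 0.02)   # P(alarm | fire)
    return fire, alarm

def estimate_p_fire_given_alarm(n):
    accepted = hits = 0
    for _ in range(n):
        fire, alarm = sample_once()
        if alarm:                 # keep only samples matching the evidence
            accepted += 1
            hits += fire
    return hits / accepted if accepted else float("nan")

print(estimate_p_fire_given_alarm(100_000))   # ~0.32, but few samples survive
```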



A Bayesian model of plan recognition

Classics

We argue that the problem of plan recognition, inferring an agent's plan from observations, is largely a problem of inference under conditions of uncertainty. We present an approach to the plan recognition problem that is based on Bayesian probability theory. In attempting to solve a plan recognition problem we first retrieve candidate explanations. These explanations (sometimes only the most promising ones) are assembled into a plan recognition Bayesian network, which is a representation of a probability distribution over the set of possible explanations. We perform Bayesian updating to choose the most likely interpretation for the set of observed actions.
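
A minimal sketch of that updating step (the candidate plans, priors, and likelihoods below are hypothetical, not from the paper): once the candidate explanations have been retrieved, choosing the most likely interpretation amounts to normalizing prior times likelihood over the candidate set.

```python
# Bayesian updating over a hypothetical set of candidate explanations for
# some observed kitchen actions. Each candidate carries a prior and the
# likelihood P(observations | plan); the posterior is their normalized product.
candidates = {
    "cook_dinner":   {"prior": 0.6, "likelihood": 0.30},
    "make_coffee":   {"prior": 0.3, "likelihood": 0.05},
    "clean_kitchen": {"prior": 0.1, "likelihood": 0.20},
}

z = sum(c["prior"] * c["likelihood"] for c in candidates.values())
posterior = {plan: c["prior"] * c["likelihood"] / z
             for plan, c in candidates.items()}
best = max(posterior, key=posterior.get)
print(posterior, best)   # "cook_dinner" is the most likely interpretation
```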


A practical Bayesian framework for back-propagation networks

Classics

A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian "evidence" automatically embodies "Occam's razor," penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.
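
A hedged sketch of the evidence computation on a toy model: the code below applies the Gaussian (Laplace) approximation to a linear model with a quadratic weight-decay term, which is the same calculation in miniature; the paper's setting is a feedforward network, and all numbers here are illustrative.

```python
# Laplace-approximation "evidence" and effective parameter count for a toy
# linear model with weight decay; hyperparameters and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=50)

alpha, beta = 1.0, 100.0                    # weight-decay and noise precisions
A = alpha * np.eye(3) + beta * X.T @ X      # Hessian of the regularized loss
w_mp = beta * np.linalg.solve(A, X.T @ y)   # most probable weights

# log evidence: best-fit likelihood minus an "Occam factor" that penalizes
# overflexible models (large det A relative to the prior volume).
resid = y - X @ w_mp
log_ev = (-0.5 * beta * resid @ resid - 0.5 * alpha * w_mp @ w_mp
          - 0.5 * np.linalg.slogdet(A)[1]
          + 1.5 * np.log(alpha)                  # (k/2) log alpha, k = 3
          + 25 * np.log(beta / (2 * np.pi)))     # (N/2) log(beta/2pi), N = 50
gamma = 3 - alpha * np.trace(np.linalg.inv(A))   # effective number of parameters
print(log_ev, gamma)
```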


The computational complexity of probabilistic inference using Bayesian belief networks

Classics

Bayesian belief networks provide a natural, efficient method for representing probabilistic dependencies among a set of variables. For these reasons, numerous researchers are exploring the use of belief networks as a knowledge representation in artificial intelligence. Algorithms have been developed previously for efficient probabilistic inference using special classes of belief networks. More general classes of belief networks, however, have eluded efforts to develop efficient inference algorithms. We show that probabilistic inference using belief networks is NP-hard.
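
To make the source of the difficulty concrete, here is an illustrative brute-force routine (a sketch, not from the paper; the tiny fire/alarm network is hypothetical). It answers a query by enumerating every joint assignment of the unobserved variables, and that exponential enumeration is exactly the cost that algorithms for special classes avoid and that, by this result, no general algorithm can avoid unless P = NP.

```python
# Brute-force conditional-probability query over a binary belief network.
# parents[v] lists v's parents; cpts[v] maps a tuple of parent values to
# P(v = 1 | parents). The loop over 2^n assignments is the bottleneck.
from itertools import product

def joint_prob(assign, cpts, parents):
    p = 1.0
    for v, pa in parents.items():
        p1 = cpts[v][tuple(assign[u] for u in pa)]
        p *= p1 if assign[v] else 1.0 - p1
    return p

def query(target, evidence, cpts, parents):
    free = [v for v in parents if v != target and v not in evidence]
    num = den = 0.0
    for bits in product((0, 1), repeat=len(free)):   # 2^n blow-up here
        a = dict(zip(free, bits), **evidence)
        for tv in (0, 1):
            a[target] = tv
            p = joint_prob(a, cpts, parents)
            den += p
            if tv == 1:
                num += p
    return num / den

parents = {"fire": (), "alarm": ("fire",)}
cpts = {"fire": {(): 0.01}, "alarm": {(0,): 0.02, (1,): 0.95}}
print(query("fire", {"alarm": 1}, cpts, parents))   # ~0.324
```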


HUGIN: A shell for building Bayesian belief universes for expert systems

Classics

Causal probabilistic networks have proved to be a useful knowledge representation tool for modelling domains where causal relations in a broad sense are a natural way of relating domain objects and where uncertainty is inherent in these relations. This paper outlines an implementation, the HUGIN shell, for handling a domain model expressed by a causal probabilistic network. The only topological restriction imposed on the network is that it must not contain any directed loops. The approach is illustrated step by step by solving a genetic breeding problem. A graph representation of the domain model is interactively created by using instances of the basic network components, nodes and arcs, as building blocks. This structure, together with the quantitative relations between nodes and their immediate causes expressed as conditional probabilities, is automatically transformed into a tree structure, a junction tree. Here a computationally efficient and conceptually simple algebra of Bayesian belief universes supports incorporation of new evidence, propagation of information, and calculation of revised beliefs in the states of the nodes in the network. Finally, as an example of a real-world application, MUNIN, an expert system for electromyography, is discussed. (IJCAI-89, Vol. 2, pp. 1080–1085)
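
For a modern flavor of the same pipeline, the sketch below (assuming the third-party pgmpy library is installed; the two-node network is illustrative, not the paper's genetic example) builds a small causal probabilistic network and queries it through belief propagation, which compiles the model into a junction tree internally, much as HUGIN does. In older pgmpy releases the model class is named BayesianModel.

```python
# A small causal probabilistic network queried via a junction tree,
# assuming pgmpy is available (pip install pgmpy); network is illustrative.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import BeliefPropagation

model = BayesianNetwork([("fire", "alarm")])   # arcs; no directed loops allowed
model.add_cpds(
    TabularCPD("fire", 2, [[0.99], [0.01]]),               # P(fire)
    TabularCPD("alarm", 2, [[0.98, 0.05], [0.02, 0.95]],   # P(alarm | fire)
               evidence=["fire"], evidence_card=[2]),
)

bp = BeliefPropagation(model)   # compiles the DAG into a junction tree
print(bp.query(variables=["fire"], evidence={"alarm": 1}))   # revised beliefs
```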