Uncertainty

Essentials of Artificial Intelligence

A short overview of artificial intelligence and its relationship with fuzzy logic is provided. We emphasize the role fuzzy logic can play in extending some of the models of artificial intelligence.
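
As a minimal sketch of the machinery involved (the membership functions and variable names below are invented for illustration, not taken from the paper), fuzzy logic replaces two-valued truth with degrees of membership in [0, 1], combined with Zadeh's min/max operators:

```python
# Fuzzy-set sketch using Zadeh's min/max semantics.
# The 'warm' and 'humid' membership functions are invented for illustration.

def warm(temp_c: float) -> float:
    """Triangular membership: fully warm at 25 C, not warm below 15 C or above 35 C."""
    if temp_c <= 15 or temp_c >= 35:
        return 0.0
    if temp_c <= 25:
        return (temp_c - 15) / 10
    return (35 - temp_c) / 10

def humid(rh_percent: float) -> float:
    """Ramp membership: rises linearly from 40% to 80% relative humidity."""
    return min(1.0, max(0.0, (rh_percent - 40) / 40))

mu_warm, mu_humid = warm(28.0), humid(70.0)
mu_and = min(mu_warm, mu_humid)   # "warm AND humid"
mu_or = max(mu_warm, mu_humid)    # "warm OR humid"
mu_not = 1.0 - mu_warm            # "NOT warm"
print(f"warm={mu_warm:.2f} humid={mu_humid:.2f} "
      f"and={mu_and:.2f} or={mu_or:.2f} not={mu_not:.2f}")
```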


Probabilistic Horn abduction and Bayesian networks

This paper presents a simple framework for Horn-clause abduction, with probabilities associated with hypotheses. The framework incorporates assumptions about the rule base and independence assumptions among hypotheses. It is shown how any probabilistic knowledge representable in a discrete Bayesian belief network can be represented in this framework. The main contribution is in finding a relationship between logical and probabilistic notions of evidential reasoning. The result is a useful representation language in its own right, offering a compromise between heuristic and epistemic adequacy. It also shows how Bayesian networks can be extended beyond a propositional language.
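
A minimal sketch of the framework's core computation, under an invented toy rule base (the clauses, hypothesis names, and priors below are not from the paper): each explanation of a goal is a set of independent hypotheses, so its probability is a product of priors, and with rule bodies written so that explanations are mutually exclusive, the probability of the goal is the sum over its explanations.

```python
import math

# Toy probabilistic Horn abduction knowledge base (rules, hypothesis names,
# and priors all invented for illustration).  Hypotheses are assumed mutually
# independent; rule bodies are written so that the explanations of any atom
# are mutually exclusive, which the framework requires for the sum below.
PRIORS = {"rain": 0.2, "no_rain": 0.8, "sprinkler": 0.1, "no_sprinkler": 0.9}

RULES = {  # head: list of alternative bodies (Horn clauses)
    "wet_grass": [["rain"], ["no_rain", "sprinkler"]],
}

def explanations(goal):
    """Enumerate the sets of hypotheses that logically entail the goal."""
    if goal in PRIORS:                      # an assumable explains itself
        return [frozenset([goal])]
    expls = []
    for body in RULES.get(goal, []):
        partial = [frozenset()]
        for atom in body:
            partial = [e | e2 for e in partial for e2 in explanations(atom)]
        expls.extend(partial)
    return expls

def prob(goal):
    """P(goal): sum over exclusive explanations of the product of priors."""
    return sum(math.prod(PRIORS[h] for h in expl) for expl in explanations(goal))

print(explanations("wet_grass"))  # {rain} and {no_rain, sprinkler}
print(prob("wet_grass"))          # 0.2 + 0.8 * 0.1 = 0.28
```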


Approximating probabilistic inference in Bayesian belief networks is NP-hard

It is known that exact computation of conditional probabilities in belief networks is NP-hard. Many investigators in the AI community have nevertheless tacitly assumed that algorithms for approximate inference with belief networks run in polynomial time. Indeed, special cases of approximate inference can be performed in time polynomial in the input size. However, we show that the general problem of approximating conditional probabilities with belief networks is, like exact inference, NP-hard. We develop a complexity analysis to elucidate the difficulty of approximate probabilistic inference.
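
To make the source of the difficulty concrete, here is conditional-probability computation by brute-force enumeration on a toy network (the structure and CPT numbers are invented for illustration). The enumeration visits every assignment of the unobserved variables, so its cost grows as 2^n in the number of binary variables; the NP-hardness results say that, in the general case, this kind of blowup cannot be escaped by any polynomial-time method, exact or approximate, unless P = NP.

```python
from itertools import product

# Brute-force exact inference on a toy belief network:
# Burglary -> Alarm <- Earthquake (all CPT numbers invented for illustration).
def p_burglary(b):
    return 0.01 if b else 0.99

def p_earthquake(e):
    return 0.02 if e else 0.98

def p_alarm(a, b, e):
    p_true = {(1, 1): 0.95, (1, 0): 0.94, (0, 1): 0.29, (0, 0): 0.001}[(b, e)]
    return p_true if a else 1.0 - p_true

def joint(b, e, a):
    # Chain rule along the network structure.
    return p_burglary(b) * p_earthquake(e) * p_alarm(a, b, e)

# P(Burglary=1 | Alarm=1) by summing the joint over all hidden assignments.
# For n binary variables this style of loop runs 2^n times; the hardness
# results indicate no polynomial-time algorithm handles the general case.
num = sum(joint(1, e, 1) for e in (0, 1))
den = sum(joint(b, e, 1) for b, e in product((0, 1), repeat=2))
print(f"P(Burglary=1 | Alarm=1) = {num / den:.4f}")
```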




Understanding evidential reasoning

We address recent criticisms of evidential reasoning, an approach to the analysis of imprecise and uncertain information based on the Dempster-Shafer calculus of evidence. We show that evidential reasoning can be interpreted in terms of classical probability theory and that the Dempster-Shafer calculus of evidence may be considered a form of generalized probabilistic reasoning based on the representation of probabilistic ignorance by intervals of possible values. In particular, we emphasize that it is not necessary to resort to nonprobabilistic or subjectivist explanations to justify the validity of the approach. We answer conceptual criticisms of evidential reasoning primarily by showing that the critics confuse the current state of development of the theory (mainly theoretical limitations in the treatment of conditional information) with its potential usefulness in treating a wide variety of uncertainty analysis problems. Similarly, we argue that the supposed lack of decision-support schemes in generalized probability approaches is not a theoretical handicap but rather a reflection of basic informational shortcomings in the available evidence, and that making such shortcomings explicit is a desirable property of any formal approximate reasoning approach.
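
The interval representation of ignorance mentioned above can be made concrete with a small sketch (the frame of discernment and mass assignments are invented for illustration): belief and plausibility bound the probability of a hypothesis from below and above, and the gap between them is exactly the unresolved ignorance.

```python
# Dempster-Shafer sketch: a mass function over a three-element frame of
# discernment (masses invented for illustration).  Mass assigned to the whole
# frame encodes ignorance, which is why Bel and Pl bracket the probability.
FRAME = frozenset({"a", "b", "c"})
MASS = {
    frozenset({"a"}): 0.4,
    frozenset({"a", "b"}): 0.3,
    FRAME: 0.3,            # belief not committed to any proper subset
}

def belief(h):
    """Bel(H): mass committed to subsets of H -- the lower probability bound."""
    return sum(m for s, m in MASS.items() if s <= h)

def plausibility(h):
    """Pl(H): mass not contradicting H -- the upper probability bound."""
    return sum(m for s, m in MASS.items() if s & h)

H = frozenset({"a"})
print(f"P(a) is only known to lie in [{belief(H):.2f}, {plausibility(H):.2f}]")
```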


A computational scheme for reasoning in dynamic probabilistic networks

A computational scheme for reasoning about dynamic systems using (causal) probabilistic networks is presented. The scheme is based on the framework of Lauritzen and Spiegelhalter (1988) and may be viewed as a generalization of the inference methods of classical time-series analysis, in the sense that it allows description of non-linear, multivariate dynamic systems with complex conditional independence structures. Further, the scheme provides a method for efficient backward smoothing and opens possibilities for efficient approximate forecasting. The scheme has been implemented on top of the HUGIN shell.
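
The filtering-and-smoothing pattern the scheme generalizes can be illustrated on the simplest dynamic network, a two-state hidden Markov chain (all transition and observation numbers below are invented, and the paper's scheme handles multivariate state with complex conditional-independence structure rather than this direct recursion): a forward pass filters, and a backward pass yields the smoothed posteriors the abstract calls backward smoothing.

```python
# Forward filtering and backward smoothing on the simplest dynamic network,
# a two-state hidden Markov chain (all numbers invented for illustration).
PRIOR = [0.5, 0.5]                     # P(x_0)
TRANS = [[0.7, 0.3], [0.3, 0.7]]       # TRANS[i][j] = P(x_t = j | x_{t-1} = i)
OBS = [[0.9, 0.2], [0.1, 0.8]]         # OBS[y][j]   = P(y_t = y | x_t = j)

def normalize(v):
    s = sum(v)
    return [p / s for p in v]

def forward(ys):
    """Filtering: the t-th entry is P(x_t | y_1..y_t)."""
    alphas, belief = [], PRIOR
    for y in ys:
        predicted = [sum(TRANS[i][j] * belief[i] for i in range(2)) for j in range(2)]
        belief = normalize([OBS[y][j] * predicted[j] for j in range(2)])
        alphas.append(belief)
    return alphas

def backward(ys):
    """Backward pass: the t-th entry is proportional to P(y_{t+1}..y_T | x_t)."""
    betas = [[1.0, 1.0]]               # beta at the final time step
    for y in reversed(ys[1:]):
        prev = betas[-1]
        betas.append(normalize([sum(TRANS[i][j] * OBS[y][j] * prev[j] for j in range(2))
                                for i in range(2)]))
    return list(reversed(betas))

ys = [0, 0, 1]                         # an observation sequence
smoothed = [normalize([a * b for a, b in zip(al, be)])
            for al, be in zip(forward(ys), backward(ys))]
print(smoothed)                        # P(x_t | y_1..y_T) for each t
```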


A practical Bayesian framework for back-propagation networks

A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian "evidence" automatically embodies "Occam's razor," penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.
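
A miniature of the evidence idea, in a setting where it is exact rather than approximated (Bayesian linear-in-parameters regression, with a Gaussian prior of precision alpha playing the role of the weight-decay term; the paper applies analogous quantities to nonlinear networks). All data and hyperparameter values below are invented for illustration; comparing the log evidence across alpha values is the kind of objective regularizer choice listed as item (3) above.

```python
import numpy as np

# Evidence framework in miniature: Bayesian linear regression with a Gaussian
# prior (precision alpha, i.e. weight decay) and Gaussian noise (precision
# beta).  All data and hyperparameter values are invented for illustration.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 30)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(x.size)
Phi = np.vander(x, 6, increasing=True)          # polynomial basis, M = 6 weights
N, M = Phi.shape
beta = 100.0                                    # noise precision, assumed known

def log_evidence(alpha):
    """ln p(y | alpha, beta) for the linear-Gaussian model (closed form)."""
    A = alpha * np.eye(M) + beta * Phi.T @ Phi  # posterior precision (Hessian)
    m = beta * np.linalg.solve(A, Phi.T @ y)    # posterior mean of the weights
    err = beta / 2 * np.sum((y - Phi @ m) ** 2) + alpha / 2 * m @ m
    return (M / 2 * np.log(alpha) + N / 2 * np.log(beta) - err
            - np.linalg.slogdet(A)[1] / 2 - N / 2 * np.log(2 * np.pi))

# The evidence, not the training error, supplies the objective criterion for
# choosing the weight-decay strength alpha -- the "Occam's razor" effect.
for alpha in (1e-3, 1e-1, 1e1, 1e3):
    print(f"alpha={alpha:8.3f}  log evidence={log_evidence(alpha):8.2f}")
```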