Decision Analysis and Expert Systems

AI Magazine

Decision analysis and expert systems are technologies intended to support human reasoning and decision making by formalizing expert knowledge so that it is amenable to mechanized reasoning methods. Despite some common goals, these two paradigms have evolved divergently, with fundamental differences in principle and practice. Recent recognition of the deficiencies of traditional AI techniques for treating uncertainty, coupled with the development of belief nets and influence diagrams, is stimulating renewed enthusiasm among AI researchers for probabilistic reasoning and decision analysis. We present the key ideas of decision analysis and review recent research and applications that aim toward a marriage of these two paradigms. This work combines decision-analytic methods for structuring and encoding uncertain knowledge and preferences with computational techniques from AI for knowledge representation, inference, and explanation. We end by outlining the research issues that remain before the potential of this enterprise is fully developed.
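As a concrete illustration of the decision-analytic machinery the abstract refers to, the sketch below evaluates a minimal influence diagram with one chance node, one decision node, and one value node by computing expected utility. The variable names and numbers are hypothetical, not drawn from the article.

```python
# Minimal sketch: evaluating a tiny influence diagram by expected utility.
# One chance node (does the disorder hold?), one decision node, one value
# node. All names and numbers here are hypothetical assumptions.

P_DISORDER = 0.3  # prior probability that the uncertain state holds

# Utility table indexed by (decision, state of the chance node)
UTILITY = {
    ("treat", True): 80, ("treat", False): 60,
    ("wait",  True): 10, ("wait",  False): 100,
}

def expected_utility(decision, p=P_DISORDER):
    """Average the decision's utility over the chance node's distribution."""
    return p * UTILITY[(decision, True)] + (1 - p) * UTILITY[(decision, False)]

for d in ("treat", "wait"):
    print(f"EU({d}) = {expected_utility(d):.1f}")
print("recommended:", max(("treat", "wait"), key=expected_utility))
```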



AAAI Conferences

In order to compare the expert system and decision analysis approaches, each was applied to the same task, namely the diagnosis and treatment of root disorders in apple trees. This experiment illustrates a variety of theoretical and practical differences between them, including the semantics of the network representations (inference net vs. influence diagram or Bayesian belief net), approaches to modelling uncertainty and preferences, the relative effort required, and their attitudes to human reasoning under uncertainty: as an ideal to be emulated, or as unreliable and to be improved upon. As schemes for representing uncertainty in AI proliferate and the debate about their various merits intensifies [Kanal & Lemmer, 1986; Gale, 1986], it is becoming increasingly important to understand their relative advantages and drawbacks. One major axis of contention has been between proponents of various heuristic, qualitative, and fuzzy logic schemes, who argue that these are more compatible with human mental representations and consequently more practical to build and explain [Buchanan & Shortliffe, 1984; Cohen, 1985; Zadeh, 1986], and advocates of probabilistic schemes, who emphasize the virtues of being based on a normative theory of decision making under uncertainty [Pearl, 1985; Cheeseman, 1985; Spiegelhalter, 1986]. The latter have argued the advantages of approaches that are coherent, i.e., strictly consistent with the axioms of probability, over the earlier approximate Bayesian schemes developed for Mycin and Prospector [Duda et al., 1976].
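To make the contrast concrete, here is a hedged sketch of the two styles of evidence combination at issue: a Mycin-style certainty-factor rule and a Prospector-style odds-likelihood Bayesian update. The numbers are illustrative, and both systems' actual schemes are considerably more elaborate than shown here.

```python
# Contrasting heuristic and coherent evidence combination. All figures
# below are hypothetical illustrations, not from either system's rule base.

def combine_cf(cf1, cf2):
    """Mycin-style parallel combination of two positive certainty factors."""
    return cf1 + cf2 * (1 - cf1)

def bayes_update(prior, likelihood_ratios):
    """Odds-likelihood Bayesian update (Prospector-style), assuming the
    pieces of evidence are conditionally independent given the hypothesis."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

print(combine_cf(0.6, 0.5))            # 0.8: heuristic, no probabilistic semantics
print(bayes_update(0.2, [4.0, 2.5]))   # ~0.714: coherent posterior, given the assumptions
```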


Qualitative Propagation and Scenario-based Explanation of Probabilistic Reasoning

arXiv.org Artificial Intelligence

Comprehensible explanations of probabilistic reasoning are a prerequisite for wider acceptance of Bayesian methods in expert systems and decision support systems. A study of human reasoning under uncertainty suggests two different strategies for explaining probabilistic reasoning. The first, qualitative belief propagation, traces the qualitative effect of evidence through a belief network from one variable to the next. This propagation algorithm is an alternative to the graph reduction algorithms of Wellman (1988) for inference in qualitative probabilistic networks. It is based on a qualitative analysis of intercausal reasoning, which is a generalization of Pearl's "explaining away" and an alternative to Wellman's definition of qualitative synergy. The second, scenario-based explanation, involves generating alternative causal "stories" that account for the evidence; comparing a few of the most probable scenarios provides an approximate way to explain the results of probabilistic reasoning. Both schemes employ causal as well as probabilistic knowledge. Probabilities may be presented as phrases, numbers, or both, and users can control the style, abstraction, and completeness of explanations.
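The sketch below illustrates the flavor of qualitative sign propagation described in the abstract, tracing the sign of an evidence change from one variable to the next. The network, edge signs, and combination rules are simplified assumptions of mine, and intercausal ("explaining away") links are omitted.

```python
# Hedged sketch of qualitative sign propagation through a belief network.
# Signs: "+" increases belief, "-" decreases it, "0" no effect, "?" ambiguous.
# The example network and its edge signs are illustrative assumptions.

SIGN_MUL = {("+", "+"): "+", ("+", "-"): "-", ("-", "+"): "-", ("-", "-"): "+"}

def mul(a, b):
    """Compose signs along a path; ambiguity propagates."""
    return "?" if "?" in (a, b) else SIGN_MUL[(a, b)]

def add(a, b):
    """Combine signs arriving over parallel paths; conflicts yield '?'."""
    if a == "0": return b
    if b == "0": return a
    return a if a == b else "?"

# Edges: parent -> list of (child, qualitative influence sign)
EDGES = {
    "rain":     [("wet_lawn", "+"), ("low_yield", "-")],
    "wet_lawn": [("fungus", "+")],
    "fungus":   [("low_yield", "+")],
}

def propagate(evidence, direction="+"):
    """Trace the qualitative effect of increased belief in `evidence`
    through the network, one variable to the next."""
    effect = {evidence: direction}
    frontier = [evidence]
    while frontier:
        node = frontier.pop()
        for child, sign in EDGES.get(node, []):
            combined = add(effect.get(child, "0"), mul(effect[node], sign))
            if combined != effect.get(child, "0"):
                effect[child] = combined
                frontier.append(child)
    return effect

print(propagate("rain"))  # low_yield ends up "?": the two paths conflict
```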



Logical and Decision-Theoretic Methods for Planning under Uncertainty

AI Magazine

Decision theory and nonmonotonic logics are formalisms that can be employed to represent and solve problems of planning under uncertainty. We analyze the usefulness of these two approaches by establishing a simple correspondence between the two formalisms. The analysis indicates that planning using nonmonotonic logic comprises two decision-theoretic concepts: probabilities (degrees of belief in planning hypotheses) and utilities (degrees of preference for planning outcomes). We present and discuss examples of the following lessons from this decision-theoretic view of nonmonotonic reasoning: (1) decision theory and nonmonotonic logics are intended to solve different components of the planning problem; (2) when considered in the context of planning under uncertainty, nonmonotonic logics do not retain the domain-independent characteristics of classical (monotonic) logic; and (3) because certain nonmonotonic programming paradigms (for example, frame-based inheritance, nonmonotonic logics) are inherently problem specific, they might be inappropriate for use in solving certain types of planning problems. We discuss how these conclusions affect several current AI research issues.
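A minimal sketch of the correspondence the abstract draws, under assumed numbers: adopting a nonmonotonic default is modeled as an expected-utility threshold test, which makes the implicit degree of belief (probability) and degree of preference (utility) explicit, and shows why such defaults are problem specific.

```python
# Hedged sketch: a nonmonotonic default ("assume it holds unless told
# otherwise") read decision-theoretically. All numbers are illustrative
# assumptions, not the article's formalization.

def adopt_default(p_holds, gain_if_right, loss_if_wrong):
    """Adopt the default assumption iff its expected utility beats doing
    nothing (utility 0), exposing the hidden probability/utility trade-off."""
    eu_assume = p_holds * gain_if_right - (1 - p_holds) * loss_if_wrong
    return eu_assume > 0

# "Birds fly": a safe default when being wrong is cheap...
print(adopt_default(p_holds=0.95, gain_if_right=1, loss_if_wrong=5))   # True
# ...but the same degree of belief fails when errors are costly,
# illustrating why default rules are inherently problem specific.
print(adopt_default(p_holds=0.95, gain_if_right=1, loss_if_wrong=50))  # False
```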