Belief Revision
Decision Making "Biases" and Support for Assumption-Based Higher-Order Reasoning
Unaided human decision making appears to systematically violate consistency constraints imposed by normative theories; these biases in turn appear to justify the application of formal decision-analytic models. It is argued that both claims are wrong. In particular, we will argue that the "confirmation bias" is premised on an overly narrow view of how conflicting evidence is and ought to be handled. Effective decision aiding should focus on supporting the control processes by means of which knowledge is extended into novel situations and in which assumptions are adopted, utilized, and revised. The Non-Monotonic Probabilist represents initial work toward such an aid.
A Model for Non-Monotonic Reasoning Using Dempster's Rule
Considerable attention has been given to the problem of non-monotonic reasoning in a belief function framework. Earlier work (M. Ginsberg) proposed solutions that introduce meta-rules to recognize conditional independencies in a probabilistic sense. More recently, an ε-calculus formulation of default reasoning (J. Pearl) showed that applying Dempster's rule to a non-monotonic situation produces erroneous results. This paper presents a new belief function interpretation of the problem which combines the rules in a way that is more compatible with probabilistic results and respects the conditions of independence necessary for the application of Dempster's combination rule. A new general framework for combining conflicting evidence is also proposed, in which the normalization factor is modified; this produces more intuitively acceptable results.
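To make the role of the normalization factor concrete, here is a minimal sketch (our illustration, not the paper's; the frame, masses, and function names are invented) that applies Dempster's rule to two highly conflicting simple support functions and reports the conflict mass K that standard normalization discards:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions over frozenset focal elements
    using Dempster's rule; the conflict K is renormalized away."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}, conflict

# Two highly conflicting simple support functions on the frame {x, y, z}:
X, Y, THETA = frozenset("x"), frozenset("y"), frozenset("xyz")
m1 = {X: 0.9, THETA: 0.1}   # strong support for x
m2 = {Y: 0.9, THETA: 0.1}   # strong support for y
m12, K = dempster_combine(m1, m2)
print(K)    # 0.81 -- most of the joint mass is conflicting
print(m12)  # normalization concentrates the surviving 0.19 of mass
```

Because K = 0.81 is simply divided out, the tiny residual masses on {x} and {y} are inflated to roughly 0.47 each, which is the kind of counterintuitive outcome the proposed modification of the normalization factor is meant to address.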
An Axiomatic Framework for Belief Updates
In the 1940s, the physicist Cox provided the first formal justification for the axioms of probability based on the subjective or Bayesian interpretation. He showed that if a measure of belief satisfies several fundamental properties, then the measure must be some monotonic transformation of a probability. In this paper, measures of change in belief, or belief updates, are examined. In the spirit of Cox, properties for a measure of change in belief are enumerated. It is shown that if a measure satisfies these properties, it must satisfy other restrictive conditions. For example, it is shown that belief updates in a probabilistic context must be equal to some monotonic transformation of a likelihood ratio. It is hoped that this formal explication of the belief update paradigm will facilitate critical discussion and useful extensions of the approach.
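To make the likelihood-ratio result concrete, a standard derivation (ours, not the paper's) writes Bayes' theorem in odds form: the posterior odds of a hypothesis H given evidence E factor into the prior odds times the likelihood ratio, so an update measure that depends only on the evidence must be a function of that ratio:

```latex
\frac{P(H \mid E)}{P(\neg H \mid E)}
  = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)},
\qquad
\lambda = \frac{P(E \mid H)}{P(E \mid \neg H)},
\qquad
\log O(H \mid E) = \log O(H) + \log \lambda .
```

In log-odds form the update is purely additive, which is one familiar instance of the "monotonic transformation of a likelihood ratio" characterized by the paper.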
A Framework for Non-Monotonic Reasoning About Probabilistic Assumptions
Attempts to replicate probabilistic reasoning in expert systems have typically overlooked a critical ingredient of that process. Probabilistic analysis typically requires extensive judgments regarding interdependencies among hypotheses and data, and regarding the appropriateness of various alternative models. The application of such models is often an iterative process, in which the plausibility of the results confirms or disconfirms the validity of assumptions made in building the model. In current expert systems, by contrast, probabilistic information is encapsulated within modular rules (involving, for example, "certainty factors"), and there is no mechanism for reviewing the overall form of the probability argument or the validity of the judgments entering into it.
Using Dempster-Shafer Theory in Knowledge Representation
In this paper, we suggest marrying Dempster-Shafer (DS) theory with Knowledge Representation (KR). Born out of this marriage is the definition of "Dempster-Shafer Belief Bases", abstract data types representing uncertain knowledge that use DS theory for representing strength of belief about our knowledge, and the linguistic structures of an arbitrary KR system for representing the knowledge itself. A formal result guarantees that both the properties of the given KR system and of DS theory are preserved. The general model is exemplified by defining DS Belief Bases where First Order Logic and (an extension of) KRYPTON are used as KR systems. The implementation problem is also touched upon.
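As a rough illustration of what such an abstract data type might look like, the sketch below (ours; the class, the toy KR system, and all names are invented, not the paper's definitions) attaches DS basic belief masses to sentences supplied by an arbitrary KR system, which contributes only an entailment test:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Hashable

@dataclass
class DSBeliefBase:
    """A minimal 'DS Belief Base' sketch: the KR system supplies the
    sentences and their entailment relation; DS theory supplies the
    strength-of-belief bookkeeping."""
    entails: Callable[[Hashable, Hashable], bool]  # KR system's inference
    masses: Dict[Hashable, float] = field(default_factory=dict)

    def tell(self, sentence, mass):
        """Record a basic belief mass for a KR sentence."""
        self.masses[sentence] = self.masses.get(sentence, 0.0) + mass

    def bel(self, query):
        """Belief in `query`: total mass of sentences entailing it."""
        return sum(m for s, m in self.masses.items()
                   if self.entails(s, query))

# Toy KR system: sentences are frozensets of possible worlds, and
# entailment is subset inclusion.
kb = DSBeliefBase(entails=lambda s, q: s <= q)
kb.tell(frozenset({"w1"}), 0.6)
kb.tell(frozenset({"w1", "w2"}), 0.4)
print(kb.bel(frozenset({"w1"})))        # 0.6
print(kb.bel(frozenset({"w1", "w2"})))  # 1.0
```

The point of the design, as in the paper, is that the uncertainty machinery is parameterized by the KR system rather than wired to any particular logic.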
Towards a General-Purpose Belief Maintenance System
There currently exists a gap between the theories proposed by the probability and uncertainty communities and the needs of Artificial Intelligence research. These theories primarily address the needs of expert systems, using knowledge structures which must be pre-compiled and remain static in structure during runtime. Many AI systems require the ability to dynamically add and remove parts of the current knowledge structure (e.g., in order to examine what the world would be like for different causal theories). This requires more flexibility than existing uncertainty systems display. In addition, many AI researchers are only interested in using "probabilities" as a means of obtaining an ordering, rather than attempting to derive an accurate probabilistic account of a situation. This indicates the need for systems which stress ease of use and don't require extensive probability information when one cannot (or doesn't wish to) provide such information. This paper attempts to help reconcile the gap between approaches to uncertainty and the needs of many AI systems by examining the control issues which arise, independent of a particular uncertainty calculus, when one tries to satisfy these needs.

Truth Maintenance Systems have been used extensively in problem solving tasks to help organize a set of facts and detect inconsistencies in the believed state of the world. These systems maintain a set of true/false propositions and their associated dependencies. However, situations often arise in which we are unsure of certain facts or in which the conclusions we can draw from available information are somewhat uncertain. The non-monotonic TMS [12] was an attempt at reasoning when all the facts are not known, but it fails to take into account degrees of belief and how available evidence can combine to strengthen a particular belief.

This paper addresses the problem of probabilistic reasoning as it applies to Truth Maintenance Systems. It describes a Belief Maintenance System that manages a current set of beliefs in much the same way that a TMS manages a set of true/false propositions. If the system knows that belief in fact1 is dependent in some way upon belief in fact2, then it automatically modifies its belief in fact1 when new information causes a change in belief of fact2. It models the behavior of a TMS, replacing its 3-valued logic (true, false, unknown) with an infinite-valued logic, in such a way as to reduce to a standard TMS if all statements are given in absolute true/false terms. Belief Maintenance Systems can, therefore, be thought of as a generalization of Truth Maintenance Systems, whose possible reasoning tasks are a superset of those for a TMS.
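The following minimal sketch (our illustration; the abstract does not specify the system's actual propagation rules, so the calculus here is one plausible choice) shows a belief-maintenance network in which changing the belief of one fact automatically re-propagates belief to its dependents, and which collapses to ordinary TMS behavior when all beliefs are 0 or 1:

```python
class BeliefNode:
    """A proposition whose belief in [0, 1] is either asserted directly
    or derived from justifying antecedent nodes. Assumes an acyclic
    dependency graph."""
    def __init__(self, name, belief=0.0):
        self.name, self.belief = name, belief
        self.justifications = []   # each is a list of antecedent nodes
        self.dependents = []       # nodes whose belief depends on this one

    def justify(self, antecedents):
        self.justifications.append(antecedents)
        for a in antecedents:
            a.dependents.append(self)
        self.recompute()

    def set_belief(self, value):
        self.belief = value
        for d in self.dependents:
            d.recompute()

    def recompute(self):
        # One simple calculus: a justification supports its conclusion
        # with the product of its antecedents' beliefs (conjunction),
        # and parallel justifications combine noisy-OR style. With 0/1
        # beliefs this reduces to the monotonic part of a TMS.
        unsupported = 1.0
        for antecedents in self.justifications:
            conj = 1.0
            for a in antecedents:
                conj *= a.belief
            unsupported *= (1.0 - conj)
        self.belief = 1.0 - unsupported
        for d in self.dependents:
            d.recompute()

# fact1 depends on fact2: updating fact2 re-propagates automatically.
fact2 = BeliefNode("fact2", belief=0.9)
fact1 = BeliefNode("fact1")
fact1.justify([fact2])
print(fact1.belief)   # 0.9
fact2.set_belief(0.2)
print(fact1.belief)   # 0.2, revised without re-asserting fact1
```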
Using T-Norm Based Uncertainty Calculi in a Naval Situation Assessment Application
RUM (Reasoning with Uncertainty Module) is an integrated software tool based on KEE, a frame system implemented in an object-oriented language. RUM's architecture is composed of three layers: representation, inference, and control. The representation layer is based on frame-like data structures that capture the uncertainty information used in the inference layer and the uncertainty meta-information used in the control layer. The inference layer provides a selection of five T-norm based uncertainty calculi with which to perform the intersection, detachment, union, and pooling of information. The control layer uses the meta-information to select the appropriate calculus for each context and to resolve any ignorance or conflict in the information. This layer also provides a context mechanism that allows the system to focus on the relevant portion of the knowledge base, and an uncertain-belief revision system that incrementally updates the certainty values of well-formed formulae (wffs) in an acyclic directed deduction graph. RUM has been tested and validated in a sequence of experiments in both naval and aerial situation assessment (SA), consisting of correlating reports and tracks, locating and classifying platforms, and identifying intents and threats. An example of naval situation assessment is illustrated. The testbed environment for developing these experiments has been provided by LOTTA, a symbolic simulator implemented in Flavors. This simulator maintains time-varying situations in a multi-player antagonistic game where players must make decisions in light of uncertain and incomplete data. RUM has been used to assist one of the LOTTA players in performing the SA task.
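As background for the inference layer, the sketch below (ours; the abstract does not enumerate RUM's actual five calculi) shows three classic T-norms and their dual T-conorms, which is how T-norm based calculi typically implement the intersection and union of uncertain evidence:

```python
# Three classic T-norms (conjunction) and their dual T-conorms
# (disjunction), operating on certainty values in [0, 1].
tnorms = {
    "min":         lambda a, b: min(a, b),
    "product":     lambda a, b: a * b,
    "lukasiewicz": lambda a, b: max(0.0, a + b - 1.0),
}
tconorms = {  # duality: S(a, b) = 1 - T(1 - a, 1 - b)
    "min":         lambda a, b: max(a, b),
    "product":     lambda a, b: a + b - a * b,
    "lukasiewicz": lambda a, b: min(1.0, a + b),
}

a, b = 0.7, 0.6
for name in tnorms:
    print(f"{name:12s} AND = {tnorms[name](a, b):.2f}   "
          f"OR = {tconorms[name](a, b):.2f}")
```

The three calculi bracket the plausible behaviors (min is the most optimistic conjunction, Lukasiewicz the most pessimistic), which is why a control layer that selects the calculus per context is useful.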
Default Reasoning and the Transferable Belief Model
Smets, Philippe, Hsia, Yen-Teh
Inappropriate use of Dempster's rule of combination has led some authors to reject the Dempster-Shafer model, arguing that it leads to supposedly unacceptable conclusions when defaults are involved. The most classic example concerns the penguin Tweety. This paper successively presents: the origin of the mismanagement of the Tweety example; two types of default; and the correct solution for both types based on the transferable belief model, our interpretation of the Dempster-Shafer model (Shafer 1976, Smets 1988). Except when explicitly stated, all belief functions used in this paper are simple support functions, i.e. belief functions for which only one proposition (the focus) of the frame of discernment receives a positive basic belief mass, with the remaining mass being given to the tautology. Each belief function is described by its focus and the weight of the focus (e.g. m(A)=.9). Computation of the basic belief masses is always performed by vacuously extending each belief function to the product space built from all variables involved, combining them on that space by Dempster's rule of combination, and projecting the result to the space corresponding to each individual variable.
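The extend-combine-project scheme in the last sentence can be made concrete with a small sketch (ours; the frames, weights, and names are illustrative, not the paper's): a simple support function on a variable A is vacuously extended to the product space A x B, combined by Dempster's rule with a simple support function encoding a rule from A to B, and projected back onto B:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule over frozenset focal elements; any conflict
    is renormalized away as in the standard rule."""
    out, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in out.items()}

# Frames for two variables; the product space is their Cartesian product.
A, B = ("a1", "a2"), ("b1", "b2")
OMEGA = frozenset(product(A, B))

def extend_A(focal):      # vacuous extension of a focal set on A
    return frozenset((a, b) for a in focal for b in B)

def project_B(m):         # project a mass function on A x B down to B
    out = {}
    for s, w in m.items():
        pb = frozenset(b for _, b in s)
        out[pb] = out.get(pb, 0.0) + w
    return out

# m1: simple support "A = a1" with focus weight .9, vacuously extended.
m1 = {extend_A({"a1"}): 0.9, OMEGA: 0.1}
# m2: simple support for the rule "if a1 then b1" (focus = complement
# of the worlds where a1 holds but b1 fails), focus weight .8.
m2 = {OMEGA - frozenset({("a1", "b2")}): 0.8, OMEGA: 0.2}
print(project_B(combine(m1, m2)))  # {b1}: 0.72, {b1, b2}: 0.28
```

Detachment of the conclusion b1 thus receives mass .9 x .8 = .72, with the rest remaining on the tautology of B's frame.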
Hierarchical Evidence Accumulation in the PSEIKI System and Experiments in Model-Driven Mobile Robot Navigation
Kak, A. C., Andress, K. M., Lopez-Abadia, C., Carroll, M. S., Lewis, J. R.
In this paper, we will review the process of evidence accumulation in the PSEIKI system for expectation-driven interpretation of images of 3-D scenes. Expectations are presented to PSEIKI as a geometrical hierarchy of abstractions. PSEIKI's job is then to construct abstraction hierarchies in the perceived image, taking cues from the abstraction hierarchies in the expectations. The Dempster-Shafer formalism is used for associating belief values with the different possible labels for the constructed abstractions in the perceived image. This system has been used successfully for autonomous navigation of a mobile robot in indoor environments.