Symbolic Decision Theory and Autonomous Systems
The ability to reason under uncertainty and with incomplete information is a fundamental requirement of decision support technology. In this paper we argue that the concentration on theoretical techniques for the evaluation and selection of decision options has distracted attention from many of the wider issues in decision making. Although numerical methods of reasoning under uncertainty have strong theoretical foundations, they are representationally weak and deal with only a small part of the decision process. Knowledge based systems, on the other hand, offer greater flexibility but have not been accompanied by a clear decision theory. We describe here work under way towards providing a theoretical framework for symbolic decision procedures. A central proposal is an extended form of inference which we call argumentation: reasoning for and against decision options from generalised domain theories. The approach has been used successfully in several decision support applications, but we argue that a comprehensive decision theory must also cover autonomous decision making, where the agent can formulate questions as well as take decisions. A major theoretical challenge for this theory is to capture the idea of reflection, permitting decision agents to reason about their goals, what they believe and why, and what they need to know or do in order to achieve their goals.
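To make the notion of argumentation concrete, here is a minimal Python sketch of aggregating reasons for and against decision options. The tuple representation and the additive aggregation policy are our own illustrative assumptions, not the calculus proposed in the paper.

```python
# A minimal sketch of argumentation over decision options, assuming a
# hypothetical representation: each argument is (option, sign, weight).
# This is not the paper's actual calculus; it only illustrates the idea
# of aggregating reasons for and against each option.

from collections import defaultdict

def aggregate(arguments):
    """Sum signed weights per option; sign is +1 (for) or -1 (against)."""
    scores = defaultdict(float)
    for option, sign, weight in arguments:
        scores[option] += sign * weight
    return dict(scores)

arguments = [
    ("treat", +1, 0.75),  # argument supporting the option "treat"
    ("treat", -1, 0.25),  # argument against it
    ("wait",  +1, 0.4),
]

print(aggregate(arguments))  # {'treat': 0.5, 'wait': 0.4}
```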
A Logic of Graded Possibility and Certainty Coping with Partial Inconsistency
Lang, Jerome, Dubois, Didier, Prade, Henri
A semantics is given to possibilistic logic, a logic that handles weighted classical logic formulae, where the weights are interpreted as lower bounds on degrees of certainty or possibility in the sense of Zadeh's possibility theory. The proposed semantics is based on fuzzy sets of interpretations and is tolerant of partial inconsistency. Satisfiability is extended from interpretations to fuzzy sets of interpretations, each fuzzy set representing a possibility distribution that describes what is known about the state of the world. A possibilistic knowledge base is then viewed as the set of possibility distributions that satisfy it. The refutation method of automated deduction in possibilistic logic, based on a previously introduced generalized resolution principle, is proved to be sound and complete with respect to the proposed semantics, including the case of partial inconsistency.
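As an illustration of the proof theory this semantics is meant to justify, the following Python sketch implements the weight-propagation step of possibilistic resolution: the classical resolvent of two weighted clauses inherits the minimum of their weights. The clause encoding (integers for literals, negation as sign flip) is our own.

```python
# A minimal sketch of the possibilistic resolution rule: resolving two
# weighted clauses yields the classical resolvent weighted by the minimum
# of the two weights. Literals are nonzero ints; -x stands for "not x".

def resolve(c1, w1, c2, w2):
    """Return (resolvent, min(w1, w2)) on the first complementary pair."""
    for lit in c1:
        if -lit in c2:
            resolvent = (c1 - {lit}) | (c2 - {-lit})
            return frozenset(resolvent), min(w1, w2)
    return None

# (p, 0.8) and (~p v q, 0.6) resolve to (q, 0.6).
print(resolve(frozenset({1}), 0.8, frozenset({-1, 2}), 0.6))
```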
Non-monotonic Negation in Probabilistic Deductive Databases
Ng, Raymond T., Subrahmanian, V. S.
In this paper we study the uses and the semantics of non-monotonic negation in probabilistic deductive databases. Based on the stable semantics for classical logic programming, we introduce the notion of stable formula functions. We show that stable formula functions are minimal fixpoints of operators associated with probabilistic deductive databases with negation. Furthermore, since a probabilistic deductive database may not necessarily have a stable formula function, we provide a stable class semantics for such databases. Finally, we demonstrate that the proposed semantics can handle default reasoning naturally in the context of probabilistic deduction.
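The stable semantics invoked here generalizes the classical stable-model construction. The Python sketch below shows that classical core (the Gelfond-Lifschitz reduct and its least fixpoint) on a propositional program; the probabilistic annotations the paper layers on top are omitted.

```python
# A minimal sketch of the classical stable-model check the abstract builds
# on; probabilistic annotations are left out. A rule is a triple
# (head, positive_body, negative_body) over atom strings.

def reduct(rules, candidate):
    """Drop rules whose negative body meets the candidate; drop the 'not's."""
    return [(h, pos) for h, pos, neg in rules if not (neg & candidate)]

def least_model(pos_rules):
    """Least fixpoint of the resulting positive program."""
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in pos_rules:
            if pos <= model and h not in model:
                model.add(h)
                changed = True
    return model

def is_stable(rules, candidate):
    return least_model(reduct(rules, candidate)) == candidate

# p :- not q.   q :- not p.   Two stable models: {p} and {q}.
rules = [("p", set(), {"q"}), ("q", set(), {"p"})]
print(is_stable(rules, {"p"}), is_stable(rules, {"q"}), is_stable(rules, {"p", "q"}))
```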
Non-monotonic Reasoning and the Reversibility of Belief Change
Traditional approaches to non-monotonic reasoning fail to satisfy a number of plausible axioms for belief revision and suffer from conceptual difficulties as well. Recent work on ranked preferential models (RPMs) promises to overcome some of these difficulties. Here we show that RPMs are not adequate to handle iterated belief change. Specifically, we show that RPMs do not always allow for the reversibility of belief change. This result indicates the need for numerical strengths of belief.
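The irreversibility phenomenon can be seen with a toy ranking over three worlds. The revision operator below (minimal evidence-worlds drop to rank 0, everything else shifts up) is a simplified stand-in chosen for illustration, not the paper's RPM machinery; revising by not-A and then by A fails to restore the original ranking.

```python
# A toy illustration of irreversibility in ranked models, using a
# simplified natural-revision-style operator (our choice, for illustration).

def revise(ranks, sat):
    """ranks: world -> rank; sat: world -> bool (satisfies the evidence?)."""
    best = min(r for w, r in ranks.items() if sat(w))
    return {w: 0 if (sat(w) and r == best) else r + 1 for w, r in ranks.items()}

# Three worlds; '(A)' marks the worlds where proposition A holds.
ranks = {"w1(A)": 0, "w2(A)": 2, "w3(~A)": 1}
holds_A = lambda w: "(A)" in w
not_A = lambda w: not holds_A(w)

after = revise(revise(ranks, not_A), holds_A)
print(ranks)   # {'w1(A)': 0, 'w2(A)': 2, 'w3(~A)': 1}
print(after)   # w2's rank has drifted to 4: the change is not undone
```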
Some Properties of Plausible Reasoning
This paper presents a plausible reasoning system to illustrate some broad issues in knowledge representation: dualities between different reasoning forms, the difficulty of unifying complementary reasoning styles, and the approximate nature of plausible reasoning. These issues have a common underlying theme: there should be an underlying belief calculus of which the many different reasoning forms are special cases, sometimes approximate. The system presented allows reasoning about defaults, likelihood, necessity and possibility in a manner similar to the earlier work of Adams. The system is based on the belief calculus of subjective Bayesian probability which itself is based on a few simple assumptions about how belief should be manipulated. Approximations, semantics, consistency and consequence results are presented for the system. While this puts these often discussed plausible reasoning forms on a probabilistic footing, useful application to practical problems remains an issue.
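The probabilistic reading of defaults in Adams-style systems can be sketched directly: a default "A's are typically B's" is accepted when P(B|A) >= 1 - epsilon. The following Python fragment checks this on an invented world distribution.

```python
# A minimal sketch of the Adams-style acceptance test for defaults.
# The world distribution is invented purely for illustration.

def cond_prob(worlds, a, b):
    pa = sum(p for w, p in worlds.items() if a(w))
    pab = sum(p for w, p in worlds.items() if a(w) and b(w))
    return pab / pa if pa > 0 else None

def accepts_default(worlds, a, b, eps=0.05):
    pr = cond_prob(worlds, a, b)
    return pr is not None and pr >= 1 - eps

# Worlds as (bird?, flies?) pairs with probabilities.
worlds = {("bird", "flies"): 0.58, ("bird", "~flies"): 0.02,
          ("~bird", "flies"): 0.10, ("~bird", "~flies"): 0.30}
is_bird = lambda w: w[0] == "bird"
flies = lambda w: w[1] == "flies"
print(accepts_default(worlds, is_bird, flies))  # True: P(flies|bird) ~ 0.967
```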
Time-Dependent Utility and Action Under Uncertainty
Horvitz, Eric J., Rutledge, Geoffrey
We discuss representing and reasoning with knowledge about the time-dependent utility of an agent's actions. Time-dependent utility plays a crucial role in the interaction between computation and action under bounded resources. We present a semantics for time-dependent utility and describe the use of time-dependent information in decision contexts. We illustrate our discussion with examples of time-pressured reasoning in Protos, a system constructed to explore the ideal control of inference by reasoners with limited abilities.
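The core trade-off, that further deliberation improves the chosen action while the utility of acting decays with time, can be sketched numerically. Both curves below are invented; the point is only the stopping-time computation.

```python
# A minimal sketch of balancing deliberation against time-dependent utility.
# quality(t): value of the best action found after t units of computation.
# decay(t):   erosion of the utility of acting as time passes.
# Both functions are illustrative assumptions, not the paper's model.

def net_utility(t, quality, decay):
    return quality(t) * decay(t)

quality = lambda t: 1 - 0.8 ** t          # more computation, better action
decay = lambda t: max(0.0, 1 - 0.1 * t)   # acting later is worth less

best_t = max(range(11), key=lambda t: net_utility(t, quality, decay))
print(best_t, round(net_utility(best_t, quality, decay), 3))  # 4 0.354
```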
Compressed Constraints in Probabilistic Logic and Their Revision
In probabilistic logic entailments, even moderate size problems can yield linear constraint systems with so many variables that exact methods are impractical. This difficulty can be remedied in many cases of interest by introducing a three valued logic (true, false, and "don't care"). The three-valued approach allows the construction of "compressed" constraint systems which have the same solution sets as their two-valued counterparts, but which may involve dramatically fewer variables. Techniques to calculate point estimates for the posterior probabilities of entailed sentences are discussed.
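The compression can be illustrated by counting: worlds (truth assignments) that agree on every sentence we care about collapse into a single constraint variable. In the Python sketch below, the atoms and sentences are invented for illustration.

```python
# A minimal sketch of the compression idea: columns of the probabilistic-
# logic constraint system correspond to worlds, and worlds that agree on
# all "care" sentences can be merged, shrinking the variable count.

from itertools import product

atoms = ["p", "q", "r"]
care = [lambda v: v["p"] or v["q"],       # sentence 1: p v q
        lambda v: not v["q"] or v["r"]]   # sentence 2: q -> r

worlds = [dict(zip(atoms, bits)) for bits in product([False, True], repeat=3)]
classes = {tuple(s(v) for s in care) for v in worlds}

print(len(worlds), "worlds compress to", len(classes), "constraint variables")
```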
Combining Multiple-Valued Logics in Modular Expert Systems
Agustí-Cullell, Jaume, Esteva, Francesc, Garcia, Pere, Godo, Lluis, Sierra, Carles
The way experts manage uncertainty usually changes depending on the task they are performing. This fact has led us to consider the problem of communicating modules (task implementations) in a large and structured knowledge based system when modules have different uncertainty calculi. In this paper, the analysis of the communication problem is made assuming that (i) each uncertainty calculus is an inference mechanism defining an entailment relation, and therefore the communication is considered to be inference-preserving, and (ii) we restrict ourselves to the case in which the different uncertainty calculi are given by a class of truth-functional multiple-valued logics.
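One way to make inference-preserving communication concrete is a monotone map between truth-value sets, so that "at least as true" judgements, and hence min-based entailment, survive translation. The value sets and the map in this Python sketch are invented; the paper's actual conditions on such mappings are richer.

```python
# A minimal sketch of communicating between two truth-functional
# multiple-valued logics: a monotone map sends values of a five-valued
# logic into a three-valued one without reversing the truth ordering.

five = ["false", "unlikely", "unknown", "likely", "true"]
three = ["false", "unknown", "true"]

to_three = {"false": "false", "unlikely": "unknown", "unknown": "unknown",
            "likely": "unknown", "true": "true"}

rank5 = {v: i for i, v in enumerate(five)}
rank3 = {v: i for i, v in enumerate(three)}

# Monotonicity check: the map never reverses the truth ordering.
ok = all(rank3[to_three[a]] <= rank3[to_three[b]]
         for a in five for b in five if rank5[a] <= rank5[b])
print(ok)  # True
```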
About Updating
We survey several forms of updating, with a practical illustrative example. We study several updating (conditioning) schemes that emerge naturally from a common scenario in order to provide some insight into their meaning. Updating is a subtle operation: there is no single method, no single 'good' rule, and the choice of the appropriate rule must always be given due consideration. Planchet (1989) presents a mathematical survey of many rules; we focus instead on their practical meaning. After summarizing the rules for conditioning, we present an illustrative example in which the various forms of conditioning can be explained.
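Two of the rules at issue can be contrasted in a few lines: ordinary Bayesian conditioning makes the evidence certain, while Jeffrey's rule only shifts it to a new probability. The joint distribution in this Python sketch is invented for illustration.

```python
# A minimal sketch contrasting Bayesian conditioning with Jeffrey's rule
# on an invented joint distribution over (weather, ground) pairs.

joint = {("rain", "wet"): 0.30, ("rain", "dry"): 0.10,
         ("sun", "wet"): 0.05, ("sun", "dry"): 0.55}

def condition(joint, ev):
    """Bayes: zero out non-evidence worlds and renormalize."""
    z = sum(p for w, p in joint.items() if ev(w))
    return {w: (p / z if ev(w) else 0.0) for w, p in joint.items()}

def jeffrey(joint, ev, new_prob):
    """Jeffrey: rescale so the evidence gets probability new_prob."""
    z = sum(p for w, p in joint.items() if ev(w))
    return {w: p * (new_prob / z if ev(w) else (1 - new_prob) / (1 - z))
            for w, p in joint.items()}

wet = lambda w: w[1] == "wet"
print(condition(joint, wet))     # P(.|wet): the evidence made certain
print(jeffrey(joint, wet, 0.8))  # Jeffrey update: P(wet) moved to 0.8
```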
Belief and Surprise - A Belief-Function Formulation
We motivate and describe a theory of belief in this paper. This theory is developed with the following view of human belief in mind. Consider the belief that an event E will occur (or has occurred or is occurring). An agent either entertains this belief or does not entertain this belief (i.e., there is no "grade" in entertaining the belief). If the agent chooses to exercise "the will to believe" and entertain this belief, he/she/it is entitled to a degree of confidence c (1 > c > 0) in doing so. Adopting this view of human belief, we conjecture that whenever an agent entertains the belief that E will occur with c degree of confidence, the agent will be surprised (to the extent c) upon realizing that E did not occur.
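The belief-function machinery behind this formulation can be sketched briefly: a mass function over subsets of outcomes induces belief and plausibility, and, on the abstract's conjecture, the confidence c invested in "E will occur" is the extent of surprise if E fails to occur. The masses below are invented.

```python
# A minimal sketch of belief and plausibility from a mass function, with
# the abstract's conjectured link between confidence and surprise.

def bel(masses, event):
    """Total mass committed to subsets of the event."""
    return sum(m for s, m in masses.items() if s <= event)

def pl(masses, event):
    """Total mass not contradicting the event."""
    return sum(m for s, m in masses.items() if s & event)

# Frame {e, ~e}; mass 0.7 on {e}, 0.3 on the whole frame (ignorance).
masses = {frozenset({"e"}): 0.7, frozenset({"e", "~e"}): 0.3}
E = frozenset({"e"})

c = bel(masses, E)                # confidence in entertaining "E will occur"
print(c, pl(masses, E))           # 0.7 1.0
print("surprise if E fails:", c)  # surprised to the extent c = 0.7
```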