Haddawy, Peter

Time, Chance, and Action

arXiv.org Artificial Intelligence

To operate intelligently in the world, an agent must reason about its actions. The consequences of an action are a function of both the state of the world and the action itself. Many aspects of the world are inherently stochastic, so a representation for reasoning about actions must be able to express chances of world states as well as indeterminacy in the effects of actions and other events. This paper presents a propositional temporal probability logic for representing and reasoning about actions. The logic can represent the probability that facts hold and events occur at various times. It can represent the probability that actions and other events affect the future. It can represent concurrent actions and conditions that hold or change during execution of an action. The model of probability relates probabilities over time. The logical language integrates both modal and probabilistic constructs and can thus represent and distinguish between possibility, probability, and truth. Several examples illustrating the use of the logic are given.

Convergent Deduction for Probabilistic Logic

arXiv.org Artificial Intelligence

This paper discusses the semantics and proof theory of Nilsson's probabilistic logic, outlining both the benefits of its well-defined model theory and the drawbacks of its proof theory. Within Nilsson's semantic framework, we derive a set of inference rules which are provably sound. The resulting proof system, in contrast to Nilsson's approach, has the important feature of convergence - that is, the inference process proceeds by computing increasingly narrow probability intervals which converge from above and below on the smallest entailed probability interval. Thus the procedure can be stopped at any time to yield partial information concerning the smallest entailed interval.
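The entailed-interval idea from Nilsson's semantics can be illustrated with a tiny brute-force sketch (not the paper's convergent proof procedure): a distribution over possible worlds is searched on a grid, and the probability of a query sentence is bounded over all distributions consistent with the given sentence probabilities. The function name, grid search, and the example numbers are illustrative assumptions.

```python
from itertools import product

# Possible worlds over atoms (a, b): each world is a truth assignment.
WORLDS = list(product([False, True], repeat=2))  # (a, b)

def entailed_bounds(constraints, query, step=0.01):
    """Illustrative Nilsson-style bounds: grid-search distributions over
    the four possible worlds that satisfy the given sentence
    probabilities, tracking the min/max probability of the query."""
    lo, hi = 1.0, 0.0
    n = int(round(1 / step))
    for i in range(n + 1):
        for j in range(n + 1 - i):
            for k in range(n + 1 - i - j):
                p = [i * step, j * step, k * step, (n - i - j - k) * step]
                if all(abs(sum(pi for pi, w in zip(p, WORLDS) if s(w)) - v)
                       < 1e-9 for s, v in constraints):
                    q = sum(pi for pi, w in zip(p, WORLDS) if query(w))
                    lo, hi = min(lo, q), max(hi, q)
    return lo, hi

# Classic example: P(a) = 0.8 and P(a -> b) = 0.9 entail P(b) in [0.7, 0.9].
lo, hi = entailed_bounds(
    [(lambda w: w[0], 0.8),                 # P(a) = 0.8
     (lambda w: (not w[0]) or w[1], 0.9)],  # P(a -> b) = 0.9
    lambda w: w[1])                         # query: b
print(lo, hi)
```

The exhaustive search makes the semantics concrete; the paper's contribution is instead a sound proof system whose intervals converge on this smallest entailed interval and can be interrupted anytime for partial bounds.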

Sound Abstraction of Probabilistic Actions in The Constraint Mass Assignment Framework

arXiv.org Artificial Intelligence

This paper provides a formal and practical framework for sound abstraction of probabilistic actions. We start by precisely defining the concept of sound abstraction within the context of finite-horizon planning (where each plan is a finite sequence of actions). Next we show that such abstraction cannot be performed within the traditional probabilistic action representation, which models a world with a single probability distribution over the state space. We then present the constraint mass assignment representation, which models the world with a set of probability distributions and is a generalization of mass assignment representations. Within this framework, we present sound abstraction procedures for three types of action abstraction. We end the paper with discussions and related work on sound and approximate abstraction. We give pointers to papers in which we discuss other sound abstraction-related issues, including applications, estimating loss due to abstraction, and automatically generating abstraction hierarchies.

Theoretical Foundations for Abstraction-Based Probabilistic Planning

arXiv.org Artificial Intelligence

Modeling worlds and actions under uncertainty is one of the central problems in the framework of decision-theoretic planning. The representation must be general enough to capture real-world problems but at the same time it must provide a basis upon which theoretical results can be derived. The central notion in the framework we propose here is that of the affine-operator, which serves as a tool for constructing (convex) sets of probability distributions, and which can be considered as a generalization of belief functions and interval mass assignments. Uncertainty in the state of the world is modeled with sets of probability distributions, represented by affine-trees, while actions are defined as tree-manipulators. A small set of key properties of the affine-operator is presented, forming the basis for most existing operator-based definitions of probabilistic action projection and action abstraction. We derive and prove correct three projection rules, which vividly illustrate the precision-complexity tradeoff in plan projection. Finally, we show how the three types of action abstraction identified by Haddawy and Doan are manifested in the present framework.

Problem-Focused Incremental Elicitation of Multi-Attribute Utility Models

arXiv.org Artificial Intelligence

Decision theory has become widely accepted in the AI community as a useful framework for planning and decision making. Applying the framework typically requires elicitation of some form of probability and utility information. While much work in AI has focused on providing representations and tools for elicitation of probabilities, relatively little work has addressed the elicitation of utility models. This imbalance is not particularly justified considering that probability models are relatively stable across problem instances, while utility models may be different for each instance. Spending large amounts of time on elicitation can be undesirable for interactive systems used in low-stakes decision making and in time-critical decision making. In this paper we investigate the issues of reasoning with incomplete utility models. We identify patterns of problem instances where plans can be proved to be suboptimal if the (unknown) utility function satisfies certain conditions. We present an approach to planning and decision making that performs the utility elicitation incrementally and in a way that is informed by the domain model.

Towards Case-Based Preference Elicitation: Similarity Measures on Preference Structures

arXiv.org Artificial Intelligence

While decision theory provides an appealing normative framework for representing rich preference structures, eliciting utility or value functions typically incurs a large cost. For many applications involving interactive systems this overhead precludes the use of formal decision-theoretic models of preference. Instead of performing elicitation in a vacuum, it would be useful if we could augment directly elicited preferences with some appropriate default information. In this paper we propose a case-based approach to alleviating the preference elicitation bottleneck. Assuming the existence of a population of users from whom we have elicited complete or incomplete preference structures, we propose eliciting the preferences of a new user interactively and incrementally, using the closest existing preference structures as potential defaults. Since a notion of closeness demands a measure of distance among preference structures, this paper takes the first step of studying various distance measures over fully and partially specified preference structures. We explore the use of Euclidean distance and Spearman's footrule, and define a new measure, the probabilistic distance. We provide computational techniques for all three measures.
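For fully specified preference structures expressed as total orders, the two classical distances named above have simple forms; this sketch computes them on rank-position vectors (function names and the movie-genre example are illustrative, and the paper's probabilistic distance over partial structures is not reproduced here).

```python
def positions(ranking):
    """Map each item to its rank position in the given total order."""
    return {item: i for i, item in enumerate(ranking)}

def footrule(r1, r2):
    """Spearman's footrule: sum of absolute rank displacements."""
    p1, p2 = positions(r1), positions(r2)
    return sum(abs(p1[x] - p2[x]) for x in p1)

def euclidean(r1, r2):
    """Euclidean distance between the two rank-position vectors."""
    p1, p2 = positions(r1), positions(r2)
    return sum((p1[x] - p2[x]) ** 2 for x in p1) ** 0.5

# Two users' orderings of the same four items.
a = ["drama", "comedy", "action", "horror"]
b = ["comedy", "drama", "horror", "action"]
print(footrule(a, b))   # each item is displaced by one position -> 4
print(euclidean(a, b))  # sqrt(1 + 1 + 1 + 1) = 2.0
```

Both measures assume complete rankings over a common item set; handling partially specified structures is precisely what motivates the probabilistic distance.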

The Decision-Theoretic Interactive Video Advisor

arXiv.org Artificial Intelligence

The need to help people choose among large numbers of items and to filter through large amounts of information has led to a flood of research in construction of personal recommendation agents. One of the central issues in constructing such agents is the representation and elicitation of user preferences or interests. This topic has long been studied in Decision Theory, but surprisingly little work in the area of recommender systems has made use of formal decision-theoretic techniques. This paper describes DIVA, a decision-theoretic agent for recommending movies that contains a number of novel features. DIVA represents user preferences using pairwise comparisons among items, rather than numeric ratings. It uses a novel similarity measure based on the concept of the probability of conflict between two orderings of items. The system has a rich representation of preference, distinguishing between a user's general taste in movies and his immediate interests. It takes an incremental approach to preference elicitation in which the user can provide feedback if not satisfied with the recommendation list. We empirically evaluate the performance of the system using the EachMovie collaborative filtering database.
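One natural reading of a similarity measure based on the probability of conflict between two orderings is the chance that a uniformly drawn pair of items is ranked in opposite relative order by the two users; the sketch below implements that reading (a normalized Kendall-style disagreement count) as an assumption, not DIVA's exact definition.

```python
from itertools import combinations

def conflict_probability(order1, order2):
    """Fraction of item pairs on which the two total orders disagree,
    i.e. the probability of conflict for a uniformly random pair."""
    pos1 = {x: i for i, x in enumerate(order1)}
    pos2 = {x: i for i, x in enumerate(order2)}
    pairs = list(combinations(order1, 2))
    conflicts = sum(
        1 for x, y in pairs
        if (pos1[x] - pos1[y]) * (pos2[x] - pos2[y]) < 0)
    return conflicts / len(pairs)

# Fully reversed orders conflict on every pair; a single swap on one pair.
print(conflict_probability(["a", "b", "c"], ["c", "b", "a"]))  # 1.0
print(conflict_probability(["a", "b", "c"], ["a", "c", "b"]))  # 1/3
```

Working from pairwise comparisons rather than numeric ratings means such a measure needs no assumptions about how users calibrate rating scales.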

A Hybrid Approach to Reasoning with Partially Elicited Preference Models

arXiv.org Artificial Intelligence

Classical Decision Theory provides a normative framework for representing and reasoning about complex preferences. Straightforward application of this theory to automate decision making is difficult due to high elicitation cost. In response to this problem, researchers have recently developed a number of qualitative, logic-oriented approaches for representing and reasoning about preferences. While effectively addressing some expressiveness issues, these logics have not proven powerful enough for building practical automated decision making systems. In this paper we present a hybrid approach to preference elicitation and decision making that is grounded in classical multi-attribute utility theory, but can make effective use of the expressive power of qualitative approaches. Specifically, assuming a partially specified multilinear utility function, we show how comparative statements about classes of decision alternatives can be used to further constrain the utility function and thus identify suboptimal alternatives. This work demonstrates that quantitative and qualitative approaches can be synergistically integrated to provide effective and flexible decision support.

Similarity Measures on Preference Structures, Part II: Utility Functions

arXiv.org Artificial Intelligence

In previous work [Ha98:Towards] we presented a case-based approach to eliciting and reasoning with preferences. A key issue in this approach is the definition of similarity between user preferences. We introduced the probabilistic distance as a measure of similarity on user preferences, and provided an algorithm to compute the distance between two partially specified value functions. This is for the case of decision making under certainty. In this paper we address the more challenging issue of computing the probabilistic distance in the case of decision making under uncertainty. We provide an algorithm to compute the probabilistic distance between two partially specified utility functions. We demonstrate the use of this algorithm with a medical data set of partially specified patient preferences, where none of the other existing distance measures appear definable. Using this data set, we also demonstrate that the case-based approach to preference elicitation is applicable in domains with uncertainty. Finally, we provide a comprehensive analytical comparison of the probabilistic distance with some existing distance measures on preferences.

The AAAI Spring Symposia

AI Magazine

The American Association for Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, held the 1998 Spring Symposium Series on 23 to 25 March at Stanford University. The topics of the eight symposia were (1) Applying Machine Learning to Discourse Processing, (2) Integrating Robotic Research: Taking the Next Leap, (3) Intelligent Environments, (4) Intelligent Text Summarization, (5) Interactive and Mixed-Initiative Decision-Theoretic Systems, (6) Multimodal Reasoning, (7) Prospects for a Common-Sense Theory of Causation, and (8) Satisficing Models.