Geffner, Hector


Qualitative Numeric Planning

AAAI Conferences

We consider a new class of planning problems involving a set of non-negative real variables and a set of non-deterministic actions that increase or decrease the values of these variables by arbitrary amounts. The formulas specifying the initial state, goal state, or action preconditions can only assert whether certain variables are equal to zero or not. Assuming that the state of the variables is fully observable, we obtain two results. First, the solution to the problem can be expressed as a policy mapping qualitative states into actions, where a qualitative state includes a Boolean variable for each original variable, indicating whether its value is zero or not. Second, whether any such policy, which may express nested loops of actions, is a solution to the problem can be determined in time that is polynomial in the size of the qualitative state space, which is much smaller than the original infinite state space. We also report experimental results using a simple generate-and-test planner to illustrate these findings.
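
As a rough illustration of the qualitative abstraction described in this abstract, the following minimal Python sketch (with hypothetical variable and action names, not the paper's algorithm) maps numeric states into qualitative Boolean states and executes a looping policy over them:

    import random

    # Minimal sketch of the qualitative abstraction (hypothetical names, not the
    # paper's algorithm): plans only depend on whether each non-negative real
    # variable is zero or not.

    def qualitative(state):
        # Abstract a numeric state {var: value} into the set of non-zero variables.
        return frozenset(v for v, x in state.items() if x > 0)

    def decrease(state, var):
        # Non-deterministic decrease by an arbitrary positive amount: qualitatively,
        # the variable either stays positive or becomes exactly zero.
        if random.random() < 0.5:
            state[var] = 0.0
        else:
            state[var] -= random.uniform(0.0, state[var])

    # A policy maps qualitative states to actions; this one loops "decrease x"
    # until x reaches zero, the kind of looping policy the paper studies.
    policy = {frozenset({"x"}): ("decrease", "x")}

    state = {"x": 3.7}
    while qualitative(state) in policy:
        _, var = policy[qualitative(state)]
        decrease(state, var)

    print("goal reached (x = 0):", state["x"] == 0.0)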


Computing Infinite Plans for LTL Goals Using a Classical Planner

AAAI Conferences

Classical planning has been notably successful in synthesizing finite plans to achieve states where propositional goals hold. In the last few years, classical planning has also been extended to incorporate temporally extended goals, expressed in temporal logics such as LTL, to impose restrictions on the state sequences generated by finite plans. In this work, we take the next step and consider the computation of infinite plans for achieving arbitrary LTL goals. We show that infinite plans can also be obtained efficiently by calling a classical planner once over a classical planning encoding that represents and extends the composition of the planning domain and the Büchi automaton representing the goal. This compilation scheme has been implemented and a number of experiments are reported.
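
The following toy Python sketch illustrates the underlying idea in a hedged way: it is not the paper's classical planning encoding, but it shows how an infinite plan for an LTL goal can be read off as a lasso (a finite prefix followed by a loop) in the product of a tiny hand-made domain and a Büchi automaton for "infinitely often p":

    from collections import deque

    # Toy domain and labelling (made up): state -> (action, next state), and the
    # propositions true in each state.
    domain = {"s0": [("work", "s1")], "s1": [("rest", "s0")]}
    labels = {"s0": set(), "s1": {"p"}}

    def buchi_step(q, props):
        # Buchi automaton for "infinitely often p": the accepting state q1 is
        # entered exactly when p holds in the domain state just reached.
        return "q1" if "p" in props else "q0"

    accepting = {"q1"}

    def successors(node):
        s, q = node
        for action, s2 in domain[s]:
            yield action, (s2, buchi_step(q, labels[s2]))

    def path(source, target):
        # BFS in the product; returns an action sequence from source to target, or None.
        queue, seen = deque([(source, [])]), {source}
        while queue:
            node, acts = queue.popleft()
            for action, nxt in successors(node):
                if nxt == target:
                    return acts + [action]
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, acts + [action]))
        return None

    def reachable(start):
        seen, queue = {start}, deque([start])
        while queue:
            for _, nxt in successors(queue.popleft()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    start = ("s0", "q0")
    for f in (n for n in reachable(start) if n[1] in accepting):
        prefix, loop = path(start, f), path(f, f)
        if prefix is not None and loop is not None:
            print("infinite plan:", prefix, "then repeat", loop)
            break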


Planning Under Partial Observability by Classical Replanning: Theory and Experiments

AAAI Conferences

Planning with partial observability can be formulated as a non-deterministic search problem in belief space. The problem is harder than classical planning as keeping track of beliefs is harder than keeping track of states, and searching for action policies is harder than searching for action sequences. In this work, we develop a framework for partial observability that avoids these limitations and leads to a planner that scales up to larger problems. For this, the class of problems is restricted to those in which 1) the non-unary clauses representing the uncertainty about the initial situation are invariant, and 2) variables that are hidden in the initial situation do not appear in the body of conditional effects, which are all assumed to be deterministic. We show that such problems can be translated in linear time into equivalent fully observable non-deterministic planning problems, and that a slight extension of this translation renders the problem solvable by means of classical planners. The whole approach is sound and complete provided that, in addition, the state space is connected. Experiments are also reported.
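
As a hedged illustration of the replanning idea (not the paper's linear-time translation), the toy Python sketch below hides a package in one of two rooms; the agent commits to an assumed location, plans classically for that assumption, and replans when an observation refutes it. All names and the trivial "planner" are made up:

    rooms = ["left", "right"]

    def classical_plan(assumed_room):
        # Stand-in for a classical planner on the toy domain under the assumption.
        return [("go", assumed_room), ("pickup", assumed_room)]

    def run(true_room):
        belief = set(rooms)                    # rooms where the package might be
        agent_at, holding = "left", False
        while not holding:
            assumption = sorted(belief)[0]     # commit to one hypothesis
            for action, room in classical_plan(assumption):
                if action == "go":
                    agent_at = room
                elif action == "pickup" and agent_at == true_room:
                    holding = True
                if agent_at != true_room:      # observation: package not seen here
                    belief.discard(agent_at)
                if assumption not in belief:   # assumption refuted: replan
                    break
        return holding

    print(run("right"))   # the first assumption ("left") fails and triggers replanning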


Goal Recognition over POMDPs: Inferring the Intention of a POMDP Agent

AAAI Conferences

Plan recognition is the problem of inferring the goals and plans of an agent from partial observations of her behavior. Recently, it has been shown that the problem can be formulated and solved using planners, reducing plan recognition to plan generation. In this work, we extend this model-based approach to plan recognition to the POMDP setting, where actions are stochastic and states are partially observable. The task is to infer a probability distribution over the possible goals of an agent whose behavior results from a POMDP model. The POMDP model is shared between agent and observer except for the true goal of the agent, which is hidden from the observer. The observations are action sequences O that may contain gaps, as some or even most of the actions done by the agent may not be observed. We show that the posterior goal distribution P(G|O) can be computed from the value function V_G(b) over beliefs b generated by the POMDP planner for each possible goal G. Some extensions of the basic framework are discussed, and a number of experiments are reported.
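
One standard way to turn goal-conditioned values into a goal posterior is Bayes' rule with a Boltzmann likelihood; the short Python sketch below illustrates this with made-up values and a uniform prior, and only approximates the paper's formulation, where the likelihoods are derived from the value functions V_G(b) returned by the POMDP planner:

    import math

    def goal_posterior(values_given_obs, prior=None, beta=1.0):
        # values_given_obs: {goal: value of the observed behavior under that goal}.
        goals = list(values_given_obs)
        prior = prior or {g: 1.0 / len(goals) for g in goals}
        likelihood = {g: math.exp(beta * values_given_obs[g]) for g in goals}
        z = sum(likelihood[g] * prior[g] for g in goals)
        return {g: likelihood[g] * prior[g] / z for g in goals}

    # Hypothetical values of the observed action sequence O under each candidate goal.
    print(goal_posterior({"goal_A": -3.0, "goal_B": -7.5, "goal_C": -4.2}))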


Solving POMDPs: RTDP-Bel Versus Point-based Algorithms

AAAI Conferences

Point-based algorithms and RTDP-Bel are approximate methods for solving POMDPs that replace the full updates of parallel value iteration by faster and more effective updates at selected beliefs. An important difference between the two methods is that the former adopt Sondik's representation of the value function, while the latter uses a tabular representation and a discretization function. The algorithms, however, have not been compared up to now, because they target different POMDPs: discounted POMDPs on the one hand, and Goal POMDPs on the other. In this paper, we bridge this representational gap, showing how to transform discounted POMDPs into Goal POMDPs, and use the transformation to compare RTDP-Bel with point-based algorithms over the existing discounted benchmarks. The results appear to contradict the conventional wisdom in the area, showing that RTDP-Bel is competitive with, and sometimes superior to, point-based algorithms in both quality and time.
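
The sketch below illustrates the kind of belief discretization that lets RTDP-Bel keep a tabular value function: belief probabilities are snapped to a small number of levels so that nearby beliefs share a hash-table entry. The resolution and rounding rule are illustrative choices, not necessarily those used in the paper:

    def discretize(belief, D=10):
        # Map a belief {state: prob} to a hashable key: each probability is snapped
        # to one of D+1 levels, and near-zero entries are dropped.
        return tuple(sorted((s, round(p * D)) for s, p in belief.items() if round(p * D) > 0))

    value_table = {}                        # tabular value function over discretized beliefs
    b1 = {"s0": 0.52, "s1": 0.48}
    b2 = {"s0": 0.49, "s1": 0.51}
    value_table[discretize(b1)] = 3.7       # an update at b1 ...
    print(value_table.get(discretize(b2)))  # ... is reused at the nearby belief b2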


Reports on the Twenty-First National Conference on Artificial Intelligence (AAAI-06) Workshop Program

AI Magazine

The Workshop program of the Twenty-First Conference on Artificial Intelligence was held July 16-17, 2006, in Boston, Massachusetts. The program was chaired by Joyce Chai and Keith Decker. The titles of the 17 workshops were AI-Driven Technologies for Service-Oriented Computing; Auction Mechanisms for Robot Coordination; Cognitive Modeling and Agent-Based Social Simulations; Cognitive Robotics; Computational Aesthetics: Artificial Intelligence Approaches to Beauty and Happiness; Educational Data Mining; Evaluation Methods for Machine Learning; Event Extraction and Synthesis; Heuristic Search, Memory-Based Heuristics, and Their Applications; Human Implications of Human-Robot Interaction; Intelligent Techniques in Web Personalization; Learning for Search; Modeling and Retrieval of Context; Modeling Others from Observations; and Statistical and Empirical Approaches for Spoken Dialogue Systems.


Heuristic Search Planner 2.0

AI Magazine

hsp2.0 is a general planner that implements a scheduler trying different variants concurrently with different (time) resource bounds. We also describe how hsp2.0 can be used as an optimal (and near-optimal) planning algorithm and compare its performance with two other optimal planners, stan and blackbox.
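
As a loose sketch of the scheduler idea (hypothetical stand-in functions, not hsp2.0's actual code), the Python fragment below runs several planner variants concurrently, each given its own time bound, and returns the first plan reported:

    from concurrent.futures import ThreadPoolExecutor, as_completed

    def variant_a(problem, time_bound):      # stand-in for one planner variant
        return ["a1", "a2"] if time_bound >= 1 else None

    def variant_b(problem, time_bound):      # stand-in for another variant
        return None                          # pretend this one runs out of time

    def schedule(problem, variants):
        # Run all variants concurrently, each with its own time bound, and return
        # the first plan any of them produces.
        with ThreadPoolExecutor(max_workers=len(variants)) as pool:
            futures = [pool.submit(fn, problem, bound) for fn, bound in variants]
            for future in as_completed(futures):
                plan = future.result()
                if plan is not None:
                    return plan
        return None

    print(schedule("toy-problem", [(variant_a, 5), (variant_b, 2)]))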


The AIPS-98 Planning Competition

AI Magazine

In 1998, the international planning community was invited to take part in the first planning competition, hosted by the Artificial Intelligence Planning Systems Conference, to provide a new impetus for empirical evaluation and direct comparison of automatic domain-independent planning systems. This article describes the systems that competed in the event, examines the results, and considers some of the implications for the future of the field.