Collaborative Planning with Encoding of Users' High-Level Strategies
Kim, Joseph (Massachusetts Institute of Technology) | Banks, Christopher J. (Norfolk State University) | Shah, Julie A. (Massachusetts Institute of Technology)
The generation of near-optimal plans for multi-agent systems with numerical states and temporal actions is computationally challenging. Current off-the-shelf planners can take a very long time before generating a near-optimal solution. In an effort to reduce plan computation time, increase the quality of the resulting plans, and make them more interpretable by humans, we explore collaborative planning techniques that actively involve human users in plan generation. Specifically, we explore a framework in which users provide high-level strategies encoded as soft preferences to guide the low-level search of the planner. Through human subject experimentation, we empirically demonstrate that this approach results in statistically significant improvements to plan quality, without substantially increasing computation time. We also show that the resulting plans achieve greater similarity to those generated by humans with regard to the produced sequences of actions, as compared to plans that do not incorporate user-provided strategies.
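The paper's core idea of encoding a user's high-level strategy as soft preferences maps naturally onto PDDL3 syntax. A hypothetical sketch (predicate, object, and preference names are illustrative, not taken from the paper) of a strategy such as "load both packages before the truck leaves the depot", expressed as a soft trajectory constraint whose violation is penalized in the metric rather than failing the plan:

```pddl
; Hypothetical sketch: a user's high-level strategy as a PDDL3 soft
; preference. All names are illustrative, not from the paper.
(:constraints
  (and
    ; strategy: truck1 should be fully loaded before it leaves the depot
    (preference load-first
      (sometime-before (not (at truck1 depot))
                       (and (in pkg1 truck1) (in pkg2 truck1))))))

(:metric minimize
  (+ (total-time)
     (* 10 (is-violated load-first))))  ; soft: violating the strategy
                                        ; costs 10, it does not fail the plan
```

Because the preference is soft, the planner's low-level search is guided toward the user's strategy but remains free to deviate when doing so yields a better overall plan.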
Temporal Planning with Preferences and Time-Dependent Continuous Costs
Benton, J. (Arizona State University) | Coles, Amanda (King's College London) | Coles, Andrew (King's College London)
Temporal planning methods usually focus on the objective of minimizing makespan. Unfortunately, this misses a large class of planning problems where it is important to consider a wider variety of temporal and non-temporal preferences, making makespan a lower-order concern. In this paper we consider modeling and reasoning with plan quality metrics that are not directly correlated with plan makespan, building on the planner POPF. We begin with the preferences defined in PDDL3, and present a mixed integer programming encoding to manage the interaction between the hard temporal constraints for plan steps and the soft temporal constraints for preferences. To widen the range of metrics that can be expressed directly in PDDL, we then discuss an extension to soft deadlines with continuous cost functions, avoiding the need to approximate these with several PDDL3 discrete-cost preferences. We demonstrate the success of our new planner on the benchmark temporal planning problems with preferences, showing that it is the state of the art for such problems. We then analyze the benefits of reasoning with continuous (versus discretized) models of domains with continuous cost functions, showing the improvement in solution quality afforded by making the continuous cost function directly available to the planner.
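To illustrate the approximation the continuous-cost extension avoids: in standard PDDL3, a lateness cost that grows with delivery time can only be modeled as a staircase of discrete deadline preferences. A hypothetical sketch (names and numbers are illustrative, not from the paper):

```pddl
; Hypothetical sketch: approximating a linearly increasing lateness cost
; for (delivered pkg1) with stacked PDDL3 discrete deadline preferences,
; the workaround that a continuous cost function makes unnecessary.
(:constraints
  (and (preference d10 (within 10 (delivered pkg1)))
       (preference d20 (within 20 (delivered pkg1)))
       (preference d30 (within 30 (delivered pkg1)))))

(:metric minimize
  (+ (total-time)
     (* 5 (is-violated d10))   ; each missed deadline adds 5 more cost,
     (* 5 (is-violated d20))   ; yielding a staircase approximation of a
     (* 5 (is-violated d30)))) ; continuous linear lateness cost
```

Finer staircases need more preferences and enlarge the encoding, which is why making the continuous cost function directly available to the planner both simplifies the model and improves solution quality.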
LPRPG-P: Relaxed Plan Heuristics for Planning with Preferences
Coles, Amanda (University of Strathclyde) | Coles, Andrew (University of Strathclyde)
In this paper we present a planner, LPRPG-P, capable of reasoning with the non-temporal subset of PDDL3 preferences. Our focus is on computation of relaxed plan based heuristics that effectively guide a planner towards good solutions satisfying preferences. We build on the planner LPRPG, a hybrid relaxed planning graph (RPG)-linear programming (LP) approach. We make extensions to the RPG to reason with propositional preferences, and to the LP to reason with numeric preferences. LPRPG-P is the first planner with direct guidance for numeric preference satisfaction, exploiting the strong numeric reasoning of the LP. We introduce an anytime search approach for use with our new heuristic, and present results showing that LPRPG-P extends the state of the art in domain-independent planning with preferences.
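The numeric preferences LPRPG-P targets are goal-state conditions over PDDL numeric fluents, the kind its LP component can reason about directly. A hypothetical sketch (fluent and preference names are illustrative, not from the paper):

```pddl
; Hypothetical sketch: a PDDL3 numeric goal preference. Names are
; illustrative, not from the paper.
(:goal
  (and (at truck1 depot)                               ; hard goal
       (preference fuel-reserve
         (>= (fuel-level truck1) 20))))                ; soft numeric goal

(:metric minimize
  (+ (total-cost)
     (* 50 (is-violated fuel-reserve))))  ; weigh the reserve against plan cost
```

An anytime search over such a model can return its first satisficing plan quickly, then keep searching for plans that violate fewer weighted preferences until the time budget is exhausted.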