Goto

Collaborating Authors

 Beck, J. Christopher


Reinforcement Learning-based Heuristics to Guide Domain-Independent Dynamic Programming

arXiv.org Artificial Intelligence

Domain-Independent Dynamic Programming (DIDP) is a state-space search paradigm based on dynamic programming for combinatorial optimization. In its current implementation, DIDP guides the search using user-defined dual bounds. Reinforcement learning (RL) is increasingly being applied to combinatorial optimization problems and shares key structures with DP: both are characterized by the Bellman equation and state-based transition systems. We propose using reinforcement learning to obtain a heuristic function that guides the search in DIDP. We develop two RL-based guidance approaches: value-based guidance using Deep Q-Networks and policy-based guidance using Proximal Policy Optimization. Our experiments indicate that RL-based guidance significantly outperforms standard DIDP and problem-specific greedy heuristics given the same number of node expansions. Further, despite longer node evaluation times, RL guidance achieves better run-time performance than standard DIDP on three of four benchmark domains.
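
A minimal sketch of the value-based variant of this idea follows. It is illustrative only, not the paper's implementation: value_net, successors, and the toy state space are hypothetical placeholders; the point is that a learned estimate of cost-to-go orders node expansions in a best-first search.

# Minimal sketch (not the paper's implementation): value-based guidance for a
# best-first state-space search. `value_net` stands in for a trained DQN-style
# network estimating the cost-to-go of a state; here it is a stub so the
# example runs on its own.
import heapq
import itertools

def value_net(state):
    # Hypothetical learned estimate of remaining cost; a real implementation
    # would query a network trained with RL on features of the DP state.
    return float(sum(state))

def successors(state):
    # Hypothetical transition function of a DP model: decrement one component.
    for i, v in enumerate(state):
        if v > 0:
            child = list(state)
            child[i] -= 1
            yield tuple(child), 1.0  # (next state, transition cost)

def guided_search(initial, is_goal):
    tie = itertools.count()
    frontier = [(value_net(initial), next(tie), 0.0, initial)]
    best_g = {initial: 0.0}
    while frontier:
        _, _, g, state = heapq.heappop(frontier)
        if is_goal(state):
            return g
        for child, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                # Order expansions by g + learned value, A*-style.
                heapq.heappush(frontier, (g2 + value_net(child), next(tie), g2, child))
    return None

print(guided_search((2, 1), lambda s: all(v == 0 for v in s)))  # -> 3.0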


Domain-Independent Dynamic Programming

arXiv.org Artificial Intelligence

For combinatorial optimization problems, model-based paradigms such as mixed-integer programming (MIP) and constraint programming (CP) aim to decouple modeling and solving a problem: the 'holy grail' of declarative problem solving. We propose domain-independent dynamic programming (DIDP), a new model-based paradigm based on dynamic programming (DP). While DP is not new, it has typically been implemented as a problem-specific method. We introduce Dynamic Programming Description Language (DyPDL), a formalism to define DP models based on a state transition system, inspired by AI planning. We show that heuristic search algorithms can be used to solve DyPDL models and propose seven DIDP solvers. We experimentally compare our DIDP solvers with commercial MIP and CP solvers (solving MIP and CP models, respectively) on common benchmark instances of eleven combinatorial optimization problem classes. We show that DIDP outperforms MIP in nine problem classes, CP also in nine problem classes, and both MIP and CP in seven.
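
As a minimal illustration of the kind of model such a formalism describes declaratively (this particular recursion is illustrative, not taken from the paper), a 0/1 knapsack problem can be written as a state-transition DP with state $(i, r)$, the index of the next item to consider and the remaining capacity:

\[
V(i, r) =
\begin{cases}
0 & \text{if } i > n,\\
\max\bigl(V(i+1,\, r),\; p_i + V(i+1,\, r - w_i)\bigr) & \text{if } w_i \le r,\\
V(i+1,\, r) & \text{otherwise,}
\end{cases}
\]

where item $i$ has profit $p_i$ and weight $w_i$, and the optimal value is $V(1, C)$ for capacity $C$.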


PRP Rebooted: Advancing the State of the Art in FOND Planning

arXiv.org Artificial Intelligence

Fully Observable Non-Deterministic (FOND) planning is a variant of classical symbolic planning in which actions are nondeterministic, with an action's outcome known only upon execution. It is a popular planning paradigm with applications ranging from robot planning to dialogue-agent design and reactive synthesis. Over the last 20 years, a number of approaches to FOND planning have emerged. In this work, we establish a new state of the art, following in the footsteps of some of the most powerful FOND planners to date. Our planner, PR2, decisively outperforms the four leading FOND planners, at times by a large margin, in 17 of 18 domains that represent a comprehensive benchmark suite. Ablation studies demonstrate the impact of various techniques we introduce, with the largest improvement coming from our novel FOND-aware heuristic.


Domain-Independent Dynamic Programming: Generic State Space Search for Combinatorial Optimization

arXiv.org Artificial Intelligence

For combinatorial optimization problems, model-based approaches such as mixed-integer programming (MIP) and constraint programming (CP) aim to decouple modeling and solving a problem: the 'holy grail' of declarative problem solving. We propose domain-independent dynamic programming (DIDP), a new model-based paradigm based on dynamic programming (DP). While DP is not new, it has typically been implemented as a problem-specific method. We propose Dynamic Programming Description Language (DyPDL), a formalism to define DP models, and develop Cost-Algebraic A* Solver for DyPDL (CAASDy), a generic solver for DyPDL using state space search. We formalize existing problem-specific DP and state space search methods for combinatorial optimization problems as DP models in DyPDL. Using CAASDy and commercial MIP and CP solvers, we experimentally compare the DP models with existing MIP and CP models, showing that, despite its nascent nature, CAASDy outperforms MIP and CP on a number of common problem classes.
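
The following sketch illustrates the cost-algebraic view of A* that such a solver builds on, with the cost-combination operator as a parameter. It is illustrative only; the function, toy graph, and trivial heuristic are assumptions for the example, not CAASDy's code.

# Minimal sketch (not CAASDy itself): best-first search whose cost combination
# operator is a parameter. With combine=operator.add it is ordinary shortest
# path; with combine=max it minimizes the largest single transition weight.
import heapq, itertools, operator

def cost_algebraic_search(initial, goal, edges, h, combine=operator.add, identity=0):
    tie = itertools.count()
    frontier = [(h(initial), next(tie), identity, initial)]
    best = {initial: identity}
    while frontier:
        _, _, g, state = heapq.heappop(frontier)
        if state == goal:
            return g
        for child, w in edges.get(state, []):
            g2 = combine(g, w)
            if g2 < best.get(child, float("inf")):
                best[child] = g2
                heapq.heappush(frontier, (combine(g2, h(child)), next(tie), g2, child))
    return None

edges = {"s": [("a", 2), ("b", 5)], "a": [("t", 4)], "b": [("t", 1)]}
h = lambda s: 0  # trivial admissible heuristic, sufficient for the sketch
print(cost_algebraic_search("s", "t", edges, h))                           # -> 6 (additive cost)
print(cost_algebraic_search("s", "t", edges, h, combine=max, identity=0))  # -> 4 (bottleneck cost)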


The LM-Cut Heuristic Family for Optimal Numeric Planning with Simple Conditions

Journal of Artificial Intelligence Research

The LM-cut heuristic, both alone and as part of the operator counting framework, is one of the most successful heuristics for classical planning. In this paper, we generalize LM-cut and its use in operator counting to optimal numeric planning with simple conditions and simple numeric effects, i.e., linear expressions over numeric state variables and actions that increase or decrease such variables by constant quantities. We introduce a variant of h^max_hbd (a previously proposed numeric h^max heuristic) based on the delete-relaxed version of such planning tasks and show that, although inadmissible by itself, our variant yields a numeric version of the classical LM-cut heuristic which is admissible. We classify the three existing families of heuristics for this class of numeric planning tasks and introduce the LM-cut family, proving dominance or incomparability between all pairs of existing max and LM-cut heuristics for numeric planning with simple conditions. Our extensive empirical evaluation shows that the new LM-cut heuristic, both on its own and as part of the operator counting framework, is the state of the art for this class of numeric planning problems.
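
For context, these heuristics build on the classical delete-relaxation h^max recursion, stated here in its classical, non-numeric form (the paper's numeric variant extends this style of recursion to simple numeric conditions):

\[
h^{\max}(s, G) = \max_{g \in G} h^{\max}(s, g), \qquad
h^{\max}(s, p) =
\begin{cases}
0 & \text{if } p \in s,\\
\min_{a:\, p \in \mathrm{add}(a)} \Bigl(\mathrm{cost}(a) + \max_{q \in \mathrm{pre}(a)} h^{\max}(s, q)\Bigr) & \text{otherwise.}
\end{cases}
\]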


Autonomous Target Search with Multiple Coordinated UAVs

Journal of Artificial Intelligence Research

Search and tracking is the problem of locating a moving target and following it to its destination. In this work, we consider a scenario in which the target moves across a large geographical area by following a road network and the search is performed by a team of unmanned aerial vehicles (UAVs). We formulate search and tracking as a combinatorial optimization problem and prove that the objective function is submodular. We exploit this property to devise a greedy algorithm. Although this algorithm does not offer strong theoretical guarantees because of the presence of temporal constraints that limit the feasibility of the solutions, it performs remarkably well, especially when several UAVs are available for the mission. As the greedy algorithm suffers when resources are scarce, we investigate two alternative optimization techniques: Constraint Programming (CP) and AI planning. Both approaches struggle to cope with large problems, and so we strengthen them by leveraging the greedy algorithm. We use the greedy solution to warm start the CP model and to devise a domain-dependent heuristic for planning. Our extensive experimental evaluation studies the scalability of the different techniques and identifies the conditions under which one approach becomes preferable to the others.
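
A generic greedy routine for monotone submodular maximization of this flavor looks roughly as follows. This is a sketch, not the paper's UAV-specific algorithm; the feasibility hook, toy coverage sets, and budget are hypothetical.

# Generic sketch of greedy submodular maximization: repeatedly add the feasible
# candidate with the largest marginal gain. `feasible` is a hypothetical hook
# where constraints such as the temporal limits described in the paper would be
# checked.
def greedy(candidates, objective, budget, feasible=lambda plan: True):
    plan = []
    while len(plan) < budget:
        best_gain, best_c = 0.0, None
        for c in candidates:
            if c in plan or not feasible(plan + [c]):
                continue
            gain = objective(plan + [c]) - objective(plan)
            if gain > best_gain:
                best_gain, best_c = gain, c
        if best_c is None:
            break
        plan.append(best_c)
    return plan

# Toy coverage objective: each candidate "observes" a set of road segments, and
# the objective counts segments covered at least once (monotone submodular).
coverage = {"uav_a": {1, 2}, "uav_b": {2, 3}, "uav_c": {4}}
objective = lambda plan: len(set().union(*[coverage[c] for c in plan])) if plan else 0
print(greedy(list(coverage), objective, budget=2))  # -> ['uav_a', 'uav_b'] (covers 3 segments)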


Compiling Optimal Numeric Planning to Mixed Integer Linear Programming

AAAI Conferences

Compilation techniques in planning reformulate a problem into an alternative encoding for which efficient, off-the-shelf solvers are available. In this work, we present a novel mixed-integer linear programming (MILP) compilation for cost-optimal numeric planning with instantaneous actions. While recent works on the problem are restricted to actions that modify variables present in simple numeric conditions, our MILP formulation, in addition, handles linear conditions and linear action effects on numeric state variables. Such problems are particularly challenging due to the state-dependency of the action effects. Experiments show that our approach, in addition to being the state of the art for the more general problem class, is competitive with heuristic search-based planners on domains with only simple numeric conditions.
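
A common step-indexed pattern for such encodings is sketched below; this is only an illustration of the general idea, not the paper's formulation, which additionally handles state-dependent linear effects that must themselves be linearized. It uses binaries $x_{a,t}$ for applying action $a$ at step $t$, numeric state variables $v_t$, and big-M constraints to enforce linear preconditions:

\[
\sum_{a} x_{a,t} \le 1, \qquad
v_{t+1} = v_t + \sum_{a} k_a\, x_{a,t}, \qquad
\mathbf{w}^{\top}\mathbf{v}_t \ge w_0 - M\,(1 - x_{a,t}) \;\; \text{for each precondition } \mathbf{w}^{\top}\mathbf{v} \ge w_0 \text{ of } a,
\]

where $k_a$ is a constant effect of $a$ on $v$ and $M$ is a sufficiently large constant.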


Comparing and Integrating Constraint Programming and Temporal Planning for Quantum Circuit Compilation

AAAI Conferences

Recently, the makespan-minimization problem of compiling a general class of quantum algorithms into near-term quantum processors has been introduced to the AI community. The research demonstrated that temporal planning is a strong approach for a class of quantum circuit compilation (QCC) problems. In this paper, we explore the use of constraint programming (CP) as an alternative and complementary approach to temporal planning. We extend previous work by introducing two new problem variations that incorporate important characteristics identified by the quantum computing community. We apply temporal planning and CP to the baseline and extended QCC problems as both stand-alone and hybrid approaches. Our hybrid methods use solutions found by temporal planning to warm start CP, leveraging the ability of the former to find satisficing solutions to problems with a high degree of task optionality, an area that CP typically struggles with. The CP model, benefiting from inferred bounds on planning horizon length and task counts provided by the warm start, is then used to find higher quality solutions. Our empirical evaluation indicates that while stand-alone CP is only competitive for the smallest problems, CP in our hybridization with temporal planning outperforms stand-alone temporal planning in the majority of problem classes.
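
The warm-start mechanism can be sketched as follows, here with OR-Tools CP-SAT purely for illustration; the paper's CP model and solver differ, and the gate variables, precedence, and hinted values below are hypothetical.

# Illustrative warm start of a CP model with a previously found solution.
# Hint values would come from the temporal planner's satisficing plan.
from ortools.sat.python import cp_model

model = cp_model.CpModel()
start = [model.NewIntVar(0, 100, f"start_{g}") for g in range(3)]  # toy gate start times
makespan = model.NewIntVar(0, 100, "makespan")
for g in range(3):
    model.Add(start[g] + 5 <= makespan)   # each toy gate takes 5 time units
model.Add(start[1] >= start[0] + 5)       # a toy precedence between two gates
model.Minimize(makespan)

planner_solution = {0: 0, 1: 5, 2: 0}     # hypothetical plan from temporal planning
for g, t in planner_solution.items():
    model.AddHint(start[g], t)            # warm start: suggest the plan's values

solver = cp_model.CpSolver()
status = solver.Solve(model)
print(solver.Value(makespan))             # -> 10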


Linear and Integer Programming-Based Heuristics for Cost-Optimal Numeric Planning

AAAI Conferences

Linear programming has been successfully used to compute admissible heuristics for cost-optimal classical planning. Although one of the strengths of linear programming is the ability to express and reason about numeric variables and constraints, its use in numeric planning is limited. In this work, we extend linear programming-based heuristics for classical planning to support numeric state variables. In particular, we propose a model for the interval relaxation, coupled with landmarks and state equation constraints. We consider both linear programming models and their harder-to-solve, yet more informative, integer programming versions. Our experimental analysis shows that considering an NP-hard heuristic often pays off and that A* search using our integer programming heuristics establishes a new state of the art in cost-optimal numeric planning.
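
For orientation, the classical state equation (net-change) constraint that such LP heuristics build on is, roughly, the following (stated here for propositional facts; the paper extends this style of constraint to numeric state variables):

\[
\text{minimize } \sum_{a} \mathrm{cost}(a)\, y_a
\quad \text{s.t.} \quad
[p \in s] + \sum_{a} \bigl(\mathrm{prod}_a(p) - \mathrm{cons}_a(p)\bigr)\, y_a \ \ge\ [p \in G] \quad \text{for every fact } p,
\]

where $y_a \ge 0$ counts how often action $a$ is applied (continuous in the LP relaxation, integer in the IP version).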


Fat- and Heavy-Tailed Behavior in Satisficing Planning

AAAI Conferences

In this work, we study the runtime distribution of satisficing planning in ensembles of random planning problems and in multiple runs of a randomized heuristic search on a single planning instance. Using common heuristic functions (such as FF) and six benchmark problem domains from the IPC, we find heavy-tailed behavior, similar to that found in CSP and SAT. We investigate two notions of constrainedness, often used in the modeling of planning problems, and show that the heavy-tailed behavior tends to appear in relatively relaxed problems, where the required effort is, on average, low. Finally, we show that as with randomized restarts in CSP and SAT solving, recent search enhancements that incorporate randomness in the search process can help mitigate the effect of the heavy tail.
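
Heavy-tailed runtime behavior of this kind is commonly diagnosed by plotting the empirical survival function on log-log axes; the following sketch (with synthetic Pareto-distributed data, not the paper's measurements) shows the idea.

# Sketch of a standard heavy-tail diagnostic: plot P(runtime > t) on log-log
# axes; an approximately straight line with a shallow slope indicates a heavy
# (Pareto-like) tail. Real data would come from repeated randomized runs.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
runtimes = rng.pareto(a=0.8, size=2000) + 1.0   # synthetic heavy-tailed sample

t = np.sort(runtimes)
survival = 1.0 - np.arange(1, len(t) + 1) / len(t)

plt.loglog(t[:-1], survival[:-1])               # drop the last point (survival = 0)
plt.xlabel("runtime t")
plt.ylabel("P(runtime > t)")
plt.title("Empirical survival function (log-log)")
plt.show()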