Yoon, Sungwook
Anticipatory On-Line Planning
Burns, Ethan (University of New Hampshire) | Benton, J. (Graduate Student, Arizona State University) | Ruml, Wheeler (University of New Hampshire) | Yoon, Sungwook (Palo Alto Research Center) | Do, Minh B. (NASA Ames Research Center)
Consider the problem faced by an unmanned aerial vehicle (UAV) dispatcher who must plan for a set of UAVs to service a set of observation requests. To service a request, one of the UAVs must fly over a given strip of land with its observation equipment turned on. The dispatcher wants to minimize the time between when a request arrives and when a UAV has completed the flyover. Even when the actions of the UAV, such as flying particular routes or switching on/off observational equipment, can be regarded as deterministic, the stochastic arrival of new requests can make for a challenging planning problem. This paper presents an anticipatory on-line planner for this setting. It assumes that the probability distribution over incoming goals is either known or learnable and employs the technique of optimization in hindsight, previously developed for online scheduling and recently investigated for planning with stochastic actions (Mercier and van Hentenryck 2007; Yoon et al. 2008; 2010). This technique first samples from the distribution of possible future goal arrivals and then considers which next action optimizes the expected cost when averaged over the sampled futures. By using this anticipatory technique, our planner is able to take future goals into account.
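The hindsight-style action selection described in this abstract can be summarized in a few lines. The sketch below is only illustrative and assumes generic callables (sample_future, apply_action, plan_cost) rather than the paper's actual implementation: it draws a fixed number of sampled futures and returns the applicable action whose deterministic plan cost, averaged over those futures, is lowest.

    from typing import Callable, Iterable, TypeVar

    State, Action, Goal = TypeVar("State"), TypeVar("Action"), TypeVar("Goal")

    def choose_next_action(
        state: State,
        actions: Iterable[Action],
        sample_future: Callable[[], list[Goal]],          # draw one sampled stream of future goal arrivals
        apply_action: Callable[[State, Action], State],   # deterministic action model
        plan_cost: Callable[[State, list[Goal]], float],  # cost of a deterministic plan for one sampled future
        num_samples: int = 20,
    ) -> Action:
        # Optimization in hindsight: sample futures once, then pick the action
        # whose plan cost, averaged over those sampled futures, is lowest.
        futures = [sample_future() for _ in range(num_samples)]
        def avg_cost(action: Action) -> float:
            return sum(plan_cost(apply_action(state, action), f) for f in futures) / num_samples
        return min(actions, key=avg_cost)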
A Survey of the Seventh International Planning Competition
Coles, Amanda (King's College London) | Coles, Andrew (King's College London) | Olaya, Angel García (Universidad Carlos III de Madrid) | Jiménez, Sergio (Universidad Carlos III de Madrid) | López, Carlos Linares (Universidad Carlos III de Madrid) | Sanner, Scott (NICTA and Australian National University) | Yoon, Sungwook (Palo Alto Research Center)
In this article we review the 2011 International Planning Competition. We give an overview of the history of the competition, discussing how it has developed since its first edition in 1998. The 2011 competition was run in three separate tracks: the deterministic (classical) track; the learning track; and the uncertainty track. Each track posed its own distinct set of new challenges, and the participants rose to them admirably, with the results showing promising progress in each area. The competition attracted a record number of participants this year, confirming its continued strength as a central pillar of the international planning research community.
Learning Probabilistic Hierarchical Task Networks to Capture User Preferences
Li, Nan, Cushing, William, Kambhampati, Subbarao, Yoon, Sungwook
We propose automatically learning probabilistic Hierarchical Task Networks (pHTNs) in order to capture a user's preferences on plans, by observing only the user's behavior. HTNs are a common choice of representation for a variety of purposes in planning, including work on learning in planning. Our contributions are (a) learning structure and (b) representing preferences. In contrast, prior work employing HTNs considers learning method preconditions (instead of structure) and representing domain physics or search control knowledge (rather than preferences). Initially we will assume that the observed distribution of plans is an accurate representation of user preference, and then generalize to the situation where feasibility constraints frequently prevent the execution of preferred plans. In order to learn a distribution on plans we adapt an Expectation-Maximization (EM) technique from the discipline of (probabilistic) grammar induction, taking the perspective of task reductions as productions in a context-free grammar over primitive actions. To account for the difference between the distributions of possible and preferred plans we subsequently modify this core EM technique, in short, by rescaling its input.
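To make the grammar view in this abstract concrete, here is a toy sketch; the grammar, task names, and probabilities are invented for illustration and are not from the paper. A pHTN method is read as a weighted CFG production, and a plan derivation's probability is the product of the probabilities of the reductions it uses; the EM procedure described above estimates these production probabilities from observed plans.

    # A hypothetical pHTN written as a probabilistic context-free grammar: each
    # method "task -> subtasks" carries a probability, and the tuples are
    # sequences of primitive actions. Grammar and numbers are invented.
    PHTN = {
        "travel": [(("buy-ticket", "board-train"), 0.7),
                   (("rent-car", "drive"), 0.3)],
    }

    def derivation_probability(derivation):
        # Probability of a plan derivation = product of the probabilities of the
        # task reductions (productions) used to reach a primitive-action sequence.
        prob = 1.0
        for task, subtasks in derivation:
            prob *= dict(PHTN[task])[subtasks]
        return prob

    # The plan (buy-ticket, board-train), derived by one reduction of "travel":
    print(derivation_probability([("travel", ("buy-ticket", "board-train"))]))  # 0.7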
Iterative Learning of Weighted Rule Sets for Greedy Search
Xu, Yuehua (Oregon State University) | Fern, Alan (Oregon State University) | Yoon, Sungwook (Palo Alto Research Center)
Greedy search is commonly used in an attempt to generate solutions quickly at the expense of completeness and optimality. In this work, we consider learning sets of weighted action-selection rules for guiding greedy search with application to automated planning. We make two primary contributions over prior work on learning for greedy search. First, we introduce weighted sets of action-selection rules as a new form of control knowledge for greedy search. Prior work has shown the utility of action-selection rules for greedy search, but has treated the rules as hard constraints, resulting in brittleness. Our weighted rule sets allow multiple rules to vote, helping to improve robustness to noisy rules. Second, we give a new iterative learning algorithm for learning weighted rule sets based on RankBoost, an efficient boosting algorithm for ranking. Each iteration considers the actual performance of the current rule set and directs learning based on the observed search errors. This is in contrast to most prior approaches, which learn control knowledge independently of the search process. Our empirical results have shown significant promise for this approach in a number of domains.
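As a rough illustration of the weighted-voting idea, consider the sketch below; the WeightedRule class and its fields are placeholders rather than the paper's representation. Instead of treating any single rule as a hard constraint, all rules that fire for a candidate action contribute their weights, and greedy search takes the action with the largest total vote.

    from dataclasses import dataclass
    from typing import Callable, Iterable

    @dataclass
    class WeightedRule:
        # A hypothetical action-selection rule: matches(state, action) says whether
        # the rule recommends taking `action` in `state`; weight is its learned vote.
        matches: Callable[[object, object], bool]
        weight: float

    def select_action(state, candidate_actions: Iterable, rules: list[WeightedRule]):
        # Weighted rule voting: sum the weights of all rules that fire for each
        # candidate action and greedily take the action with the highest total.
        def vote(action):
            return sum(r.weight for r in rules if r.matches(state, action))
        return max(candidate_actions, key=vote)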
Continual On-line Planning as Decision-Theoretic Incremental Heuristic Search
Lemons, Seth (University of New Hampshire) | Benton, J. (Arizona State University) | Ruml, Wheeler (University of New Hampshire) | Do, Minh (Palo Alto Research Center) | Yoon, Sungwook (Palo Alto Research Center)
This paper presents an approach to integrating planning and execution in time-sensitive environments. We present a simple setting in which to consider the issue, which we call continual on-line planning. New goals arrive stochastically during execution, the agent issues actions for execution one at a time, and the environment is otherwise deterministic. We take the objective to be a form of time-dependent partial satisfaction planning reminiscent of discounted MDPs: goals offer reward that decays over time, actions incur fixed costs, and the agent attempts to maximize net utility. We argue that this setting highlights the central challenge of time-aware planning while excluding the complexity of non-deterministic actions. Our approach to this problem is based on real-time heuristic search. We view the two central issues as the decision of which partial plans to elaborate during search and the decision of when to issue an action for execution. We propose an extension of Russell and Wefald's decision-theoretic A* algorithm that can cope with our inadmissible heuristic. Our algorithm, DTOCS, handles the complexities of the on-line setting by balancing deliberative planning and real-time response.
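A minimal sketch of the objective follows, assuming an exponential decay model; the abstract specifies only that goal rewards decay over time and that actions have fixed costs, so the decay form, function, and parameter names here are illustrative.

    import math

    def net_utility(goal_rewards, completion_delays, action_costs, decay_rate=0.1):
        # Net utility under one illustrative decay model (exponential): each goal's
        # reward shrinks with the delay between its arrival and its achievement,
        # and every issued action has a fixed cost.
        reward = sum(r * math.exp(-decay_rate * d)
                     for r, d in zip(goal_rewards, completion_delays))
        return reward - sum(action_costs)

    # Two goals worth 10 each, achieved 2 and 5 time units after arriving, with
    # four unit-cost actions issued so far:
    print(net_utility([10, 10], [2, 5], [1, 1, 1, 1]))  # 10*e^-0.2 + 10*e^-0.5 - 4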
An Ensemble Learning and Problem Solving Architecture for Airspace Management
Zhang, Xiaoqin (Shelly) (University of Massachusetts) | Yoon, Sungwook (Arizona State University) | DiBona, Phillip (Lockheed Martin Advanced Technology Laboratories) | Appling, Darren (Georgia Institute of Technology) | Ding, Li (Rensselaer Polytechnic Institute) | Doppa, Janardhan (Oregon State University) | Green, Derek (University of Wyoming) | Guo, Jinhong (Lockheed Martin Advanced Technology Laboratories) | Kuter, Ugur (University of Maryland) | Levine, Geoff (University of Illinois at Urbana) | MacTavish, Reid (Georgia Institute of Technology) | McFarlane, Daniel (Lockheed Martin Advanced Technology Laboratories) | Michaelis, James (Rensselaer Polytechnic Institute) | Mostafa, Hala (University of Massachusetts) | Ontanon, Santiago (Georgia Institute of Technology) | Parker, Charles (Oregon State University) | Radhakrishnan, Jainarayan (Georgia Institute of Technology) | Rebguns, Anton (University of Wyoming) | Shrestha, Bhavesh (University of Massachusetts) | Song, Zhexuan (Fujitsu Laboratories of America) | Trewhitt, Ethan (Georgia Institute of Technology) | Zafar, Huzaifa (University of Massachusetts) | Zhang, Chongjie (University of Massachusetts) | Corkill, Daniel (University of Massachusetts) | DeJong, Gerald (University of Illinois at Urbana-Champaign) | Dietterich, Thomas (Oregon State University) | Kambhampati, Subbarao (Arizona State University) | Lesser, Victor (University of Massachusetts) | McGuinness, Deborah L. (Rensselaer Polytechnic Institute) | Ram, Ashwin (Georgia Institute of Technology) | Spears, Diana (University of Wyoming) | Tadepalli, Prasad (Oregon State University) | Whitaker, Elizabeth (Georgia Institute of Technology) | Wong, Weng-Keen (Oregon State University) | Hendler, James (Rensselaer Polytechnic Institute) | Hofmann, Martin (Lockheed Martin Advanced Technology Laboratories) | Whitebread, Kenneth (Lockheed Martin Advanced Technology Laboratories)
In this paper we describe the application of a novel learning and problem solving architecture to the domain of airspace management, where multiple requests for the use of airspace need to be reconciled and managed automatically. The key feature of our "Generalized Integrated Learning Architecture" (GILA) is a set of integrated learning and reasoning (ILR) systems coordinated by a central meta-reasoning executive (MRE). Each ILR learns independently from the same training example and contributes to problem-solving in concert with other ILRs as directed by the MRE. Formal evaluations show that our system performs as well as or better than humans after learning from the same training data. Further, GILA outperforms any individual ILR run in isolation, thus demonstrating the power of the ensemble architecture for learning and problem solving.
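The ensemble pattern can be caricatured in a few lines. The ILR interface, method names, and selection-by-evaluation step below are assumptions made for illustration, not GILA's actual API; the real MRE also decomposes problems and combines partial solutions, whereas this sketch shows only the learn-then-select loop.

    from typing import Protocol

    class ILR(Protocol):
        # Hypothetical interface for an integrated learning-and-reasoning system.
        def learn(self, training_example) -> None: ...
        def propose(self, problem): ...   # return a candidate solution for the problem

    def mre_solve(ilrs: list[ILR], training_example, problem, evaluate):
        # Minimal ensemble loop in the spirit of the meta-reasoning executive:
        # every ILR learns independently from the same training example, each then
        # proposes a solution, and the best proposal under `evaluate` is kept.
        for ilr in ilrs:
            ilr.learn(training_example)
        return max((ilr.propose(problem) for ilr in ilrs), key=evaluate)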
Approximate Policy Iteration with a Policy Language Bias
Fern, Alan, Yoon, Sungwook, Givan, Robert
We explore approximate policy iteration, replacing the usual cost-function learning step with a learning step in policy space. We give policy-language biases that enable solution of very large relational Markov decision processes (MDPs) that no previous technique can solve. In particular, we induce high-quality domain-specific planners for classical planning domains (both deterministic and stochastic variants) by solving such domains as extremely large MDPs.
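A compressed sketch of the loop this abstract describes is given below; sample_states, rollout_value, and learn_policy are placeholder callables, not the paper's interfaces. The key difference from cost-function-based approximate policy iteration is that the learning step fits a new policy directly from improved state-action pairs.

    def approximate_policy_iteration(initial_policy, sample_states, actions,
                                     rollout_value, learn_policy, iterations=10):
        # Approximate policy iteration with a policy-space learning step: each
        # iteration (1) uses rollouts of the current policy to find an improved
        # action in sampled states, then (2) induces a new policy (e.g. one in a
        # restricted policy language) from those state -> action pairs.
        policy = initial_policy
        for _ in range(iterations):
            training = []
            for s in sample_states():
                # Policy-rollout improvement: choose the action whose rollout under
                # the current policy has the lowest estimated cost-to-go.
                best = min(actions(s), key=lambda a: rollout_value(s, a, policy))
                training.append((s, best))
            policy = learn_policy(training)   # supervised learning in policy space
        return policy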