If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Ivankovic, Franc (Australian National University and NICTA) | Haslum, Patrik (Australian National University and NICTA) | Thiebaux, Sylvie (Australian National University and NICTA) | Shivashankar, Vikas (University of Maryland) | Nau, Dana (University of Maryland)
Automating the operations of infrastructure networks such as energy grids and oil pipelines requires a range of planning and optimisation technologies. However, current planners face significant challenges in responding to this need. Notably, they are unable to model and reason about the global numerical state constraints necessary to capture flows and similar physical phenomena occurring in these networks. A single discrete control action can affect the flow throughout the network in a way that may depend on the entire network topology. Determining whether preconditions, goals and invariant conditions are satisfied requires solving a system of numerical constraints after each action application. This paper extends domain-independent optimal planning to this kind of reasoning. We present extensions of the formalism, relaxed plans, and heuristics, as well as new search variants and experimental results on two problem domains.
In classical planning, the polynomial-time computability of propositional delete-free planning (planning with only positive effects and preconditions) led to the highly successful Relaxed Graphplan heuristic. We present a hierarchy of new computational complexity results for different classes of propositional delete-free HTN planning, with two main results: We prove that finding a plan for the delete-relaxation of a propositional HTN problem is NP-complete; hence unless P=NP, there is no directly analogous Graphplan heuristic for HTN planning. However, a further relaxation of HTN planning (delete-free HTN planning with task insertion) is polynomial-time computable. Thus, it may be possible to use this or other relaxations to develop search heuristics for HTN planning.
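The delete-relaxation idea behind Graphplan-style heuristics can be illustrated for the classical (non-HTN) case: when delete effects are ignored, the set of reachable facts can be computed by a simple polynomial-time fixpoint. The facts and actions below are invented for illustration, not taken from the paper.

```python
def relaxed_reachable(init, goal, actions):
    """Delete-free reachability check.

    actions: list of (preconditions, add_effects) pairs of frozensets.
    Returns True if every goal fact is reachable when deletes are ignored.
    """
    facts = set(init)
    changed = True
    while changed:
        changed = False
        for pre, add in actions:
            # An action is applicable once its preconditions are reached;
            # with no deletes, reached facts are never lost.
            if pre <= facts and not add <= facts:
                facts |= add
                changed = True
    return set(goal) <= facts

# Hypothetical two-step domain: move a -> b, then pick up a key at b.
acts = [
    (frozenset({"at-a"}), frozenset({"at-b"})),
    (frozenset({"at-b"}), frozenset({"have-key"})),
]
print(relaxed_reachable({"at-a"}, {"have-key"}, acts))  # True
```

The NP-completeness result above shows that the corresponding relaxation for HTN problems admits no such polynomial fixpoint unless P=NP.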
Shivashankar, Vikas (University of Maryland at College Park) | Alford, Ron (University of Maryland at College Park) | Kuter, Ugur (Smart Information Flow Technologies, LLC Minneapolis) | Nau, Dana (University of Maryland at College Park)
One drawback of Hierarchical Task Network (HTN) planning is the difficulty of providing complete domain knowledge, i.e., a complete and correct set of HTN methods for every task. To provide a principled way to overcome this difficulty, we define a simple formalism that extends classical planning to include problem decomposition using methods, and a planning algorithm based on this formalism. In our formalism, the methods specify ways to achieve goals (rather than tasks as in conventional HTN planning), and goals may be achieved even when no methods are available. Our planning algorithm, GoDeL (Goal Decomposition with Landmarks), is sound and complete irrespective of whether the domain knowledge (i.e., the set of methods given to the planner) is complete. By comparing GoDeL's performance with varying amounts of domain knowledge across three benchmark planning domains, we show experimentally that (1) GoDeL works correctly with partial planning knowledge, (2) GoDeL's performance improves as more planning knowledge is given, and (3) when given full domain knowledge, GoDeL matches the performance of a state-of-the-art hierarchical planner.
Chi, Ping-Chung | Nau, Dana
The discovery that the minimax decision rule performs poorly in some games has sparked interest in possible alternatives to minimax. Until recently, the only games in which minimax was known to perform poorly were games that were mainly of theoretical interest. However, this paper reports results showing poor performance of minimax in a more common game called kalah. For the kalah games tested, a non-minimax decision rule called the product rule performs significantly better than minimax. This paper also discusses a possible way to predict whether minimax will perform well in a game when compared to product. A parameter called the rate of heuristic flaw (rhf) has been found to correlate positively with the performance of product against minimax. Both analytical and experimental results are given that appear to support the predictive power of rhf.
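For readers unfamiliar with the product rule, the following sketch contrasts the two backup rules on a small game tree whose leaf values are heuristic estimates of the probability of a win for the player to move at the root. The tree shape and values are invented for illustration, not taken from the kalah experiments.

```python
def backup(node, rule, maximizing=True):
    """Back up leaf values with either the minimax or the product rule.

    Leaves are floats interpreted as P(win for the root player).
    """
    if isinstance(node, (int, float)):
        return float(node)
    vals = [backup(c, rule, not maximizing) for c in node]
    if rule == "minimax":
        return max(vals) if maximizing else min(vals)
    # Product rule: treat child values as independent win probabilities.
    if maximizing:
        # The root player loses only if every move loses.
        p_loss = 1.0
        for v in vals:
            p_loss *= (1.0 - v)
        return 1.0 - p_loss
    # At opponent nodes, the root player must survive every reply.
    p_win = 1.0
    for v in vals:
        p_win *= v
    return p_win

# Root is a max node; its children are min (opponent) nodes.
tree = [[0.6, 0.6], [0.95, 0.5]]
print(backup(tree, "minimax"))  # 0.6
print(backup(tree, "product"))  # 0.664
```

Note that on this tree the two rules rank the root's children differently: minimax prefers the first child (min 0.6 vs. 0.5), while the product rule prefers the second (0.475 vs. 0.36), which is the kind of disagreement the rhf parameter aims to predict.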
We describe a heuristic search technique for multi-agent pursuit-evasion games in partially observable Euclidean space where a team of trackers attempt to minimize their uncertainty about an evasive target. Agents' movement and observation capabilities are restricted by polygonal obstacles, while each agent's knowledge of the other agents is limited to direct observation or periodic updates from team members. Our polynomial-time algorithm is able to generate strategies for games in continuous two-dimensional Euclidean space, an improvement over past algorithms that were only applicable to simple gridworld domains. We demonstrate that our algorithm is tolerant of interruptions in communication between agents, continuing to generate good strategies despite long periods of time where agents are unable to communicate directly. Experiments also show that our technique generates effective strategies quickly, with decision times of less than a second for reasonably sized domains with six or more agents.
We consider the problem of planning, on the fly, a path whereby a robotic vehicle will cover every point in an initially unknown spatial environment. We describe four strategies (Iterated WaveFront, Greedy-Scan, Delayed Greedy-Scan and Closest-First Scan) for generating cost-effective coverage plans in real time for unknown environments. We give theorems showing the correctness of our planning strategies. Our experiments demonstrate that some of these strategies work significantly better than others, and that the best ones work very well; e.g., in environments having an average of 64,000 locations for the robot to cover, the best strategy returned plans with less than 6% redundant coverage, and took only an average of 0.1 milliseconds per action.
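As a toy illustration of closest-first style coverage (greatly simplified: the strategies above operate online in an unknown environment, whereas this sketch assumes a fully known grid), a robot can repeatedly move via breadth-first search to the nearest uncovered free cell:

```python
from collections import deque

def cover(grid, start):
    """Closest-first coverage of a known grid.

    grid: list of equal-length strings, '#' = obstacle, '.' = free cell.
    Returns the sequence of cells visited, starting at `start`.
    Assumes every free cell is reachable from `start`.
    """
    rows, cols = len(grid), len(grid[0])
    free = {(r, c) for r in range(rows) for c in range(cols)
            if grid[r][c] == '.'}
    covered, path, pos = {start}, [start], start
    while covered != free:
        # BFS from the current position to the nearest uncovered cell.
        parent, frontier, target = {pos: None}, deque([pos]), None
        while frontier and target is None:
            r, c = frontier.popleft()
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nxt in free and nxt not in parent:
                    parent[nxt] = (r, c)
                    if nxt not in covered:
                        target = nxt
                        break
                    frontier.append(nxt)
        # Walk the BFS tree back to the robot, recording each step.
        seg, node = [], target
        while node != pos:
            seg.append(node)
            node = parent[node]
        for step in reversed(seg):
            path.append(step)
            covered.add(step)
        pos = target
    return path

path = cover(["..", ".."], (0, 0))
print(path)
```

The strategies in the paper additionally interleave sensing with planning and bound redundant re-visits, which this known-environment sketch does not attempt.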
We provide an algorithm to compute near-optimal strategies for the Cultaptation social learning game. We show that the strategies produced by our algorithm are near-optimal, both in their expected utility and their expected reproductive success. We show how our algorithm can be used to provide insight into evolutionary conditions under which learning is best done by copying others, versus the conditions under which learning is best done by trial-and-error.
We show how to translate HTN domain descriptions (if they satisfy certain restrictions) into PDDL so that they can be used by classical planners. We provide correctness results for our translation algorithm, and show that it runs in linear time and space. We also show that even small and incomplete amounts of HTN knowledge, when translated into PDDL using our algorithm, can greatly improve a classical planner's performance. In experiments on several thousand randomly generated problems in three different planning domains, such knowledge sped up the well-known Fast-Forward planner by several orders of magnitude, and enabled it to solve much larger problems than it could otherwise solve.
Alford, Ron (University of Maryland, College Park) | Kuter, Ugur (University of Maryland, College Park) | Nau, Dana (University of Maryland, College Park) | Reisner, Elnatan (University of Maryland, College Park) | Goldman, Robert (Smart Information Flow Technologies)
In our experiments with four well-known systems for solving partially observable planning problems (Contingent-FF, MBP, PKS, and POND), we were greatly surprised to find that they could only solve problems with a small number of contingencies. Apparently they were repeatedly trying to solve many combinations of contingencies at once, thus unnecessarily using up huge amounts of time and space. This difficulty can be alleviated if the planner can maintain focus on the contingency that it is currently trying to solve. We provide a way to accomplish this by incorporating focusing information directly into the planning domain's operators, without any need to modify the planning algorithm itself. This enables the above planners to solve larger problems and to solve them much more quickly. We also provide a new planner, FOCUS, in which focusing information can be provided as a separate input. This provides even better performance by allowing the planner to utilize more extensive focusing information.
The 2003 International Conference on Automated Planning and Scheduling (ICAPS-03) was held 9 to 13 June 2003 in Trento, Italy. It was chaired by Enrico Giunchiglia (University of Genova), Nicola Muscettola (NASA Ames), and Dana Nau (University of Maryland). Piergiorgio Bertoli and Marco Benedetti (both from ITC-IRST) were the local chair and the workshop-tutorial coordination chair, respectively.