If you are looking for an answer to the question "What is Artificial Intelligence?" and you have only a minute, then here is the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Planning as satisfiability (SAT-Plan) is one of the most effective approaches to optimal planning and has been shown to work well on problems in many different domains. However, the potential of the SAT-Plan strategy has not been fully exploited. Following the SAT-Plan paradigm, in this paper we formulate a STRIPS planning problem as a maximum satisfiability (max-SAT) problem and develop a general two-phase algorithm for planning. Our method first represents a planning problem as a combinatorial optimization problem: a SAT instance combined with an objective function to be maximized. It then uses goal-oriented variable selection to focus the search on the goal, and an accumulative clause-learning strategy to avoid learning the same clause multiple times. We integrate our new methods with SATPLAN04, and we evaluate and demonstrate the efficacy of our new formulation and algorithm against SATPLAN04 on many well-known real-world benchmark planning problems. Our experimental results show that our algorithm significantly outperforms SATPLAN04 on most of these problems, sometimes with an order-of-magnitude improvement in running time.
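The max-SAT formulation above can be illustrated with a deliberately tiny sketch (our own toy encoding and brute-force solver, not SATPLAN04's): hard clauses capture plan constraints such as action mutexes and goal support, while a weighted soft clause encodes the objective; the solver returns the assignment that satisfies every hard clause while maximizing the total satisfied soft weight.

```python
from itertools import product

def max_sat(variables, hard, soft):
    """Brute-force weighted max-SAT (illustrative only). A clause is a
    list of (variable, polarity) literals; hard clauses must all hold,
    and the satisfied-soft-clause weight is maximized."""
    def sat(clause, assign):
        return any(assign[v] == pol for v, pol in clause)
    best, best_score = None, -1
    for values in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, values))
        if not all(sat(c, assign) for c in hard):
            continue
        score = sum(w for c, w in soft if sat(c, assign))
        if score > best_score:
            best, best_score = assign, score
    return best, best_score

# Toy 1-step planning fragment: goal g needs achiever a1, while a1 and
# a2 are mutually exclusive; the soft clause rewards reaching the goal.
hard = [[('g', False), ('a1', True)],    # (not g) or a1
        [('a1', False), ('a2', False)]]  # not (a1 and a2)
soft = [([('g', True)], 10)]             # weight 10 for achieving g
assign, score = max_sat(['a1', 'a2', 'g'], hard, soft)
```

A real encoding is of course solved with a dedicated max-SAT engine rather than by enumeration; the point here is only the shape of the formulation.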
In this paper, we present a motion planning framework for a fully deployed autonomous unmanned aerial vehicle (UAV) that integrates two sampling-based motion planning techniques, Probabilistic Roadmaps and Rapidly-exploring Random Trees. Additionally, we incorporate dynamic reconfigurability into the framework by integrating the motion planners with the control kernel of the UAV in a novel manner that requires little modification to the original algorithms. The framework has been verified through simulation and in actual flight. Empirical results show that these techniques, used within such a framework, offer a surprisingly efficient method for dynamically reconfiguring a motion plan in response to unforeseen contingencies that may arise during the execution of a plan. The framework is generic and can be used on additional platforms.
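For readers unfamiliar with the second of these techniques, here is a minimal textbook-style 2-D RRT sketch (a generic illustration with a hypothetical obstacle callback `is_free`, not the integrated framework described above):

```python
import random, math

def rrt(start, goal, is_free, steps=2000, step_size=0.5, goal_bias=0.1):
    """Grow a tree from `start`, steering each iteration toward a random
    sample (or, with probability `goal_bias`, toward the goal), and stop
    once a collision-free node lands within `step_size` of the goal."""
    nodes, parent = [start], {0: None}
    for _ in range(steps):
        target = goal if random.random() < goal_bias else \
            (random.uniform(0, 10), random.uniform(0, 10))
        # Extend the nearest tree node one step toward the target.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], target))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), target)
        if d == 0:
            continue
        t = min(1.0, step_size / d)
        new = (nx + t * (target[0] - nx), ny + t * (target[1] - ny))
        if not is_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < step_size:
            # Walk parent pointers back to the root to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

The appeal of the method for dynamic reconfiguration is visible even here: replanning after a contingency is just a fresh call to `rrt` from the current state.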
There is great interest in the planning community in applying the advances already achieved in the area to real applications. This has led the community to focus on Knowledge Engineering (KE) for modeling planning problems and domains. In this paper, we propose the use of the UML for Planning approach, denoted UML.P, during the planning domain modeling process. We also discuss the application of UML.P to a real problem, namely the car sequencing problem in an assembly line. This modeling experience, using a classical manufacturing problem, provides some insights and considerations that can contribute to a general KE process for planning.
Many scheduling problems are posed as optimization problems in which the goal is to find a feasible schedule that maximizes the utilization of some resource. In some domains it is also necessary to consider the quality of the resulting schedule. In most research these two quantities are treated independently. This paper introduces a real-world problem in which radar tasks must be allocated to track objects in space. We explore the tradeoff between off-line task resource utilization and a measure of task quality that correlates with whether tasks are actually successfully executed. We develop two general types of algorithms that differ in the way they reason about quality, and we explore the tradeoff between high-quality solutions and solutions with high resource utilization.
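The tension between the two objectives can be made concrete with a small sketch (our own illustrative greedy packer on a single resource, not the algorithms developed in the paper): a hypothetical blend parameter `alpha` interpolates between scoring tasks by the resource time they consume and by their quality.

```python
def greedy_schedule(tasks, capacity, alpha):
    """Greedily pack tasks onto one resource by a blended score.
    alpha=1 ranks tasks by duration (favoring utilization);
    alpha=0 ranks them by quality. Illustrative sketch only."""
    order = sorted(
        tasks,
        key=lambda t: alpha * t['duration'] + (1 - alpha) * t['quality'],
        reverse=True)
    chosen, used = [], 0
    for t in order:
        if used + t['duration'] <= capacity:  # skip tasks that overflow
            chosen.append(t['name'])
            used += t['duration']
    return chosen, used / capacity
```

Sweeping `alpha` from 0 to 1 on the same task set traces out exactly the kind of utilization-versus-quality frontier the abstract describes.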
We view dynamic scheduling as a sequential decision problem. First, we introduce a generalized planning operator, the stochastic task model (STM), which predicts the effects of executing a particular task on state, time, and reward using a general procedural format (a pure stochastic function). Second, we show that effective planning under uncertainty can be achieved by combining adaptive-horizon stochastic planning with reinforcement learning (RL) in a hybrid system. The benefits of the hybrid approach are evaluated on a repeatable job shop scheduling task.
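To make the RL half of such a hybrid concrete, the sketch below runs tabular Q-learning over which task to dispatch next, with an arbitrary procedure `step` standing in for a stochastic task model: like the STM, it is a procedural (possibly stochastic) function returning the effect of executing a task. This is our own illustrative simplification, not the paper's STM or its adaptive-horizon planner.

```python
import random

def q_learning(tasks, step, episodes=500, alpha=0.2, gamma=0.95, eps=0.1):
    """Tabular Q-learning for task dispatching. `step(state, task)` returns
    (next_state, reward, done); episodes start in state 0."""
    Q = {}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection over the task set.
            if random.random() < eps:
                a = random.choice(tasks)
            else:
                a = max(tasks, key=lambda t: Q.get((s, t), 0.0))
            s2, r, done = step(s, a)
            best_next = max(Q.get((s2, t), 0.0) for t in tasks)
            q = Q.get((s, a), 0.0)
            Q[(s, a)] = q + alpha * (
                r + gamma * (0.0 if done else best_next) - q)
            s = s2
    return Q
```

In the hybrid scheme the abstract describes, a learned table (or function) like `Q` would supply value estimates beyond the planner's adaptive horizon.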
We present an architecture that provides a robust, scalable, and flexible software framework for planning and scheduling systems through the use of standardized industrial-strength middleware and multi-agent technology. It utilizes knowledge-based components that dynamically perform and verify the system's configuration. The system is based on a proper formal account of hybrid planning, the integration of HTN and POCL planning, which makes it possible to decouple flaw detection, modification computation, and search control. By adopting this methodology, planning and scheduling capabilities can easily be combined by orchestrating the respective elementary modules and strategies without jeopardizing system consistency through interfering module activity.
This paper presents an adaptation of the standard quantum search technique to enable its application within dynamic programming, in order to optimise a Markov decision process. This is applicable to problems arising from typical planning domains that are intractable under classical computation due to computational complexity. The proposed method is able to balance state-space exploration with greedy selection of local minima by setting appropriate thresholds via quantum counting. A quantum walk is used to propagate through a graphical representation of the state space.
We investigate the problem of temporal planning with concurrent actions having stochastic durations, especially in the context of extended-state-space based planners. The problem is challenging because stochastic durations lead to an explosion in the space of possible decision-epochs, which exacerbates the familiar challenge of growth in executable action combinations caused by concurrency.
Researchers often express probabilistic planning problems as Markov decision process models and then maximize the expected total reward. However, it is often rational to maximize the expected utility of the total reward for a given nonlinear utility function, for example, to model attitudes towards risk in high-stake decision situations. In this paper, we give an overview of basic techniques for probabilistic planning with nonlinear utility functions, including functional value iteration and a backward induction method for one-switch utility functions.
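As one concrete instance of a nonlinear utility, the sketch below performs backward induction for the exponential utility U(w) = exp(lam * w), under which the Bellman recursion becomes multiplicative. This is a standard risk-sensitive criterion and a simpler relative of the methods surveyed above, not the paper's one-switch technique; the toy MDP and all names are our own illustration. Actions are compared via their certainty equivalents so that both risk-seeking (lam > 0) and risk-averse (lam < 0) settings work.

```python
import math

def risk_sensitive_vi(S, A, P, R, horizon, lam):
    """Finite-horizon backward induction optimizing the certainty
    equivalent (1/lam) * log E[exp(lam * total_reward)], lam != 0.
    P[s][a]: list of (prob, next_state); R[s][a]: immediate reward."""
    V = {s: 1.0 for s in S}          # E[exp(lam * 0)] at the horizon
    policy = {}
    for _ in range(horizon):
        newV = {}
        for s in S:
            best_ce, best_a, best_q = -math.inf, None, None
            for a in A:
                # Multiplicative Bellman backup for exponential utility.
                q = math.exp(lam * R[s][a]) * sum(p * V[s2]
                                                  for p, s2 in P[s][a])
                ce = math.log(q) / lam   # certainty equivalent
                if ce > best_ce:
                    best_ce, best_a, best_q = ce, a, q
            newV[s], policy[s] = best_q, best_a
        V = newV
    return V, policy

# Toy MDP: from 's', action 'a' earns a certain reward of 1; action 'b'
# earns 2 or 0 with equal probability (via transient states 'hi'/'lo').
S, A = ['s', 'hi', 'lo', 'T'], ['a', 'b']
P = {'s': {'a': [(1.0, 'T')], 'b': [(0.5, 'hi'), (0.5, 'lo')]},
     'hi': {'a': [(1.0, 'T')], 'b': [(1.0, 'T')]},
     'lo': {'a': [(1.0, 'T')], 'b': [(1.0, 'T')]},
     'T': {'a': [(1.0, 'T')], 'b': [(1.0, 'T')]}}
R = {'s': {'a': 1, 'b': 0}, 'hi': {'a': 2, 'b': 2},
     'lo': {'a': 0, 'b': 0}, 'T': {'a': 0, 'b': 0}}
_, seeking = risk_sensitive_vi(S, A, P, R, horizon=2, lam=1.0)
_, averse = risk_sensitive_vi(S, A, P, R, horizon=2, lam=-1.0)
```

Both optimal policies maximize expected reward in the linear-utility limit; the sign of `lam` alone flips the choice between the certain reward and the gamble, which is exactly the risk attitude the abstract refers to.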