Results


Distributed Continual Planning for Unmanned Ground Vehicle Teams

AI Magazine

Some application domains highlight the importance of distributed continual planning concepts; coordinating teams of unmanned ground vehicles in dynamic environments is an example of such a domain. In this article, I illustrate the ideas in, and promises of, distributed continual planning by showing how acquiring and distributing operator intent among multiple semiautonomous vehicles supports ongoing, cooperative mission elaboration and revision.


A Survey of Research in Distributed, Continual Planning

AI Magazine

Complex, real-world domains require rethinking traditional approaches to AI planning. Planning and executing the resulting plans in a dynamic environment implies a continual approach in which planning and execution are interleaved, uncertainty in the current and projected world state is recognized and handled appropriately, and replanning can be performed when the situation changes or planned actions fail. Furthermore, complex planning and execution problems may require multiple computational agents and human planners to collaborate on a solution. In this article, we describe a new paradigm for planning in complex, dynamic environments, which we term distributed, continual planning (DCP). We argue that developing DCP systems will be necessary for planning applications to be successful in these environments. We give a historical overview of research leading to the current state of the art in DCP and describe research in distributed and continual planning.


Practically Coordinating

AI Magazine

To coordinate, intelligent agents might need to know something about themselves, about each other, about how others view themselves and others, about how others think others view themselves and others, and so on. Taken to an extreme, the amount of knowledge an agent might possess to coordinate its interactions with others might outstrip the agent's limited reasoning capacity (its available time, memory, and so on). Much of the work in studying and building multiagent systems has thus been devoted to developing practical techniques for achieving coordination, typically by limiting the knowledge available to, or necessary for, agents. This article categorizes techniques for keeping agents suitably ignorant so that they can practically coordinate and gives a selective survey of examples of these techniques for illustration.

