Durfee, Edmund H.


Multiagent Metareasoning through Organizational Design

AAAI Conferences

We formulate an approach to multiagent metareasoning that uses organizational design to focus each agent's reasoning on the aspects of its local problem that let it make the most worthwhile contributions to joint behavior. By employing the decentralized Markov decision process framework, we characterize an organizational design problem that explicitly considers the quantitative impact that a design has on both the quality of the agents' behaviors and their reasoning costs. We describe an automated organizational design process that can approximately solve our organizational design problem via incremental search, and present techniques that efficiently estimate the incremental impact of a candidate organizational influence. Our empirical evaluation confirms that our process generates organizational designs that impart a desired metareasoning regime upon the agents.
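As a rough illustration of the incremental-search idea described in this abstract (not the paper's implementation), the Python sketch below greedily adds candidate organizational influences while their estimated gain in joint-behavior quality outweighs the estimated increase in the agents' reasoning costs. The candidate representation and the estimator functions (estimate_quality, estimate_reasoning_cost) are hypothetical placeholders.

```python
def design_organization(candidates, estimate_quality, estimate_reasoning_cost):
    """Hypothetical sketch: greedy incremental search over organizational influences.

    candidates: set of candidate influences (representation assumed)
    estimate_quality, estimate_reasoning_cost: caller-supplied estimators that
        score a trial design (both are illustrative assumptions).
    """
    design = set()
    improved = True
    while improved:
        improved = False
        best, best_gain = None, 0.0
        for c in candidates - design:
            trial = design | {c}
            # Net benefit: estimated quality gain minus estimated added reasoning cost.
            gain = (estimate_quality(trial) - estimate_quality(design)) - (
                estimate_reasoning_cost(trial) - estimate_reasoning_cost(design)
            )
            if gain > best_gain:
                best, best_gain = c, gain
        if best is not None:
            design.add(best)
            improved = True
    return design
```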


Plan Development using Local Probabilistic Models

arXiv.org Artificial Intelligence

Approximate models of world state transitions are necessary when building plans for complex systems operating in dynamic environments. External event probabilities can depend on state feature values as well as the time spent in that particular state. We assign temporally dependent probability functions to state transitions. These functions are used to locally compute state probabilities, which are then used to select highly probable goal paths and eliminate improbable states. This probabilistic model has been implemented in the Cooperative Intelligent Real-time Control Architecture (CIRCA), which combines an AI planner with a separate real-time system such that plans are developed, scheduled, and executed with real-time guarantees. We present flight simulation tests that demonstrate how our probabilistic model may improve CIRCA performance.
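As a rough illustration of the model described above (not CIRCA's actual code), the sketch below treats each external event's transition probability as a function of dwell time in the source state, propagates local state probabilities through those functions, and prunes successor states whose probability falls below a threshold. The exponential-hazard form, the threshold value, and all identifiers are illustrative assumptions.

```python
import math

def transition_probability(rate, dwell_time):
    """Probability that an external event with the given rate has occurred
    after spending 'dwell_time' in the current state (assumed exponential hazard)."""
    return 1.0 - math.exp(-rate * dwell_time)

def propagate(states, transitions, dwell_time, threshold=1e-3):
    """Compute successor-state probabilities locally and drop improbable states.

    states:      dict mapping state -> current probability
    transitions: dict mapping (src, dst) -> event rate (assumed representation)
    """
    successors = {}
    for (src, dst), rate in transitions.items():
        p = states.get(src, 0.0) * transition_probability(rate, dwell_time)
        successors[dst] = successors.get(dst, 0.0) + p
    # Eliminate improbable states so only highly probable paths are expanded.
    return {s: p for s, p in successors.items() if p >= threshold}
```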


Distributed Continual Planning for Unmanned Ground Vehicle Teams

AI Magazine

Some application domains highlight the importance of distributed continual planning concepts; coordinating teams of unmanned ground vehicles in dynamic environments is an example of such a domain. In this article, I illustrate the ideas in, and promises of, distributed continual planning by showing how acquiring and distributing operator intent among multiple semiautonomous vehicles supports ongoing, cooperative mission elaboration and revision.


A Survey of Research in Distributed, Continual Planning

AI Magazine

Complex, real-world domains require rethinking traditional approaches to AI planning. Planning and executing the resulting plans in a dynamic environment implies a continual approach in which planning and execution are interleaved, uncertainty in the current and projected world state is recognized and handled appropriately, and replanning can be performed when the situation changes or planned actions fail. Furthermore, complex planning and execution problems may require multiple computational agents and human planners to collaborate on a solution. In this article, we describe a new paradigm for planning in complex, dynamic environments, which we term distributed, continual planning (DCP). We argue that developing DCP systems will be necessary for planning applications to be successful in these environments. We give a historical overview of research leading to the current state of the art in DCP and describe research in distributed and continual planning.


Practically Coordinating

AI Magazine

To coordinate, intelligent agents might need to know something about themselves, about each other, about how others view themselves and others, about how others think others view themselves and others, and so on. Taken to an extreme, the amount of knowledge an agent might possess to coordinate its interactions with others might outstrip the agent's limited reasoning capacity (its available time, memory, and so on). Much of the work in studying and building multiagent systems has thus been devoted to developing practical techniques for achieving coordination, typically by limiting the knowledge available to, or necessary for, agents. This article categorizes techniques for keeping agents suitably ignorant so that they can practically coordinate and gives a selective survey of examples of these techniques for illustration.

