Collaborating Authors

Competition of Distributed and Multiagent Planners (CoDMAP)

AAAI Conferences

As part of the workshop on Distributed and Multiagent Planning (DMAP) at the International Conference on Automated Planning and Scheduling (ICAPS) 2015, we organized a competition in distributed and multiagent planning. The main aims of the competition were to consolidate the planners in terms of input format; to promote the development of multiagent planners both inside and outside the multiagent research community; and to provide a proof of concept for a potential future multiagent planning track of the International Planning Competition (IPC). In this paper we summarize the course and highlights of the competition.

An AI Planning-Based Approach to the Multi-Agent Plan Recognition Problem (Preliminary Report)

AAAI Conferences

Plan Recognition is the problem of inferring the goals and plans of an agent given a set of observations. In Multi-Agent Plan Recognition (MAPR) the task is extended to inferring the goals and plans of multiple agents. Previous MAPR approaches have largely focused on recognizing team structures and behaviors, given perfect and complete observations of the actions of individual agents. However, in many real-world applications of MAPR, observations are unreliable or missing; they are often over properties of the world rather than actions; and the observations that are made may not be explainable by the agents' goals and plans. Moreover, the actions of the agents could be durative or concurrent. In this paper, we address the problem of MAPR with temporal actions and with observations that can be unreliable, missing or unexplainable. To this end, we propose a multi-step compilation technique that enables the use of AI planning for the computation of the posterior probabilities of the possible goals. In addition, we propose a set of novel benchmarks that enable a standard evaluation of solutions that address the MAPR problem with temporal actions and such observations. We present results of an experimental evaluation on this set of benchmarks, using several temporal and diverse planners.
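The posterior computation described above can be illustrated with a simple Boltzmann-style model in the spirit of cost-based goal recognition. This is only an illustrative sketch, not the paper's multi-step compilation: the function name `goal_posteriors`, the cost-difference likelihood, and the temperature parameter `beta` are all assumptions for illustration. The idea is that a goal whose cheapest plan changes little when forced to comply with the observations is more likely.

```python
import math

def goal_posteriors(goal_costs, prior=None, beta=1.0):
    """Hypothetical posterior over goals from plan costs.

    goal_costs: dict mapping goal -> (cost complying with observations,
                                      cost ignoring observations).
    A smaller cost difference means the observations fit the goal better,
    so that goal receives a higher posterior weight.
    """
    goals = list(goal_costs)
    if prior is None:
        prior = {g: 1.0 / len(goals) for g in goals}  # uniform prior
    # Likelihood model (an assumption): exp(-beta * cost difference).
    weights = {
        g: prior[g] * math.exp(-beta * (goal_costs[g][0] - goal_costs[g][1]))
        for g in goals
    }
    z = sum(weights.values())  # normalization constant
    return {g: w / z for g, w in weights.items()}
```

In practice the two costs per goal would come from calls to a planner on compiled problem instances; here they are simply given as inputs.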

Optimized Directed Roadmap Graph for Multi-Agent Path Finding Using Stochastic Gradient Descent

Artificial Intelligence

We present a novel approach called Optimized Directed Roadmap Graph (ODRM). It is a method to build a directed roadmap graph that allows for collision avoidance in multi-robot navigation. This is a highly relevant problem, for example for industrial automated guided vehicles. The core idea of ODRM is that a directed roadmap can encode inherent properties of the environment that are useful when agents have to avoid each other in that same environment. Like Probabilistic Roadmaps (PRMs), ODRM's first step is generating samples from C-space. In a second step, ODRM optimizes vertex positions and edge directions by Stochastic Gradient Descent (SGD). This leads to emergent properties such as edges parallel to walls and patterns similar to two-lane streets or roundabouts. Agents can then navigate on this graph by searching their paths independently and resolving agent-agent collisions at run-time. Compared to a non-optimized graph, significantly fewer agent-agent collisions occur on graphs generated by ODRM. We evaluate our roadmap with both centralized and decentralized planners. Our experiments show that with ODRM even a simple centralized planner can solve problems with high numbers of agents that other multi-agent planners cannot solve. Additionally, we use simulated robots with decentralized planners and online collision avoidance to show that agents are considerably faster on our roadmap than on standard grid maps.
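The second step, optimizing vertex positions by SGD, can be sketched with a toy objective. This is not the paper's actual loss over edge directions, only a minimal assumed example: pull connected vertices together (short edges) while pushing them apart once they get closer than a separation threshold `sep`; both the function name `sgd_optimize` and the objective are illustrative assumptions.

```python
import math
import random

def sgd_optimize(vertices, edges, iters=1000, lr=0.01, sep=0.2, seed=0):
    """Toy SGD over roadmap vertex positions (illustrative only).

    vertices: list of mutable [x, y] positions.
    edges: list of (i, j) index pairs.
    Each step samples one edge and nudges its endpoints: attractive
    while the edge is longer than `sep`, repulsive when shorter.
    """
    rng = random.Random(seed)
    for _ in range(iters):
        i, j = rng.choice(edges)  # stochastic: one edge per step
        (xi, yi), (xj, yj) = vertices[i], vertices[j]
        dx, dy = xj - xi, yj - yi
        d = math.hypot(dx, dy) or 1e-9  # avoid division by zero
        g = lr * (1.0 if d > sep else -1.0)
        vertices[i][0] += g * dx / d
        vertices[i][1] += g * dy / d
        vertices[j][0] -= g * dx / d
        vertices[j][1] -= g * dy / d
    return vertices
```

The actual ODRM objective additionally optimizes edge directions, which is what produces the two-lane-street patterns; a positional toy like this only shows the stochastic per-sample update structure.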

Decentralized Multi-agent Plan Repair in Dynamic Environments

Artificial Intelligence

Achieving joint objectives by teams of cooperative planning agents requires significant coordination and communication effort. For a single-agent system facing a plan failure in a dynamic environment, attempts to repair the failed plan arguably bring no straightforward benefit in terms of time complexity. In multi-agent settings, however, communication complexity can be of much higher importance; a high communication overhead may even be prohibitive in certain domains. We hypothesize that in decentralized systems, where coordination is enforced to achieve joint objectives, attempts to repair failed multi-agent plans should lead to lower communication overhead than replanning from scratch. The contribution of this paper is threefold. Firstly, we formally introduce the multi-agent plan repair problem and state the core hypothesis underlying our work. Secondly, we propose three algorithms for multi-agent plan repair that reduce the problem to specialized instances of the multi-agent planning problem. Finally, we present results of an experimental validation confirming the core hypothesis of the paper.
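A common family of repair strategies, as opposed to replanning from scratch, searches from the failure state back to some state on the old plan's remaining trajectory so that the plan's suffix can be reused. The sketch below is a generic "back on track" reachability search under assumed interfaces (`back_on_track`, a successor function `succ`), not one of the paper's three algorithms.

```python
from collections import deque

def back_on_track(fail_state, plan_suffix_states, succ, max_depth=20):
    """Illustrative repair search: BFS from the failure state to the
    nearest state the old plan would still visit.

    plan_suffix_states: states the original plan visits after the failure.
    succ: function state -> iterable of (action, next_state).
    Returns (bridge_actions, index into plan_suffix_states), or None
    if no reconnection is found within max_depth steps.
    """
    targets = {s: k for k, s in enumerate(plan_suffix_states)}
    frontier = deque([(fail_state, [])])
    seen = {fail_state}
    while frontier:
        s, path = frontier.popleft()
        if s in targets:
            return path, targets[s]  # reuse old suffix from this index
        if len(path) >= max_depth:
            continue
        for a, s2 in succ(s):
            if s2 not in seen:
                seen.add(s2)
                frontier.append((s2, path + [a]))
    return None
```

Intuitively, if a short bridge exists, the agents only need to communicate the bridge actions rather than renegotiate a whole new joint plan, which is the intuition behind the paper's hypothesis about communication overhead.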

Automated Verification of Social Law Robustness in STRIPS

AAAI Conferences

Agents operating in a multi-agent environment must consider not just their own actions but also those of the other agents in the system. Artificial social systems are a well-known means of coordinating a set of agents without requiring centralized planning or online negotiation between agents. An artificial social system enacts a social law that restricts the agents from performing some actions under some circumstances. A good social law prevents the agents from interfering with each other, but does not prevent them from achieving their goals. However, designing good social laws, or even checking whether a proposed social law is good, are hard problems. In this paper, we take a first step towards automating these processes by formulating criteria for good social laws in a multi-agent planning framework. We then describe an automated technique for verifying whether a proposed social law meets these criteria, based on a compilation to classical planning.
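One half of the "good social law" criterion, that the restriction must not prevent an agent from achieving its goal, can be illustrated with a plain reachability check. This is a simplified toy, not the paper's compilation to classical planning; the function name `achievable_under_law` and the representation of actions as state-transition callables are assumptions for illustration.

```python
from collections import deque

def achievable_under_law(init, goal, actions, law):
    """Check that one agent can still reach its goal when the social law
    forbids some actions (simplified reachability, illustrative only).

    actions: dict mapping action name -> function state -> next state
             (or None when the action is inapplicable).
    law: set of forbidden action names.
    """
    allowed = [fn for name, fn in actions.items() if name not in law]
    frontier, seen = deque([init]), {init}
    while frontier:
        s = frontier.popleft()
        if s == goal:
            return True
        for fn in allowed:
            s2 = fn(s)
            if s2 is not None and s2 not in seen:
                seen.add(s2)
                frontier.append(s2)
    return False
```

The full verification problem is harder than this per-agent check, since robustness must also rule out interference between agents executing their individual plans concurrently, which is what the classical-planning compilation addresses.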