Deploying fleets of autonomous robots in real-world applications requires addressing three problems: motion planning, coordination, and control. Application-specific features of the environment and robots often narrow down the possible motion planning and control methods that can be used. This paper proposes a lightweight coordination method that implements a high-level controller for a fleet of potentially heterogeneous robots. Very few assumptions are made about robot controllers, which are required only to be able to accept set-point updates and to report their current state. The approach can be used with any motion planning method for computing kinematically feasible paths. Coordination uses heuristics to update priorities while robots are in motion, and a simple model of robot dynamics to guarantee dynamic feasibility. The approach avoids a priori discretization of the environment or of robot paths, allowing robots to "follow each other" through critical sections. We validate the method formally and experimentally with different motion planners and robot controllers, in simulation and with real robots.
Cirillo, Marcello (Örebro University) | Pecora, Federico (Örebro University) | Andreasson, Henrik (Örebro University) | Uras, Tansel (University of Southern California) | Koenig, Sven (University of Southern California)
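As an illustration of the kind of coordination the abstract describes, the sketch below shows one coordination cycle in which a high-level coordinator only reads robot states and pushes set-point updates along precomputed paths, holding lower-priority robots out of contested critical sections. All data structures and the blocking rule are illustrative assumptions for this sketch, not the paper's actual algorithm (which, notably, also lets robots follow each other through critical sections).

```python
# Illustrative sketch (assumed interfaces, not the paper's implementation):
# a coordinator that needs each robot controller only to report its state
# and accept set-point updates along a precomputed, kinematically feasible path.

def coordinate_step(robots, sections):
    """One coordination cycle.

    robots:   list of dicts {'path': [waypoints], 'index': int, 'priority': int},
              where a lower priority number wins (priorities assumed unique).
    sections: {section_id: set_of_waypoints} marking critical sections.
    """
    # Collect, per critical section, the best (lowest) priority that wants in.
    claims = {}
    for r in robots:
        nxt = min(r['index'] + 1, len(r['path']) - 1)
        for sec, cells in sections.items():
            if r['path'][nxt] in cells:
                claims[sec] = min(claims.get(sec, r['priority']), r['priority'])

    # Advance a robot's set point only if it holds the best claim on every
    # critical section its next waypoint touches; otherwise it waits.
    for r in robots:
        nxt = min(r['index'] + 1, len(r['path']) - 1)
        wp = r['path'][nxt]
        allowed = all(claims[sec] == r['priority']
                      for sec, cells in sections.items() if wp in cells)
        if allowed and nxt != r['index']:
            r['index'] = nxt  # i.e., send the new set point to the controller
    return robots
```

Because priorities are re-evaluated every cycle (the paper uses heuristics for this), a robot blocked now may be granted the section on a later cycle without any discretization of the environment being fixed in advance.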
A growing interest in the industrial sector for autonomous ground vehicles has prompted significant investment in fleet management systems. Such systems need to accommodate on-line externally imposed temporal and spatial requirements, and to adhere to them even in the presence of contingencies. Moreover, a fleet management system should ensure correctness, i.e., refuse to commit to requirements that cannot be satisfied. We present an approach to obtain sets of alternative execution patterns (called trajectory envelopes) which provide these guarantees. The approach relies on a constraint-based representation shared among multiple solvers, each of which progressively refines trajectory envelopes following a least commitment principle.
Designers of robotic groups are faced with the formidable task of creating effective coordination architectures that can plan and replan trajectories even when faced with changing environmental conditions and hardware failures. Communication between robots is one mechanism that can at times be helpful in such systems, but can also create a time and energy overhead that reduces performance. In dealing with this issue, various communication schemes have been proposed, ranging from centralized and localized algorithms to noncommunicative methods. In this paper we argue that a coordination cost measure can be useful for selecting the appropriate level of communication within such groups. We show that this measure can be used to create adaptive communication methods that switch between various communication approaches. Robotic team members that implemented these approaches were able to increase their productivity in a statistically significant fashion over methods that used only one type of communication scheme.
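A minimal sketch of the switching idea the abstract describes: track a running coordination-cost estimate and pick a communication scheme from it. The class name, thresholds, and smoothing rule are assumptions made for illustration, not the authors' actual measure.

```python
# Hypothetical sketch: selecting a communication scheme by coordination cost.
# Thresholds and the smoothed-cost update are illustrative assumptions.

class AdaptiveCommSelector:
    """Pick a communication scheme from a running coordination-cost estimate.

    "Coordination cost" is treated abstractly here as a per-step measure in
    [0, 1] of time/energy lost to interference (waiting, replanning, etc.).
    """

    def __init__(self, low=0.2, high=0.6, alpha=0.3):
        self.low, self.high = low, high  # cost thresholds for switching
        self.alpha = alpha               # smoothing factor for the estimate
        self.cost = 0.0                  # running coordination-cost estimate

    def observe(self, step_cost):
        # Exponentially smoothed coordination cost.
        self.cost = (1 - self.alpha) * self.cost + self.alpha * step_cost

    def scheme(self):
        if self.cost < self.low:
            return "none"         # low interference: skip comm overhead
        if self.cost < self.high:
            return "local"        # moderate: talk to nearby robots only
        return "centralized"      # high interference: global coordination
```

The point of the design is that no single scheme is committed to up front: a team that is rarely interfering pays no communication overhead, while the same team escalates to centralized coordination when the measured cost rises.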
Achieving effective cooperation in a multi-agent system is a difficult problem for a number of reasons, like local views of the problem-solving task and uncertainty about the outcomes of interacting non-local tasks. Algorithms like Generalized Partial Global Planning (GPGP) have responded to these problems by creating sophisticated coordination mechanisms triggered in response to the characteristics of particular task environments. In this paper, we present a learning algorithm that endows agents with the capability to choose a suitable subset of the coordination mechanisms based on the present problem-solving situation.

Introduction

Achieving effective cooperation in a multi-agent system is a difficult problem for a number of reasons. The first is that an agent's control decisions, based only on its local view of problem-solving task structures, may lead to inappropriate decisions about which activity it should do next, what results it should transmit to other agents, and what results it should ask other agents to produce (Durfee & Lesser 1987; Decker & Lesser 1993).