Limbu, Manshi
Heterogeneous Team Coordination on Partially Observable Graphs with Realistic Communication
Zhou, Yanlin, Limbu, Manshi, Wang, Xuan, Shishika, Daigo, Xiao, Xuesu
Team Coordination on Graphs with Risky Edges (\textsc{tcgre}) is a recently proposed problem, in which robots find paths to their goals while considering possible coordination to reduce overall team cost. However, \textsc{tcgre} assumes that the \emph{entire} environment is available to a \emph{homogeneous} robot team with \emph{ubiquitous} communication. In this paper, we study an extended version of \textsc{tcgre}, called \textsc{hpr-tcgre}, with three relaxations: Heterogeneous robots, Partial observability, and Realistic communication. To this end, we formulate a new combinatorial optimization problem on top of \textsc{tcgre}. After analysis, we divide it into two sub-problems, one for robots moving individually and another for robots moving in groups, depending on their communication availability. We then develop an algorithm that exploits real-time partial maps to solve local shortest-path problems, with an A*-like sub-goal assignment mechanism that explores potential coordination opportunities in the interest of the whole team. Extensive experiments indicate that our algorithm produces team coordination behaviors that reduce overall cost even under our three relaxations.
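As a concrete illustration of the individual-movement sub-problem, the sketch below runs an A*-style search restricted to the portion of the graph a robot has observed so far. The `known_edges` representation, the `heuristic`, and the choice of sub-goal are illustrative assumptions, not the paper's implementation.

```python
import heapq

def a_star_partial(known_edges, start, subgoal, heuristic):
    """A* over the currently known portion of the environment graph.

    known_edges: dict mapping node -> list of (neighbor, cost) pairs,
                 containing only edges revealed by exploration so far.
    heuristic:   admissible cost-to-go estimate, e.g. straight-line distance.
    Returns a node path, or None if the sub-goal is unreachable through
    the known subgraph (i.e., more of the map must be revealed first).
    """
    frontier = [(heuristic(start, subgoal), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == subgoal:
            return path
        for nbr, cost in known_edges.get(node, []):
            g_new = g + cost
            if g_new < best_g.get(nbr, float("inf")):
                best_g[nbr] = g_new
                heapq.heappush(
                    frontier,
                    (g_new + heuristic(nbr, subgoal), g_new, nbr, path + [nbr]),
                )
    return None
```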
Learning Coordinated Maneuver in Adversarial Environments
Hu, Zechen, Limbu, Manshi, Shishika, Daigo, Xiao, Xuesu, Wang, Xuan
This paper addresses the coordination of a team of robots traversing a route in the presence of adversaries at random positions. Our goal is to minimize the overall cost of the team, which is determined by (i) the accumulated risk when robots stay in adversary-impacted zones and (ii) the mission completion time. During traversal, a robot can reduce its speed and act as a `guard' (the slower, the better), which decreases the risk a given adversary inflicts on its teammates. This leads to a trade-off between the robots' guarding behaviors and their travel speeds. The formulated problem is highly non-convex and cannot be efficiently solved by existing algorithms. Our approach includes a theoretical analysis of the robots' behaviors for the single-adversary case. As the scale of the problem grows, computing the optimal solution with optimization approaches becomes challenging; we therefore employ reinforcement learning techniques, developing new encoding and policy-generating methods. Simulations demonstrate that our learning methods can efficiently produce team coordination behaviors. We discuss the reasoning behind these behaviors and explain why they reduce the overall team cost.
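To make the stated trade-off concrete, here is a hedged sketch of one plausible cost model: risk accumulates while a robot is inside an adversary zone, a guarding teammate attenuates that risk more strongly the slower it moves, and total mission time is penalized. The attenuation formula and the weights are assumptions for illustration only; they are not the paper's risk model.

```python
def team_cost(zone_lengths, speeds, guard_speeds, risk_rate=1.0, time_weight=0.5):
    """zone_lengths[i]: length of the adversary zone robot i must cross.
    speeds[i]:        robot i's travel speed through its zone.
    guard_speeds[i]:  speed of the teammate guarding robot i (None = unguarded).
    """
    risk = 0.0
    for length, v, gv in zip(zone_lengths, speeds, guard_speeds):
        exposure = length / v                 # time spent inside the zone
        # Assumed attenuation: a slower guard yields a smaller risk multiplier.
        attenuation = 1.0 if gv is None else gv / (gv + 1.0)
        risk += risk_rate * exposure * attenuation
    completion_time = max(length / v for length, v in zip(zone_lengths, speeds))
    return risk + time_weight * completion_time

# Compare a guarded vs. unguarded crossing at the same travel speed:
print(team_cost([10.0], [4.0], [0.5]))   # guarded by a slow teammate
print(team_cost([10.0], [4.0], [None]))  # unguarded, higher accumulated risk
```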
Team Coordination on Graphs: Problem, Analysis, and Algorithms
Limbu, Manshi, Zhou, Yanlin, Stein, Gregory, Wang, Xuan, Shishika, Daigo, Xiao, Xuesu
Team Coordination on Graphs with Risky Edges (TCGRE) is a recently emerged problem, in which a robot team collectively reduces graph traversal cost through support from one robot to another when the latter traverses a risky edge. Because TCGRE resembles the traditional Multi-Agent Path Finding (MAPF) problem, both classical and learning-based methods have been proposed to solve it; however, they lack either computational efficiency or optimality assurance. In this paper, we reformulate TCGRE as a constrained optimization problem and perform rigorous mathematical analysis. Our theoretical analysis shows the NP-hardness of TCGRE by reduction from the Maximum 3D Matching problem, and that efficient decomposition is key to tackling this combinatorial optimization problem. Furthermore, we design three classes of algorithms to solve TCGRE: Joint State Graph (JSG) based, coordination based, and receding-horizon sub-team based solutions. Each of these proposed algorithms enjoys different provable optimality and efficiency characteristics, which are demonstrated in our extensive experiments.
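The JSG-based class of solutions can be illustrated with a minimal two-robot sketch: joint states pair the two robots' nodes, and each joint edge moves one robot while the other waits (and may support). The helper `supported_cost` stands in for the paper's support mechanism and is purely hypothetical.

```python
from itertools import product

def build_jsg(nodes, edge_cost, supported_cost):
    """nodes: environment-graph nodes.
    edge_cost(u, v): base traversal cost, or None if (u, v) is not an edge.
    supported_cost(u, v, other): cost of traversing (u, v) while the
        teammate waits at `other` (reduced when `other` can support).
    Returns: dict mapping joint state (a, b) -> list of ((a2, b2), cost).
    """
    nodes = list(nodes)
    jsg = {}
    for a, b in product(nodes, nodes):
        succs = []
        for a2 in nodes:  # robot 1 moves; robot 2 waits and may support
            if edge_cost(a, a2) is not None:
                succs.append(((a2, b), supported_cost(a, a2, b)))
        for b2 in nodes:  # robot 2 moves; robot 1 waits and may support
            if edge_cost(b, b2) is not None:
                succs.append(((a, b2), supported_cost(b, b2, a)))
        jsg[(a, b)] = succs
    return jsg
```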
Scaling Team Coordination on Graphs with Reinforcement Learning
Limbu, Manshi, Hu, Zechen, Wang, Xuan, Shishika, Daigo, Xiao, Xuesu
This paper studies Reinforcement Learning (RL) techniques to enable centralized team coordination behaviors in graph environments, where support actions among teammates reduce the cost of traversing certain risky edges. While classical approaches can solve this non-standard multi-agent path planning problem by converting the original Environment Graph (EG) into a Joint State Graph (JSG) that implicitly incorporates the support actions, those methods do not scale well to large graphs and teams. To address this curse of dimensionality, we propose to use RL to enable agents to learn such graph traversal and teammate-supporting behaviors in a data-driven manner. Specifically, by formulating the team coordination on graphs with risky edges problem as a Markov Decision Process (MDP) with a novel state and action space, we investigate how RL can solve it in two paradigms. First, we use RL for a team of agents to learn how to coordinate and reach the goal with minimal cost on a single EG; we show that RL efficiently solves problems with up to 20 nodes and 4 agents, or 25 nodes and 3 agents, using a fraction of the time JSG needs to solve such complex problems. Second, we learn a general RL policy for any $N$-node EG to produce efficient supporting behaviors. We present extensive experiments and compare our RL approaches against their classical counterparts.
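A hedged sketch of one way such an MDP transition could look (the paper's actual state and action encoding is richer): the state is the tuple of agent positions, each agent either moves along an edge or supports from its current node, and an assumed discount applies to a risky edge whenever a supporter occupies its designated support node. All names and the discount factor below are illustrative.

```python
def step(state, joint_action, edges, support_node, goal):
    """state: tuple of agent node ids; joint_action[i] is ('move', next_node)
    or ('support',). edges[(u, v)]: base cost of a (valid) move.
    support_node[(u, v)]: node from which a supporter discounts risky (u, v);
    absent keys mean the edge is not risky. Reward is the negative cost.
    """
    supporters = {state[i] for i, a in enumerate(joint_action) if a[0] == "support"}
    next_state, cost = list(state), 0.0
    for i, act in enumerate(joint_action):
        if act[0] == "move":
            u, v = state[i], act[1]
            c = edges[(u, v)]
            if support_node.get((u, v)) in supporters:
                c *= 0.2  # assumed support discount; illustrative only
            cost += c
            next_state[i] = v
    next_state = tuple(next_state)
    done = all(n == goal for n in next_state)
    return next_state, -cost, done
```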
Team Coordination on Graphs with State-Dependent Edge Cost
Oughourli, Sara, Limbu, Manshi, Hu, Zechen, Wang, Xuan, Xiao, Xuesu, Shishika, Daigo
This paper studies a team coordination problem in a graph environment. Specifically, we incorporate a "support" action that an agent can take to reduce the cost for its teammate to traverse edges that would otherwise have higher costs. Due to this added feature, the graph traversal is no longer a standard multi-agent path planning problem. To solve this new problem, we propose a novel formulation that poses it as a planning problem in the joint state space: the joint state graph (JSG). Since the edges of the JSG implicitly incorporate the support actions taken by the agents, we can now optimize the joint actions by solving a standard single-agent path planning problem on the JSG. One main drawback of this approach is the curse of dimensionality in both the number of agents and the size of the graph. To improve scalability in graph size, we further propose a hierarchical decomposition method that performs path planning at two levels. We provide complexity analysis as well as statistical analysis to demonstrate the efficiency of our algorithm.
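Once the joint state graph is available, the coordination problem reduces to single-agent shortest path, so a plain Dijkstra search suffices. The sketch below assumes the JSG is given as an adjacency dict mapping each joint state to (successor, cost) pairs; this representation is an illustrative choice, not the paper's data structure.

```python
import heapq

def dijkstra_jsg(jsg, start, goal):
    """jsg: dict mapping joint state -> list of (successor, cost) pairs."""
    dist = {start: 0.0}
    pq = [(0.0, start, [start])]
    while pq:
        d, s, path = heapq.heappop(pq)
        if s == goal:
            return d, path
        if d > dist.get(s, float("inf")):
            continue  # stale queue entry
        for s2, c in jsg.get(s, []):
            if d + c < dist.get(s2, float("inf")):
                dist[s2] = d + c
                heapq.heappush(pq, (d + c, s2, path + [s2]))
    return float("inf"), None
```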