Learning a Distributed Hierarchical Locomotion Controller for Embodied Cooperation
Chuye Hong, Kangyao Huang, Huaping Liu
arXiv.org Artificial Intelligence
Cooperatively accomplishing embodied tasks with multiple robots has long been a highly challenging area of research. Recent studies mainly focus on embodied cooperative manipulation among robotic arms or on high-level formation control within groups of mobile robots [1, 2]. Nevertheless, multi-agent cooperation via whole-body, end-to-end locomotion control is rarely studied. Some previous works demonstrate manipulation via locomotion [3], but they are tested only on two-agent systems, and it remains unclear whether the method scales to agent populations of arbitrary size. In this work, we aim to realize more complex embodied multi-agent cooperation by learning a distributed hierarchical locomotion control system that decomposes complex, coupled behaviours while retaining the potential for unbounded expansion of the swarm. As a foundation for implementation and validation, we construct three scenarios in IsaacSim/Gym [4] as benchmarks for the study of embodied cooperation. Training a robot for a specific function can be achieved effectively through reinforcement learning (RL), such as learning movement patterns [5], interactive behaviours [6], and logical inference in games [7]. Although RL provides a recognized, powerful exploration capability and tremendous progress has been made in sampling efficiency [4], discovering and mastering a sequence of sophisticated tasks through search remains a challenging problem. Hierarchical reinforcement learning (HRL) alleviates this to some extent, aiming to capture, in a segmented manner, the logical relationships among "control, action, behaviour, dynamic outcomes, and feedback".
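The abstract does not specify the controller's interfaces, so the following is only a minimal sketch of one plausible reading of the architecture it describes: each agent runs its own high-level coordination policy at a low rate, while a single low-level locomotion policy, shared across all agents, tracks the resulting commands at the control rate. All class names, dimensions, and network sizes below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class HighLevelPolicy(nn.Module):
    """Per-agent coordination policy: maps task-level observations
    (e.g. object pose, teammate states) to a latent locomotion command.
    Runs at a lower frequency than the joint controller. (Hypothetical
    sketch; the paper's actual interfaces are not given here.)"""

    def __init__(self, task_obs_dim: int, cmd_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(task_obs_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, cmd_dim),
        )

    def forward(self, task_obs: torch.Tensor) -> torch.Tensor:
        return self.net(task_obs)


class LowLevelLocomotionPolicy(nn.Module):
    """Shared locomotion controller: maps proprioception plus the latent
    command to joint-position targets. A single network serving every
    agent is what would let the swarm grow without retraining locomotion."""

    def __init__(self, proprio_dim: int, cmd_dim: int, num_joints: int,
                 hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(proprio_dim + cmd_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, num_joints),
        )

    def forward(self, proprio: torch.Tensor, cmd: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([proprio, cmd], dim=-1))


# Batched rollout over N agents: the batch dimension is the swarm size,
# so the same code runs unchanged for any number of agents.
N, task_dim, proprio_dim, cmd_dim, joints = 8, 32, 48, 8, 12
high = HighLevelPolicy(task_dim, cmd_dim)
low = LowLevelLocomotionPolicy(proprio_dim, cmd_dim, joints)
cmd = high(torch.randn(N, task_dim))               # replanned every K steps
targets = low(torch.randn(N, proprio_dim), cmd)    # updated every control step
```

Decoupling the levels this way reflects the stated motivation for HRL: the high-level policy searches only over short command sequences rather than raw joint actions, while the shared low-level policy keeps per-agent cost constant as the population grows.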
Jul-8-2024