Zhu, Qiuguo
A Hierarchical Region-Based Approach for Efficient Multi-Robot Exploration
Meng, Di, Zhao, Tianhao, Xue, Chaoyu, Wu, Jun, Zhu, Qiuguo
Multi-robot autonomous exploration in an unknown environment is an important application in robotics. Traditional exploration methods only use information around frontier points or viewpoints, ignoring the spatial information of unknown areas. Moreover, finding the exact optimal solution for multi-robot task allocation is NP-hard, resulting in significant computation time. To address these issues, we present a hierarchical multi-robot exploration framework using a new modeling method called RegionGraph. The proposed approach makes two main contributions: 1) A new modeling method for unexplored areas that preserves their spatial information across the entire space in a weighted graph called RegionGraph. 2) A hierarchical multi-robot exploration framework that decomposes the global exploration task into smaller subtasks, reducing the frequency of global planning and enabling asynchronous exploration. The proposed method is validated through both simulation and real-world experiments, demonstrating a 20% improvement in efficiency compared to existing methods.
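A minimal sketch of how a RegionGraph-style structure could be represented, assuming each node summarises one unexplored region by a centroid and area and each edge carries the Euclidean distance between region centroids; the field names and weighting are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass, field
import math

@dataclass
class Region:
    """A contiguous unexplored region, summarised by its centroid and area."""
    centroid: tuple          # (x, y) in the map frame
    area: float              # unexplored area in m^2

@dataclass
class RegionGraph:
    """Weighted graph over unexplored regions; edge weights are centroid distances."""
    regions: dict = field(default_factory=dict)   # region id -> Region
    edges: dict = field(default_factory=dict)     # (id_a, id_b) -> weight

    def add_region(self, rid, region):
        self.regions[rid] = region
        # connect the new region to existing ones with Euclidean edge weights
        for other_id, other in self.regions.items():
            if other_id == rid:
                continue
            w = math.dist(region.centroid, other.centroid)
            self.edges[tuple(sorted((rid, other_id)))] = w

# toy usage: two unexplored regions whose centroids are five metres apart
g = RegionGraph()
g.add_region(0, Region(centroid=(0.0, 0.0), area=12.5))
g.add_region(1, Region(centroid=(3.0, 4.0), area=7.0))
print(g.edges)   # {(0, 1): 5.0}
```

A planner could then allocate subtasks to robots by reasoning over this graph instead of over individual frontier points.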
MOVE: Multi-skill Omnidirectional Legged Locomotion with Limited View in 3D Environments
Li, Songbo, Luo, Shixin, Wu, Jun, Zhu, Qiuguo
Legged robots possess inherent advantages in traversing complex 3D terrains. However, previous work on low-cost quadruped robots with egocentric vision systems has been limited by a narrow front-facing view and exteroceptive noise, restricting omnidirectional mobility in such environments. While building a voxel map through a hierarchical structure can refine exteroception processing, it introduces significant computational overhead, noise, and delays. In this paper, we present MOVE, a one-stage end-to-end learning framework capable of multi-skill omnidirectional legged locomotion with limited view in 3D environments, much as a real animal does. When movement aligns with the robot's line of sight, exteroceptive perception enhances locomotion, enabling extreme climbing and leaping. When vision is obstructed or the direction of movement lies outside the robot's field of view, the robot relies on proprioception for tasks such as crawling and climbing stairs. We integrate all these skills into a single neural network by introducing a pseudo-siamese network structure that combines supervised and contrastive learning, helping the robot infer its surroundings beyond its field of view. Experiments in both simulation and real-world scenarios demonstrate the robustness of our method, broadening the operational environments for robots with egocentric vision.
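A hedged PyTorch sketch of one way a pseudo-siamese structure could combine a supervised regression term with an InfoNCE-style contrastive term; the branch inputs (privileged surround terrain versus the deployable limited-view observation), the dimensions, and the loss formulation are assumptions for illustration, not the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Small MLP encoder; input sizes below are placeholders."""
    def __init__(self, in_dim, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ELU(),
            nn.Linear(128, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

# Pseudo-siamese: two branches with separate weights.
# Branch A sees privileged full-surround terrain (simulation only);
# branch B sees the deployable limited-view observation.
priv_enc = Encoder(in_dim=187)   # e.g. surround height-map samples (assumed size)
obs_enc  = Encoder(in_dim=58)    # e.g. proprioception + front-view features (assumed size)

def training_losses(priv_obs, limited_obs, temperature=0.1):
    z_priv = priv_enc(priv_obs)
    z_obs  = obs_enc(limited_obs)
    # supervised term: pull the limited-view latent toward the privileged latent
    sup_loss = F.mse_loss(z_obs, z_priv.detach())
    # contrastive term: matching pairs within a batch should align (InfoNCE-style)
    logits = F.normalize(z_obs, dim=-1) @ F.normalize(z_priv, dim=-1).t() / temperature
    labels = torch.arange(logits.size(0))
    con_loss = F.cross_entropy(logits, labels)
    return sup_loss, con_loss

sup, con = training_losses(torch.randn(16, 187), torch.randn(16, 58))
```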
Walking with Terrain Reconstruction: Learning to Traverse Risky Sparse Footholds
Yu, Ruiqi, Wang, Qianshi, Wang, Yizhen, Wang, Zhicheng, Wu, Jun, Zhu, Qiuguo
Traversing risky terrains with sparse footholds presents significant challenges for legged robots, requiring precise foot placement in safe areas. Current learning-based methods often rely on implicit feature representations without supervising physically meaningful estimation targets. This limits the policy's ability to fully understand complex terrain structures, which is critical for generating accurate actions. In this paper, we use end-to-end reinforcement learning to traverse risky terrains with high sparsity and randomness. Our approach integrates proprioception with single-view depth images to reconstruct the robot's local terrain, enabling a more comprehensive representation of terrain information. Meanwhile, by incorporating implicit and explicit estimations of the robot's state and its surroundings, we improve the policy's environmental understanding, leading to more precise actions. We deploy the proposed framework on a low-cost quadrupedal robot, achieving agile and adaptive locomotion across various challenging terrains and demonstrating outstanding performance in real-world scenarios. Video at: http://youtu.be/ReQAR4D6tuc.
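A rough PyTorch sketch of fusing proprioception with a single-view depth image to produce an explicit local height-map estimate, which could then be supervised against the true terrain in simulation; the network sizes, image resolution, and height-map grid are placeholders, not the paper's design:

```python
import torch
import torch.nn as nn

class TerrainReconstructor(nn.Module):
    """Fuses proprioception with a single-view depth image and decodes a local
    height-map estimate; all dimensions below are illustrative."""
    def __init__(self, proprio_dim=48, map_cells=11 * 11):
        super().__init__()
        self.depth_enc = nn.Sequential(          # 1 x 64 x 64 depth image (assumed)
            nn.Conv2d(1, 16, 5, stride=2), nn.ELU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ELU(),
            nn.Flatten(),
            nn.LazyLinear(128),
        )
        self.proprio_enc = nn.Sequential(nn.Linear(proprio_dim, 64), nn.ELU())
        self.decoder = nn.Sequential(             # explicit estimate of the local terrain
            nn.Linear(128 + 64, 128), nn.ELU(),
            nn.Linear(128, map_cells),
        )

    def forward(self, depth, proprio):
        z = torch.cat([self.depth_enc(depth), self.proprio_enc(proprio)], dim=-1)
        return self.decoder(z)                    # regressed against the true height map in sim

model = TerrainReconstructor()
height_map = model(torch.randn(8, 1, 64, 64), torch.randn(8, 48))  # shape (8, 121)
```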
Multi-expert learning of adaptive legged locomotion
Yang, Chuanyu, Yuan, Kai, Zhu, Qiuguo, Yu, Wanming, Li, Zhibin
Achieving versatile robot locomotion requires motor skills that can adapt to previously unseen situations. We propose a Multi-Expert Learning Architecture (MELA) that learns to generate adaptive skills from a group of representative expert skills. During training, MELA is first initialised with a distinct set of pre-trained experts, each in a separate deep neural network (DNN). Then, by learning to combine these DNNs using a Gating Neural Network (GNN), MELA acquires more specialised experts and transitional skills across various locomotion modes. During runtime, MELA constantly blends multiple DNNs and dynamically synthesises a new DNN to produce adaptive behaviours in response to changing situations. This approach leverages the advantages of trained expert skills and the fast online synthesis of adaptive policies to generate responsive motor skills during changing tasks. Using a unified MELA framework, we demonstrated successful multi-skill locomotion on a real quadruped robot that performed coherent trotting, steering, and fall recovery autonomously, and showed the merit of multi-expert learning in generating behaviours that adapt to unseen scenarios.
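A simplified sketch of the runtime idea of blending identically structured expert networks with weights from a gating network to synthesise a new network on the fly; the expert set, layer sizes, and the non-differentiable parameter copy are illustrative assumptions (training the gate would need a differentiable blend), not the paper's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_expert(obs_dim=40, act_dim=12):
    """All experts share one architecture so their parameters can be blended."""
    return nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))

experts = [make_expert() for _ in range(3)]   # e.g. trot, steer, fall recovery (assumed set)
gate = nn.Sequential(nn.Linear(40, 32), nn.Tanh(), nn.Linear(32, len(experts)))

def mela_action(obs):
    """Blend expert parameters with gating weights, then run the synthesised network."""
    w = F.softmax(gate(obs), dim=-1).squeeze(0)      # one blend weight per expert
    fused = make_expert()
    with torch.no_grad():                            # runtime synthesis only, no training here
        for name, p in fused.named_parameters():
            p.copy_(sum(w[i] * dict(experts[i].named_parameters())[name]
                        for i in range(len(experts))))
    return fused(obs)

action = mela_action(torch.randn(1, 40))
```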
Bound Controller for a Quadruped Robot using Pre-Fitting Deep Reinforcement Learning
Li, Anqiao, Wang, Zhicheng, Wu, Jun, Zhu, Qiuguo
The bound gait is an important gait in quadruped robot locomotion. It can be used to cross obstacles and often serves as a transition mode between trot and gallop. However, because of the complexity of the models, bound gaits built with conventional control methods are often unnatural and slow to compute. In this work, we introduce a method to achieve the bound gait based on model-free pre-fitting deep reinforcement learning (PF-DRL). We first construct a network with the same structure as the PPO2 actor network and pre-fit it using data collected from a robot running a conventional model-based controller. The trained weights are then transferred into PPO2 and optimized further. Moreover, targeting the symmetric and periodic characteristics of the bounding gait, we design a reward function based on contact points. We also use feature engineering to refine the input features of the DRL model, improving performance on flat ground. Finally, we train the bound controller in simulation and successfully deploy it on the Jueying Mini robot. In our experiments it outperforms the conventional method, with higher computational efficiency and a more stable center-of-mass height.
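A minimal sketch of the pre-fitting step, assuming it amounts to supervised regression of the model-based controller's actions onto an actor with the same structure later used by PPO2; the layer sizes, data, and training hyperparameters are placeholders:

```python
import torch
import torch.nn as nn

# Actor with the same structure later used by the PPO2 policy (sizes are illustrative).
actor = nn.Sequential(nn.Linear(48, 128), nn.Tanh(),
                      nn.Linear(128, 128), nn.Tanh(),
                      nn.Linear(128, 12))

def pre_fit(actor, states, actions, epochs=50, lr=1e-3):
    """Supervised pre-fitting: regress the model-based controller's actions."""
    opt = torch.optim.Adam(actor.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(actor(states), actions)
        loss.backward()
        opt.step()
    return actor

# toy data standing in for logs from the conventional bound controller
states, actions = torch.randn(1024, 48), torch.randn(1024, 12)
pre_fit(actor, states, actions)
# the pre-fitted weights would then initialise the PPO2 actor before RL fine-tuning
```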