Fan, Tingxiang
S$^2$MAT: Simultaneous and Self-Reinforced Mapping and Tracking in Dynamic Urban Scenarios
Fan, Tingxiang, Shen, Bowen, Zhang, Yinqiang, Zhang, Chuye, Yang, Lei, Chen, Hua, Zhang, Wei, Pan, Jia
Despite the increasing prevalence of robots in daily life, their navigation capabilities are still limited to environments with prior knowledge, such as a global map. To fully unlock the potential of robots, it is crucial to enable them to navigate large-scale, unknown, and changing unstructured scenarios. This requires the robot to construct an accurate static map in real time as it explores, filtering out moving objects to preserve mapping accuracy and, where possible, achieving high-quality pedestrian tracking and collision avoidance. While existing methods can achieve the individual goals of spatial mapping or dynamic object detection and tracking, there has been limited research on effectively integrating these two tasks, which are in fact coupled and reciprocal. In this work, we propose S$^2$MAT (Simultaneous and Self-Reinforced Mapping and Tracking), which integrates a front-end dynamic object detection and tracking module with a back-end static mapping module. S$^2$MAT leverages the close and reciprocal interplay between these two modules to efficiently and effectively solve the open problem of simultaneous tracking and mapping in highly dynamic scenarios. We conducted extensive experiments using widely used datasets and simulations, providing both qualitative and quantitative results that demonstrate S$^2$MAT's state-of-the-art performance in dynamic object detection, tracking, and high-quality static structure mapping. Additionally, we performed long-range robotic navigation in real-world urban scenarios spanning over 7 km, which included challenging obstacles such as pedestrians and other traffic agents. The successful navigation provides a comprehensive test of S$^2$MAT's robustness, scalability, efficiency, and quality, and of its ability to benefit autonomous robots in the wild without pre-built maps.
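As a rough illustration of the front-end/back-end coupling the abstract describes, the minimal sketch below couples a tracker and a mapper in a per-frame loop. This is not the authors' implementation: the class names, the centroid-distance heuristic for flagging dynamic points, and the random stand-in data are all hypothetical.

import numpy as np

class DynamicTracker:
    # Front end: flags likely-dynamic points in a scan.
    def detect(self, scan, static_map):
        # Placeholder heuristic: points far from the map centroid are
        # treated as dynamic; the real system uses detection + tracking.
        dists = np.linalg.norm(scan - static_map.mean(axis=0), axis=1)
        return dists > np.percentile(dists, 90)

class StaticMapper:
    # Back end: accumulates only points judged static.
    def __init__(self):
        self.points = np.random.rand(100, 3)  # stand-in seed map
    def update(self, scan, dynamic_mask):
        self.points = np.vstack([self.points, scan[~dynamic_mask]])

tracker, mapper = DynamicTracker(), StaticMapper()
for _ in range(5):                              # per-frame loop
    scan = np.random.rand(200, 3)               # stand-in LiDAR frame
    mask = tracker.detect(scan, mapper.points)  # map informs tracking...
    mapper.update(scan, mask)                   # ...tracking cleans the map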
DiffSRL: Learning Dynamic-aware State Representation for Deformable Object Control with Differentiable Simulator
Chen, Sirui, Liu, Yunhao, Li, Jialong, Yao, Shang Wen, Fan, Tingxiang, Pan, Jia
Dynamic state representation learning is an important task in robot learning. A latent space that captures dynamics-related information has wide applications in areas such as accelerating model-free reinforcement learning, closing the simulation-to-reality gap, and reducing motion planning complexity. However, current dynamic state representation learning methods scale poorly to complex dynamic systems such as deformable objects, and cannot directly embed a well-defined simulation function into the training pipeline. We propose DiffSRL, a dynamic state representation learning pipeline that utilizes differentiable simulation to embed complex dynamics models as part of end-to-end training. We also integrate differentiable dynamic constraints into the pipeline, which give the latent state an incentive to be aware of dynamical constraints. We further establish a state representation learning benchmark on a soft-body simulation system, PlasticineLab, on which our model demonstrates superior performance in capturing long-term dynamics as well as in reward prediction.
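To make the idea of training through a differentiable simulator concrete, here is a hedged toy sketch: an encoder is trained through a differentiable dynamics step, so the latent state must stay predictive of the next observation. The linear "simulator", layer sizes, and loss weighting are illustrative assumptions, not DiffSRL's actual architecture.

import torch
import torch.nn as nn

obs_dim, latent_dim, act_dim = 32, 8, 4
encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, obs_dim))
dynamics = nn.Linear(latent_dim + act_dim, latent_dim)  # toy differentiable "simulator"

opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters(),
                        *dynamics.parameters()], lr=1e-3)

obs, act, next_obs = (torch.randn(16, obs_dim), torch.randn(16, act_dim),
                      torch.randn(16, obs_dim))
z = encoder(obs)
z_next = dynamics(torch.cat([z, act], dim=-1))            # roll latent forward
loss = nn.functional.mse_loss(decoder(z), obs) \
     + nn.functional.mse_loss(decoder(z_next), next_obs)  # dynamics-aware term
opt.zero_grad(); loss.backward(); opt.step()

Because the dynamics step sits inside the computation graph, gradients from the next-observation error flow back into the encoder, which is what makes the learned representation dynamics-aware.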
DeepMNavigate: Deep Reinforced Multi-Robot Navigation Unifying Local & Global Collision Avoidance
Tan, Qingyang, Fan, Tingxiang, Pan, Jia, Manocha, Dinesh
We present a novel algorithm (DeepMNavigate) for global multi-agent navigation in dense scenarios using deep reinforcement learning. Our approach uses local and global information for each robot based on motion information maps. A three-layer CNN takes these maps as input and generates a suitable action to drive each robot to its goal position. Our approach is general, learns an optimal policy using a multi-scenario, multi-stage training algorithm, and can directly handle raw sensor measurements for local observations. We demonstrate the performance on complex, dense benchmarks with narrow passages in environments with tens of agents, and highlight the algorithm's benefits over prior learning methods and geometric decentralized algorithms in complex scenarios.
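A minimal sketch of the kind of map-conditioned CNN policy the abstract describes follows; the channel count (one local plus one global map), layer sizes, and discrete action space are assumptions for illustration, not the paper's specification.

import torch
import torch.nn as nn

class MapPolicy(nn.Module):
    def __init__(self, in_channels=2, n_actions=9):  # local + global map
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(n_actions)  # infers the flattened size
    def forward(self, maps):
        return self.head(self.net(maps))      # logits over discrete moves

maps = torch.randn(4, 2, 64, 64)              # batch of motion-information maps
action = MapPolicy()(maps).argmax(dim=-1)     # one action per robot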
Learning Resilient Behaviors for Navigation Under Uncertainty Environments
Fan, Tingxiang, Long, Pinxin, Liu, Wenxi, Pan, Jia, Yang, Ruigang, Manocha, Dinesh
Deep reinforcement learning has great potential to automatically acquire complex, adaptive behaviors for autonomous agents. However, the underlying neural network policies have not been widely deployed in real-world applications, especially in safety-critical tasks (e.g., autonomous driving). One reason is that learned policies cannot perform behaviors as flexible and resilient as those of traditional methods, and thus fail to adapt to diverse environments. In this paper, we consider the problem of a mobile robot learning adaptive and resilient behaviors for navigating in unseen, uncertain environments while avoiding collisions. We present a novel approach to uncertainty-aware navigation that introduces an uncertainty-aware predictor to model the environmental uncertainty, and we propose an uncertainty-aware navigation network to learn resilient behaviors in previously unknown environments. To train the proposed uncertainty-aware network more stably and efficiently, we present the temperature decay training paradigm, which balances exploration and exploitation during the training process. Our experimental evaluation demonstrates that our approach can learn resilient behaviors in diverse environments and generate adaptive trajectories according to environmental uncertainties. Videos of the experiments are available at https://sites.google.com/view/resilient-nav/.
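As a hedged illustration of a temperature-decay schedule of the kind the abstract mentions for balancing exploration and exploitation, the exponential form and all constants below are assumptions, not taken from the paper.

import numpy as np

def temperature(step, t0=1.0, t_min=0.05, decay=5e-4):
    # Exponentially anneal the softmax temperature toward t_min.
    return max(t_min, t0 * np.exp(-decay * step))

def sample_action(logits, step, rng=np.random.default_rng()):
    t = temperature(step)                     # high t early -> explore
    scaled = logits / t
    p = np.exp(scaled - np.max(scaled))       # numerically stable softmax
    p /= p.sum()
    return rng.choice(len(logits), p=p)       # low t late -> exploit

logits = np.array([1.0, 0.5, -0.2])
print(sample_action(logits, step=0), sample_action(logits, step=20000))

Early in training the high temperature flattens the action distribution (exploration); as the temperature decays, sampling concentrates on the highest-logit action (exploitation).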
Intervention Aided Reinforcement Learning for Safe and Practical Policy Optimization in Navigation
Wang, Fan, Zhou, Bo, Chen, Ke, Fan, Tingxiang, Zhang, Xi, Li, Jiangyong, Tian, Hao, Pan, Jia
In contrast to the intense study of deep reinforcement learning (RL) in games and simulations [1], applying deep RL to real-world robots remains challenging, especially in high-risk scenarios. Though there has been some progress in RL-based control of real robots [2, 3, 4, 5], most previous works do not specifically address the safety concerns of the RL training process. For the majority of high-risk real-world scenarios, deep RL still suffers from bottlenecks in both cost and safety. As an example, collisions are extremely dangerous for UAVs, yet RL training requires thousands of collisions. Other works contribute to building simulation environments and bridging the gap between reality and simulation [4, 5]. However, building such simulation environments is arduous, and the gap cannot be closed entirely. To address the safety issue in real-world RL training, we present the Intervention Aided Reinforcement Learning (IARL) framework. Intervention is commonly used in real-world automatic control systems to ensure safety, and it is also regarded as an important evaluation criterion for autonomous navigation systems, e.g., the disengagement rate in autonomous driving.
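A minimal sketch of training-time intervention, in the spirit of the framework described above: a safety rule overrides the policy's action whenever a risk check fires, and the override signal is recorded for the learner. The function names, the clearance heuristic, and the brake-to-zero override are illustrative assumptions.

import numpy as np

def safe_step(env_state, policy_action, min_clearance=0.5):
    # Return the executed action plus an intervention flag.
    clearance = env_state["nearest_obstacle_dist"]
    if clearance < min_clearance:                  # rule-based intervention fires
        return np.zeros_like(policy_action), True  # e.g., brake to zero velocity
    return policy_action, False

state = {"nearest_obstacle_dist": 0.3}
executed, intervened = safe_step(state, np.array([1.0, 0.2]))
# `intervened` can feed the training objective so the policy learns to
# avoid triggering interventions in the first place.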
Safe Navigation with Human Instructions in Complex Scenes
Hu, Zhe, Pan, Jia, Fan, Tingxiang, Yang, Ruigang, Manocha, Dinesh
In this paper, we present a robotic navigation algorithm with a natural language interface, which enables a robot to safely move through a changing environment with moving persons by following human instructions such as "go to the restaurant and keep away from people". We first classify human instructions into three types: the goal, the constraints, and uninformative phrases. Next, we ground the extracted goal and constraint items dynamically along the navigation process, to handle target objects that are too far away for sensor observation and the appearance of moving obstacles such as humans. In particular, for a goal phrase (e.g., "go to the restaurant"), we ground it to a location in a predefined semantic map and treat it as the goal for a global motion planner, which plans a collision-free path in the workspace for the robot to follow. For a constraint phrase (e.g., "keep away from people"), we dynamically add the corresponding constraint to a local planner by adjusting the values of a local costmap according to the results returned by the object detection module. The updated costmap is then used to compute a local collision avoidance control for the safe navigation of the robot. By combining natural language processing, motion planning, and computer vision, our system is demonstrated to successfully follow natural language navigation instructions and achieve navigation tasks in both simulated and real-world scenarios. Videos are available at https://sites.google.com/view/snhi
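A hedged sketch of the costmap adjustment for a constraint phrase such as "keep away from people": cells near detected persons get their cost raised before local planning. The Gaussian penalty, radius, and cost bounds below are assumptions for illustration, not the paper's parameters.

import numpy as np

def apply_person_constraint(costmap, detections, radius=5, penalty=80.0):
    # Inflate cost around each detected person (row, col) in the grid.
    h, w = costmap.shape
    ys, xs = np.mgrid[0:h, 0:w]
    for (r, c) in detections:
        d2 = (ys - r) ** 2 + (xs - c) ** 2
        costmap += penalty * np.exp(-d2 / (2 * radius ** 2))
    return np.clip(costmap, 0, 254)           # keep within costmap bounds

grid = np.zeros((40, 40))
grid = apply_person_constraint(grid, detections=[(10, 12), (25, 30)])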
Towards Optimally Decentralized Multi-Robot Collision Avoidance via Deep Reinforcement Learning
Long, Pinxin, Fan, Tingxiang, Liao, Xinyi, Liu, Wenxi, Zhang, Hao, Pan, Jia
Developing a safe and efficient collision avoidance policy for multiple robots is challenging in decentralized scenarios, where each robot generates its path without observing other robots' states and intents. While other distributed multi-robot collision avoidance systems exist, they often require extracting agent-level features to plan a local collision-free action, which can be computationally prohibitive and not robust. More importantly, in practice the performance of these methods is much lower than that of their centralized counterparts. We present a decentralized sensor-level collision avoidance policy for multi-robot systems, which directly maps raw sensor measurements to an agent's steering commands in terms of movement velocity. As a first step toward reducing the performance gap between decentralized and centralized methods, we present a multi-scenario, multi-stage training framework that finds an optimal policy trained over a large number of robots in rich, complex environments simultaneously, using a policy-gradient-based reinforcement learning algorithm. We validate the learned sensor-level collision avoidance policy in a variety of simulated scenarios with thorough performance evaluations and show that the final learned policy is able to find time-efficient, collision-free paths for a large-scale robot system. We also demonstrate that the learned policy generalizes well to new scenarios that never appear during training, including navigating a heterogeneous group of robots and a large-scale scenario with 100 robots. Videos are available at https://sites.google.com/view/drlmaca
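A minimal sketch of a sensor-level policy in the spirit of the abstract: a raw 2D laser scan, plus the relative goal and current velocity, maps directly to a velocity command with no agent-level feature extraction. The layer sizes, scan length, and tanh-bounded output are assumptions for illustration.

import torch
import torch.nn as nn

class SensorPolicy(nn.Module):
    def __init__(self, scan_len=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, 5, stride=2), nn.ReLU(),
            nn.Conv1d(32, 32, 3, stride=2), nn.ReLU(), nn.Flatten())
        self.fc = nn.Sequential(nn.LazyLinear(128), nn.ReLU(),
                                nn.Linear(128, 2), nn.Tanh())  # (v, w) command
    def forward(self, scan, goal, vel):
        feats = self.conv(scan.unsqueeze(1))          # raw scan -> features
        return self.fc(torch.cat([feats, goal, vel], dim=-1))

cmd = SensorPolicy()(torch.randn(8, 512), torch.randn(8, 2), torch.randn(8, 2))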