AirCapRL: Autonomous Aerial Human Motion Capture using Deep Reinforcement Learning
Tallamraju, Rahul, Saini, Nitin, Bonetto, Elia, Pabst, Michael, Liu, Yu Tang, Black, Michael J., Ahmad, Aamir
arXiv.org Artificial Intelligence
In this letter, we introduce a deep reinforcement learning (RL) based multi-robot formation controller for the task of autonomous aerial human motion capture (MoCap). We focus on vision-based MoCap, where the objective is to estimate the trajectory of the body pose and shape of a single moving person using multiple micro aerial vehicles. State-of-the-art solutions to this problem are based on classical control methods, which depend on hand-crafted system and observation models. Such models are difficult to derive and do not generalize across different systems. Moreover, the non-linearity and non-convexity of these models lead to sub-optimal controls. In our work, we formulate this problem as a sequential decision-making task to achieve the vision-based motion capture objectives, and solve it using a deep neural network-based RL method. We leverage proximal policy optimization (PPO) to train a stochastic decentralized control policy for formation control. The neural network is trained in a parallelized setup in synthetic environments. We performed extensive simulation experiments to validate our approach. Finally, real-robot experiments demonstrate that our policies generalize to real-world conditions. Video Link: https://bit.ly/38SJfjo Supplementary: https://bit.ly/3evfo1O
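The core of PPO, which the abstract names as the training algorithm, is the clipped surrogate objective: the importance ratio between the new and behavior policies is clipped so that a single update cannot move the policy too far. Below is a minimal, stdlib-only sketch of that loss for one batch of transitions; the function name, signature, and toy inputs are illustrative assumptions, not code from the paper.

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """PPO clipped surrogate loss (to be minimized) for one batch.

    logp_new / logp_old: log-probabilities of the taken actions under the
    current and behavior policies; advantages: estimated advantages.
    Note: this is an illustrative sketch, not the paper's implementation.
    """
    total = 0.0
    for lp_new, lp_old, adv in zip(logp_new, logp_old, advantages):
        ratio = math.exp(lp_new - lp_old)               # importance ratio r_t
        clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)  # clip r_t to [1-eps, 1+eps]
        total += min(ratio * adv, clipped * adv)         # pessimistic (lower) bound
    return -total / len(advantages)                      # negate: we maximize the objective
```

When the new policy equals the behavior policy the ratio is 1 and the loss reduces to the negated mean advantage; when the ratio drifts outside `[1 - eps, 1 + eps]`, the clipped term caps the incentive to move further, which is what makes PPO updates stable enough to train a stochastic decentralized formation policy in parallelized synthetic environments.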
Aug-1-2020