Luu, Linda
Barkour: Benchmarking Animal-level Agility with Quadruped Robots
Caluwaerts, Ken, Iscen, Atil, Kew, J. Chase, Yu, Wenhao, Zhang, Tingnan, Freeman, Daniel, Lee, Kuang-Huei, Lee, Lisa, Saliceti, Stefano, Zhuang, Vincent, Batchelor, Nathan, Bohez, Steven, Casarini, Federico, Chen, Jose Enrique, Cortes, Omar, Coumans, Erwin, Dostmohamed, Adil, Dulac-Arnold, Gabriel, Escontrela, Alejandro, Frey, Erik, Hafner, Roland, Jain, Deepali, Jyenis, Bauyrjan, Kuang, Yuheng, Lee, Edward, Luu, Linda, Nachum, Ofir, Oslund, Ken, Powell, Jason, Reyes, Diego, Romano, Francesco, Sadeghi, Fereshteh, Sloat, Ron, Tabanpour, Baruch, Zheng, Daniel, Neunert, Michael, Hadsell, Raia, Heess, Nicolas, Nori, Francesco, Seto, Jeff, Parada, Carolina, Sindhwani, Vikas, Vanhoucke, Vincent, Tan, Jie
Abstract--Animals have evolved various agile locomotion strategies, such as sprinting, leaping, and jumping. There is a growing interest in developing legged robots that move like their biological counterparts and show various agile skills to navigate complex environments quickly. Despite the interest, the field lacks systematic benchmarks to measure the performance of control policies and hardware in agility. We introduce the Barkour benchmark, an obstacle course to quantify agility for legged robots. Inspired by dog agility competitions, it consists of diverse obstacles and a time-based scoring mechanism. This encourages researchers to develop controllers that not only move fast, but do so in a controllable and versatile way. To set strong baselines, we present two methods for tackling the benchmark. In the first approach, we train specialist locomotion skills using on-policy reinforcement learning methods and combine them with a high-level navigation controller. In the second approach, we distill the specialist skills into a Transformer-based generalist locomotion policy, named Locomotion-Transformer, that can handle various terrains and adjust the robot's gait based on the perceived environment.

There has been a proliferation of legged robot development inspired by animal mobility. An important research question in this field is how to develop a controller that enables legged robots to exhibit animal-level agility while also being able to generalize across various obstacles and terrains. Through the exploration of both learning and traditional control-based methods, there has been significant progress in enabling robots to walk across a wide range of terrains [10, 21, 20, 1, 27]. These robots are now capable of walking in a variety of indoor and outdoor environments, such as up and down stairs, through bushes, and over unpaved roads and rocky or even sandy beaches. Despite advances in robot hardware and control, a major challenge in the field is the lack of standardized and intuitive methods for evaluating the effectiveness of locomotion controllers.
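The time-based scoring idea mentioned in the abstract can be made concrete with a small sketch. The snippet below is purely illustrative: the function name, target time, and penalty rate are assumptions for illustration, not the benchmark's official scoring rule. It credits completed obstacles and subtracts a penalty when the run exceeds a target course time.

```python
# Illustrative sketch of a time-based agility score in the spirit of the
# Barkour benchmark. All names and constants here are assumptions, not the
# benchmark's official definition.

def agility_score(obstacles_completed: int,
                  total_obstacles: int,
                  course_time_s: float,
                  target_time_s: float = 10.0,
                  time_penalty_per_s: float = 0.1) -> float:
    """Return a score in [0, 1]; 1.0 means all obstacles cleared within the target time."""
    completion = obstacles_completed / total_obstacles
    overtime = max(0.0, course_time_s - target_time_s)
    return max(0.0, completion - time_penalty_per_s * overtime)

# Example: a run clearing 4 of 5 obstacles in 12.5 s.
print(agility_score(4, 5, course_time_s=12.5))  # 0.8 - 0.1 * 2.5 ≈ 0.55
```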
Learning and Adapting Agile Locomotion Skills by Transferring Experience
Smith, Laura, Kew, J. Chase, Li, Tianyu, Luu, Linda, Peng, Xue Bin, Ha, Sehoon, Tan, Jie, Levine, Sergey
Legged robots have enormous potential in their range of capabilities, from navigating unstructured terrains to high-speed running. However, designing robust controllers for highly agile dynamic motions remains a substantial challenge for roboticists. Reinforcement learning (RL) offers a promising data-driven approach for automatically training such controllers. However, exploration in these high-dimensional, underactuated systems remains a significant hurdle for enabling legged robots to learn performant, naturalistic, and versatile agility skills. We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks. To leverage controllers we can acquire in practice, we design this framework to be flexible in terms of their source -- that is, the controllers may have been optimized for a different objective under different dynamics, or may require different knowledge of the surroundings -- and thus may be highly suboptimal for the target task. We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments. We also demonstrate that the agile behaviors learned in this way are graceful and safe enough to deploy in the real world.
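As a rough illustration of the experience-transfer idea described in this abstract, the sketch below pre-fills an off-policy agent's replay buffer with rollouts from an existing source controller before training on the target task. The `ReplayBuffer`, `collect_rollout`, and `jumpstart_training` helpers, and the `env`/`agent`/`policy` interfaces, are hypothetical stand-ins under an old-style Gym API, not the paper's actual implementation.

```python
# Hypothetical sketch of jumpstarting off-policy RL with experience from an
# existing (possibly suboptimal) controller: rollouts from the source
# controller seed the replay buffer before training on the target task.
import random
from collections import deque


class ReplayBuffer:
    def __init__(self, capacity: int = 100_000):
        self.storage = deque(maxlen=capacity)

    def add(self, transition):
        self.storage.append(transition)

    def sample(self, batch_size: int):
        return random.sample(self.storage, min(batch_size, len(self.storage)))


def collect_rollout(env, policy, max_steps: int = 1000):
    """Roll out `policy` in `env`, yielding (obs, action, reward, next_obs, done)."""
    obs = env.reset()
    for _ in range(max_steps):
        action = policy(obs)
        next_obs, reward, done, _ = env.step(action)
        yield obs, action, reward, next_obs, done
        if done:
            break
        obs = next_obs


def jumpstart_training(env, source_controller, agent,
                       seed_episodes: int = 50, train_steps: int = 10_000):
    buffer = ReplayBuffer()
    # 1) Seed the buffer with experience from the existing controller, even
    #    though it may have been tuned for a different objective or dynamics.
    for _ in range(seed_episodes):
        for transition in collect_rollout(env, source_controller):
            buffer.add(transition)
    # 2) Continue with standard off-policy RL on the target task, mixing
    #    freshly collected experience with the seeded data.
    obs = env.reset()
    for _ in range(train_steps):
        action = agent.act(obs)
        next_obs, reward, done, _ = env.step(action)
        buffer.add((obs, action, reward, next_obs, done))
        agent.update(buffer.sample(batch_size=256))
        obs = env.reset() if done else next_obs
```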