SLoMo: A General System for Legged Robot Motion Imitation from Casual Videos
Zhang, John Z., Yang, Shuo, Yang, Gengshan, Bishop, Arun L., Ramanan, Deva, Manchester, Zachary
–arXiv.org Artificial Intelligence
We present SLoMo: a first-of-its-kind framework for transferring skilled motions from casually captured "in the wild" video footage of humans and animals to legged robots. SLoMo works in three stages: 1) synthesize a physically plausible reconstructed key-point trajectory from monocular videos; 2) offline, optimize a dynamically feasible reference trajectory for the robot, including body and foot motion as well as contact sequences, that closely tracks the key points; 3) online, track the reference trajectory on robot hardware using a general-purpose model-predictive controller. Traditional motion imitation for legged motor skills often requires expert animators, collaborative demonstrations, and/or expensive motion-capture equipment, all of which limit scalability. SLoMo instead relies only on easy-to-obtain monocular video footage, readily available in online repositories such as YouTube, and converts videos into motion primitives that real-world robots can execute reliably. We demonstrate our approach by transferring the motions of cats, dogs, and humans to example robots, including a quadruped (on hardware) and a humanoid (in simulation). To the best of the authors' knowledge, this is the first attempt at a general-purpose motion transfer framework that imitates animal and human motions on legged robots directly from casual videos, without artificial markers or labels.
Sep-5-2023
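The staged pipeline in the abstract (reconstruct key points, optimize a reference trajectory offline, track it online with model-predictive control) can be illustrated with a toy receding-horizon tracking controller. The sketch below is an illustration only, not the authors' implementation: it tracks a hand-made reference on a 1D double integrator standing in for legged-robot dynamics, and all names, weights, dynamics, and bounds are assumptions introduced here.

```python
import numpy as np
import cvxpy as cp

# Toy stand-in for stage 3: receding-horizon tracking MPC.
# The "reference" plays the role of the stage-2 output; the double
# integrator plays the role of the robot model. All values are illustrative.
dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])                # state-tracking weight
R = np.array([[0.1]])                   # control-effort weight
N = 20                                  # prediction horizon
T = 50                                  # simulation length

# Hypothetical reference trajectory (stage-2 output in the real pipeline).
x_ref = np.stack([np.sin(0.05 * np.arange(T + N)), np.zeros(T + N)], axis=1)

x = np.zeros(2)
for t in range(T):
    xs = cp.Variable((N + 1, 2))
    us = cp.Variable((N, 1))
    cost = 0
    cons = [xs[0] == x]
    for k in range(N):
        # Quadratic penalty on reference deviation plus control effort,
        # subject to the (linear) dynamics and an input bound.
        cost += cp.quad_form(xs[k] - x_ref[t + k], Q) + cp.quad_form(us[k], R)
        cons += [xs[k + 1] == A @ xs[k] + B @ us[k], cp.abs(us[k]) <= 5.0]
    cp.Problem(cp.Minimize(cost), cons).solve()
    u0 = us.value[0]        # apply only the first optimized control
    x = A @ x + B @ u0      # simulate one step, then re-solve
```

Only the first optimized control is applied before the horizon is re-solved from the new state; this receding-horizon structure is what lets an online tracking controller follow an offline reference while rejecting disturbances.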