Recent progress in legged locomotion research has produced robots that can perform agile blind-walking with robustness comparable to a blindfolded human. However, this walking approach has not yet been integrated with planners for high-level activities. In this paper, we take a step towards high-level task planning for these robots by studying a planar simulated biped that captures their essential dynamics. We investigate variants of Monte-Carlo Tree Search (MCTS) for selecting an appropriate blind-walking controller at each decision cycle. In particular, we consider UCT with an intelligently selected rollout policy, which is shown to be capable of guiding the biped through treacherous terrain. In addition, we develop a new MCTS variant, called Monte-Carlo Discrepancy Search (MCDS), which is shown to make more effective use of limited planning time than UCT for this domain. We demonstrate the effectiveness of these planners in both deterministic and stochastic environments across a range of algorithm parameters. In addition, we present results for using these planners to control a full-order 3D simulation of Cassie, an agile bipedal robot, through complex terrain.
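The planner described above selects a blind-walking controller at each decision cycle using UCT, i.e. Monte-Carlo tree search with UCB1 action selection and a rollout policy to estimate returns. The following is a minimal one-level sketch of that idea, not the paper's implementation: the simulator interface `step(state, action) -> (next_state, reward)` and the function names are assumptions, and `rollout_policy` stands in for the intelligently selected rollout policy mentioned in the abstract.

```python
import math
import random

def ucb1(total_value, visits, parent_visits, c=1.4):
    # UCB1 score: mean return plus an exploration bonus.
    if visits == 0:
        return float("inf")  # try every action at least once
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

def uct_plan(step, actions, state, rollout_policy, n_iters=300, depth=5, gamma=0.99):
    """Pick one action (e.g. a blind-walking controller) for the current
    decision cycle by Monte-Carlo evaluation with UCB1 at the root."""
    visits = {a: 0 for a in actions}
    value = {a: 0.0 for a in actions}
    total = 0
    for _ in range(n_iters):
        # Select a root action by UCB1.
        a = max(actions, key=lambda act: ucb1(value[act], visits[act], total + 1))
        s, ret = step(state, a)
        # Monte-Carlo rollout from the resulting state.
        discount = gamma
        for _ in range(depth):
            s, r = step(s, rollout_policy(s))
            ret += discount * r
            discount *= gamma
        visits[a] += 1
        value[a] += ret
        total += 1
    # Commit to the most-visited root action.
    return max(actions, key=lambda act: visits[act])
```

In a full UCT planner the tree grows below the root as well; this sketch keeps only root statistics to show the selection/rollout/backup cycle that a limited planning budget is spent on.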
What looks like a tiny mechanical ostrich chasing after a car is actually a significant leap forward for robot-kind. The clever and simple two-legged robot, known as the Planar Elliptical Runner, was developed at the Institute for Human and Machine Cognition in Ocala, Florida, to explore how mechanical design can be used to enable sophisticated legged locomotion. A video produced by the researchers shows the robot being tested in a number of situations, including on a treadmill and running behind and alongside a car with a helping hand from an engineer. In contrast to many other legged robots, this one doesn't use sensors and a computer to help balance itself. Instead, its mechanical design provides dynamic stability as it runs.
Learning controllers that reproduce legged locomotion in nature has been a long-standing goal in robotics and computer graphics. While yielding promising results, recent approaches are not yet flexible enough to be applicable to legged systems of different morphologies. This is partly because they often rely on precise motion-capture references or elaborate learning environments that ensure the naturalness of the emergent locomotion gaits but prevent generalization. This work proposes a generic approach for ensuring realism in locomotion by guiding the learning process with the spring-loaded inverted pendulum model as a reference. Leveraging the exploration capacity of Reinforcement Learning (RL), we learn a control policy that fills in the information gap between the template model and the full-body dynamics required to maintain stable and periodic locomotion. The proposed approach can be applied to robots of different sizes and morphologies and adapted to any RL technique and control architecture. We present experimental results showing that even in a model-free setup and with a simple reactive control architecture, the learned policies can generate realistic and energy-efficient locomotion gaits for a bipedal and a quadrupedal robot. Most importantly, this is achieved without motion capture, strong constraints on the robot's dynamics or kinematics, or prescribed limb coordination. We provide supplemental videos for qualitative analysis of the naturalness of the learned gaits.
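The spring-loaded inverted pendulum (SLIP) template used as a reference above models the body as a point mass on a massless spring leg. A minimal sketch of its stance-phase dynamics is below; the function name, parameters, and the explicit Euler integrator are illustrative choices, not the paper's code. Stance dynamics apply only while the leg is compressed (length at or below its rest length).

```python
import math

GRAVITY = 9.81  # m/s^2

def slip_stance_step(x, z, vx, vz, foot_x, k_over_m, rest_len, dt=1e-4):
    """One explicit-Euler step of stance-phase SLIP dynamics.

    (x, z): mass position; (vx, vz): velocity; foot_x: stance foot position;
    k_over_m: spring stiffness divided by mass; rest_len: leg rest length.
    """
    dx, dz = x - foot_x, z
    leg_len = math.hypot(dx, dz)
    # Spring acceleration along the leg axis, positive when compressed.
    accel = k_over_m * (rest_len - leg_len) / leg_len
    ax = accel * dx
    az = accel * dz - GRAVITY
    return x + vx * dt, z + vz * dt, vx + ax * dt, vz + az * dt
```

A learned policy in the setup described above would supply what this template omits, e.g. joint-level torques and swing-leg behavior, while being rewarded for tracking SLIP-like center-of-mass motion.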
A few years ago, we wrote about the tiniest little quadruped robot we'd ever seen--a mere 20 millimeters long, with a hip height of 5.6 mm and a weight of about 1.6 grams. The designer, Ryan St. Pierre from Sarah Bergbreiter's lab at the University of Maryland, also showed us a picture of an even smaller version that weighed just 100 mg. "It's always a fun challenge to try to make robots as small as possible," he told us. "Currently, I am working on making a robot, of the same design, that would be 2.5 mm long, an order of magnitude smaller than the ones we presented at ICRA. Smaller robots can more easily go places that larger robots can't, and having them in various sizes would increase their utility."
In this episode, Audrow Nash speaks with Monica Daley about learning from birds about legged locomotion. To do this, Daley analyzes the gaits of guineafowl in various experiments to understand the mechanical principles underlying gaits, such as energetic economy, mechanical limits, and how the birds avoid injury. She then tests her ideas about legged locomotion on legged robots with collaborators, including Jonathan Hurst from Oregon State University. Daley also speaks about her experience with interdisciplinary collaborations.