Mini Cheetah
HACL: History-Aware Curriculum Learning for Fast Locomotion
Mishra, Prakhar, Raj, Amir Hossain, Xiao, Xuesu, Manocha, Dinesh
We address the problem of agile and rapid locomotion, a key characteristic of quadrupedal and bipedal robots. We present a new algorithm that maintains stability and generates high-speed trajectories by considering the temporal aspect of locomotion. Our formulation takes past information into account via a novel history-aware curriculum learning (HACL) algorithm. We model the history of joint velocity commands with respect to the observed linear and angular rewards using a recurrent neural network (RNN). The hidden state helps the curriculum learn the relationship between the forward linear velocity and angular velocity commands and the rewards over a given time-step. We validate our approach on the MIT Mini Cheetah, Unitree Go1, and Unitree Go2 robots in a simulated environment, and on a Unitree Go1 robot in real-world scenarios. In practice, HACL achieves a peak forward velocity of 6.7 m/s for a commanded velocity of 7 m/s and outperforms prior locomotion algorithms by nearly 20%.
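The history encoding described above can be sketched minimally: fold a sequence of (velocity command, reward) pairs into one hidden state with a plain Elman RNN step. All shapes, names, and the random weights below are illustrative assumptions, not details from the HACL paper.

```python
import numpy as np

def encode_history(commands, rewards, W_x, W_h, b):
    """Fold a history of (command, reward) pairs into a single hidden
    state with an Elman RNN step: h_t = tanh(W_x x_t + W_h h_{t-1} + b).
    The resulting state summarizes how rewards responded to past commands,
    which a curriculum module could condition on."""
    hidden_dim = W_h.shape[0]
    h = np.zeros(hidden_dim)
    for t in range(commands.shape[0]):
        x = np.concatenate([commands[t], rewards[t]])  # joint input at step t
        h = np.tanh(W_x @ x + W_h @ h + b)
    return h

rng = np.random.default_rng(0)
cmd_dim, reward_dim, hidden_dim, T = 3, 2, 16, 50
W_x = rng.normal(scale=0.1, size=(hidden_dim, cmd_dim + reward_dim))
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b = np.zeros(hidden_dim)

commands = rng.normal(size=(T, cmd_dim))    # e.g. [v_x, v_y, yaw_rate] per step
rewards = rng.normal(size=(T, reward_dim))  # e.g. linear/angular tracking rewards
h = encode_history(commands, rewards, W_x, W_h, b)
print(h.shape)  # (16,)
```

In a learned version, the fixed random weights would be trained end-to-end (e.g. as a GRU), but the flow of information is the same: one vector per time-step in, one summary state out.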
Zero-Shot Retargeting of Learned Quadruped Locomotion Policies Using Hybrid Kinodynamic Model Predictive Control
Li, He, Zhang, Tingnan, Yu, Wenhao, Wensing, Patrick M.
Reinforcement Learning (RL) has witnessed great strides for quadruped locomotion, with continued progress in the reliable sim-to-real transfer of policies. However, it remains a challenge to reuse a policy on another robot, which could save time for retraining. In this work, we present a framework for zero-shot policy retargeting wherein diverse motor skills can be transferred between robots of different shapes and sizes. The new framework centers on a planning-and-control pipeline that systematically integrates RL and Model Predictive Control (MPC). The planning stage employs RL to generate a dynamically plausible trajectory as well as the contact schedule, avoiding the combinatorial complexity of contact sequence optimization. This information is then used to seed the MPC to stabilize and robustify the policy roll-out via a new Hybrid Kinodynamic (HKD) model that implicitly optimizes the foothold locations. Hardware results show an ability to transfer policies from both the A1 and Laikago robots to the MIT Mini Cheetah robot without requiring any policy re-tuning.
GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots
Feng, Gilbert, Zhang, Hongbo, Li, Zhongyu, Peng, Xue Bin, Basireddy, Bhuvan, Yue, Linzhu, Song, Zhitao, Yang, Lizhi, Liu, Yunhui, Sreenath, Koushil, Levine, Sergey
Recent years have seen a surge in commercially-available and affordable quadrupedal robots, with many of these platforms being actively used in research and industry. As the availability of legged robots grows, so does the need for controllers that enable these robots to perform useful skills. However, most learning-based frameworks for controller development focus on training robot-specific controllers, a process that needs to be repeated for every new robot. In this work, we introduce a framework for training generalized locomotion (GenLoco) controllers for quadrupedal robots. Our framework synthesizes general-purpose locomotion controllers that can be deployed on a large variety of quadrupedal robots with similar morphologies. We present a simple but effective morphology randomization method that procedurally generates a diverse set of simulated robots for training. We show that by training a controller on this large set of simulated robots, our models acquire more general control strategies that can be directly transferred to novel simulated and real-world robots with diverse morphologies, which were not observed during training.
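The morphology randomization idea above can be sketched as sampling a robot's physical parameters from fixed ranges before each simulated training episode. The parameter names and ranges below are assumptions for illustration, not the actual GenLoco configuration.

```python
import random

# Hypothetical ranges over which a quadruped morphology is randomized.
# A controller trained across many such samples is pushed toward control
# strategies that generalize rather than exploit one specific body.
MORPHOLOGY_RANGES = {
    "body_mass_kg":    (4.0, 15.0),
    "thigh_length_m":  (0.15, 0.30),
    "calf_length_m":   (0.15, 0.30),
    "hip_spacing_m":   (0.25, 0.50),
    "motor_torque_nm": (15.0, 40.0),
}

def sample_morphology(rng):
    """Procedurally generate one simulated robot by drawing each
    physical parameter uniformly from its range."""
    return {name: rng.uniform(lo, hi)
            for name, (lo, hi) in MORPHOLOGY_RANGES.items()}

rng = random.Random(42)
robots = [sample_morphology(rng) for _ in range(100)]  # a diverse training set
print(len(robots))  # 100
```

Each training episode would then instantiate the simulator with one sampled dictionary, so the policy never sees the same body twice in a row.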
MIT researchers use simulation to train a robot to run at high speeds
Four-legged robots are nothing novel -- Boston Dynamics' Spot has been making the rounds for some time, as have countless alternative open source designs. But researchers at MIT claim to have broken the record for the fastest recorded robot run. Working out of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), the team says it developed a system that allows the MIT-designed Mini Cheetah to learn to run by trial and error in simulation. While the speedy Mini Cheetah has limited direct applications in the enterprise, the researchers believe their technique could be used to improve the capabilities of other robotics systems -- including those used in factories to assemble products before they're shipped to customers.
MIT's Robotic Cheetah Taught Itself How to Run and Set a New Speed Record in the Process
To make their Mini Cheetah better equipped to skillfully scramble across varying terrains, robotics researchers at MIT's CSAIL used AI-powered simulations to quickly teach the bot to adapt its walking style as needed. That included learning how to run, which resulted in a new gait that allows the robot to move faster than it ever has before. As much as robot designers strive to engineer and program a robot to handle any situation it might experience in the real world, it's an impossible task. The world is endlessly chaotic. And when simply walking down a sidewalk, a robot could face a myriad of obstacles from smooth pavement to slippery patches of ice to areas covered in loose gravel to all of the above one after the other.
MIT used simulations to teach a robot to run, and the results are hilarious
Scientists at MIT managed to teach a robot to run using machine learning. Normally, robots are taught to move across difficult terrain through behaviors preprogrammed into their code. This time, though, the scientists at MIT used simulations to teach the Mini Cheetah to run fast and adapt to walking on different terrain. The researchers showcased the results in a video last week, and they are both intriguing and hilarious. This isn't the first time that MIT has taught the robot new tricks, either.
Machine learning helped MIT's cheetah robot break its own speed record
The fleet-footed quadruped robot called Mini Cheetah… well, doesn't move like anything in the animal kingdom. A cross between a scramble and a scamper, its gait is desperately chaotic and comically ungraceful. In fact, its particular style is dubbed "gait-free." And this brandless bound is what makes it fast. A team of researchers from the Massachusetts Institute of Technology (MIT) created a computer algorithm that spurs this artificially intelligent robot to maximize its speed, thereby breaking its own sprint records.
This Cheetah Robot Taught Itself How to Sprint in a Weird Way
It takes years of practice to crawl and then walk well, during which time mothers don't have to worry about their children legging it out of the county. Roboticists don't have that kind of time to spare, however, so they're developing ways for machines to learn to move through trial and error--just like babies, only way, way faster. But MIT scientists announced last week that they got this research platform, a four-legged machine known as Mini Cheetah, to hit its fastest speed ever--nearly 13 feet per second, or 9 miles per hour--not by meticulously hand-coding its movements line by line, but by encouraging digital versions of the machine to experiment with running in a simulated world. What the system landed on is … unconventional. But the researchers were able to port what the virtual robot learned into this physical machine that could then bolt across all kinds of terrain without falling on its, um, face. This technique is known as reinforcement learning.
Want to make robots run faster? Try letting AI take control
Quadrupedal robots are becoming a familiar sight, but engineers are still working out the full capabilities of these machines. Now, a group of researchers from MIT says one way to improve their functionality might be to use AI to help teach the bots how to walk and run. Usually, when engineers are creating the software that controls the movement of legged robots, they write a set of rules about how the machine should respond to certain inputs. So, if a robot's sensors detect x amount of force on leg y, it will respond by powering up motor a to exert torque b, and so on. Coding these parameters is complicated and time-consuming, but it gives researchers precise and predictable control over the robots.
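The hand-coded control rules described above amount to fixed mappings from sensor readings to actuator commands. A minimal sketch of one such rule, with made-up gains and limits rather than values from any real robot:

```python
def stance_torque(contact_force_n, k_force=0.1, tau_max=17.0):
    """Hand-written rule of the kind described: if the leg's sensor reads
    a ground-reaction force (in newtons), command a knee torque (in Nm)
    proportional to it, clipped to the motor's limit. The gain k_force and
    the limit tau_max are illustrative values an engineer would tune by hand."""
    tau = k_force * contact_force_n
    return max(-tau_max, min(tau_max, tau))

print(stance_torque(120.0))  # 12.0  (within the motor limit)
print(stance_torque(500.0))  # 17.0  (clipped at tau_max)
```

Writing and tuning many such rules for every leg, gait, and terrain is exactly the slow, precise process the article contrasts with letting a learned policy discover the mapping on its own.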
One giant leap for the mini cheetah
MIT researchers have developed a system that improves the speed and agility of legged robots as they jump across gaps in the terrain. The movement may look effortless, but getting a robot to move this way is an altogether different prospect. In recent years, four-legged robots inspired by the movement of cheetahs and other animals have made great leaps forward, yet they still lag behind their mammalian counterparts when it comes to traveling across a landscape with rapid elevation changes. "In those settings, you need to use vision in order to avoid failure. For example, stepping in a gap is difficult to avoid if you can't see it. Although there are some existing methods for incorporating vision into legged locomotion, most of them aren't really suitable for use with emerging agile robotic systems," says Gabriel Margolis, a PhD student in the lab of Pulkit Agrawal, professor in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.