ANYmal
Sampling Strategies for Robust Universal Quadrupedal Locomotion Policies
Rytz, David, Ly, Kim Tien, Havoutis, Ioannis
This work focuses on sampling strategies over configuration variations for generating robust universal locomotion policies for quadrupedal robots. We investigate the effects of sampling physical robot parameters and joint proportional-derivative (PD) gains to enable training a single reinforcement learning policy that generalizes across multiple parameter configurations. Three fundamental joint-gain sampling strategies are compared: parameter sampling with (1) linear and polynomial function mappings from mass to gains, (2) performance-based adaptive filtering, and (3) uniform random sampling. We improve the robustness of the policy by biasing the configurations using nominal priors and reference models. All training was conducted in RaiSim, tested in simulation on a range of diverse quadrupeds, and zero-shot deployed onto hardware with the ANYmal quadruped robot. Compared to multiple baseline implementations, our results demonstrate the need for significant randomization of joint controller gains to robustly close the sim-to-real gap.
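The three gain-sampling strategies named in the abstract can be sketched as follows. All numeric ranges, the linear mass-to-gain coefficients, and the adaptation rule are illustrative assumptions, not values from the paper:

```python
import random

def sample_uniform(kp_range=(20.0, 120.0), kd_range=(0.5, 3.0)):
    """Strategy (3): uniform random sampling of PD gains
    (ranges are assumed for illustration)."""
    return random.uniform(*kp_range), random.uniform(*kd_range)

def sample_mass_mapped(mass_kg, a=2.0, b=10.0, noise=0.1):
    """Strategy (1): a linear mass-to-gain mapping kp = a*m + b,
    perturbed around the nominal prior (coefficients assumed)."""
    kp = (a * mass_kg + b) * random.uniform(1 - noise, 1 + noise)
    kd = 0.02 * kp  # assumed fixed damping ratio relative to kp
    return kp, kd

def sample_adaptive(kp_nominal, tracking_error, gain=0.5):
    """Strategy (2): performance-based adaptive filtering -- here a
    toy rule that widens the sampling range around the nominal gain
    as tracking improves (the paper's filter may differ)."""
    spread = gain / max(tracking_error, 1e-3)
    lo, hi = kp_nominal / (1 + spread), kp_nominal * (1 + spread)
    return random.uniform(lo, hi)
```

Each strategy would be queried once per training episode to instantiate the joint controller gains for that rollout.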
AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model
Moon, Seungwhan, Madotto, Andrea, Lin, Zhaojiang, Nagarajan, Tushar, Smith, Matt, Jain, Shashank, Yeh, Chun-Fu, Murugesan, Prakash, Heidari, Peyman, Liu, Yue, Srinet, Kavya, Damavandi, Babak, Kumar, Anuj
We present Any-Modality Augmented Language Model (AnyMAL), a unified model that reasons over diverse input modality signals (i.e. text, image, video, audio, IMU motion sensor), and generates textual responses. AnyMAL inherits the powerful text-based reasoning abilities of the state-of-the-art LLMs including LLaMA-2 (70B), and converts modality-specific signals to the joint textual space through a pre-trained aligner module. To further strengthen the multimodal LLM's capabilities, we fine-tune the model with a multimodal instruction set manually collected to cover diverse topics and tasks beyond simple QAs. We conduct comprehensive empirical analysis comprising both human and automatic evaluations, and demonstrate state-of-the-art performance on various multimodal tasks.
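The aligner described above projects a modality-specific embedding into the LLM's text-embedding space. A minimal toy sketch of such a module is below; the linear architecture, dimensions, and random initialization are illustrative assumptions, not AnyMAL's actual design:

```python
import random

class LinearAligner:
    """Toy stand-in for a modality aligner: a linear map projecting
    one modality embedding (dim d_in) into k "soft tokens" in the
    LLM's text-embedding space (dim d_llm)."""

    def __init__(self, d_in, d_llm, k, seed=0):
        rng = random.Random(seed)
        self.k, self.d_llm = k, d_llm
        # One flat weight matrix of shape (k * d_llm) x d_in.
        self.w = [[rng.gauss(0, 0.02) for _ in range(d_in)]
                  for _ in range(k * d_llm)]

    def __call__(self, embedding):
        out = [sum(wi * xi for wi, xi in zip(row, embedding))
               for row in self.w]
        # Reshape the flat output into k soft tokens of size d_llm.
        return [out[i * self.d_llm:(i + 1) * self.d_llm]
                for i in range(self.k)]
```

In this scheme, the resulting soft tokens would be prepended to the text token embeddings before they reach the (frozen) language model.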
Learning-based Design and Control for Quadrupedal Robots with Parallel-Elastic Actuators
Bjelonic, Filip, Lee, Joonho, Arm, Philip, Sako, Dhionis, Tateo, Davide, Peters, Jan, Hutter, Marco
Parallel-elastic joints can improve the efficiency and strength of robots by assisting the actuators with additional torques. For these benefits to be realized, the spring needs to be carefully designed. However, designing robots is an iterative and tedious process, often relying on intuition and heuristics. We introduce a design optimization framework that allows us to co-optimize a parallel-elastic knee joint and a locomotion controller for quadrupedal robots with minimal human intuition. We design a parallel-elastic joint and optimize its parameters with respect to efficiency in a model-free fashion. In the first step, we train a design-conditioned policy using model-free reinforcement learning, capable of controlling the quadruped across a predefined range of design parameters. Afterwards, we use Bayesian Optimization to find the best design using the policy. We use this framework to optimize the parallel-elastic spring parameters for the knee of our quadrupedal robot ANYmal together with the optimal controller. We evaluate the optimized design and controller in real-world experiments over various terrains. Our results show that the new system improves the torque-square efficiency of the robot by 33% compared to the baseline and reduces maximum joint torque by 30% without compromising tracking performance. The improved design resulted in 11% longer operation time on flat terrain.
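The two-stage structure of the framework (train a design-conditioned policy, then search the design space with that policy fixed) can be sketched as below. The reward function is a made-up stand-in for simulator rollouts, and plain random search stands in for the paper's Bayesian Optimization:

```python
import random

def rollout_return(design, rng=None):
    """Stand-in for evaluating the design-conditioned policy in the
    simulator for one spring-stiffness design. The quadratic optimum
    at 0.6 and the noise level are invented for illustration."""
    rng = rng or random
    return -(design - 0.6) ** 2 + rng.gauss(0, 0.01)

def optimize_design(bounds=(0.0, 1.0), budget=200, seed=0):
    """Stage 2: search the (normalized) design space using the fixed
    design-conditioned policy. Random search is a simple stand-in for
    the Bayesian Optimization used in the paper."""
    rng = random.Random(seed)
    best_d, best_r = None, float("-inf")
    for _ in range(budget):
        d = rng.uniform(*bounds)
        r = rollout_return(d, rng=rng)
        if r > best_r:
            best_d, best_r = d, r
    return best_d, best_r
```

A real Bayesian optimizer would replace the uniform proposals with an acquisition function over a surrogate model, reusing the same `rollout_return` interface.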
Locomotion Policy Guided Traversability Learning using Volumetric Representations of Complex Environments
Frey, Jonas, Hoeller, David, Khattak, Shehryar, Hutter, Marco
Despite the progress in legged robotic locomotion, autonomous navigation in unknown environments remains an open problem. Ideally, the navigation system utilizes the full potential of the robot's locomotion capabilities while operating within safety limits under uncertainty. The robot must sense and analyze the traversability of the surrounding terrain, which depends on the hardware, locomotion control, and terrain properties. It may contain information about the risk, energy, or time consumption needed to traverse the terrain. To avoid hand-crafted traversability cost functions, we propose to collect traversability information about the robot and locomotion policy by simulating the traversal over randomly generated terrains using a physics simulator. Thousands of robots are simulated in parallel, controlled by the same locomotion policy used in reality, to acquire the equivalent of 57 years of real-world locomotion experience. For deployment on the real robot, a sparse convolutional network is trained to predict the simulated traversability cost, which is tailored to the deployed locomotion policy, from an entirely geometric representation of the environment in the form of a 3D voxel-occupancy map. This representation avoids the need for commonly used elevation maps, which are error-prone in the presence of overhanging obstacles and multi-floor or low-ceiling scenarios. The effectiveness of the proposed traversability prediction network is demonstrated for path planning for the legged robot ANYmal in various indoor and natural environments.
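The geometric input described above, a 3D voxel-occupancy map rather than a 2.5D elevation map, can be illustrated with a minimal point-cloud voxelizer. The resolution and coordinates are arbitrary choices for the sketch:

```python
def voxelize(points, resolution=0.1):
    """Build a sparse 3D voxel-occupancy map from a point cloud.
    Unlike an elevation map, this preserves overhangs and multi-floor
    structure: two points at the same (x, y) but different z land in
    distinct voxels instead of collapsing to one height value."""
    occupied = set()
    for x, y, z in points:
        occupied.add((int(x // resolution),
                      int(y // resolution),
                      int(z // resolution)))
    return occupied

# A floor point and an overhanging obstacle above it at the same (x, y):
cloud = [(1.02, 0.51, 0.0), (1.04, 0.53, 1.9)]
```

In the paper's pipeline, a sparse convolutional network consumes such an occupancy grid to predict per-location traversability cost; the sketch only covers the input representation.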
How robots learn to hike
Steep sections on slippery ground, high steps, scree and forest trails full of roots: the path up the 1,098-metre-high Mount Etzel at the southern end of Lake Zurich is peppered with numerous obstacles. But ANYmal, the quadrupedal robot from the Robotic Systems Lab at ETH Zurich, overcomes the 120 vertical metres effortlessly in a 31-minute hike. That's 4 minutes faster than the estimated duration for human hikers -- and with no falls or missteps. This is made possible by a new control technology, which researchers at ETH Zurich led by robotics professor Marco Hutter recently presented in the journal Science Robotics. "The robot has learned to combine visual perception of its environment with proprioception -- its sense of touch -- based on direct leg contact. This allows it to tackle rough terrain faster, more efficiently and, above all, more robustly," Hutter says.
Tech: Incredible two-wheeled autonomous robot can climb stairs and drive at speeds of up to 7.5mph
An incredible two-wheeled and fully autonomous robot is capable of climbing up flights of stairs and can drive along at speeds of up to 7.5 miles per hour. The 'Ascento Pro' is the brainchild of Swiss Federal Institute of Technology Zürich (ETH Zürich) spin-off firm Ascento Robotics, building on their previous designs. The cute robot -- which looks like the wheeled baby of an AT-ST Walker from Star Wars -- could find applications in inspection, surveillance and 'last mile' delivery. It is unclear how much the robot might retail for commercially -- and when -- with MailOnline having reached out to Ascento Robotics to enquire.
Tech: Incredible four-wheeled robot can drive at speeds of up to 14mph or stand up on two legs
Forget about Optimus Prime and Megatron! Swiss experts have developed a four-wheeled robot that can rear up on its hind legs and spin like a performing poodle. Developed by ETH Zürich spin-off Swiss-Mile, the agile bot, which can reach speeds of up to 14 mph (23 kph), is the latest iteration of the 'ANYmal' robot concept. The design -- which superficially resembles Boston Dynamics' robot dog, Spot -- has previously been shown using its AI to get back up after being kicked over. In a newly-released video, the robot is shown not only performing its standing trick, but also wheeling along and taking ascending and descending steps in its stride.
These Virtual Obstacle Courses Help Real Robots Learn to Walk
An army of more than 4,000 marching doglike robots is a vaguely menacing sight, even in a simulation. But it may point the way for machines to learn new tricks. The virtual robot army was developed by researchers from ETH Zurich in Switzerland and chipmaker Nvidia. They used the wandering bots to train an algorithm that was then used to control the legs of a real-world robot. In the simulation, the machines--called ANYmals--confront challenges like slopes, steps, and steep drops in a virtual landscape.