ES-Parkour: Advanced Robot Parkour with Bio-inspired Event Camera and Spiking Neural Network
Zhang, Qiang, Cao, Jiahang, Sun, Jingkai, Shao, Yecheng, Han, Gang, Zhao, Wen, Guo, Yijie, Xu, Renjing
In recent years, quadruped robotics has advanced significantly, particularly in perception and motion control via reinforcement learning, enabling complex motions in challenging environments. Visual sensors like depth cameras enhance stability and robustness but face limitations, such as low operating frequencies relative to joint control and sensitivity to lighting, which hinder outdoor deployment. Additionally, deep neural networks in sensor and control systems increase computational demands. To address these issues, we introduce spiking neural networks (SNNs) and event cameras to perform a challenging quadruped parkour task. Event cameras capture dynamic visual data, while SNNs efficiently process spike sequences, mimicking biological perception. Experimental results demonstrate that this approach significantly outperforms traditional models, achieving excellent parkour performance with just 11.7% of the energy consumption of an artificial neural network (ANN)-based model, yielding an 88.3% energy reduction. By integrating event cameras with SNNs, our work advances robotic reinforcement learning and opens new possibilities for applications in demanding environments.
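The efficiency claim above rests on spike-based computation: event cameras emit sparse binary events, and SNN neurons integrate them over time instead of running dense multiply-accumulates every frame. As a rough illustration only (not the authors' network), a layer of leaky integrate-and-fire (LIF) neurons driven by binary event frames can be sketched as follows; the leak factor, threshold, and shapes are illustrative assumptions:

```python
import numpy as np

def lif_forward(spikes_in, weights, tau=0.9, v_th=1.0):
    """Run a layer of leaky integrate-and-fire neurons over T timesteps.

    spikes_in: (T, n_in) binary event frames from an event camera
    weights:   (n_in, n_out) synaptic weights
    Returns binary output spikes of shape (T, n_out).
    """
    T = spikes_in.shape[0]
    n_out = weights.shape[1]
    v = np.zeros(n_out)               # membrane potentials
    out = np.zeros((T, n_out))
    for t in range(T):
        v = tau * v + spikes_in[t] @ weights   # leaky integration of input current
        fired = v >= v_th                      # threshold crossing emits a spike
        out[t] = fired
        v = np.where(fired, 0.0, v)            # hard reset after spiking
    return out
```

Because activity is binary and sparse, most of the per-step work reduces to additions gated by spikes, which is where the reported energy savings over dense ANN inference come from.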
The Download: parkour for robot dogs, and Africa's AI ambitions
Teaching robots to navigate new environments is tough. You can train them on physical, real-world data taken from recordings made by humans, but that data is scarce and expensive to collect. Digital simulations are a rapid, scalable way to teach them new skills, but robots often fail when they're pulled out of virtual worlds and asked to do the same tasks in the real one. Now there's potentially a better option: a new system that uses generative AI models in conjunction with a physics simulator to develop virtual training grounds that more accurately mirror the physical world. In real-world tests, robots trained with this method succeeded at a higher rate than those trained with more traditional techniques.
PIE: Parkour with Implicit-Explicit Learning Framework for Legged Robots
Luo, Shixin, Li, Songbo, Yu, Ruiqi, Wang, Zhicheng, Wu, Jun, Zhu, Qiuguo
Parkour presents a highly challenging task for legged robots, requiring them to traverse various terrains with agile and smooth locomotion. This necessitates a comprehensive understanding of both the robot's own state and the surrounding terrain, despite the inherent unreliability of robot perception and actuation. Current state-of-the-art methods either rely on complex pre-trained high-level terrain reconstruction modules or limit the maximum potential of robot parkour to avoid failure due to inaccurate perception. In this paper, we propose a one-stage end-to-end learning-based parkour framework: Parkour with Implicit-Explicit learning framework for legged robots (PIE), which leverages dual-level implicit-explicit estimation. With this mechanism, even a low-cost quadruped robot equipped with an unreliable egocentric depth camera can achieve exceptional performance on challenging parkour terrains using a relatively simple training process and reward function. While the training process is conducted entirely in simulation, our real-world validation demonstrates successful zero-shot deployment of our framework, showcasing superior parkour performance on harsh terrains.
Model Predictive Parkour Control of a Monoped Hopper in Dynamically Changing Environments
Albracht, Maximilian, Kumar, Shivesh, Vyas, Shubham, Kirchner, Frank
A great advantage of legged robots is their ability to operate on particularly difficult and obstructed terrain, which demands dynamic, robust, and precise movements. The study of obstacle courses provides invaluable insights into the challenges legged robots face, offering a controlled environment to assess and enhance their capabilities. Traversing such a course with a one-legged hopper introduces intricate challenges, such as planning over contacts and dealing with flight phases, which necessitate a sophisticated controller. A novel model predictive parkour controller is introduced that finds an optimal path through an obstacle course changing in real time, using mixed-integer motion planning. The execution of this optimized path is then achieved through a state machine employing a PD control scheme with feedforward torques, ensuring robust and accurate performance.
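The execution stage described here, PD tracking plus feedforward torques, corresponds to a standard joint-space control law. As a minimal sketch (the function name, gains, and shapes are illustrative assumptions, not values from the paper):

```python
import numpy as np

def pd_feedforward_torque(q, qd, q_des, qd_des, tau_ff, kp=40.0, kd=1.5):
    """Joint torque = feedforward term + PD tracking of the planned trajectory.

    q, qd:         measured joint positions and velocities
    q_des, qd_des: reference positions and velocities from the planner
    tau_ff:        feedforward torques (e.g. from the model along the plan)
    """
    q, qd, q_des, qd_des, tau_ff = map(np.asarray, (q, qd, q_des, qd_des, tau_ff))
    return tau_ff + kp * (q_des - q) + kd * (qd_des - qd)
```

The feedforward term carries most of the effort predicted by the optimized plan, leaving the PD terms to correct only the residual tracking error.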
OGMP: Oracle Guided Multimodal Policies for Agile and Versatile Robot Control
Krishna, Lokesh, Sobanbabu, Nikhil, Nguyen, Quan
The efficacy of model-free learning for robot control relies on the tailored integration of task-specific priors and heuristics, hence calling for a unified approach. In this paper, we define a general class for priors called oracles and propose bounding the permissible state around the oracle's ansatz, resulting in task-agnostic oracle-guided policy optimization. Additionally, to enhance modularity, we introduce the notion of task-vital modes. A policy mastering a compact set of modes and intermediate transitions can then solve perpetual tasks. The proposed approach is validated in challenging biped control tasks: parkour and diving on a 16-DoF dynamic bipedal robot, Hector. OGMP results in a single policy per task, solving indefinite parkour over diverse tracks and omnidirectional diving from varied heights, exhibiting versatile agility. Finally, we introduce a novel latent mode space reachability analysis to study our policy's mode generalization by computing a feasible mode set function through which we certify a set of failure-free modes for our policy to perform at any given state.
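The idea of bounding the permissible state around the oracle's ansatz can be sketched as a simple membership test used during training, for example to terminate an episode when the policy strays too far from the oracle reference. This is an illustrative assumption of one possible form (an axis-aligned box), not the paper's exact formulation:

```python
import numpy as np

def within_oracle_bound(state, oracle_state, bound):
    """Return True while the state stays inside the permissible region:
    an axis-aligned box of half-widths `bound` centered on the oracle's
    reference state at the current time."""
    state, oracle_state, bound = map(np.asarray, (state, oracle_state, bound))
    return bool(np.all(np.abs(state - oracle_state) <= bound))
```

Constraining exploration to a tube around the oracle's guess is what makes the guidance task-agnostic: the oracle supplies only a coarse ansatz, and the bound keeps optimization from drifting into irrecoverable states.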
WoCoCo: Learning Whole-Body Humanoid Control with Sequential Contacts
Zhang, Chong, Xiao, Wenli, He, Tairan, Shi, Guanya
Humanoid activities involving sequential contacts are crucial for complex robotic interactions and operations in the real world and are traditionally solved by model-based motion planning, which is time-consuming and often relies on simplified dynamics models. Although model-free reinforcement learning (RL) has become a powerful tool for versatile and robust whole-body humanoid control, it still requires tedious task-specific tuning and state machine design and suffers from long-horizon exploration issues in tasks involving contact sequences. In this work, we propose WoCoCo (Whole-Body Control with Sequential Contacts), a unified framework to learn whole-body humanoid control with sequential contacts by naturally decomposing the tasks into separate contact stages. Such decomposition facilitates simple and general policy learning pipelines through task-agnostic reward and sim-to-real designs, requiring only one or two task-related terms to be specified for each task. We demonstrated that end-to-end RL-based controllers trained with WoCoCo enable four challenging whole-body humanoid tasks involving diverse contact sequences in the real world without any motion priors: 1) versatile parkour jumping, 2) box loco-manipulation, 3) dynamic clap-and-tap dancing, and 4) cliffside climbing. We further show that WoCoCo is a general framework beyond humanoid by applying it in 22-DoF dinosaur robot loco-manipulation tasks.
Extreme Parkour with Legged Robots
Cheng, Xuxin, Shi, Kexin, Agarwal, Ananye, Pathak, Deepak
Humans can perform parkour by traversing obstacles in a highly dynamic fashion requiring precise eye-muscle coordination and movement. Getting robots to do the same task requires overcoming similar challenges. Classically, this is done by independently engineering perception, actuation, and control systems to very low tolerances. This restricts them to tightly controlled settings such as a predetermined obstacle course in labs. In contrast, humans are able to learn parkour through practice without significantly changing their underlying biology. In this paper, we take a similar approach to developing robot parkour on a small low-cost robot with imprecise actuation and a single front-facing depth camera for perception which is low-frequency, jittery, and prone to artifacts. We show how a single neural net policy operating directly from a camera image, trained in simulation with large-scale RL, can overcome imprecise sensing and actuation to output highly precise control behavior end-to-end. We show our robot can perform a high jump on obstacles 2x its height, long jump across gaps 2x its length, do a handstand and run across tilted ramps, and generalize to novel obstacle courses with different physical properties. Parkour videos at https://extreme-parkour.github.io/
Top tweets: Senpower Transformer toy - and more
Verdict lists five of the top tweets on robotics in Q2 2022 based on data from GlobalData's Technology Influencer Platform. The top tweets are ranked by total engagements (likes and retweets) received on tweets from more than 375 robotics experts tracked by the platform during the second quarter (Q2) of 2022. Massimo, a technology expert, shared an article on the Chinese robot manufacturer Senpower building a self-transforming Transformer model that can convert between a standing toy and a truck on its own. The makers developed this concept from Optimus Prime and evolved it into the Robosen T9 robot toy, the article detailed. This version can walk, dance, drive, and pose, and has 22 programmable servo motors that allow it to learn new skills.
Boston Dynamics' two-legged robot takes on parkour
Leaping around an obstacle course and pulling off backflips, this eerily human-like robot is only too happy to show off its parkour skills. Named Atlas, the humanoid was filmed by Boston Dynamics -- the firm behind the famous robotic dog Spot. The incredible footage shows the two-legged robot impressively maintaining its balance as it takes on a series of jumps, vaults and balance beams. These were set up by Boston Dynamics engineers to experiment with new behaviours for Atlas and to develop its whole-body athletics through a variety of rapidly changing, high-energy activities. The humanoid, first unveiled to the public in July 2013, measures 1.5m (4.9ft) tall and weighs 75kg (11.8st).
'Game of Thrones with parkour': how will Netflix adapt Assassin's Creed?
Few video games have endured like Assassin's Creed. Twelve different versions have been released since the game was introduced in 2007, each of them more or less clinging to the same highly enjoyable formula. Like spending the final hour of any pursuit genuinely confused about why an alien has come out of nowhere to instruct you to murder everyone with a sort of glowing death apple? Then Assassin's Creed is for you. So the news that Netflix has just commissioned a live-action Assassin's Creed series should be cause for celebration.