Locomotion


Real-Time Style Modelling of Human Locomotion via Feature-Wise Transformations and Local Motion Phases

arXiv.org Artificial Intelligence

Controlling the manner in which a character moves in a real-time animation system is a challenging task with useful applications. Existing style transfer systems require access to a reference content motion clip; in real-time systems, however, the future motion content is unknown and liable to change with user input. In this work we present a style modelling system that uses an animation synthesis network to model motion content based on local motion phases. An additional style modulation network uses feature-wise transformations to modulate style in real time. To evaluate our method, we create and release a new style modelling dataset, 100STYLE, containing over 4 million frames of stylised locomotion data in 100 different styles that present a number of challenges for existing systems. To model these styles, we extend the local phase calculation with a contact-free formulation. In comparison to other methods for real-time style modelling, we show our system is more robust and efficient in its style representation while improving motion quality.
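As a concrete illustration of the feature-wise transformation idea, the sketch below shows a FiLM-style modulation layer in PyTorch, where a style embedding predicts a per-channel scale and shift for the synthesis network's hidden features. The class name, dimensions, and layer choices are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class StyleModulation(nn.Module):
    """FiLM-style feature-wise transformation: a style embedding predicts
    per-channel scale (gamma) and shift (beta) that modulate the hidden
    features of an animation synthesis network. Dimensions and layer
    sizes are illustrative, not taken from the paper."""
    def __init__(self, style_dim: int, feature_dim: int):
        super().__init__()
        self.to_gamma = nn.Linear(style_dim, feature_dim)
        self.to_beta = nn.Linear(style_dim, feature_dim)

    def forward(self, features: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        gamma = self.to_gamma(style)   # per-channel scale
        beta = self.to_beta(style)     # per-channel shift
        return gamma * features + beta

# Usage: modulate a hidden layer of the synthesis network at runtime.
mod = StyleModulation(style_dim=64, feature_dim=256)
h = torch.randn(1, 256)   # hidden features for one frame
z = torch.randn(1, 64)    # embedding of the target style
h_stylised = mod(h, z)
```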


LEONARDO, the Bipedal Robot, Can Ride a Skateboard and Walk a Slackline

#artificialintelligence

Researchers at Caltech have built a bipedal robot that combines walking with flying to create a new type of locomotion, making it exceptionally nimble and capable of complex movements. Part walking robot, part flying drone, the newly developed LEONARDO (LEgs ONboARD drOne, or LEO for short) can walk a slackline, hop, and even ride a skateboard. Developed by a team at Caltech's Center for Autonomous Systems and Technologies (CAST), LEO is the first robot that uses multi-joint legs and propeller-based thrusters to achieve a fine degree of control over its balance. A paper about the LEO robot was published online on October 6 and was featured on the October 2021 cover of Science Robotics. "We drew inspiration from nature. Think about the way birds are able to flap and hop to navigate telephone lines," says Soon-Jo Chung, corresponding author and Bren Professor of Aerospace and Control and Dynamical Systems.


Learning Control Policies for Fall prevention and safety in bipedal locomotion

arXiv.org Artificial Intelligence

The ability to recover from an unexpected external perturbation is a fundamental motor skill in bipedal locomotion. An effective response includes the ability not just to recover balance and maintain stability but also to fall in a safe manner when balance recovery is physically infeasible. For robots associated with bipedal locomotion, such as humanoid robots and assistive robotic devices that aid humans in walking, designing controllers that provide this stability and safety can prevent damage to the robots and avoid injury-related medical costs. This is a challenging task because it involves generating highly dynamic motion for a high-dimensional, non-linear, and under-actuated system with contacts. Despite prior advancements in model-based and optimization methods, challenges such as the requirement for extensive domain knowledge, relatively long computation times, and limited robustness to changes in dynamics still make this an open problem. In this thesis, to address these issues, we develop learning-based algorithms capable of synthesizing push recovery control policies for two kinds of robots: humanoid robots and assistive robotic devices that aid bipedal locomotion. Our work branches into two closely related directions: 1) learning safe falling and fall prevention strategies for humanoid robots, and 2) learning fall prevention strategies for humans using robotic assistive devices. To achieve this, we introduce a set of Deep Reinforcement Learning (DRL) algorithms to learn control policies that improve safety while using these robots.


Learning Free Gait Transition for Quadruped Robots via Phase-Guided Controller

arXiv.org Artificial Intelligence

Gaits and transitions are key components of legged locomotion. For legged robots, describing and reproducing gaits as well as transitions remain long-standing challenges. Reinforcement learning has become a powerful tool for formulating controllers for legged robots. Learning multiple gaits and transitions, however, is closely related to multi-task learning problems. In this work, we present a novel framework for training a simple control policy for a quadruped robot to locomote in various gaits. Four independent phases are used as the interface between the gait generator and the control policy, characterizing the movement of the four feet. Guided by the phases, the quadruped robot is able to locomote according to the generated gaits, such as walk, trot, pace, and bound, and to transition among those gaits. More general phases can be used to generate complex gaits, such as mixed rhythmic dancing. With the control policy, the Black Panther robot, a medium-dog-sized quadruped robot, can perform all learned motor skills while following velocity commands smoothly and robustly in natural environments.
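To make the phase interface concrete, here is a minimal sketch of a gait generator that produces four periodic foot phases with per-gait offsets; the offset values, foot ordering, and the `foot_phases` helper are illustrative assumptions rather than the paper's exact formulation. Transitions could then be realized by interpolating between offset sets over time.

```python
import numpy as np

# Illustrative per-foot phase offsets (fractions of a cycle); foot order:
# front-left, front-right, rear-left, rear-right. Values are assumptions.
GAIT_OFFSETS = {
    "trot":  np.array([0.0, 0.5, 0.5, 0.0]),    # diagonal pairs in sync
    "pace":  np.array([0.0, 0.5, 0.0, 0.5]),    # lateral pairs in sync
    "bound": np.array([0.0, 0.0, 0.5, 0.5]),    # front/rear pairs in sync
    "walk":  np.array([0.0, 0.5, 0.75, 0.25]),  # four-beat footfall
}

def foot_phases(t: float, freq_hz: float, gait: str) -> np.ndarray:
    """Return the four foot phases in [0, 1) at time t. These phases act
    as the interface between the gait generator and the control policy."""
    return (freq_hz * t + GAIT_OFFSETS[gait]) % 1.0

# Example: phases fed into the policy observation at t = 1.3 s, 2 Hz trot.
obs_phases = foot_phases(t=1.3, freq_hz=2.0, gait="trot")
```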


Coupling Vision and Proprioception for Navigation of Legged Robots

arXiv.org Artificial Intelligence

We exploit the complementary strengths of vision and proprioception to achieve point-goal navigation in a legged robot. Legged systems are capable of traversing more complex terrain than wheeled robots, but to fully exploit this capability, the high-level path planner in the navigation system needs to be aware of the walking capabilities of the low-level locomotion policy on varying terrains. We achieve this by using proprioceptive feedback to estimate the safe operating limits of the walking policy, and to sense unexpected obstacles and terrain properties, such as the smoothness or softness of the ground, that may be missed by vision. The navigation system uses onboard cameras to generate an occupancy map and a corresponding cost map for reaching the goal. The Fast Marching Method (FMM) planner then generates a target path. The velocity command generator takes this path, together with additional constraints from the safety advisor on unexpected obstacles and terrain-determined speed limits, and produces the desired velocity for the locomotion policy. We show superior performance compared to wheeled robot (LoCoBot) baselines and to baselines with disjoint high-level planning and low-level control. We also demonstrate real-world deployment of our system on a quadruped robot with onboard sensors and compute. Videos at https://navigation-locomotion.github.io/camera-ready
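A minimal sketch of the command-generation step described above, assuming a simple proportional rule in which the desired forward speed toward the next FMM waypoint is capped by the safety advisor's terrain-determined limit; the function, gains, and clamping rule are hypothetical, not the paper's implementation.

```python
import numpy as np

def velocity_command(robot_pos, path_waypoint, safe_speed_limit,
                     k_lin=1.0, k_ang=2.0):
    """Generate a (linear, angular) velocity toward the next FMM waypoint,
    clamped by the safety advisor's terrain-determined speed limit.
    robot_pos is (x, y, yaw); a simple proportional rule for illustration."""
    delta = np.asarray(path_waypoint) - np.asarray(robot_pos[:2])
    heading_err = np.arctan2(delta[1], delta[0]) - robot_pos[2]
    heading_err = (heading_err + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi]
    v = min(k_lin * np.linalg.norm(delta), safe_speed_limit)   # cap forward speed
    w = k_ang * heading_err
    return v, w

# Example: slow down on soft ground where the advisor lowers the limit.
v, w = velocity_command(robot_pos=(0.0, 0.0, 0.0),
                        path_waypoint=(1.0, 0.5),
                        safe_speed_limit=0.3)
```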


Learning to Jump from Pixels

arXiv.org Artificial Intelligence

One of the grand challenges in robotics is to construct legged systems that can successfully navigate novel and complex landscapes. Recent work has made impressive strides toward the blind traversal of a wide diversity of natural and man-made terrains [1, 2]. Blind walkers rely primarily on proprioception and robust control schemes to achieve sturdy locomotion in challenging conditions including snow, thick vegetation, and slippery mud. The downside of blindness is the inability to execute motions that anticipate the land surface in front of the robot. This is especially prohibitive on terrains with significant elevation discontinuities. For instance, crossing a wide gap requires the robot to jump, which cannot be initiated without knowing where and how wide the gap is. Without vision, even the most robust system would either step into the gap and fall or treat the gap as an obstacle and stop. This inability to plan results in conservative behavior that cannot achieve the energy efficiency or the speed afforded by advanced hardware.


An Adaptable Approach to Learn Realistic Legged Locomotion without Examples

arXiv.org Artificial Intelligence

Learning controllers that reproduce legged locomotion as found in nature has been a long-standing goal in robotics and computer graphics. While yielding promising results, recent approaches are not yet flexible enough to be applicable to legged systems of different morphologies. This is partly because they often rely on precise motion-capture references or elaborate learning environments that ensure the naturality of the emergent locomotion gaits but prevent generalization. This work proposes a generic approach to ensuring realism in locomotion by guiding the learning process with the spring-loaded inverted pendulum (SLIP) model as a reference. Leveraging the exploration capacity of Reinforcement Learning (RL), we learn a control policy that fills the information gap between the template model and the full-body dynamics required to maintain stable and periodic locomotion. The proposed approach can be applied to robots of different sizes and morphologies and adapted to any RL technique and control architecture. We present experimental results showing that, even in a model-free setup with a simple reactive control architecture, the learned policies generate realistic and energy-efficient locomotion gaits for a bipedal and a quadrupedal robot. Most importantly, this is achieved without motion capture, without strong constraints on the robot's dynamics or kinematics, and without prescribing limb coordination. We provide supplemental videos for qualitative analysis of the naturality of the learned gaits.
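As a sketch of how a template model can guide learning, the snippet below rewards the policy for tracking a SLIP reference for the center of mass; the exponential kernel, weight, and choice of state are assumptions made for illustration, not the paper's exact reward terms.

```python
import numpy as np

def slip_guided_reward(com_state, slip_reference):
    """Reward the policy for keeping the robot's centre-of-mass state
    (e.g. height and forward velocity) close to the trajectory predicted
    by the spring-loaded inverted pendulum (SLIP) template. The kernel
    and state choice are illustrative, not the paper's formulation."""
    err = np.linalg.norm(np.asarray(com_state) - np.asarray(slip_reference))
    return float(np.exp(-5.0 * err ** 2))  # 1 when matching the template

# Example: compare the measured CoM (height, forward velocity) against
# the SLIP prediction for the current phase of the stride.
r = slip_guided_reward(com_state=(0.48, 1.02), slip_reference=(0.50, 1.00))
```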


Minimizing Energy Consumption Leads to the Emergence of Gaits in Legged Robots

arXiv.org Artificial Intelligence

Legged locomotion is commonly studied and expressed as a discrete set of gait patterns, such as walk, trot, and gallop, which are usually treated as given and pre-programmed in legged robots for efficient locomotion at different speeds. However, fixing a set of pre-programmed gaits limits the generality of locomotion. Recent animal motor studies show that these conventional gaits are prevalent only in ideal flat-terrain conditions, while real-world locomotion is unstructured and more like bouts of intermittent steps. What principles could lead to both structured and unstructured patterns across mammals, and how can they be synthesized in robots? In this work, we take an analysis-by-synthesis approach and learn to move by minimizing mechanical energy. We demonstrate that learning to minimize energy consumption plays a key role in the emergence of natural locomotion gaits at different speeds in real quadruped robots. The emergent gaits are structured on ideal terrain and look similar to those of horses and sheep. The same approach leads to unstructured gaits on rough terrain, which is consistent with findings in animal motor control. We validate our hypothesis in both simulation and real hardware across natural terrains. Videos at https://energy-locomotion.github.io
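The objective at the heart of this approach can be sketched as a per-step penalty on mechanical energy, the summed absolute joint power integrated over the control step, combined with a velocity-tracking term. The weights and the exact reward shaping below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def mechanical_energy(torques: np.ndarray, joint_vels: np.ndarray, dt: float) -> float:
    """Mechanical energy expended over one control step: the sum of
    absolute joint powers |tau_i * qdot_i| integrated over dt."""
    return float(np.sum(np.abs(torques * joint_vels)) * dt)

def reward(v_actual, v_cmd, torques, joint_vels, dt=0.02, w_energy=0.02):
    """Illustrative per-step reward: track the commanded speed while
    paying for energy; minimizing the energy term is the kind of
    objective under which distinct gaits can emerge at different speeds."""
    tracking = -abs(v_actual - v_cmd)
    energy = -w_energy * mechanical_energy(torques, joint_vels, dt)
    return tracking + energy
```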


Reinforcement Learning with Evolutionary Trajectory Generator: A General Approach for Quadrupedal Locomotion

arXiv.org Artificial Intelligence

Recently, reinforcement learning (RL) has emerged as a promising approach to quadrupedal locomotion, saving the manual effort of conventional approaches such as designing skill-specific controllers. However, due to the complex nonlinear dynamics of quadrupedal robots and reward sparsity, it is still difficult for RL to learn effective gaits from scratch, especially in challenging tasks such as walking over a balance beam. To alleviate this difficulty, we propose a novel RL-based approach built around an evolutionary foot-trajectory generator. Unlike prior methods that use a fixed trajectory generator, ours continually optimizes the shape of the output trajectory for the given task, providing diversified motion priors to guide the policy learning. The policy is trained with reinforcement learning to output residual control signals that fit different gaits. We optimize the trajectory generator and the policy network alternately to stabilize training and share exploratory data to improve sample efficiency. As a result, our approach can solve a range of challenging tasks in simulation by learning from scratch, including walking on a balance beam and crawling through a cave. To further verify its effectiveness, we deploy the controller learned in simulation on a 12-DoF quadrupedal robot, which successfully traverses challenging scenarios with efficient gaits.
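The residual scheme described above can be sketched as follows: the trajectory generator produces a motion prior from its shape parameters and the current phase, and the policy adds a learned correction on top. The sinusoidal generator and parameter layout below are illustrative stand-ins; in the paper the generator's shape is evolved for the task rather than fixed.

```python
import numpy as np

def foot_trajectory(tg_params, phase):
    """Toy parametric swing trajectory (a target over the gait cycle).
    A stand-in only: the paper evolves the trajectory shape parameters
    for the task instead of fixing a sinusoid like this."""
    amplitude, offset = tg_params
    return offset + amplitude * np.sin(2.0 * np.pi * phase)

def control_step(tg_params, phase, policy, obs):
    """Residual control: the trajectory generator supplies a motion prior
    and the RL policy adds a learned correction on top of it."""
    prior = foot_trajectory(tg_params, phase)
    return prior + policy(obs)

# Example with a dummy policy; during training, tg_params would be updated
# by an evolutionary step while the policy is updated by RL, alternately.
action = control_step(tg_params=(0.05, 0.30), phase=0.25,
                      policy=lambda o: 0.0, obs=None)
```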


Learning Quadruped Locomotion Policies with Reward Machines

arXiv.org Artificial Intelligence

Legged robots have been shown to be effective in navigating unstructured environments. Although there has been much success in learning locomotion policies for quadruped robots, there is little research on how to incorporate human knowledge to facilitate this learning process. In this paper, we demonstrate that human knowledge in the form of linear temporal logic (LTL) formulas can be applied to quadruped locomotion learning within a Reward Machine (RM) framework. Experimental results in simulation show that our RM-based approach makes it easy to define diverse locomotion styles and to learn locomotion policies of the defined styles efficiently.
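To illustrate the Reward Machine idea, the sketch below encodes a finite-state automaton whose transitions fire on propositional events (here, foot-contact symbols produced by a labelling function) and emit rewards; this two-state machine rewards alternating footfalls, a simple locomotion "style". The states, events, and reward values are invented for illustration, not taken from the paper.

```python
class RewardMachine:
    """Minimal reward machine: a finite-state automaton over propositional
    events. Transitions map (state, event) to (next_state, reward); the
    policy is trained against the rewards this machine emits."""
    def __init__(self):
        self.delta = {
            ("u0", "left_contact"):  ("u1", 1.0),   # correct next footfall
            ("u0", "right_contact"): ("u0", -0.1),  # out of order: small penalty
            ("u1", "right_contact"): ("u0", 1.0),
            ("u1", "left_contact"):  ("u1", -0.1),
        }
        self.state = "u0"

    def step(self, event: str) -> float:
        """Advance on an event from the labelling function; return reward."""
        self.state, reward = self.delta.get((self.state, event),
                                            (self.state, 0.0))
        return reward

# Usage: accumulate style reward over a sequence of contact events.
rm = RewardMachine()
total = sum(rm.step(e) for e in ["left_contact", "right_contact", "right_contact"])
```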