Ijspeert, Auke J.
Fast ground-to-air transition with avian-inspired multifunctional legs
Shin, Won Dong, Phan, Hoang-Vu, Daley, Monica A., Ijspeert, Auke J., Floreano, Dario
Most birds can navigate seamlessly between aerial and terrestrial environments. Whereas the forelimbs evolved into wings primarily for flight, the hindlimbs serve diverse functions such as walking, hopping, leaping, and jumping take-off for transitions into flight. These capabilities have inspired engineers to aim for similar multi-modality in aerial robots, expanding their range of applications across diverse environments. However, challenges remain in reproducing multi-modal locomotion across gaits with distinct kinematics and propulsive characteristics, such as walking and jumping, while preserving a lightweight design for flight. This tradeoff between mechanical complexity and versatility limits most existing aerial robots to only one additional locomotor mode. Here, we overcome the complexity-versatility tradeoff with RAVEN (Robotic Avian-inspired Vehicle for multiple ENvironments), which uses its bird-inspired multi-functional legs to jump rapidly into flight, walk on the ground, and hop over obstacles and gaps, similar to the multi-modal locomotion of birds. We show that jumping for take-off contributes substantially to initial flight speed and, remarkably, that it is more energy-efficient than solely propeller-based take-off. Our analysis suggests an important tradeoff in mass distribution between legs and body among birds adapted for different locomotor strategies, with greater investment in leg mass among terrestrial birds with multi-modal gait demands. Multi-functional robot legs expand opportunities to deploy traditional fixed-wing aircraft in complex terrains through autonomous take-offs and multi-modal gaits.
Crash-perching on vertical poles with a hugging-wing robot
Askari, Mohammad, Benciolini, Michele, Phan, Hoang-Vu, Stewart, William, Ijspeert, Auke J., Floreano, Dario
Perching with winged Unmanned Aerial Vehicles has often been solved by means of complex control or intricate appendages. Here, we present a simple yet novel method that relies on passive wing morphing for crash-landing on trees and other types of vertical poles. Inspired by the adaptability of birds' and bats' limbs in gripping and holding onto trees, we design dual-purpose wings that enable both aerial gliding and perching on poles. With an upturned nose design, the robot can passively reorient from horizontal flight to vertical upon a head-on crash with a pole, followed by hugging with its wings to perch. We characterize the performance of reorientation and perching in terms of impact speed and angle, pole material, and size. The robot robustly reorients at impact angles above 15° and speeds of 3 m/s to 9 m/s, and can hold onto various pole types larger than 28% of its wingspan in diameter. We demonstrate crash-perching on tree trunks with an overall success rate of 71%. The method opens up new possibilities for the use of aerial robots in applications such as inspection, maintenance, and biodiversity conservation.
Learning Attractor Landscapes for Learning Motor Primitives
Ijspeert, Auke J., Nakanishi, Jun, Schaal, Stefan
Many control problems take place in continuous state-action spaces, e.g., as in manipulator robotics, where the control objective is often defined as finding a desired trajectory that reaches a particular goal state. While reinforcement learning offers a theoretical framework to learn such control policies from scratch, its applicability to higher-dimensional continuous state-action spaces remains rather limited to date. Instead of learning from scratch, in this paper we suggest learning a desired complex control policy by transforming an existing simple canonical control policy. For this purpose, we represent canonical policies in terms of differential equations with well-defined attractor properties. By nonlinearly transforming the canonical attractor dynamics using techniques from nonparametric regression, almost arbitrary new nonlinear policies can be generated without losing the stability properties of the canonical system. We demonstrate our techniques in the context of learning a set of movement skills for a humanoid robot from demonstrations of a human teacher. Policies are acquired rapidly and, due to the properties of well-formulated differential equations, can be reused and modified online under dynamic changes of the environment. The linear parameterization of nonparametric regression moreover lends itself to recognizing and classifying previously learned movement skills.
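The attractor-based policy representation described in this abstract can be illustrated with a minimal sketch of a discrete dynamic movement primitive: a stable second-order point attractor toward a goal, modulated by a learnable forcing term weighted by Gaussian basis functions of a decaying canonical phase. All gains, parameter names, and values below are illustrative assumptions, not the paper's; with zero forcing weights the system simply converges to the goal, which is the stability property the transformation preserves.

```python
import numpy as np

def dmp_rollout(y0, g, weights, centers, widths, tau=1.0, dt=0.001, T=1.0):
    """Integrate a one-dimensional discrete movement primitive (Euler)."""
    alpha_z, beta_z, alpha_x = 25.0, 6.25, 8.0   # critically damped spring gains
    y, z, x = y0, 0.0, 1.0                       # position, velocity, canonical phase
    traj = []
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)            # Gaussian basis
        f = x * (g - y0) * psi.dot(weights) / (psi.sum() + 1e-10)  # forcing term
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)                        # phase decays to 0
        traj.append(y)
    return np.array(traj)

centers = np.linspace(0.0, 1.0, 10)
widths = np.full(10, 100.0)
# Zero weights: the canonical attractor alone drives y from y0 = 0 to g = 1.
path = dmp_rollout(y0=0.0, g=1.0, weights=np.zeros(10), centers=centers, widths=widths)
print(round(float(path[-1]), 3))
```

Nonzero weights (fitted to a demonstration, e.g. by locally weighted regression) shape the transient trajectory without affecting convergence, since the forcing term vanishes as the phase decays.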