Meixner, Andre
Learning Spatial Bimanual Action Models Based on Affordance Regions and Human Demonstrations
Plonka, Björn S., Dreher, Christian, Meixner, Andre, Kartmann, Rainer, Asfour, Tamim
In this paper, we present a novel approach for learning bimanual manipulation actions from human demonstration by extracting spatial constraints between affordance regions, termed affordance constraints, of the objects involved. Affordance regions are defined as object parts that provide interaction possibilities to an agent. For example, the bottom of a bottle affords placing the bottle on a surface, while its spout affords pouring the contained liquid. We learn how affordance constraints change over the course of a human demonstration to construct spatial bimanual action models representing object interactions. To exploit the information encoded in these models, we formulate an optimization problem that determines optimal object configurations across multiple execution keypoints while taking into account the initial scene, the learned affordance constraints, and the robot's kinematics. We evaluate the approach in simulation on two example tasks (pouring drinks and rolling dough) and compare three definitions of affordance constraints: (i) component-wise distances between affordance regions in Cartesian space, (ii) component-wise distances between affordance regions in cylindrical space, and (iii) degrees of satisfaction of manually defined symbolic spatial affordance constraints.
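As a minimal sketch of definition (i), the following Python snippet shows how component-wise Cartesian distances between two affordance regions could enter the optimization objective across execution keypoints. All names (the region identifiers, the cost structure) are illustrative assumptions, not the paper's actual formulation.

import numpy as np

def affordance_distance(region_a, region_b):
    """Component-wise (x, y, z) distance between two affordance-region origins."""
    return region_b - region_a

def constraint_cost(configurations, demo_distances):
    """Sum of squared deviations from the demonstrated component-wise
    distances over all execution keypoints. `configurations` maps each
    keypoint to the (hypothetical) positions of the two affordance regions."""
    cost = 0.0
    for k, (pos_a, pos_b) in configurations.items():
        deviation = affordance_distance(pos_a, pos_b) - demo_distances[k]
        cost += float(np.sum(deviation ** 2))
    return cost

In this reading, minimizing such a cost subject to the initial scene and the robot's kinematics would yield the optimal object configurations the abstract refers to.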
Incremental Learning of Full-Pose Via-Point Movement Primitives on Riemannian Manifolds
Daab, Tilman, Jaquier, Noémie, Dreher, Christian, Meixner, Andre, Krebs, Franziska, Asfour, Tamim
Movement primitives (MPs) are compact representations of robot skills that can be learned from demonstrations and combined into complex behaviors. However, merely equipping robots with a fixed set of innate MPs is insufficient to deploy them in dynamic and unpredictable environments. Instead, the full potential of MPs can only be realized through adaptable, large-scale MP libraries. In this paper, we propose a set of seven fundamental operations to incrementally learn, improve, and re-organize MP libraries. To showcase their applicability, we provide explicit formulations of the spatial operations for libraries composed of Via-Point Movement Primitives (VMPs). By building on Riemannian manifold theory, our approach enables the incremental learning of all parameters of position and orientation VMPs within a library. Moreover, our approach stores a fixed number of parameters, thus complying with the essential principles of incremental learning. We evaluate our approach by incrementally learning a VMP library from sequentially provided motion capture data.
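To illustrate the fixed-parameter, manifold-aware flavor of such incremental updates, here is a sketch of an incremental Fréchet-mean update on the unit-quaternion manifold S^3, as might be used for a via-point orientation learned from demonstrations arriving one at a time. Only a running count and the current estimate are stored; the update rule and all names are illustrative, not the paper's exact formulation.

import numpy as np

def qmul(q, p):
    """Hamilton product of unit quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def log_map(base, q):
    """Tangent vector at `base` pointing toward q (in R^3)."""
    r = qmul(base * np.array([1.0, -1.0, -1.0, -1.0]), q)  # base^{-1} * q
    if r[0] < 0:
        r = -r  # q and -q encode the same rotation; take the shorter geodesic
    w, v = np.clip(r[0], -1.0, 1.0), r[1:]
    n = np.linalg.norm(v)
    return np.zeros(3) if n < 1e-12 else np.arccos(w) * v / n

def exp_map(base, v):
    """Point reached from `base` along tangent vector v."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return base
    return qmul(base, np.concatenate([[np.cos(n)], np.sin(n) * v / n]))

def incremental_mean(mean, n, q_new):
    """Update the running Fréchet mean with one new sample; O(1) storage."""
    return exp_map(mean, log_map(mean, q_new) / (n + 1)), n + 1

The same exp/log-map pattern generalizes to other Riemannian parameters of a full-pose VMP, which is what makes the library updatable without retaining past demonstrations.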
On the Design of Region-Avoiding Metrics for Collision-Safe Motion Generation on Riemannian Manifolds
Klein, Holger, Jaquier, Noémie, Meixner, Andre, Asfour, Tamim
The generation of energy-efficient and dynamics-aware robot motions that satisfy constraints such as joint limits, self-collisions, and collisions with the environment remains a challenge. In this context, Riemannian geometry offers promising solutions by identifying robot motions with geodesics on the so-called configuration space manifold. While this manifold naturally accounts for the intrinsic robot dynamics, it overlooks such constraints. In this paper, we propose a modification of the Riemannian metric of the configuration space manifold that allows robot motions to be generated as geodesics which efficiently avoid given regions. We introduce a class of Riemannian metrics based on barrier functions that guarantee strict region avoidance by systematically generating accelerations away from no-go regions in joint and task space. We evaluate the proposed metrics by generating energy-efficient, dynamics-aware, and collision-free motions of a humanoid robot as geodesics and sequences thereof.
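A toy sketch of the general idea, under the assumption that the base metric is the joint-space inertia matrix M(q) and that the barrier enters as a rank-one term along the distance gradient; the barrier shape and this particular construction are illustrative assumptions, not the paper's exact metric design.

import numpy as np

def region_avoiding_metric(M, d, grad_d, eps=1e-3):
    """Modified metric G(q) = M(q) + b(d) * grad_d grad_d^T, where the
    barrier b(d) diverges as the distance d to the no-go region shrinks.
    Paths crossing near the region become arbitrarily long under G, so
    geodesics are pushed away from it."""
    b = 1.0 / max(d, eps) ** 2  # barrier term, unbounded near the region
    return M + b * np.outer(grad_d, grad_d)

Geodesics computed under such a metric inherit the energy efficiency of the inertia-based metric far from obstacles while acquiring avoidance accelerations near them.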
A Memory System of a Robot Cognitive Architecture and its Implementation in ArmarX
Peller-Konrad, Fabian, Kartmann, Rainer, Dreher, Christian R. G., Meixner, Andre, Reister, Fabian, Grotz, Markus, Asfour, Tamim
Cognitive agents such as humans and robots perceive their environment through an abundance of sensors producing streams of data that must be processed to generate intelligent behavior. A key question of cognition-enabled and AI-driven robotics is how to organize and manage knowledge efficiently in a cognitive robot control architecture. We argue that memory is a central active component of such architectures: it mediates between semantic and sensorimotor representations, orchestrates the flow of data streams and events between different processes, and provides the components of a cognitive architecture with data-driven services for abstracting semantics from sensorimotor data, parametrizing symbolic plans for execution, and predicting action effects. Based on related work and the experience gained in developing our ARMAR humanoid robot systems, we identify conceptual and technical requirements of a memory system that, as a central component of a cognitive robot control architecture, facilitates the realization of high-level cognitive abilities such as explaining, reasoning, prospection, simulation, and augmentation. Conceptually, a memory should be active, support multi-modal data representations, associate knowledge, be introspective, and have an inherently episodic structure. Technically, the memory should support a distributed design, be access-efficient, and be capable of long-term data storage. We introduce the memory system of our cognitive robot control architecture and its implementation in the robot software framework ArmarX. We evaluate the efficiency of the memory system with respect to transfer speeds, compression, reproduction, and prediction capabilities.
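To make the conceptual requirements concrete, here is a minimal, hypothetical sketch of an active, episodic memory segment: timestamped snapshots per entity, with subscriber callbacks realizing the "active" push of updates to other processes. The class and method names are illustrative and not taken from the ArmarX API.

import time
from collections import defaultdict

class MemorySegment:
    def __init__(self, name):
        self.name = name
        self.episodes = defaultdict(list)  # entity id -> time-ordered snapshots
        self.subscribers = []              # callbacks; the "active" aspect

    def commit(self, entity_id, data):
        """Store a new snapshot and push it to all subscribers."""
        snapshot = {"t": time.time(), "data": data}
        self.episodes[entity_id].append(snapshot)  # inherently episodic history
        for callback in self.subscribers:
            callback(self.name, entity_id, snapshot)

    def query_latest(self, entity_id):
        """Pull access: most recent snapshot of an entity, if any."""
        history = self.episodes[entity_id]
        return history[-1] if history else None

Keeping the full time-ordered history per entity is what enables reproduction and prediction over past episodes, rather than only reacting to the latest sensor state.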
A Riemannian Take on Human Motion Analysis and Retargeting
Klein, Holger, Jaquier, Noémie, Meixner, Andre, Asfour, Tamim
Dynamic motions of humans and robots are widely driven by posture-dependent nonlinear interactions between their degrees of freedom. However, these dynamical effects remain mostly overlooked when studying the mechanisms of human movement generation. Inspired by recent works, we hypothesize that human motions are planned as sequences of geodesic synergies, and thus correspond to coordinated joint movements achieved with piecewise minimum energy. The underlying computational model is built on Riemannian geometry to account for the inertial characteristics of the body. Through the analysis of various human arm motions, we find that our model segments motions into geodesic synergies, and successfully predicts observed arm postures, hand trajectories, as well as their respective velocity profiles. Moreover, we show that our analysis can further be exploited to transfer arm motions to robots by reproducing individual human synergies as geodesic paths in the robot configuration space.
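As a sketch of the underlying computation, the following snippet evaluates the geodesic equation on the configuration space manifold under the kinetic-energy metric G(q) = M(q), the joint-space inertia matrix: a motion segment is geodesic when the acceleration satisfies M(q) q_ddot = -c(q, q_dot) with c built from the Christoffel symbols. The function mass_matrix is an assumed user-supplied model, and the finite-difference derivatives are for illustration only.

import numpy as np

def geodesic_acceleration(q, dq, mass_matrix, h=1e-5):
    """Acceleration of a geodesic at configuration q with velocity dq,
    for the metric G(q) = M(q) given by `mass_matrix`."""
    n = len(q)
    M = mass_matrix(q)
    # dM[k] approximates the partial derivative of M w.r.t. q_k
    dM = np.array([(mass_matrix(q + h * e) - mass_matrix(q - h * e)) / (2 * h)
                   for e in np.eye(n)])
    # Christoffel symbols of the first kind contracted with the velocity:
    # c_i = sum_{j,k} (d_k M_ij - 0.5 * d_i M_jk) dq_j dq_k
    c = (np.einsum('kij,j,k->i', dM, dq, dq)
         - 0.5 * np.einsum('ijk,j,k->i', dM, dq, dq))
    return -np.linalg.solve(M, c)

Under this model, integrating the returned acceleration yields the piecewise minimum-energy joint trajectories hypothesized above, and comparing them against recorded arm motions is one way to test the geodesic-synergy hypothesis.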