Latent exploration for Reinforcement Learning
In Reinforcement Learning, agents learn policies by exploring and interacting with the environment. Due to the curse of dimensionality, learning policies that map high-dimensional sensory input to motor output is particularly challenging. During training, state-of-the-art methods (SAC, PPO, etc.) explore the environment by perturbing the actuation with independent Gaussian noise. While this unstructured exploration has proven successful in numerous tasks, it can be suboptimal for overactuated systems: when multiple actuators, such as motors or muscles, drive behavior, uncorrelated perturbations risk canceling each other's effect, or modifying the behavior in a task-irrelevant way.
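The contrast above can be sketched in a few lines of NumPy. This is a minimal illustration, not the method from any particular paper: the mixing matrix `W`, the latent dimension, and the noise scale are all hypothetical choices. Independent exploration adds i.i.d. Gaussian noise per actuator; latent exploration instead samples noise in a low-dimensional latent space and maps it through `W`, so perturbations across actuators are correlated and confined to a low-rank subspace.

```python
import numpy as np

rng = np.random.default_rng(0)

n_actuators = 92   # e.g. muscle-tendon units of a musculoskeletal humanoid
latent_dim = 16    # hypothetical low-dimensional latent noise space

# Fixed random mixing matrix from latent space to actuator space.
# (In practice this role could be played by the policy's last layer.)
W = rng.standard_normal((n_actuators, latent_dim)) / np.sqrt(latent_dim)

def independent_noise(mean_action, sigma=0.1):
    """Standard exploration: i.i.d. Gaussian noise on each actuator."""
    return mean_action + sigma * rng.standard_normal(n_actuators)

def latent_noise(mean_action, sigma=0.1):
    """Latent exploration: noise sampled in latent space, then mixed,
    yielding correlated perturbations across actuators."""
    z = sigma * rng.standard_normal(latent_dim)
    return mean_action + W @ z

mean_action = np.zeros(n_actuators)
a_indep = independent_noise(mean_action)
a_latent = latent_noise(mean_action)

# Independent noise has full-rank diagonal covariance sigma^2 * I;
# latent noise has covariance sigma^2 * W @ W.T with rank <= latent_dim.
```

The key difference is the covariance structure: latent perturbations cannot pull individual actuators in arbitrary, mutually canceling directions, because every sample lies in the span of `W`'s columns.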
Exciting Action: Investigating Efficient Exploration for Learning Musculoskeletal Humanoid Locomotion
Geiß, Henri-Jacques, Al-Hafez, Firas, Seyfarth, Andre, Peters, Jan, Tateo, Davide
Abstract -- Learning a locomotion controller for a musculoskeletal system is challenging due to over-actuation and a high-dimensional action space. While many reinforcement learning methods attempt to address this issue, they often struggle to learn human-like gaits because of the complexity involved in engineering an effective reward function. In this paper, we demonstrate that adversarial imitation learning can address this issue by analyzing key problems and providing solutions using both current literature and novel techniques.

I. INTRODUCTION

Locomotion on simulated musculoskeletal humanoids requires precise muscle activation patterns.

Figure: Humanoid model with 16 DOFs actuated by 92 Muscle-Tendon Units during running (left) and walking (right).