Layeghi, Daniel
Learning Long-Horizon Robot Manipulation Skills via Privileged Action
Mao, Xiaofeng, Xu, Yucheng, Sun, Zhaole, Miller, Elle, Layeghi, Daniel, Mistry, Michael
Long-horizon contact-rich tasks are challenging to learn with reinforcement learning, due to ineffective exploration of high-dimensional state spaces with sparse rewards. The learning process often gets stuck in local optima and demands task-specific reward fine-tuning for complex scenarios. In this work, we propose a structured framework that leverages privileged actions with curriculum learning, enabling the policy to efficiently acquire long-horizon skills without relying on extensive reward engineering or reference trajectories. Specifically, we use privileged actions in simulation, actions that would be infeasible to apply in real-world scenarios, together with a general training procedure. These privileges include relaxed constraints and virtual forces that enhance exploration of and interaction with objects. Our approach successfully solves complex multi-stage, long-horizon tasks that naturally combine non-prehensile manipulation with grasping to lift objects from non-graspable poses. We demonstrate generality by maintaining a parsimonious reward structure and showing convergence to diverse and robust behaviors across various environments. Real-world experiments further confirm that the skills acquired with our approach transfer beyond simulation, exhibiting robust and intricate performance. Our approach outperforms state-of-the-art methods in these tasks, converging to solutions where others fail.
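The sketch below is a minimal illustration, not the paper's implementation, of how privileged actions might be annealed by a curriculum in simulation: a virtual attraction force assists object interaction early in training and is scaled down as the curriculum progresses, while a relaxed constraint penalty is gradually tightened. The function names, schedules, and gains are illustrative assumptions.

import numpy as np

def privileged_virtual_force(obj_pos, gripper_pos, progress, k_max=5.0):
    # Virtual force pulling the object toward the gripper; this privilege is
    # only available in simulation and is annealed to zero as the curriculum
    # progress goes from 0 to 1.
    gain = k_max * max(0.0, 1.0 - progress)
    direction = gripper_pos - obj_pos
    return gain * direction / (np.linalg.norm(direction) + 1e-8)

def relaxed_constraint_penalty(violation, progress, w_max=10.0):
    # Constraint penalty (e.g. on penetration or joint limits) that is relaxed
    # early in training and ramped up to its full weight late in the curriculum.
    return w_max * progress * violation ** 2

# Mid-curriculum (progress = 0.5): the object still receives partial assistance.
force = privileged_virtual_force(np.array([0.3, 0.0, 0.02]),
                                 np.array([0.3, 0.0, 0.25]),
                                 progress=0.5)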
Neural Lyapunov and Optimal Control
Layeghi, Daniel, Tonneau, Steve, Mistry, Michael
Optimal control (OC) is an effective approach to controlling complex dynamical systems. However, typical approaches to parameterising and learning controllers in optimal control have been ad hoc: collecting data and then fitting it to neural networks. This two-step approach can overlook crucial constraints such as optimality and time-varying conditions. We introduce a unified, function-first framework that simultaneously learns Lyapunov or value functions while implicitly solving OC problems. We propose two mathematical programs based on the Hamilton-Jacobi-Bellman (HJB) constraint and its relaxation to learn time-varying value and Lyapunov functions. We show the effectiveness of our approach on linear and nonlinear control-affine problems. The proposed methods are able to generate near-optimal trajectories and guarantee the Lyapunov condition over a compact set of initial conditions. Furthermore, we compare our methods to Soft Actor-Critic (SAC) and Proximal Policy Optimisation (PPO). In this comparison, we never underperform in task cost and, in the best cases, outperform SAC and PPO by factors of 73 and 22, respectively.
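As a minimal sketch of the kind of HJB-based program described above, assume a control-affine system xdot = f(x) + g(x)u with running cost l(x) + u^T R u, for which the HJB-minimising control is u* = -1/2 R^{-1} g(x)^T dV/dx; a time-varying value network can then be trained by penalising the HJB residual at sampled states and times. The network architecture, dynamics, and cost below are illustrative assumptions rather than the paper's exact formulation.

import torch
import torch.nn as nn

class ValueNet(nn.Module):
    # Time-varying value function V(x, t) parameterised by a small MLP.
    def __init__(self, n_state, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_state + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def hjb_residual(value_net, x, t, f, g, state_cost, R):
    # Residual of dV/dt + l(x) + u*^T R u* + dV/dx^T (f(x) + g(x) u*) = 0,
    # with u* the closed-form minimiser for control-affine dynamics.
    R_inv = torch.inverse(R)
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    V = value_net(x, t)
    dV_dx = torch.autograd.grad(V.sum(), x, create_graph=True)[0]   # (B, n)
    dV_dt = torch.autograd.grad(V.sum(), t, create_graph=True)[0]   # (B, 1)
    gT_dV = torch.einsum('bnm,bn->bm', g(x), dV_dx)                 # g(x)^T dV/dx
    u_star = -0.5 * torch.einsum('ij,bj->bi', R_inv, gT_dV)         # optimal control
    xdot = f(x) + torch.einsum('bnm,bm->bn', g(x), u_star)
    control_cost = torch.einsum('bi,ij,bj->b', u_star, R, u_star)
    return dV_dt.squeeze(-1) + state_cost(x) + control_cost \
           + torch.einsum('bn,bn->b', dV_dx, xdot)

# Hypothetical example: double integrator with quadratic state cost.
n, m, B = 2, 1, 256
A = torch.tensor([[0.0, 1.0], [0.0, 0.0]])
Bmat = torch.tensor([[0.0], [1.0]])
f = lambda x: x @ A.T
g = lambda x: Bmat.expand(x.shape[0], n, m)
state_cost = lambda x: (x ** 2).sum(dim=-1)
R = torch.eye(m)

value_net = ValueNet(n)
opt = torch.optim.Adam(value_net.parameters(), lr=1e-3)
x = torch.randn(B, n)
t = torch.rand(B, 1)
opt.zero_grad()
loss = hjb_residual(value_net, x, t, f, g, state_cost, R).pow(2).mean()
loss.backward()
opt.step()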