
Collaborating Authors

 Botteghi, Nicolò


HypeRL: Parameter-Informed Reinforcement Learning for Parametric PDEs

arXiv.org Artificial Intelligence

In this work, we devise a new, general-purpose reinforcement learning strategy for the optimal control of parametric partial differential equations (PDEs). Such problems frequently arise in applied sciences and engineering and entail significant complexity when control and/or state variables are distributed in high-dimensional space or depend on varying parameters. Traditional numerical methods, relying on either iterative minimization algorithms or dynamic programming, while reliable, often become computationally infeasible. Indeed, in either case, the optimal control problem must be solved for each instance of the parameters, and this is out of reach when dealing with high-dimensional, time-dependent, and parametric PDEs. In this paper, we propose HypeRL, a deep reinforcement learning (DRL) framework to overcome the limitations shown by traditional methods. HypeRL aims at approximating the optimal control policy directly. Specifically, we employ an actor-critic DRL approach to learn an optimal feedback control strategy that can generalize across the range of variation of the parameters. To effectively learn such optimal control laws, encoding the parameter information into the DRL policy and value function neural networks (NNs) is essential. To do so, HypeRL uses two additional NNs, often called hypernetworks, to learn the weights and biases of the value function and the policy NNs. We validate the proposed approach on two PDE-constrained optimal control benchmarks, namely a 1D Kuramoto-Sivashinsky equation and the 2D Navier-Stokes equations, showing that knowledge of the PDE parameters, and how this information is encoded, i.e., via a hypernetwork, is an essential ingredient for learning parameter-dependent control policies that generalize effectively to unseen scenarios and for improving the sample efficiency of such policies.
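The abstract gives no implementation details; as a rough illustration of the hypernetwork idea it describes (not the authors' actual architecture), the following PyTorch sketch generates the weights and biases of a small feedback policy from the PDE parameter vector. All layer sizes, names, and the one-hidden-layer policy are assumptions.

```python
import torch
import torch.nn as nn

class HyperPolicy(nn.Module):
    """Policy whose weights are produced by a hypernetwork from PDE parameters.

    Illustrative sketch: a one-hidden-layer policy s -> a, with its weights and
    biases emitted by an MLP that takes the PDE parameter vector mu as input.
    """

    def __init__(self, state_dim, action_dim, param_dim, hidden=64, hyper_hidden=128):
        super().__init__()
        self.state_dim, self.action_dim, self.hidden = state_dim, action_dim, hidden
        # Total number of policy parameters the hypernetwork must generate.
        self.n_w1 = hidden * state_dim
        self.n_b1 = hidden
        self.n_w2 = action_dim * hidden
        self.n_b2 = action_dim
        n_out = self.n_w1 + self.n_b1 + self.n_w2 + self.n_b2
        # Hypernetwork: PDE parameters mu -> flattened policy weights and biases.
        self.hyper = nn.Sequential(
            nn.Linear(param_dim, hyper_hidden), nn.Tanh(),
            nn.Linear(hyper_hidden, n_out),
        )

    def forward(self, state, mu):
        theta = self.hyper(mu)
        i = 0
        w1 = theta[i:i + self.n_w1].view(self.hidden, self.state_dim); i += self.n_w1
        b1 = theta[i:i + self.n_b1]; i += self.n_b1
        w2 = theta[i:i + self.n_w2].view(self.action_dim, self.hidden); i += self.n_w2
        b2 = theta[i:i + self.n_b2]
        h = torch.tanh(state @ w1.T + b1)        # parameter-dependent hidden layer
        return torch.tanh(h @ w2.T + b2)         # bounded control action


# Example: a 64-dimensional discretized PDE state, 4 actuators, 2 PDE parameters.
policy = HyperPolicy(state_dim=64, action_dim=4, param_dim=2)
action = policy(torch.randn(64), torch.tensor([0.5, 1.2]))
```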


Denoising Diffusion Planner: Learning Complex Paths from Low-Quality Demonstrations

arXiv.org Artificial Intelligence

Denoising Diffusion Probabilistic Models (DDPMs) are powerful generative deep learning models that have been very successful at image generation and, more recently, in path planning and control. In this paper, we investigate how to leverage the generalization and conditional-sampling capabilities of DDPMs to generate complex paths for a robotic end effector. We show that training a DDPM with synthetic, low-quality demonstrations is sufficient for generating nontrivial paths that reach arbitrary targets and avoid obstacles. Additionally, we investigate different strategies for conditional sampling, combining classifier-free and classifier-guided approaches. Finally, we deploy the DDPM in a receding-horizon control scheme to enhance its planning capabilities. The Denoising Diffusion Planner is validated through various experiments on a Franka Emika Panda robot.
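As a generic illustration of the classifier-free conditional sampling mentioned above (a sketch, not the paper's implementation), one reverse-diffusion step could look as follows; the noise-model signature, the guidance weight w, and the schedule arrays are assumptions.

```python
import torch

def cf_guided_step(eps_model, x_t, t, cond, w, alphas, alphas_bar, sigmas):
    """One reverse-diffusion (denoising) step with classifier-free guidance.

    Illustrative sketch: eps_model(x_t, t, cond) predicts the injected noise;
    passing cond=None gives the unconditional prediction. A larger w > 0
    strengthens the conditioning (e.g. on the target pose or obstacle layout)
    when generating end-effector paths.
    """
    eps_cond = eps_model(x_t, t, cond)           # conditional noise estimate
    eps_uncond = eps_model(x_t, t, None)         # unconditional noise estimate
    eps = (1 + w) * eps_cond - w * eps_uncond    # classifier-free guidance mix
    # Standard DDPM posterior mean, then add noise except at the final step.
    mean = (x_t - (1 - alphas[t]) / torch.sqrt(1 - alphas_bar[t]) * eps) / torch.sqrt(alphas[t])
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + sigmas[t] * noise
```

Running this step from t = T down to t = 0, starting from Gaussian noise, yields one sampled path conditioned on the chosen target.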


Interpretable and Efficient Data-driven Discovery and Control of Distributed Systems

arXiv.org Artificial Intelligence

Feedback control of complex physical systems, which are typically governed by partial differential equations (PDEs), is essential in many fields of engineering and the applied sciences. In these cases, the state of the system is often challenging or even impossible to observe completely, the dynamics are nonlinear, and low-latency feedback control is required [BNK20]; [PK20]; [KJ20]. Consequently, effectively controlling these systems is a computationally intensive task. Significant efforts have been devoted in the last decade to the investigation of optimal control problems governed by PDEs [Hin+08]; [MQS22]; however, classical feedback control strategies face limitations with such highly complex dynamical systems. For instance, (nonlinear) model predictive control (MPC) [GP17] has emerged as an effective and important control paradigm. MPC utilizes an internal model of the dynamics to create a feedback loop and provide optimal controls, resulting in a difficult trade-off between model accuracy and computational performance. Despite its impressive success in disciplines such as robotics [Wil+18] and the control of PDEs [Alt14], MPC struggles with real-time applicability in providing low-latency actuation, due to the need for solving complex optimization problems. In recent years, reinforcement learning (RL), and particularly deep reinforcement learning (DRL) [SB18], an extension of RL relying on deep neural networks (DNNs), has gained popularity as a powerful and real-time-applicable control paradigm. Especially in the context of PDEs, DRL has demonstrated outstanding capabilities in controlling complex and high-dimensional dynamical systems at low latency [You+23]; [Pei+23]; [BF24]; [Vin24].
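To make the MPC trade-off mentioned above concrete, here is a generic receding-horizon sketch (not the paper's method): at every step a finite-horizon problem is re-optimized from the current state and only the first control is applied. The toy dynamics, the stage cost, and the solver choice are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def mpc_step(x0, dynamics, stage_cost, horizon, u_dim):
    """Solve a finite-horizon optimal control problem and return the first input.

    Re-solving this problem at every time step is what makes MPC accurate but
    computationally demanding, in contrast to a learned feedback policy that
    only needs a forward pass at run time.
    """
    def total_cost(u_flat):
        u = u_flat.reshape(horizon, u_dim)
        x, cost = x0, 0.0
        for k in range(horizon):
            cost += stage_cost(x, u[k])
            x = dynamics(x, u[k])          # rollout of the internal model
        return cost

    res = minimize(total_cost, np.zeros(horizon * u_dim), method="L-BFGS-B")
    return res.x.reshape(horizon, u_dim)[0]

# Example with a toy scalar system x_{k+1} = x_k + u_k.
dyn = lambda x, u: x + u
cost = lambda x, u: float(x @ x + 0.1 * (u @ u))
u0 = mpc_step(np.array([1.0]), dyn, cost, horizon=10, u_dim=1)
```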


Invisible Servoing: a Visual Servoing Approach with Return-Conditioned Latent Diffusion

arXiv.org Artificial Intelligence

In this paper, we present a novel visual servoing (VS) approach based on latent Denoising Diffusion Probabilistic Models (DDPMs). In contrast to classical VS methods, the proposed approach allows reaching the desired target view even when the target is initially not visible. This is made possible by learning a latent representation that the DDPM uses for planning and by a dataset of trajectories encompassing initial views in which the target is not visible. The latent representation is learned using a Cross-Modal Variational Autoencoder and used to estimate the return for conditioning the trajectory generation of the DDPM. Given the current image, the DDPM generates trajectories in the latent space that drive the robotic platform to the desired visual target. The approach is applicable to any velocity-controlled platform. We test our method with simulated and real-world experiments using generic multi-rotor Uncrewed Aerial Vehicles (UAVs). A video of our experiments can be found at https://youtu.be/yu-aTxqceOA.
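Purely as a hypothetical sketch of how such a return-conditioned planner might be wrapped in a control loop (none of the function names below come from the paper), each control step could encode the current image, sample a latent trajectory conditioned on a high return, and apply only the first command.

```python
import torch

def visual_servoing_step(image, encoder, ddpm_sample, to_velocity, target_return):
    """One control step of a return-conditioned latent-diffusion servo loop.

    Hypothetical sketch: 'encoder', 'ddpm_sample', and 'to_velocity' stand for
    a learned image encoder, a conditional DDPM trajectory sampler, and the
    mapping from the first latent waypoint to a velocity command; these names
    and signatures are assumptions, not the paper's API.
    """
    z = encoder(image)                                           # current latent state
    latent_traj = ddpm_sample(cond_latent=z, cond_return=target_return)
    return to_velocity(latent_traj[0])                           # apply only the first step
```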


Recurrent Deep Kernel Learning of Dynamical Systems

arXiv.org Machine Learning

Digital twins require computationally efficient reduced-order models (ROMs) that can accurately describe complex dynamics of physical assets. However, constructing ROMs from noisy high-dimensional data is challenging. In this work, we propose a data-driven, non-intrusive method that utilizes stochastic variational deep kernel learning (SVDKL) to discover low-dimensional latent spaces from data, and a recurrent version of SVDKL for representing and predicting the evolution of latent dynamics. The proposed method is demonstrated with two challenging examples -- a double pendulum and a reaction-diffusion system. Results show that our framework is capable of (i) denoising and reconstructing measurements, (ii) learning compact representations of system states, (iii) predicting system evolution in low-dimensional latent spaces, and (iv) quantifying modeling uncertainties.
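A minimal deep-kernel-learning sketch in the spirit of the SVDKL ingredient mentioned above, written with GPyTorch; the encoder architecture, sizes, and the single-output GP head are assumptions, and the recurrent latent-dynamics part of the paper is omitted.

```python
import torch
import gpytorch

class FeatureExtractor(torch.nn.Module):
    """Small encoder mapping high-dimensional, noisy measurements to features."""
    def __init__(self, in_dim, feat_dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_dim, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, feat_dim),
        )
    def forward(self, x):
        return self.net(x)

class SVGPLayer(gpytorch.models.ApproximateGP):
    """Sparse variational GP operating on the learned features (deep kernel)."""
    def __init__(self, inducing_points):
        var_dist = gpytorch.variational.CholeskyVariationalDistribution(inducing_points.size(0))
        var_strat = gpytorch.variational.VariationalStrategy(
            self, inducing_points, var_dist, learn_inducing_locations=True)
        super().__init__(var_strat)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

# Deep kernel = neural feature extractor composed with a GP kernel on its outputs.
encoder = FeatureExtractor(in_dim=784, feat_dim=8)
gp = SVGPLayer(inducing_points=torch.randn(32, 8))
likelihood = gpytorch.likelihoods.GaussianLikelihood()

x = torch.randn(16, 784)            # a batch of noisy measurements
latent_dist = likelihood(gp(encoder(x)))   # predictive distribution with uncertainty
```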


Parametric PDE Control with Deep Reinforcement Learning and Differentiable L0-Sparse Polynomial Policies

arXiv.org Artificial Intelligence

Optimal control of parametric partial differential equations (PDEs) is crucial in many applications in engineering and science. In recent years, the progress in scientific machine learning has opened up new frontiers for the control of parametric PDEs. In particular, deep reinforcement learning (DRL) has the potential to solve high-dimensional and complex control problems in a large variety of applications. Most DRL methods rely on deep neural network (DNN) control policies. However, for many dynamical systems, DNN-based control policies tend to be over-parametrized, which means they need large amounts of training data, show limited robustness, and lack interpretability. In this work, we leverage dictionary learning and differentiable L0 regularization to learn sparse, robust, and interpretable control policies for parametric PDEs. Our sparse policy architecture is agnostic to the DRL method and can be used in different policy-gradient and actor-critic DRL algorithms without changing their policy-optimization procedure. We test our approach on the challenging tasks of controlling parametric Kuramoto-Sivashinsky and convection-diffusion-reaction PDEs. We show that our method (1) outperforms baseline DNN-based DRL policies, (2) allows for the derivation of interpretable equations of the learned optimal control laws, and (3) generalizes to unseen parameters of the PDE without retraining the policies.
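As an illustration of the kind of policy parameterization described above (a sketch under assumptions, not the authors' code), the policy below combines a fixed polynomial dictionary with differentiable hard-concrete gates so that an expected-L0 penalty can be added to any policy-gradient loss; the dictionary, gate hyperparameters, and sizes are arbitrary choices.

```python
import torch
import torch.nn as nn

class L0SparsePolynomialPolicy(nn.Module):
    """Control law u = phi(s)^T (z * Xi) with differentiable L0 gates z on Xi.

    Illustrative sketch: phi(s) is a fixed polynomial dictionary (constant,
    linear, quadratic terms) and each coefficient is multiplied by a
    hard-concrete gate (Louizos et al., 2018), so the expected number of
    active terms can be penalized during DRL training.
    """

    def __init__(self, state_dim, action_dim, beta=2.0 / 3.0):
        super().__init__()
        self.n_features = 1 + state_dim + state_dim * (state_dim + 1) // 2
        self.coeffs = nn.Parameter(0.01 * torch.randn(self.n_features, action_dim))
        self.log_alpha = nn.Parameter(torch.zeros(self.n_features, action_dim))
        self.beta = beta

    def dictionary(self, s):
        # [1, s_i, s_i * s_j] for i <= j.
        quad = torch.cat([s[i:] * s[i] for i in range(s.shape[0])])
        return torch.cat([torch.ones(1), s, quad])

    def gates(self):
        # Hard-concrete relaxation of Bernoulli gates (stretched, then clamped).
        u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
        z = torch.sigmoid((torch.log(u) - torch.log(1 - u) + self.log_alpha) / self.beta)
        return (z * 1.2 - 0.1).clamp(0.0, 1.0)

    def forward(self, s):
        phi = self.dictionary(s)                      # (n_features,)
        return phi @ (self.gates() * self.coeffs)     # sparse polynomial control law


policy = L0SparsePolynomialPolicy(state_dim=8, action_dim=2)
u = policy(torch.randn(8))
```

Because the gates are differentiable, the same architecture can replace the DNN actor in a policy-gradient or actor-critic algorithm without modifying its optimization loop, and the surviving dictionary terms can be read off as an interpretable control equation.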


CORE: Towards Scalable and Efficient Causal Discovery with Reinforcement Learning

arXiv.org Artificial Intelligence

Causal discovery is the challenging task of inferring causal structure from data. Motivated by Pearl's Causal Hierarchy (PCH), which tells us that passive observations alone are not enough to distinguish correlation from causation, there has been a recent push to incorporate interventions into machine learning research. Reinforcement learning provides a convenient framework for such an active approach to learning. This paper presents CORE, a deep reinforcement learning-based approach for causal discovery and intervention planning. CORE learns to sequentially reconstruct causal graphs from data while learning to perform informative interventions. Our results demonstrate that CORE generalizes to unseen graphs and efficiently uncovers causal structures. Furthermore, CORE scales to larger graphs with up to 10 variables and outperforms existing approaches in structure estimation accuracy and sample efficiency. All relevant code and supplementary material can be found at https://github.com/sa-and/CORE
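A purely hypothetical skeleton of the interaction loop such an approach implies (the environment and agent interfaces below are illustrative and not taken from the CORE repository):

```python
def run_episode(env, agent, horizon):
    """Agent alternates informative interventions with causal-graph updates.

    Hypothetical sketch: 'env' wraps a structural causal model and returns
    samples plus a reward reflecting the quality of the current graph estimate;
    'agent' is a DRL policy choosing which variable to intervene on.
    """
    obs = env.reset()                         # observational samples from the SCM
    for _ in range(horizon):
        action = agent.act(obs)               # pick an intervention target/value
        obs, reward, done = env.step(action)  # interventional samples + reward
        agent.update(obs, reward)             # refine policy and graph estimate
        if done:
            break
    return agent.predicted_graph()            # e.g. an adjacency-matrix estimate
```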


Trajectory Generation, Control, and Safety with Denoising Diffusion Probabilistic Models

arXiv.org Artificial Intelligence

The technology of control barrier functions (CBFs), encoding desired safety constraints, is used in combination with DDPMs to plan actions by iteratively denoising trajectories through a CBF-based guided sampling procedure. At the same time, the generated trajectories are also guided to maximize a future cumulative reward representing a specific task to be optimally executed. The proposed scheme can be seen as an offline and model-based reinforcement learning algorithm resembling in its functionalities a model-predictive control scheme. Control barrier functions (CBFs) (Ames et al., 2017; 2019) represent a formal framework aiming to achieve safety as a hard constraint in an optimization problem in which the cost function encodes information on the nominal task to be executed. In particular, CBF-based safety constraints are represented by forward invariance of so-called safe sets, i.e. subsets of the state space which the controlled system should not leave during the task execution. We stress that within this context, safety becomes a mathematically rigorous, system-theoretic property and, even if unable to represent every possible safety hazard, it is very useful for designing safety constraints.
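As a rough sketch of what CBF-based guided sampling can look like (an assumption-laden illustration, not the paper's exact procedure), the denoising update below adds the gradient of a safe-set violation term to the DDPM posterior mean; the barrier function, the schedule arrays, and the step size eta are placeholders.

```python
import torch

def cbf_guided_denoise_step(eps_model, barrier, traj_t, t,
                            alphas, alphas_bar, sigmas, eta=0.1):
    """One DDPM denoising step on a trajectory, nudged by a control barrier function.

    Illustrative sketch: 'barrier' maps a trajectory to values h(x) >= 0 inside
    the safe set; the gradient of the accumulated violation is subtracted from
    the posterior mean so that sampled trajectories are pushed towards the safe set.
    """
    eps = eps_model(traj_t, t)
    mean = (traj_t - (1 - alphas[t]) / torch.sqrt(1 - alphas_bar[t]) * eps) / torch.sqrt(alphas[t])
    # Guidance: penalize states where the barrier is violated (h < 0).
    traj = traj_t.detach().requires_grad_(True)
    violation = torch.relu(-barrier(traj)).sum()
    grad = torch.autograd.grad(violation, traj)[0]
    mean = mean - eta * grad
    noise = torch.randn_like(traj_t) if t > 0 else torch.zeros_like(traj_t)
    return mean + sigmas[t] * noise
```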


Discovering Efficient Periodic Behaviours in Mechanical Systems via Neural Approximators

arXiv.org Artificial Intelligence

It is well known that conservative mechanical systems exhibit local oscillatory behaviours due to their elastic and gravitational potentials, which, together with the inertial properties of the system, completely characterise these periodic motions. The classification of these periodic behaviours and their geometric characterisation have been the subject of a long-standing debate, which recently led to the so-called eigenmanifold theory. The eigenmanifold characterises nonlinear oscillations as a generalisation of linear eigenspaces. Motivated by the goal of performing periodic tasks efficiently, we use tools from this theory to construct an optimization problem aimed at inducing desired closed-loop oscillations through a state feedback law. We solve the constructed optimization problem via gradient-descent methods involving neural networks. Extensive simulations show the validity of the approach.
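A toy sketch of the general idea of inducing closed-loop oscillations by gradient descent on a neural feedback law; this is a generic periodicity objective, not the eigenmanifold-based formulation of the paper, and the dynamics, horizon, and loss are assumptions.

```python
import torch
import torch.nn as nn

def periodicity_loss(controller, dynamics, x0, period_steps, dt=0.01):
    """Differentiable rollout loss encouraging a closed-loop periodic orbit.

    Roll the controlled system out for one candidate period and penalize the
    mismatch between initial and final state, so that gradient descent on the
    controller parameters induces an (approximately) periodic motion.
    """
    x = x0
    for _ in range(period_steps):
        u = controller(x)
        x = x + dt * dynamics(x, u)      # explicit Euler step, kept differentiable
    return ((x - x0) ** 2).sum()

# Toy example: a pendulum-like system with a neural state-feedback law.
controller = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
dyn = lambda x, u: torch.stack([x[1], -torch.sin(x[0]) - 0.1 * x[1] + u.squeeze()])
opt = torch.optim.Adam(controller.parameters(), lr=1e-3)

for _ in range(50):
    opt.zero_grad()
    loss = periodicity_loss(controller, dyn, torch.tensor([0.5, 0.0]), period_steps=200)
    loss.backward()
    opt.step()
```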


Unsupervised Representation Learning in Deep Reinforcement Learning: A Review

arXiv.org Artificial Intelligence

This review addresses the problem of learning abstract representations of measurement data in the context of Deep Reinforcement Learning (DRL). While the data are often ambiguous, high-dimensional, and complex to interpret, many dynamical systems can be effectively described by a low-dimensional set of state variables. Discovering these state variables from the data is crucial for improving the data efficiency, robustness, and generalization of DRL methods, tackling the curse of dimensionality, and bringing interpretability and insights into black-box DRL. This review provides a comprehensive overview of unsupervised representation learning in DRL by describing the main deep learning tools used for learning representations of the world, providing a systematic view of the methods and principles, summarizing applications, benchmarks, and evaluation strategies, and discussing open challenges and future directions.
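As a minimal example of the simplest representation-learning tool surveyed in such reviews (a generic sketch; the sizes are arbitrary and no DRL agent is attached), an autoencoder compresses high-dimensional observations into candidate state variables without supervision.

```python
import torch
import torch.nn as nn

class StateAutoencoder(nn.Module):
    """Unsupervised compression of high-dimensional observations into state variables."""
    def __init__(self, obs_dim=784, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, obs_dim))
    def forward(self, obs):
        z = self.encoder(obs)            # low-dimensional state estimate for the agent
        return self.decoder(z), z

model = StateAutoencoder()
obs = torch.rand(32, 784)
recon, z = model(obs)
loss = ((recon - obs) ** 2).mean()       # reconstruction objective, trained without labels
```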