Collaborating Authors: Fasel, Urban


Interpretable and Efficient Data-driven Discovery and Control of Distributed Systems

arXiv.org Artificial Intelligence

Feedback control for complex physical systems is essential in many fields of engineering and applied science, where the dynamics are typically governed by partial differential equations (PDEs). In these cases, the state of the system is often challenging or even impossible to observe completely, the dynamics are nonlinear, and low-latency feedback control is required [BNK20]; [PK20]; [KJ20]. Consequently, effectively controlling these systems is a computationally intensive task. Significant efforts have been devoted over the last decade to optimal control problems governed by PDEs [Hin+08]; [MQS22]; however, classical feedback control strategies face limitations with such highly complex dynamical systems. In particular, (nonlinear) model predictive control (MPC) [GP17] has emerged as an effective and important control paradigm. MPC utilizes an internal model of the dynamics to create a feedback loop and provide optimal controls, resulting in a difficult trade-off between model accuracy and computational performance. Despite its impressive success in disciplines such as robotics [Wil+18] and PDE control [Alt14], MPC struggles to provide low-latency actuation in real time, because it must repeatedly solve complex optimization problems. In recent years, reinforcement learning (RL), and particularly deep reinforcement learning (DRL) [SB18], an extension of RL relying on deep neural networks (DNNs), has gained popularity as a powerful and real-time-applicable control paradigm. Especially in the context of PDE control, DRL has demonstrated outstanding capabilities in controlling complex, high-dimensional dynamical systems at low latency [You+23]; [Pei+23]; [BF24]; [Vin24].
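The accuracy-versus-latency trade-off described above comes from MPC re-solving an optimization over future inputs at every step, using its internal model, and applying only the first action. A minimal toy sketch (the scalar model, cost weights, and brute-force solver below are illustrative choices, not from any of the papers listed here):

```python
import numpy as np
from itertools import product

def mpc_action(x0, horizon=3, candidates=(-1.0, 0.0, 1.0)):
    """Receding-horizon control for a toy scalar model x' = 0.9 x + u.

    At each call, enumerate every candidate input sequence, roll the
    internal model forward, and return only the first action of the best
    sequence. Richer models make this inner optimization more expensive,
    which is the source of MPC's latency problem.
    """
    best_u, best_cost = 0.0, np.inf
    for seq in product(candidates, repeat=horizon):
        x, cost = x0, 0.0
        for u in seq:
            x = 0.9 * x + u               # internal model rollout
            cost += x * x + 0.01 * u * u  # quadratic state/input cost
        if cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u
```

In a closed loop, `mpc_action` would be called at every sampling instant with the latest state estimate; the exponential cost of the enumeration (here 3^3 sequences) is exactly what dedicated MPC solvers and, alternatively, trained DRL policies avoid.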


Parametric PDE Control with Deep Reinforcement Learning and Differentiable L0-Sparse Polynomial Policies

arXiv.org Artificial Intelligence

Optimal control of parametric partial differential equations (PDEs) is crucial in many applications in engineering and science. In recent years, the progress in scientific machine learning has opened up new frontiers for the control of parametric PDEs. In particular, deep reinforcement learning (DRL) has the potential to solve high-dimensional and complex control problems in a large variety of applications. Most DRL methods rely on deep neural network (DNN) control policies. However, for many dynamical systems, DNN-based control policies tend to be over-parametrized, which means they need large amounts of training data, show limited robustness, and lack interpretability. In this work, we leverage dictionary learning and differentiable L$_0$ regularization to learn sparse, robust, and interpretable control policies for parametric PDEs. Our sparse policy architecture is agnostic to the DRL method and can be used in different policy-gradient and actor-critic DRL algorithms without changing their policy-optimization procedure. We test our approach on the challenging tasks of controlling parametric Kuramoto-Sivashinsky and convection-diffusion-reaction PDEs. We show that our method (1) outperforms baseline DNN-based DRL policies, (2) allows for the derivation of interpretable equations of the learned optimal control laws, and (3) generalizes to unseen parameters of the PDE without retraining the policies.
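The core architectural idea, a control action computed as a sparse linear combination of dictionary features with differentiable L$_0$ gates, can be sketched as follows. This is a simplified NumPy illustration in the style of the hard-concrete gates of Louizos et al.; the dictionary, gate hyperparameters, and function names are assumptions, not the paper's implementation:

```python
import numpy as np

def poly_features(s):
    """Polynomial dictionary for a 2-state system: [1, s1, s2, s1^2, s1*s2, s2^2]."""
    s1, s2 = s
    return np.array([1.0, s1, s2, s1**2, s1 * s2, s2**2])

def hard_concrete_gate(log_alpha, u, beta=2/3, gamma=-0.1, zeta=1.1):
    """Differentiable L0 gate: a stretched, clipped sigmoid sample.

    u ~ Uniform(0,1) is the reparametrization noise; log_alpha is the
    learnable gate parameter. The stretch-and-clip produces exact zeros
    and ones while keeping gradients w.r.t. log_alpha elsewhere.
    """
    s = 1 / (1 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / beta))
    return np.clip(s * (zeta - gamma) + gamma, 0.0, 1.0)

def sparse_policy(s, weights, log_alpha, u):
    """Action = (gates * weights) . dictionary(s); zeroed gates prune terms."""
    z = hard_concrete_gate(log_alpha, u)
    return (z * weights) @ poly_features(s)
```

Because the gates reach exactly zero, the surviving terms read off directly as an interpretable polynomial control law, and the same gated layer can wrap the policy of any policy-gradient or actor-critic algorithm.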


SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning

arXiv.org Artificial Intelligence

Much of the success of modern technology can be attributed to our ability to control dynamical systems: designing safe biomedical implants for homeostatic regulation, gimbaling rocket boosters for reusable launch vehicles, operating power plants and power grids, industrial manufacturing, among many other examples. Over the past decade, advances in machine learning and optimization have rapidly accelerated our capabilities to tackle complicated data-driven tasks, particularly in the fields of computer vision [1] and natural language processing [2]. Reinforcement learning (RL) is at the intersection of both machine learning and optimal control, and the core ideas of RL date back to the infancy of both fields. By interacting with an environment and receiving feedback about its performance on a task through a reward, an RL agent iteratively improves a control policy. Deep reinforcement learning (DRL), in particular, has shown promise for uncovering control policies in complex, high-dimensional spaces [3-11]. DRL has been used to achieve super-human performance in games [12-16] and drone racing [17], to control the plasma dynamics in a tokamak fusion reactor [18], to discover novel drugs [19], and for many applications in fluid mechanics [20-30]. However, these methods rely on neural networks and typically suffer from three major drawbacks: (1) they are infeasible to train for many applications because they require millions, or even billions [16], of interactions with the environment; (2) they are challenging to deploy in resource-constrained environments (such as embedded devices and micro-robotic systems) due to the size of the networks and the need for specialized software; and (3) they are "black-box" models that lack interpretability, making them untrustworthy to operate in safety-critical systems or high-consequence environments.
In this work, we seek to create interpretable and generalizable reinforcement learning methods that are also more sample efficient via sparse dictionary learning.
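One way sparse dictionary models improve sample efficiency is by standing in for the expensive environment: a sparse surrogate fit on a few logged rollouts can evaluate many candidate policies cheaply. A toy sketch under assumed names and a hypothetical scalar library (none of this is the SINDy-RL implementation):

```python
import numpy as np

def surrogate_step(x, u, xi, dt=0.01):
    """One Euler step of a SINDy-style surrogate x' = Theta(x, u) @ xi.

    Hypothetical library [1, x, u, x*u] for a scalar state/action pair;
    the sparse coefficients xi would come from a fit on logged rollouts.
    """
    theta = np.array([1.0, x, u, x * u])
    return x + dt * theta @ xi

def evaluate_gain(k, xi, x0=1.0, horizon=200):
    """Score a linear candidate policy u = -k x entirely in the surrogate,
    never touching the (expensive) real environment."""
    x, cost = x0, 0.0
    for _ in range(horizon):
        u = -k * x
        x = surrogate_step(x, u, xi)
        cost += x * x + 0.1 * u * u
    return cost
```

For example, with `xi = [0, 1, 1, 0]` (the surrogate for x' = x + u), a stabilizing gain scores a lower cost than no control at all, so policy search can proceed against the cheap model and only occasionally validate on the true system.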


Rapid Bayesian identification of sparse nonlinear dynamics from scarce and noisy data

arXiv.org Machine Learning

The pursuit of direct model equation discovery has been an ongoing and significant area of interest in scientific machine learning. The popular sparse identification of nonlinear dynamics (SINDy) framework [1] offers a promising approach to extract parsimonious equations directly from data. SINDy's promotion of parsimony by sparse regression allows for the identification of an interpretable model that balances accuracy with generalizability, while its simplicity leads to a relatively efficient and fast learning process compared to other machine learning techniques. The framework has been successfully applied in a variety of applications, such as model identification in plasma physics [2], control engineering [3, 4], biological transport problems [5], socio-cognitive systems [6], epidemiology [7, 8] and turbulence modelling [9]. Furthermore, its remarkable extensibility has attracted a range of modifications, including the adaptation to discover partial differential equations [10], the extension to libraries of rational functions [11], the integration of ensembling techniques to improve data efficiency [12] and the use of weak formulations [13, 14] to avoid noise amplification when computing derivatives from discrete data. One major difficulty in using scientific machine learning methods in fields such as biophysics, ecology, and microbiology, is that measured data from these fields is often noisy and scarce.
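The sparse regression at the heart of SINDy is sequentially thresholded least squares: fit, zero out small coefficients, refit on the survivors. A minimal NumPy sketch (library and threshold choices are illustrative):

```python
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, max_iter=10):
    """Sequentially thresholded least squares, the core SINDy regression.

    theta : (n_samples, n_features) candidate-function library matrix
    dxdt  : (n_samples, n_states) measured time derivatives
    Returns a sparse coefficient matrix xi with theta @ xi ~ dxdt.
    """
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(max_iter):
        small = np.abs(xi) < threshold           # prune small coefficients
        xi[small] = 0.0
        for k in range(dxdt.shape[1]):           # refit each state on survivors
            big = ~small[:, k]
            if big.any():
                xi[big, k] = np.linalg.lstsq(theta[:, big], dxdt[:, k],
                                             rcond=None)[0]
    return xi

# toy example: dx/dt = -2x + 3y recovered from the library [1, x, y, x*y]
rng = np.random.default_rng(0)
x, y = rng.normal(size=(2, 200))
theta = np.column_stack([np.ones_like(x), x, y, x * y])
dxdt = (-2.0 * x + 3.0 * y)[:, None]
xi = stlsq(theta, dxdt)
```

The recovered `xi` has nonzero entries only on the `x` and `y` columns, i.e. the identified model is readable term by term, which is the interpretability the abstract emphasizes.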


Convergence of uncertainty estimates in Ensemble and Bayesian sparse model discovery

arXiv.org Artificial Intelligence

Sparse model identification enables nonlinear dynamical system discovery from data. However, the control of false discoveries for sparse model identification is challenging, especially in the low-data and high-noise limit. In this paper, we perform a theoretical study on ensemble sparse model discovery, which shows empirical success in terms of accuracy and robustness to noise. In particular, we analyse the bootstrapping-based sequential thresholding least-squares estimator. We show that this bootstrapping-based ensembling technique can perform a provably correct variable selection procedure, with an error rate that converges exponentially fast. In addition, we show that the ensemble sparse model discovery method can perform computationally efficient uncertainty estimation, compared to expensive Bayesian uncertainty quantification methods via MCMC. We demonstrate the convergence properties and connection to uncertainty quantification in various numerical studies on synthetic sparse linear regression and sparse model discovery. The experiments on sparse linear regression support that the bootstrapping-based sequential thresholding least-squares method has better performance for sparse variable selection compared to LASSO, thresholding least-squares, and bootstrapping-based LASSO. In the sparse model discovery experiment, we show that the bootstrapping-based sequential thresholding least-squares method can provide valid uncertainty quantification, converging to a delta measure centered around the true value with increased sample sizes. Finally, we highlight the improved robustness to hyperparameter selection under shifting noise and sparsity levels of the bootstrapping-based sequential thresholding least-squares method compared to other sparse regression methods.
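The bootstrapping-based estimator studied here can be sketched concretely: resample the rows, refit a thresholded least-squares model on each resample, and read off how often each candidate term survives. These inclusion frequencies are the cheap uncertainty estimate the abstract contrasts with MCMC; the one-pass `threshold_lstsq` below is a minimal stand-in for the full sequential procedure, and all names are illustrative:

```python
import numpy as np

def threshold_lstsq(theta, y, tau=0.1):
    """One pass of thresholded least squares (a simplified stand-in)."""
    w = np.linalg.lstsq(theta, y, rcond=None)[0]
    w[np.abs(w) < tau] = 0.0
    return w

def ensemble_inclusion(theta, y, n_boot=200, tau=0.1, seed=0):
    """Bootstrap the rows, refit, and report each term's survival frequency.

    The frequencies approximate variable-selection uncertainty and
    concentrate toward 0 or 1 as the sample size grows.
    """
    rng = np.random.default_rng(seed)
    n = theta.shape[0]
    hits = np.zeros(theta.shape[1])
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)     # resample rows with replacement
        hits += threshold_lstsq(theta[idx], y[idx], tau) != 0
    return hits / n_boot

# y depends on x1 only; x2 is a spurious candidate
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = 3.0 * X[:, 0] + 0.05 * rng.normal(size=300)
p = ensemble_inclusion(X, y)
```

On this toy problem the true term's inclusion probability sits near one and the spurious term's near zero, a small-scale picture of the delta-measure convergence described above.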


Benchmarking sparse system identification with low-dimensional chaos

arXiv.org Artificial Intelligence

Sparse system identification is the data-driven process of obtaining parsimonious differential equations that describe the evolution of a dynamical system, balancing model complexity and accuracy. There has been rapid innovation in system identification across scientific domains, but there remains a gap in the literature for large-scale methodological comparisons that are evaluated on a variety of dynamical systems. In this work, we systematically benchmark sparse regression variants by utilizing the dysts standardized database of chaotic systems. In particular, we demonstrate how this open-source tool can be used to quantitatively compare different methods of system identification. To illustrate how this benchmark can be utilized, we perform a large comparison of four algorithms for solving the sparse identification of nonlinear dynamics (SINDy) optimization problem, finding strong performance of the original algorithm and a recent mixed-integer discrete algorithm. In all cases, we used ensembling to improve the noise robustness of SINDy and provide statistical comparisons. In addition, we show very compelling evidence that the weak SINDy formulation provides significant improvements over the traditional method, even on clean data. Lastly, we investigate how Pareto-optimal models generated from SINDy algorithms depend on the properties of the equations, finding that the performance shows no significant dependence on a set of dynamical properties that quantify the amount of chaos, scale separation, degree of nonlinearity, and the syntactic complexity.
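Benchmarks of this kind need scalar scores to compare identified models against the ground-truth equations. Two commonly used quantities are the normalized coefficient error and the fraction of correctly recovered support; the function and variable names below are illustrative, not from the dysts tooling:

```python
import numpy as np

def benchmark_scores(xi_true, xi_est):
    """Scores for comparing sparse-identification variants:
    normalized coefficient error and support-recovery rate."""
    coef_err = np.linalg.norm(xi_est - xi_true) / np.linalg.norm(xi_true)
    support_match = np.mean((xi_true != 0) == (xi_est != 0))
    return coef_err, support_match

# ground-truth vs. identified coefficients over a 4-term library
xi_true = np.array([0.0, -2.0, 3.0, 0.0])
xi_est = np.array([0.0, -1.9, 3.1, 0.0])
err, match = benchmark_scores(xi_true, xi_est)
```

Averaging such scores over many systems and noise levels, as the dysts database enables, is what turns individual fits into the statistical comparisons the abstract describes.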