Interpretable and Efficient Data-driven Discovery and Control of Distributed Systems

Florian Wolf, Nicolò Botteghi, Urban Fasel, Andrea Manzoni

arXiv.org Artificial Intelligence 

Feedback control of complex physical systems, which are typically governed by partial differential equations (PDEs), is essential in many fields of engineering and the applied sciences. In these settings, the system state is often difficult or even impossible to observe completely, the dynamics are nonlinear, and low-latency feedback control is required [BNK20]; [PK20]; [KJ20]. Consequently, controlling these systems effectively is a computationally intensive task. Significant efforts have been devoted over the last decade to optimal control problems governed by PDEs [Hin+08]; [MQS22]; however, classical feedback control strategies face limitations when applied to such highly complex dynamical systems. In this setting, (nonlinear) model predictive control (MPC) [GP17] has emerged as an effective and important control paradigm: MPC uses an internal model of the dynamics to close the feedback loop and compute optimal controls, which entails a difficult trade-off between model accuracy and computational performance. Despite its impressive success in disciplines such as robotics [Wil+18] and PDE control [Alt14], MPC struggles to deliver the low-latency actuation required for real-time applicability, since a complex optimization problem must be solved at every control step. In recent years, reinforcement learning (RL), and in particular deep reinforcement learning (DRL) [SB18], an extension of RL relying on deep neural networks (DNNs), has gained popularity as a powerful and real-time applicable control paradigm. Especially in the context of controlling systems governed by PDEs, DRL has demonstrated outstanding capabilities in handling complex, high-dimensional dynamical systems at low latency [You+23]; [Pei+23]; [BF24]; [Vin24].
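To make the trade-off behind MPC concrete, the receding-horizon problem solved at each control step can be sketched as follows; the notation is generic and not taken from the paper: $x_k$ denotes the state, $u_k$ the control, $f$ the internal dynamics model, $\ell$ and $\ell_N$ the stage and terminal costs, $\mathcal{U}$ the admissible control set, and $N$ the prediction horizon.

\begin{equation*}
\begin{aligned}
\min_{u_0, \dots, u_{N-1}} \quad & \sum_{k=0}^{N-1} \ell(x_k, u_k) + \ell_N(x_N) \\
\text{s.t.} \quad & x_{k+1} = f(x_k, u_k), \qquad k = 0, \dots, N-1, \\
& x_0 = x(t), \qquad u_k \in \mathcal{U},
\end{aligned}
\end{equation*}

Only the first optimal control $u_0^*$ is applied before the horizon shifts and the problem is re-solved at the next step. When $f$ is a fine discretization of a PDE, the state $x_k$ is high-dimensional and this repeated optimization becomes the latency bottleneck noted above; a trained DRL policy, by contrast, produces a control through a single network evaluation $u = \pi_\theta(x)$, which is what enables low-latency actuation.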