
Operator Inference Aware Quadratic Manifolds with Isotropic Reduced Coordinates for Nonintrusive Model Reduction

Schwerdtner, Paul, Mohan, Prakash, Bessac, Julie, de Frahan, Marc T. Henry, Peherstorfer, Benjamin

arXiv.org Artificial Intelligence

Learning reduced models from data in a nonintrusive fashion is an important problem in science and engineering [1, 2, 3]. A typical approach is to first learn an encoder-decoder pair, embed the training snapshot trajectories with the learned encoder, and then fit a reduced dynamical-system model to the embedded trajectories. However, decomposing the training process into first learning an encoder-decoder pair for the embedding and only subsequently learning a model of the dynamics typically means that the encoder-decoder pair is trained with the objective of accurately approximating the training data, rather than taking the reduced-model prediction error into account. Thus, the encoder-decoder pair can overfit to achieving a low reconstruction error on the training data by learning embeddings of the snapshot trajectories that are non-smooth, which makes learning a reduced model challenging. Correspondingly, it has been observed that learning embeddings and models together can be beneficial; see, e.g., [4, 5, 6, 7]. In the context of intrusive model reduction with linear approximations, there is work that optimizes the reduced basis with respect to the model prediction error [8], quantities of interest [9], and stability [10]; however, we focus here on the setting of nonintrusive model reduction and nonlinear approximations.
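The two-stage approach criticized above can be sketched for a linear toy problem (this is a minimal illustration, not the paper's operator-inference-aware method; the system matrix, dimensions, and initial condition are all hypothetical). Stage one learns a linear encoder-decoder pair via POD, stage two fits reduced dynamics to the embedded snapshots by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical full-order linear dynamics x_{k+1} = A x_k in dimension N = 50,
# with only 5 slowly decaying modes; the initial condition is chosen inside
# the 5-dimensional invariant subspace so the toy admits an exact reduced model.
N, steps = 50, 200
eigs = np.concatenate([np.linspace(0.9, 0.99, 5), np.full(N - 5, 0.1)])
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
A = Q @ np.diag(eigs) @ Q.T
X = np.empty((N, steps + 1))
X[:, 0] = Q[:, :5] @ np.ones(5)
for k in range(steps):
    X[:, k + 1] = A @ X[:, k]

# Stage 1: learn a linear encoder-decoder pair (POD basis) from the snapshots.
U, _, _ = np.linalg.svd(X, full_matrices=False)
V = U[:, :5]                                  # decoder; V.T acts as encoder

# Stage 2: fit a reduced model to the embedded trajectory by least squares.
Z = V.T @ X
A_r, *_ = np.linalg.lstsq(Z[:, :-1].T, Z[:, 1:].T, rcond=None)
A_r = A_r.T

# Roll out the reduced model and compare against the full final state.
z = Z[:, 0].copy()
for _ in range(steps):
    z = A_r @ z
err = np.linalg.norm(V @ z - X[:, -1]) / np.linalg.norm(X[:, -1])
print(err)
```

Here the decoupled two-stage procedure happens to work because the toy trajectory is exactly low dimensional and linear; the paper's point is that for nonlinear approximations this decomposition can produce embeddings that are poorly suited for the dynamics-fitting stage.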


DICE: Discrete inverse continuity equation for learning population dynamics

Blickhan, Tobias, Berman, Jules, Stuart, Andrew, Peherstorfer, Benjamin

arXiv.org Machine Learning

We introduce the Discrete Inverse Continuity Equation (DICE) method, a generative modeling approach that learns the evolution of a stochastic process from given sample populations at a finite number of time points. Models learned with DICE capture the typically smooth and well-behaved population dynamics, rather than the dynamics of individual sample trajectories, which can exhibit complex or even chaotic behavior. The DICE loss function is developed specifically to be invariant, even in discrete time, to spatially constant but time-varying spurious constants that can emerge during training; this invariance increases training stability and robustness. Generating a trajectory of sample populations with DICE is fast because samples evolve directly in the time interval over which the stochastic process is formulated, in contrast to approaches that condition on time and then require multiple sampling steps per time step. DICE is stable to train in situations where other methods for learning population dynamics fail, and it generates representative samples at orders of magnitude lower cost than methods that have to condition on time. Numerical experiments on a wide range of problems, from random waves and Vlasov-Poisson instabilities to high-dimensional chaos, justify these assertions.
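The invariance property claimed above can be illustrated with a toy computation (this is not the DICE loss itself; the scalar field `phi`, the offset `c`, and the step size are hypothetical choices for illustration): when samples are moved by the spatial gradient of a learned scalar field, adding a spatially constant but time-varying offset c(t) to that field leaves the sample update unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar field whose spatial gradient moves the samples.
def phi(x, t):
    return 0.5 * np.sum(x**2, axis=-1) * (1.0 + 0.1 * t)

def grad(f, x, t, eps=1e-6):
    # central finite differences in each spatial coordinate
    g = np.zeros_like(x)
    for i in range(x.shape[-1]):
        dx = np.zeros_like(x)
        dx[..., i] = eps
        g[..., i] = (f(x + dx, t) - f(x - dx, t)) / (2 * eps)
    return g

samples = rng.standard_normal((100, 2))
c = lambda t: 7.3 * np.sin(t)          # spatially constant, time-varying offset

step_plain = samples - 0.1 * grad(phi, samples, 0.5)
step_shift = samples - 0.1 * grad(lambda x, t: phi(x, t) + c(t), samples, 0.5)
gap = np.max(np.abs(step_plain - step_shift))
print(gap)                             # the offset cancels in the gradient
```

A loss that is invariant to such offsets cannot be dragged off course by them during training, which is the stability mechanism the abstract refers to.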


An adaptive data sampling strategy for stabilizing dynamical systems via controller inference

Werner, Steffen W. R., Peherstorfer, Benjamin

arXiv.org Artificial Intelligence

Learning stabilizing controllers from data is an important task in engineering applications; however, collecting informative data is challenging because unstable systems often lead to rapidly growing or erratic trajectories. In this work, we propose an adaptive sampling scheme that generates data while simultaneously stabilizing the system to avoid instabilities during the data collection. Under mild assumptions, the approach provably generates data sets that are informative for stabilization and have minimal size. The numerical experiments demonstrate that controller inference with the novel adaptive sampling approach learns controllers with up to one order of magnitude fewer data samples than unguided data generation. The results show that the proposed approach opens the door to stabilizing systems in edge cases and limit states where instabilities often occur and data collection is inherently difficult.
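The closed-loop data-collection idea can be sketched for a small linear system (a minimal sketch, not the paper's adaptive sampling scheme; the plant, noise level, sample threshold, and use of LQR via Riccati value iteration are all hypothetical choices): data are gathered while a feedback gain, recomputed as data accumulate, keeps the unstable plant from blowing up:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical unstable plant x_{k+1} = A x_k + B u_k; A, B are unknown to
# the learner and are used only to simulate the system.
A = np.array([[1.1, 0.2], [0.0, 1.05]])
B = np.array([[0.0], [1.0]])

def dlqr(Ah, Bh, iters=500):
    # value iteration on the discrete-time Riccati equation, Q = R = identity
    P = np.eye(Ah.shape[0])
    for _ in range(iters):
        K = np.linalg.solve(np.eye(Bh.shape[1]) + Bh.T @ P @ Bh, Bh.T @ P @ Ah)
        P = np.eye(Ah.shape[0]) + Ah.T @ P @ (Ah - Bh @ K)
    return K

x = np.array([1.0, 1.0])
K = np.zeros((1, 2))
data = []
for k in range(30):
    u = -K @ x + 0.1 * rng.standard_normal(1)   # stabilize while exploring
    x_next = A @ x + B @ u
    data.append((x.copy(), u.copy(), x_next.copy()))
    if len(data) >= 4:                          # enough samples to identify
        Zd = np.array([np.concatenate([xi, ui]) for xi, ui, _ in data])
        Yd = np.array([xn for _, _, xn in data])
        Theta, *_ = np.linalg.lstsq(Zd, Yd, rcond=None)
        Ah, Bh = Theta[:2].T, Theta[2:].T
        K = dlqr(Ah, Bh)                        # refresh the gain as data grow
    x = x_next
final_norm = np.linalg.norm(x)
print(final_norm)   # stays small: collection remained stable despite unstable A
```

Without the feedback in the loop, the state norm would grow roughly like 1.1^k during collection; with it, the trajectory stays bounded and the collected data remain informative.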


System stabilization with policy optimization on unstable latent manifolds

Werner, Steffen W. R., Peherstorfer, Benjamin

arXiv.org Artificial Intelligence

Stability is a basic requirement when studying the behavior of dynamical systems. However, stabilizing dynamical systems via reinforcement learning is challenging because only limited data can be collected over the short time horizons before instabilities are triggered and the data become meaningless. This work introduces a reinforcement learning approach that is formulated over latent manifolds of unstable dynamics so that stabilizing policies can be trained from few data samples. The unstable manifolds are minimal in the sense that they contain the lowest-dimensional dynamics that are necessary for learning policies that guarantee stabilization. This is in stark contrast to generic latent manifolds that aim to approximate all -- stable and unstable -- system dynamics and thus are higher dimensional and often require larger amounts of data. Experiments demonstrate that the proposed approach stabilizes even complex physical systems from few data samples, in cases where other methods that operate either directly in the system state space or on generic latent manifolds fail.
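The contrast between generic and unstable latent manifolds can be sketched for a linear toy problem (a simplification for illustration: a symmetric system matrix and an LQR gain in place of a learned policy, none of which come from the paper): the unstable dynamics span only 2 of 20 dimensions, and a feedback designed purely on those latent coordinates stabilizes the full system:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical symmetric system with a 2-dimensional unstable eigenspace
# embedded in a 20-dimensional state space.
N = 20
eigs = np.concatenate([[1.2, 1.1], rng.uniform(0.1, 0.8, N - 2)])
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
A = Q @ np.diag(eigs) @ Q.T
B = rng.standard_normal((N, 2))

# Latent manifold of the unstable dynamics: eigenvectors with |lambda| > 1.
lam, vecs = np.linalg.eigh(A)
V = vecs[:, np.abs(lam) > 1.0]          # 20 x 2 basis of the unstable subspace
A_u, B_u = V.T @ A @ V, V.T @ B         # 2-dimensional latent dynamics

def dlqr(Ah, Bh, iters=500):
    # value iteration on the discrete-time Riccati equation, Q = R = identity
    P = np.eye(Ah.shape[0])
    for _ in range(iters):
        K = np.linalg.solve(np.eye(Bh.shape[1]) + Bh.T @ P @ Bh, Bh.T @ P @ Ah)
        P = np.eye(Ah.shape[0]) + Ah.T @ P @ (Ah - Bh @ K)
    return K

K = dlqr(A_u, B_u)                      # gain designed only on the latent state
M = A - B @ K @ V.T                     # closed loop under u = -K V.T x
rho = np.max(np.abs(np.linalg.eigvals(M)))
print(rho)                              # should drop below 1 (stable)
```

Because the stable modes are untouched and the 2-dimensional latent pair is stabilized, the closed-loop spectral radius falls below one even though the policy never sees the other 18 coordinates.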


Context-aware controller inference for stabilizing dynamical systems from scarce data

Werner, Steffen W. R., Peherstorfer, Benjamin

arXiv.org Artificial Intelligence

This work introduces a data-driven control approach for stabilizing high-dimensional dynamical systems from scarce data. The proposed context-aware controller inference approach is based on the observation that controllers need to act locally only on the unstable dynamics to stabilize systems. This means it is sufficient to learn the unstable dynamics alone, which are typically confined to much lower-dimensional spaces than the high-dimensional state space of the full system dynamics, so that few data samples suffice to identify them. Numerical experiments demonstrate that context-aware controller inference learns stabilizing controllers from orders of magnitude fewer data samples than traditional data-driven control techniques and variants of reinforcement learning. The experiments further show that the low data requirements of context-aware controller inference are especially beneficial in data-scarce engineering problems with complex physics, for which learning complete system dynamics is often intractable in terms of data and training costs.
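The core observation, that the unstable dynamics occupy a low-dimensional subspace and therefore need only a few samples to identify, can be sketched as follows (a toy, not the paper's method; the symmetric system and the assumption that the unstable subspace basis is known are simplifications for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 50-dimensional system whose unstable dynamics are confined to a
# 2-dimensional invariant subspace; identifying all of A would require at least
# 50 snapshots, but the unstable part is recovered here from just 3 states.
N = 50
eigs = np.concatenate([[1.2, 1.1], rng.uniform(0.1, 0.8, N - 2)])
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
A = Q @ np.diag(eigs) @ Q.T
V = Q[:, :2]                          # basis of the unstable invariant subspace

x = V @ rng.standard_normal(2)        # start on the unstable subspace
snaps = [x]
for _ in range(2):
    x = A @ x
    snaps.append(x)
Z = np.column_stack([V.T @ s for s in snaps])   # reduced 2-dim coordinates

A_u = Z[:, 1:] @ np.linalg.inv(Z[:, :-1])       # fit from only 2 transitions
unstable_eigs = np.sort(np.linalg.eigvals(A_u).real)
print(unstable_eigs)                  # recovers the unstable eigenvalues
```

Three snapshots pin down the 2-dimensional unstable dynamics exactly, while fitting the full 50-dimensional operator from data would be hopelessly underdetermined at this sample count.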


On the sample complexity of stabilizing linear dynamical systems from data

Werner, Steffen W. R., Peherstorfer, Benjamin

arXiv.org Artificial Intelligence

Learning controllers from data for stabilizing dynamical systems typically follows a two-step process of first identifying a model and then constructing a controller based on the identified model. However, learning models means identifying generic descriptions of the dynamics of systems, which can require large amounts of data and extract information that is unnecessary for the specific task of stabilization. The contribution of this work is to show that if a linear dynamical system has dimension (McMillan degree) $n$, then there always exist $n$ states from which a stabilizing feedback controller can be constructed, independent of the dimension of the representation of the observed states and the number of inputs. Building on previous work, this finding implies that any linear dynamical system can be stabilized from fewer observed states than the minimal number of states required for learning a model of the dynamics. The theoretical findings are demonstrated with numerical experiments that show the stabilization of the flow behind a cylinder from less data than necessary for learning a model.
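A toy computation in the spirit of this result (emphatically not the paper's construction; the minimal system, the basis built from observed states, and the use of LQR are all hypothetical choices for illustration): observed states live in R^6, but the McMillan degree is n = 2, and a stabilizing feedback is built from 4 observed states, far fewer than the roughly 7 transitions needed to fit a full 6-dimensional model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical system observed in R^6 whose McMillan degree is n = 2:
# x_{k+1} = Af x_k + Bf u_k with Af = V A2 V.T and Bf = V B2.
A2 = np.array([[1.2, 0.3], [0.0, 1.1]])   # unstable minimal dynamics
B2 = np.array([[0.0], [1.0]])
V, _ = np.linalg.qr(rng.standard_normal((6, 2)))
Af, Bf = V @ A2 @ V.T, V @ B2

# Collect 4 observed states (3 transitions) under random inputs.
xs, us = [V @ rng.standard_normal(2)], []
for _ in range(3):
    us.append(rng.standard_normal(1))
    xs.append(Af @ xs[-1] + Bf @ us[-1])

# Build a basis from the observed states themselves and fit a 2-dim model.
Vh, _ = np.linalg.qr(np.column_stack(xs[:2]))
Z = Vh.T @ np.column_stack(xs)
M = np.vstack([Z[:, :-1], np.column_stack(us)])   # stacked (z_k, u_k), 3 x 3
Theta = Z[:, 1:] @ np.linalg.inv(M)
A2h, B2h = Theta[:, :2], Theta[:, 2:]

def dlqr(Ah, Bh, iters=500):
    # value iteration on the discrete-time Riccati equation, Q = R = identity
    P = np.eye(Ah.shape[0])
    for _ in range(iters):
        K = np.linalg.solve(np.eye(Bh.shape[1]) + Bh.T @ P @ Bh, Bh.T @ P @ Ah)
        P = np.eye(Ah.shape[0]) + Ah.T @ P @ (Ah - Bh @ K)
    return K

K = dlqr(A2h, B2h)
Mcl = Af - Bf @ K @ Vh.T               # feedback acts on 2 learned coordinates
rho = np.max(np.abs(np.linalg.eigvals(Mcl)))
print(rho)                             # should drop below 1 (stabilized)
```

The observed trajectory never leaves the 2-dimensional minimal subspace, so the data-built basis `Vh` captures it exactly and the feedback designed on the 2-dimensional fit stabilizes the 6-dimensional representation, mirroring the counting argument in the abstract.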