Data-driven Effective Modeling of Multiscale Stochastic Dynamical Systems

Chen, Yuan, Xiu, Dongbin

arXiv.org Machine Learning

We present a numerical method for learning the dynamics of the slow components of unknown multiscale stochastic dynamical systems. Although the governing equations of the systems are unknown, bursts of observation data of the slow variables are available. Using these data, the proposed method constructs a generative stochastic model that accurately captures the effective dynamics of the slow variables in distribution. We present a comprehensive set of numerical examples to demonstrate the performance of the proposed method.
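
As a rough illustration of the setting described in this abstract (not the authors' actual algorithm), the sketch below shows how short bursts of slow-variable observations might be arranged into one-step training pairs, and how a learned stochastic one-step map could then be sampled to produce trajectories that match the slow dynamics in distribution. The burst array layout, the generator `G`, and the toy linear map are all illustrative assumptions.

```python
# Illustrative sketch only: organizing "bursts" of slow-variable data into
# one-step training pairs and rolling out a learned stochastic one-step map.
# The generator `G` is a placeholder, not the paper's method.
import numpy as np

def bursts_to_pairs(bursts):
    """bursts: array (num_bursts, burst_length, dim) of slow-variable observations.
    Returns (x_n, x_{n+1}) pairs for fitting a one-step effective model."""
    dim = bursts.shape[-1]
    x_in = bursts[:, :-1, :].reshape(-1, dim)
    x_out = bursts[:, 1:, :].reshape(-1, dim)
    return x_in, x_out

def rollout(G, x0, n_steps, noise_dim, rng):
    """Roll out a generative one-step map x_{n+1} = G(x_n, z_n), z_n ~ N(0, I)."""
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        z = rng.standard_normal(noise_dim)
        traj.append(G(traj[-1], z))
    return np.stack(traj)

# Example usage with a toy linear "generator" standing in for a trained model.
rng = np.random.default_rng(0)
toy_G = lambda x, z: 0.95 * x + 0.1 * z
bursts = rng.standard_normal((100, 10, 2))   # 100 bursts, 10 steps, 2 slow variables
x_in, x_out = bursts_to_pairs(bursts)        # training pairs to fit G on
sample_path = rollout(toy_G, x0=np.zeros(2), n_steps=50, noise_dim=2, rng=rng)
```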


Learning effective dynamics from data-driven stochastic systems

Feng, Lingyu, Gao, Ting, Dai, Min, Duan, Jinqiao

arXiv.org Machine Learning

Numerous complex systems in science, engineering, chemistry, and materials science exhibit multiscale properties in their dynamical evolution [1-4]. By considering models at different scales simultaneously, one aims to obtain both the efficiency of macroscopic models and the accuracy of microscopic models. For example, approaches in chemistry often use quantum mechanical models in the reaction region and classical molecular models elsewhere [5]. Moreover, because noisy observations arise in virtually all systems from internal or external factors, stochastic dynamical systems play an important role in modeling such phenomena. It is therefore of great importance to study multiscale stochastic dynamical systems [5, 6]. To better understand the intrinsic nature of such complex systems, researchers typically investigate their effective dynamics, such as invariant manifolds, global attractors, tipping points, noise-induced bifurcations, and transition pathways [7-11]. These dynamical behaviors capture the fundamental structures of the system as it evolves over time or through parameter space.


GANs and Closures: Micro-Macro Consistency in Multiscale Modeling

Crabtree, Ellis R., Bello-Rivas, Juan M., Ferguson, Andrew L., Kevrekidis, Ioannis G.

arXiv.org Artificial Intelligence

Sampling the phase space of molecular systems (and, more generally, of complex systems effectively modeled by stochastic differential equations, SDEs) is a crucial modeling step in many fields, from protein folding to materials discovery. These problems are often multiscale in nature: they can be described in terms of low-dimensional effective free energy surfaces parametrized by a small number of "slow" reaction coordinates; the remaining "fast" degrees of freedom populate an equilibrium measure conditioned on the reaction coordinate values. Sampling procedures for such problems are used to estimate effective free energy differences as well as ensemble averages with respect to the conditional equilibrium distributions; these latter averages lead to closures for effective reduced dynamic models. Over the years, enhanced sampling techniques coupled with molecular simulation have been developed; they often use knowledge of the system order parameters in order to sample the corresponding conditional equilibrium distributions, and estimate ensemble averages of observables. An intriguing analogy arises with the field of machine learning (ML), where generative adversarial networks (GANs) can produce high-dimensional samples from low-dimensional probability distributions. This sample generation is what is called, in equation-free multiscale modeling, a "lifting process": it returns plausible (or realistic) high-dimensional space realizations of a model state, from information about its low-dimensional representation. In this work, we elaborate on this analogy, and we present an approach that couples physics-based simulations and biasing methods for sampling conditional distributions with ML-based conditional generative adversarial networks (cGANs) for the same task. The "coarse descriptors" on which we condition the fine scale realizations can either be known a priori or learned through nonlinear dimensionality reduction (here, using diffusion maps). We suggest that this may bring out the best features of both approaches: we demonstrate that a framework that couples cGANs with physics-based enhanced sampling techniques can improve multiscale SDE dynamical systems sampling, and even shows promise for systems of increasing complexity (here, simple molecules).
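
A minimal sketch of the "lifting" idea described in this abstract, assuming PyTorch: a conditional generator that maps a low-dimensional coarse descriptor plus noise to a plausible fine-scale configuration, paired with a conditional discriminator. The network sizes, dimensions, and names here are illustrative placeholders, not the authors' architecture.

```python
# Sketch (assumption, not the authors' code): a conditional GAN that "lifts" a
# low-dimensional coarse descriptor y to fine-scale realizations x.
import torch
import torch.nn as nn

coarse_dim, fine_dim, noise_dim = 2, 30, 16   # illustrative dimensions

generator = nn.Sequential(            # G(z, y) -> fine-scale sample x
    nn.Linear(noise_dim + coarse_dim, 128), nn.ReLU(),
    nn.Linear(128, fine_dim),
)
discriminator = nn.Sequential(        # D(x, y) -> real/fake score (training omitted)
    nn.Linear(fine_dim + coarse_dim, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

def lift(y, n_samples):
    """Draw n_samples fine-scale realizations conditioned on coarse descriptor y."""
    z = torch.randn(n_samples, noise_dim)
    y_rep = y.expand(n_samples, -1)
    return generator(torch.cat([z, y_rep], dim=1))

# e.g. 100 plausible fine-scale states consistent with one reaction-coordinate value
samples = lift(torch.tensor([[0.3, -1.2]]), n_samples=100)
```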


Learning Subgrid-scale Models with Neural Ordinary Differential Equations

Kang, Shinhoo, Constantinescu, Emil M.

arXiv.org Artificial Intelligence

We propose a new approach, based on neural ordinary differential equations (NODEs), to learning subgrid-scale models when simulating partial differential equations (PDEs) solved by the method of lines, as well as their representation as chaotic ordinary differential equations. Solving systems with fine temporal and spatial grid scales is an ongoing computational challenge, and closure models are generally difficult to tune. Machine learning approaches have increased the accuracy and efficiency of computational fluid dynamics solvers. In this approach, neural networks are used to learn the coarse- to fine-grid map, which can be viewed as a subgrid-scale parameterization. We propose a strategy that uses the NODE and partial knowledge to learn the source dynamics at a continuous level. Our method inherits the advantages of NODEs and can be used to parameterize subgrid scales, approximate coupling operators, and improve the efficiency of low-order solvers. Numerical results with the two-scale Lorenz 96 ODE, the convection-diffusion PDE, and the viscous Burgers' PDE are used to illustrate this approach.
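
To make the NODE-as-closure idea concrete, here is a hedged sketch, assuming PyTorch and a fixed-step RK4 integrator: a known coarse right-hand side (the slow Lorenz 96 tendency is used as a stand-in) augmented by a neural-network source term, trained by differentiating through the time integration. The network size, step size, and reference data are placeholders, not the paper's setup.

```python
# Sketch under stated assumptions (not the authors' implementation): a coarse
# model augmented with an NN closure, trained as a neural ODE.
import torch
import torch.nn as nn

class ClosedRHS(nn.Module):
    """dX/dt = f_coarse(X) + NN(X), where NN plays the role of the subgrid closure."""
    def __init__(self, dim, forcing=8.0):
        super().__init__()
        self.forcing = forcing
        self.closure = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def coarse(self, x):
        # Lorenz 96 slow-variable tendency, used here as the known coarse model
        return (torch.roll(x, -1, -1) - torch.roll(x, 2, -1)) * torch.roll(x, 1, -1) - x + self.forcing

    def forward(self, x):
        return self.coarse(x) + self.closure(x)

def rk4_step(rhs, x, dt):
    k1 = rhs(x); k2 = rhs(x + 0.5 * dt * k1)
    k3 = rhs(x + 0.5 * dt * k2); k4 = rhs(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# One training step: match a short rollout to (hypothetical) filtered fine-scale data.
rhs = ClosedRHS(dim=8)
opt = torch.optim.Adam(rhs.parameters(), lr=1e-3)
x0 = torch.randn(8)
reference = torch.randn(10, 8)          # placeholder for reference trajectory data
opt.zero_grad()
x, loss = x0, 0.0
for n in range(10):
    x = rk4_step(rhs, x, dt=0.01)
    loss = loss + ((x - reference[n]) ** 2).mean()
loss.backward(); opt.step()
```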


Kernel Analog Forecasting: Multiscale Test Problems

Burov, Dmitry, Giannakis, Dimitrios, Manohar, Krithika, Stuart, Andrew

arXiv.org Machine Learning

Data-driven prediction is becoming increasingly widespread as the volume of data available grows and as algorithmic development matches this growth. The nature of the predictions made, and the manner in which they should be interpreted, depends crucially on the extent to which the variables chosen for prediction are Markovian, or approximately Markovian. Multiscale systems provide a framework in which this issue can be analyzed. In this work, kernel analog forecasting methods are studied from the perspective of data generated by multiscale dynamical systems. The problems chosen exhibit a variety of different Markovian closures, using both averaging and homogenization; furthermore, settings where scale separation is not present and the predicted variables are non-Markovian are also considered. The studies provide guidance for the interpretation of data-driven prediction methods when used in practice.
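
As a simple illustration of the kernel analog forecasting idea (a basic form, not the paper's exact operator): the forecast at lead time tau is a kernel-weighted average of where historical "analogs" of the current state went tau steps later. The Gaussian kernel, bandwidth, and toy time series below are illustrative assumptions.

```python
# Illustrative sketch: a basic kernel analog forecast on a training trajectory.
import numpy as np

def kernel_analog_forecast(history, x_now, tau, bandwidth=1.0):
    """history: (N, dim) training trajectory; x_now: (dim,) current state.
    Returns an estimate of E[x(t + tau) | x(t) = x_now] built from analogs."""
    anchors = history[:-tau]                      # states with a known tau-step future
    futures = history[tau:]                       # their observed futures
    d2 = np.sum((anchors - x_now) ** 2, axis=1)   # squared distances to the query state
    w = np.exp(-d2 / bandwidth**2)
    w /= w.sum()
    return w @ futures                            # kernel-weighted average of the futures

# Example on a scalar AR(1)-like series standing in for slow-variable data.
rng = np.random.default_rng(1)
traj = np.zeros((2000, 1))
for n in range(1999):
    traj[n + 1] = 0.9 * traj[n] + 0.1 * rng.standard_normal(1)
print(kernel_analog_forecast(traj, x_now=np.array([0.5]), tau=10, bandwidth=0.2))
```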