Differentiable Physics-Neural Models enable Learning of Non-Markovian Closures for Accelerated Coarse-Grained Physics Simulations
Xue, Tingkai, Ooi, Chin Chun, Ge, Zhengwei, Leong, Fong Yew, Li, Hongying, Kang, Chang Wei
Numerical simulations provide key insights into many physical, real-world problems. However, while these simulations are solved on a full 3D domain, most analyses require only a reduced set of metrics (e.g. plane-level concentrations). This work presents a hybrid physics-neural model that predicts scalar transport in a complex domain orders of magnitude faster than the 3D simulation (from hours to less than one minute). This end-to-end differentiable framework jointly learns the physical model parameterization (i.e. orthotropic diffusivity) and a non-Markovian neural closure model that captures unresolved, 'coarse-grained' effects, thereby enabling stable rollouts over long time horizons. The proposed model is data-efficient (learning from only 26 training samples) and can be flexibly extended to an out-of-distribution scenario (a moving source), achieving a Spearman correlation coefficient of 0.96 at the final simulation time. Overall, the results show that this differentiable physics-neural framework enables fast, accurate, and generalizable coarse-grained surrogates for physical phenomena.
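The coupling the abstract describes — a differentiable physical update whose parameters are learned jointly with a neural closure that sees the state history — can be sketched in a few lines. The toy 1D periodic diffusion setting, the linear-in-history closure, and all names below are illustrative assumptions, not the authors' implementation; in practice both `D` and `closure_w` would be trained by backpropagating a trajectory loss through the rollout.

```python
import numpy as np

def step(u, D, dx, dt, closure_w, history):
    """One explicit diffusion step plus a learned non-Markovian closure term.

    D (the diffusivity) and closure_w (weights over the last k states) are the
    quantities that would be trained jointly in a differentiable framework.
    """
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2    # periodic Laplacian
    closure = sum(w * h for w, h in zip(closure_w, history))  # memory term
    return u + dt * (D * lap + closure)

nx, k = 64, 3
dx, dt = 1.0 / nx, 1e-4                      # dt * D / dx**2 ~ 0.2: stable
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-((x - 0.5) ** 2) / 0.01)         # initial Gaussian blob
u_init = u.copy()
history = [u.copy()] * k                     # the last k states feed the closure
D, closure_w = 0.5, np.zeros(k)              # closure switched off here: pure diffusion

for _ in range(100):
    u_next = step(u, D, dx, dt, closure_w, history)
    history = history[1:] + [u]              # slide the memory window
    u = u_next
```

With `closure_w` at zero this reduces to plain diffusion; the non-Markovian term only becomes active once training assigns it nonzero weights.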
Symbolic Regression of Data-Driven Reduced Order Model Closures for Under-Resolved, Convection-Dominated Flows
Manti, Simone, Tsai, Ping-Hsuan, Lucantonio, Alessandro, Iliescu, Traian
High-performance computing and modern numerical algorithms have made high-fidelity fluid-thermal analysis tractable in geometries of ever-increasing complexity. Despite continued advances in these areas, direct numerical simulation (DNS), large eddy simulation (LES), and even unsteady Reynolds-averaged Navier-Stokes (URANS) simulations of turbulent thermal transport remain too costly for routine analysis and design of thermal-hydraulic systems, where hundreds of cases must be considered. Reduced order models (ROMs) offer a promising alternative by leveraging expensive high-fidelity simulations (referred to as full order models or FOMs) to first extract a low-dimensional basis that captures the principal features of the underlying flow fields, and then construct computational models whose dimensions are orders of magnitude lower than the FOM dimension. In the numerical simulation of fluid flows, Galerkin ROMs (G-ROMs), which use data-driven basis functions in a Galerkin framework, have provided efficient and accurate approximations of laminar flows, such as the two-dimensional flow past a circular cylinder at low Reynolds numbers [1, 2]. However, turbulent flows are notoriously hard for the standard G-ROM. Indeed, to capture the complex dynamics, a large number of ROM basis functions is required [3], which yields high-dimensional ROMs that cannot be used in realistic applications. Thus, computationally efficient, low-dimensional ROMs are used instead. Unfortunately, these ROMs are inaccurate, since the ROM basis functions that were not used to build the G-ROM play an important role in dissipating energy from the system [4].
Toward Discretization-Consistent Closure Schemes for Large Eddy Simulation Using Reinforcement Learning
This study proposes a novel method for developing discretization-consistent closure schemes for implicitly filtered Large Eddy Simulation (LES). Here, the induced filter kernel, and thus the closure terms, are determined by the properties of the grid and the discretization operator, leading to additional computational subgrid terms that are generally unknown in a priori analysis. In this work, the task of adapting the coefficients of LES closure models is thus framed as a Markov decision process and solved in an a posteriori manner with Reinforcement Learning (RL). This optimization framework is applied to both explicit and implicit closure models. The explicit model is based on an element-local eddy viscosity model. The optimized model is found to adapt its induced viscosity within discontinuous Galerkin (DG) methods to homogenize the dissipation within an element by adding more viscosity near its center. For the implicit modeling, RL is applied to identify an optimal blending strategy for a hybrid DG and Finite Volume (FV) scheme. The resulting optimized discretization yields more accurate results in LES than either the pure DG or FV method and renders itself as a viable modeling ansatz that could initiate a novel class of high-order schemes for compressible turbulence by combining turbulence modeling with shock capturing in a single framework. All newly derived models achieve accurate results that either match or outperform traditional models for different discretizations and resolutions. Overall, the results demonstrate that the proposed RL optimization can provide discretization-consistent closures that could reduce the uncertainty in implicitly filtered LES.
Enhancing Data-Assimilation in CFD using Graph Neural Networks
Quattromini, Michele, Bucci, Michele Alessandro, Cherubini, Stefania, Semeraro, Onofrio
We present a novel machine learning approach for data assimilation in fluid mechanics, based on adjoint optimization augmented by Graph Neural Network (GNN) models. We consider as baseline the Reynolds-Averaged Navier-Stokes (RANS) equations, where the unknown is the meanflow and a closure model based on the Reynolds-stress tensor is required to compute the solution correctly. An end-to-end process is cast: first, we train a GNN model for the closure term; second, the GNN model is introduced in the training process of data assimilation, where the RANS equations act as a physics constraint for a consistent prediction. We obtain our results using direct numerical simulations based on a Finite Element Method (FEM) solver; a two-fold interface between the GNN model and the solver allows the GNN's predictions to be incorporated into the post-processing steps of the FEM analysis. The proposed scheme provides an excellent reconstruction of the meanflow without any feature selection; preliminary results show promising generalization properties over unseen flow configurations.
Comparison of neural closure models for discretised PDEs
Melchers, Hugo, Crommelin, Daan, Koren, Barry, Menkovski, Vlado, Sanderse, Benjamin
Neural closure models have recently been proposed as a method for efficiently approximating small scales in multiscale systems with neural networks. The choice of loss function and associated training procedure has a large effect on the accuracy and stability of the resulting neural closure model. In this work, we systematically compare three distinct procedures: "derivative fitting", "trajectory fitting" with discretise-then-optimise, and "trajectory fitting" with optimise-then-discretise. Derivative fitting is conceptually the simplest and computationally the most efficient approach and is found to perform reasonably well on one of the test problems (Kuramoto-Sivashinsky) but poorly on the other (Burgers). Trajectory fitting is computationally more expensive but is more robust and is therefore the preferred approach. Of the two trajectory fitting procedures, the discretise-then-optimise approach produces more accurate models than the optimise-then-discretise approach. While the optimise-then-discretise approach can still produce accurate models, care must be taken in choosing the length of the trajectories used for training, in order to train the models on long-term behaviour while still producing reasonably accurate gradients during training. Two existing theorems are interpreted in a novel way that gives insight into the long-term accuracy of a neural closure model based on how accurate it is in the short term.
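The distinction between the first two training procedures compared above can be made concrete with a toy scalar ODE. The dynamics, the one-parameter model, and the parameter value below are illustrative assumptions; the optimise-then-discretise variant (adjoint of the continuous system) is omitted for brevity.

```python
import numpy as np

def rhs_true(u):                 # "exact" right-hand side (stand-in for fine scales)
    return -u**3

def rhs_model(u, theta):         # closure model with one trainable parameter
    return theta * u

def rollout(u0, theta, dt, n):   # discretise-then-optimise differentiates through this
    u, traj = u0, [u0]
    for _ in range(n):
        u = u + dt * rhs_model(u, theta)
        traj.append(u)
    return np.array(traj)

# Reference trajectory from the "true" dynamics.
u0, dt, n = 1.0, 0.01, 100
ref = [u0]
for _ in range(n):
    ref.append(ref[-1] + dt * rhs_true(ref[-1]))
ref = np.array(ref)

# Derivative fitting: match the RHS pointwise along the reference trajectory.
loss_deriv = np.mean((rhs_model(ref, -1.0) - rhs_true(ref)) ** 2)

# Trajectory fitting: match the states produced by rolling the model out.
loss_traj = np.mean((rollout(u0, -1.0, dt, n) - ref) ** 2)
```

The derivative-fitting loss never runs the solver, which is why it is cheap but blind to long-term stability, whereas the trajectory loss penalises accumulated rollout error directly.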
Generalized Neural Closure Models with Interpretability
Gupta, Abhinav, Lermusiaux, Pierre F. J.
Improving the predictive capability and computational cost of dynamical models is often at the heart of augmenting computational physics with machine learning (ML). However, most learning results are limited in interpretability and generalization over different computational grid resolutions, initial and boundary conditions, domain geometries, and physical or problem-specific parameters. In the present study, we simultaneously address all these challenges by developing the novel and versatile methodology of unified neural partial delay differential equations. We augment existing/low-fidelity dynamical models directly in their partial differential equation (PDE) forms with both Markovian and non-Markovian neural network (NN) closure parameterizations. The melding of the existing models with NNs in the continuous spatiotemporal space followed by numerical discretization automatically allows for the desired generalizability. The Markovian term is designed to enable extraction of its analytical form and thus provides interpretability. The non-Markovian terms allow accounting for inherently missing time delays needed to represent the real world. We obtain adjoint PDEs in the continuous form, thus enabling direct implementation across differentiable and non-differentiable computational physics codes, different ML frameworks, and treatment of nonuniformly-spaced spatiotemporal training data. We demonstrate the new generalized neural closure models (gnCMs) framework using four sets of experiments based on advecting nonlinear waves, shocks, and ocean acidification models. Our learned gnCMs discover missing physics, find leading numerical error terms, discriminate among candidate functional forms in an interpretable fashion, achieve generalization, and compensate for the lack of complexity in simpler models. Finally, we analyze the computational advantages of our new framework.
A Perspective on Machine Learning Methods in Turbulence Modelling
This work presents a review of the current state of research in data-driven turbulence closure modeling. It offers a perspective on the challenges and open issues, but also on the advantages and promises of machine learning methods applied to parameter estimation, model identification, closure term reconstruction and beyond, mostly from the perspective of Large Eddy Simulation and related techniques. We stress that consistency of the training data, the model, the underlying physics and the discretization is a key issue that needs to be considered for a successful ML-augmented modeling strategy. In order to make the discussion useful for non-experts in either field, we introduce both the modeling problem in turbulence and the prominent ML paradigms and methods in a concise and self-consistent manner. In the following, we present a survey of the current data-driven model concepts and methods, highlight important developments and put them into the context of the discussed challenges.
A machine learning framework for LES closure terms
In the present work, we explore the capability of artificial neural networks (ANNs) to predict the closure terms for large eddy simulations (LES) solely from coarse-scale data. To this end, we derive a consistent framework for LES closure models, with special emphasis laid upon the incorporation of implicit discretization-based filters and numerical approximation errors. We investigate implicit filter types, which are inspired by the solution representation of discontinuous Galerkin and finite volume schemes and mimic the behaviour of the discretization operator, and a global Fourier cutoff filter as a representative of a typical explicit LES filter. Within the perfect LES framework, we compute the exact closure terms for the different LES filter functions from direct numerical simulation results of decaying homogeneous isotropic turbulence. Multiple ANNs with a multilayer perceptron (MLP) or a gated recurrent unit (GRU) architecture are trained to predict the computed closure terms solely from coarse-scale input data. For the given application, the GRU architecture clearly outperforms the MLP networks in terms of accuracy, whilst reaching up to 99.9% cross-correlation between the networks' predictions and the exact closure terms for all considered filter functions. The GRU networks are also shown to generalize well across different LES filters and resolutions. The present study can thus be seen as a starting point for the investigation of data-based modeling approaches for LES, which not only include the physical closure terms, but account for the discretization effects in implicitly filtered LES as well.
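For the explicit Fourier cutoff filter named above, the exact closure term is the difference between the filtered nonlinear term and the nonlinear term of the filtered field (filtering and the nonlinearity do not commute). A minimal 1D sketch on a synthetic two-scale field, a stand-in for actual DNS data, with all names and parameters chosen for illustration:

```python
import numpy as np

n = 256
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
k = 2 * np.pi * np.fft.rfftfreq(n, d=2 * np.pi / n)  # integer wavenumbers 0..n/2

def ddx(u):
    """Spectral derivative on the periodic domain."""
    return np.fft.irfft(1j * k * np.fft.rfft(u), n=n)

def cutoff_filter(u, k_max):
    """Sharp Fourier cutoff: zero every mode above k_max."""
    uh = np.fft.rfft(u)
    uh[k_max + 1:] = 0.0
    return np.fft.irfft(uh, n=n)

u = np.sin(x) + 0.5 * np.sin(7 * x)   # synthetic "DNS" field with two scales
k_max = 6                             # the "LES" resolves modes 0..6 only

# Exact closure term for a Burgers-type nonlinearity u * du/dx.
u_bar = cutoff_filter(u, k_max)
closure = cutoff_filter(u * ddx(u), k_max) - u_bar * ddx(u_bar)
```

The closure is nonzero precisely because the unresolved mode (here k = 7) interacts nonlinearly with the resolved field and deposits energy back into resolved wavenumbers.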
Data-Driven Discovery of Coarse-Grained Equations
Bakarji, Joseph, Tartakovsky, Daniel M.
Department of Energy Resources Engineering, Stanford University, Stanford, CA 94305, USA. A general method for learning probability density function (PDF) equations based on Monte Carlo simulations of random fields is proposed. Sparse linear regression is used to discover the relevant terms of a partial differential equation for the distribution. The various properties of PDF equations, such as smoothness and conservation, make them well suited to equation-learning methods. The results show a promising direction for data-driven discovery of coarse-grained equations in general.
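The sparse-regression step described above (selecting the few active terms of the governing equation from a library of candidates) is commonly implemented with sequential thresholded least squares. The synthetic data, candidate library, and names below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: the "true" dynamics use only two of the candidate terms.
u = rng.uniform(-1.0, 1.0, 500)
du = 0.8 * u - 0.3 * u**3                  # ground truth: 0.8*u - 0.3*u^3

# Library of candidate terms (columns): [1, u, u^2, u^3, u^4]
library = np.column_stack([np.ones_like(u), u, u**2, u**3, u**4])

def stls(library, target, threshold=0.1, iters=10):
    """Sequential thresholded least squares: fit, zero small coefficients, refit."""
    xi = np.linalg.lstsq(library, target, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(library[:, big], target, rcond=None)[0]
    return xi

xi = stls(library, du)                     # recovers the two active terms
```

On this noiseless data the procedure recovers the sparse coefficient vector exactly; with noisy Monte Carlo estimates the threshold becomes the knob that trades sparsity against fit.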