Galerkin method
Gaussian Variational Schemes on Bounded and Unbounded Domains
Actor, Jonas A., Gruber, Anthony, Cyr, Eric C., Trask, Nathaniel
A machine-learnable variational scheme using Gaussian radial basis functions (GRBFs) is presented and used to approximate linear problems on bounded and unbounded domains. In contrast to standard mesh-free methods, which use GRBFs to discretize strong-form differential equations, this work exploits the relationship between integrals of GRBFs, their derivatives, and polynomial moments to produce exact quadrature formulae which enable weak-form expressions. Combined with trainable GRBF means and covariances, this leads to a flexible, generalized Galerkin variational framework which is applied in the infinite-domain setting, where the scheme is conforming, as well as in the bounded-domain setting, where it is not. Error rates for the proposed GRBF scheme are derived in each case, and examples are presented demonstrating the utility of this approach as a surrogate modeling technique.
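The key ingredient the abstract alludes to is that products of Gaussians integrate in closed form, so weak-form (Galerkin) matrices can be assembled exactly instead of by numerical quadrature. The following is a minimal 1D sketch of that idea, not the authors' code: it assembles the exact mass (Gram) matrix for a set of Gaussian basis functions over the whole real line; the means, widths, and the helper name grbf_mass_entry are illustrative choices.

```python
import numpy as np
from scipy.integrate import quad

def grbf_mass_entry(mu_i, s_i, mu_j, s_j):
    """Exact integral over R of exp(-(x-mu_i)^2/(2 s_i^2)) * exp(-(x-mu_j)^2/(2 s_j^2))."""
    s2 = s_i**2 + s_j**2
    return np.sqrt(2.0 * np.pi * s_i**2 * s_j**2 / s2) * np.exp(-(mu_i - mu_j)**2 / (2.0 * s2))

# Assemble the exact Gram/mass matrix for a set of 1D Gaussian basis functions.
mus = np.linspace(-2.0, 2.0, 9)        # trainable means in the paper; fixed here
sigmas = 0.5 * np.ones_like(mus)       # trainable covariances in the paper; fixed here
M = np.array([[grbf_mass_entry(mi, si, mj, sj)
               for mj, sj in zip(mus, sigmas)]
              for mi, si in zip(mus, sigmas)])

# Sanity check of one entry against numerical quadrature.
num, _ = quad(lambda x: np.exp(-(x - mus[0])**2 / (2 * sigmas[0]**2))
                        * np.exp(-(x - mus[3])**2 / (2 * sigmas[3]**2)), -np.inf, np.inf)
assert np.isclose(M[0, 3], num)
```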
Nonlinear dimensionality reduction then and now: AIMs for dissipative PDEs in the ML era
Koronaki, Eleni D., Evangelou, Nikolaos, Martin-Linares, Cristina P., Titi, Edriss S., Kevrekidis, Ioannis G.
This study presents a collection of purely data-driven workflows for constructing reduced-order models (ROMs) for distributed dynamical systems. The ROMs we focus on are data-assisted models inspired by, and templated upon, the theory of Approximate Inertial Manifolds (AIMs); the particular motivation is the so-called post-processing Galerkin method of Garcia-Archilla, Novo and Titi. Its applicability can be extended: the need for accurate truncated Galerkin projections and for deriving closed-form corrections can be circumvented using machine learning tools. When the right latent variables are not a priori known, we illustrate how autoencoders as well as Diffusion Maps (a manifold learning scheme) can be used to discover good sets of latent variables and test their explainability. The proposed methodology can express the ROMs in terms of (a) theoretical (Fourier coefficients), (b) linear data-driven (POD modes) and/or (c) nonlinear data-driven (Diffusion Maps) coordinates. Both Black-Box and (theoretically-informed and data-corrected) Gray-Box models are described; the necessity for the latter arises when truncated Galerkin projections are so inaccurate as to not be amenable to post-processing. We use the Chafee-Infante reaction-diffusion and the Kuramoto-Sivashinsky dissipative partial differential equations to illustrate and successfully test the overall framework.
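As one concrete piece of this workflow, the abstract mentions Diffusion Maps as a way to discover latent variables from snapshot data. Below is a minimal, generic Diffusion Maps sketch (Gaussian kernel, Markov normalization, leading nontrivial eigenvectors), not the authors' implementation; the kernel scale eps, the number of retained coordinates, and the random stand-in data are illustrative.

```python
import numpy as np

def diffusion_maps(X, eps, n_coords=2):
    """Minimal Diffusion Maps: Gaussian kernel, Markov normalization, leading eigenvectors."""
    d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)   # pairwise squared distances
    K = np.exp(-d2 / eps)                                       # kernel matrix
    P = K / K.sum(axis=1, keepdims=True)                        # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Skip the trivial constant eigenvector; scale coordinates by their eigenvalues.
    return vecs[:, 1:n_coords + 1] * vals[1:n_coords + 1]

# Example: PDE snapshots would play the role of X here.
X = np.random.default_rng(0).normal(size=(200, 5))
latent = diffusion_maps(X, eps=1.0)
```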
Going Deeper with Spectral Decompositions
Cabannes, Vivien, Bach, Francis
Eigendecompositions and singular value decompositions are ubiquitous in applied mathematics. They can serve as a basis to define good features in machine learning pipelines (Belkin and Niyogi, 2003; Coifman and Lafon, 2006; Balestriero and LeCun, 2022), while a set of good features naturally defines pullback distances on the original data. Those features and distances are naturally referred to as "spectral embeddings" and "spectral distances". The latter are thought to provide meaningful geometries on the data, which explains their use for clustering (Belkin and Niyogi, 2004; Schubert et al., 2018), as well as for diffusion models (Chen and Lipman, 2023). In the machine learning community, spectral decompositions are usually derived from the eigendecompositions of different graph Laplacians built on top of the data (Chung, 1997; Zhu et al., 2003; Ham et al., 2004). However, those methods are known to scale poorly with the input dimension (Bengio et al., 2006; Singer, 2006; Hein et al., 2007), although they have found applications in many different fields, such as molecular simulation (Glielmo et al., 2021), acoustics (Bianco et al., 2019) or the study of gene interaction (van Dijk et al., 2018). In this paper, we suggest a different approach to approximate the spectral decompositions of a large class of operators. Our method consists of restricting infinite-dimensional operators to a basis of simple functions, which is usually referred to as the Galerkin, Ritz or Rayleigh method (Singer, 1962), if not Bubnov or Petrov (Fluid Dynamics, 2012), depending on the research community. We make the following contributions.
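The restriction to a finite basis that the abstract describes is the classical Galerkin/Rayleigh-Ritz construction: form the operator and Gram matrices in the basis and solve a generalized eigenproblem. A textbook instance for the 1D Laplacian with a hat-function basis is sketched below for orientation only; the paper's setting (learned features, general operators) is not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh

# Rayleigh-Ritz / Galerkin approximation of the eigenvalues of -d^2/dx^2 on (0,1)
# with Dirichlet boundary conditions, using piecewise-linear hat functions.
n = 200                      # number of interior nodes
h = 1.0 / (n + 1)

# Stiffness A_ij = int phi_i' phi_j' dx and mass B_ij = int phi_i phi_j dx (tridiagonal).
A = (np.diag(np.full(n, 2.0 / h)) + np.diag(np.full(n - 1, -1.0 / h), 1)
     + np.diag(np.full(n - 1, -1.0 / h), -1))
B = (np.diag(np.full(n, 4.0 * h / 6.0)) + np.diag(np.full(n - 1, h / 6.0), 1)
     + np.diag(np.full(n - 1, h / 6.0), -1))

# Generalized eigenproblem A c = lambda B c; compare with the exact values (k*pi)^2.
vals = eigh(A, B, eigvals_only=True)[:5]
exact = (np.arange(1, 6) * np.pi) ** 2
print(np.c_[vals, exact])
```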
An unsupervised machine-learning-based shock sensor for high-order supersonic flow solvers
Mateo-Gabín, Andrés, Tlales, Kenza, Valero, Eusebio, Ferrer, Esteban, Rubio, Gonzalo
We present a novel unsupervised machine-learning shock sensor based on Gaussian Mixture Models (GMMs). The proposed GMM sensor demonstrates remarkable accuracy in detecting shocks and is robust across diverse test cases with significantly less parameter tuning than other options. We compare the GMM-based sensor with state-of-the-art alternatives. All methods are integrated into a high-order compressible discontinuous Galerkin solver, where two stabilization approaches are coupled to the sensor to provide examples of possible applications. The Sedov blast and double Mach reflection cases demonstrate that our proposed sensor can enhance hybrid sub-cell flux-differencing formulations by providing accurate information about the nodes that require low-order blending. In addition, supersonic test cases including high Reynolds numbers showcase the sensor's performance when used to introduce entropy-stable artificial viscosity to capture shocks, demonstrating the same effectiveness as fine-tuned state-of-the-art sensors. The adaptive nature and ability to function without extensive training datasets make this GMM-based sensor suitable for complex geometries and varied flow configurations. Our study reveals the potential of unsupervised machine-learning methods, exemplified by this GMM sensor, to improve the robustness and efficiency of advanced CFD codes.
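At its core, a GMM-based sensor of this kind clusters a per-element troubled-cell indicator and flags the high-mean component. The sketch below illustrates that step on synthetic data with scikit-learn; the choice of feature (a stand-in for something like a per-element gradient magnitude), the two-component mixture, and the helper name gmm_shock_flags are assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_shock_flags(feature):
    """Flag elements whose indicator falls in the high-mean component
    of a two-component Gaussian mixture."""
    f = np.asarray(feature).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(f)
    labels = gmm.predict(f)
    shock_component = np.argmax(gmm.means_.ravel())
    return labels == shock_component

# Synthetic stand-in: smooth background plus a few large-gradient "shock" elements.
rng = np.random.default_rng(1)
feature = np.concatenate([rng.normal(0.1, 0.02, 950), rng.normal(2.0, 0.3, 50)])
flags = gmm_shock_flags(feature)
print(flags.sum(), "elements flagged")
```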
Automatic stabilization of finite-element simulations using neural networks and hierarchical matrices
Sluzalec, Tomasz, Dobija, Mateusz, Paszynska, Anna, Muga, Ignacio, Paszynski, Maciej
Petrov-Galerkin formulations with optimal test functions allow for the stabilization of finite element simulations. In particular, given a discrete trial space, the optimal test space induces a numerical scheme delivering the best approximation in terms of a problem-dependent energy norm. This ideal approach has two shortcomings: first, we need to explicitly know the set of optimal test functions; and second, the optimal test functions may have large supports inducing expensive dense linear systems. Nevertheless, parametric families of PDEs are an example where it is worth investing some (offline) computational effort to obtain stabilized linear systems that can be solved efficiently, for a given set of parameters, in an online stage. Therefore, as a remedy for the first shortcoming, we explicitly compute (offline) a function mapping any PDE-parameter to the matrix of coefficients of optimal test functions (in a basis expansion) associated with that PDE-parameter. Next, as a remedy for the second shortcoming, we use low-rank approximation to hierarchically compress the (non-square) matrix of coefficients of optimal test functions. In order to accelerate this process, we train a neural network to learn a critical bottleneck of the compression algorithm (for a given set of PDE-parameters). When solving online the resulting (compressed) Petrov-Galerkin formulation, we employ a GMRES iterative solver with inexpensive matrix-vector multiplications thanks to the low-rank features of the compressed matrix. We perform experiments showing that the full online procedure is as fast as the original (unstable) Galerkin approach. In other words, we get the stabilization with hierarchical matrices and neural networks practically for free. We illustrate our findings by means of 2D Eriksson-Johnson and Helmholtz model problems.
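At the linear-algebra level, the optimal test functions are obtained by solving the test-space Gram system G T = B, after which the stabilized system B^T G^{-1} B is symmetric positive definite. The sketch below shows only this uncompressed offline step with random stand-in matrices; the paper's hierarchical-matrix compression and neural-network acceleration are not reproduced.

```python
import numpy as np

# Discrete optimal-test-function (Petrov-Galerkin) stabilization at the matrix level.
# G  : Gram matrix of the test-space inner product (SPD), size Nv x Nv
# Bm : Bm[i, j] = b(u_j, v_i), bilinear form between trial basis u_j and test basis v_i
rng = np.random.default_rng(0)
Nv, Nu = 40, 10                                              # enriched test space, smaller trial space
Q = rng.normal(size=(Nv, Nv)); G = Q @ Q.T + Nv * np.eye(Nv)  # SPD stand-in Gram matrix
Bm = rng.normal(size=(Nv, Nu))                               # stand-in bilinear-form matrix
load = rng.normal(size=Nv)                                   # stand-in load vector l(v_i)

# Coefficients of the optimal test functions: columns of T solve G T = Bm.
T = np.linalg.solve(G, Bm)

# Stabilized system (Bm^T G^{-1} Bm) u = Bm^T G^{-1} l, SPD by construction.
S = Bm.T @ T
rhs = T.T @ load
u = np.linalg.solve(S, rhs)
```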
Computing Anti-Derivatives using Deep Neural Networks
Chakraborty, D., Gopalakrishnan, S.
This paper presents a novel algorithm to obtain the closed-form anti-derivative of a function using a Deep Neural Network architecture. In the past, mathematicians have developed several numerical techniques to approximate the values of definite integrals, but primitives or indefinite integrals are often non-elementary. Anti-derivatives are required when there are several parameters in an integrand and the integral obtained is a function of those parameters. There is no general theoretical method that can do this for an arbitrary function. Some existing ways to get around this are primarily based on either curve fitting or infinite series approximation of the integrand, which is then integrated analytically. Curve fitting approximations are inaccurate for highly non-linear functions and require a different approach for every problem. On the other hand, the infinite series approach does not give a closed-form solution, and its truncated forms are often inaccurate. We claim that using a single method for all integrals, our algorithm can approximate anti-derivatives to any required accuracy. We have used this algorithm to obtain the anti-derivatives of several functions, including non-elementary and oscillatory integrals. This paper also shows the applications of our method to get the closed-form expressions of elliptic integrals, Fermi-Dirac integrals, and cumulative distribution functions, and to decrease the computation time of the Galerkin method for differential equations.
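The underlying idea (though not necessarily the paper's exact algorithm) can be sketched as follows: train a network F(x) whose automatic derivative matches the integrand f(x), so that F acts as an anti-derivative up to a constant. The architecture, optimizer settings, and the F(0)=0 pinning term below are illustrative assumptions.

```python
import torch

# Train a small network F(x) so that dF/dx matches a given integrand f(x).
f = lambda x: torch.exp(-x**2)          # integrand; its primitive is non-elementary (erf)

net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(-3, 3, 512).reshape(-1, 1)
for step in range(2000):
    xs = x.clone().requires_grad_(True)
    F = net(xs)
    dF = torch.autograd.grad(F.sum(), xs, create_graph=True)[0]   # dF/dx via autodiff
    # Match the derivative to the integrand and pin F(0) = 0 to fix the constant.
    loss = torch.mean((dF - f(xs))**2) + net(torch.zeros(1, 1)).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# After training, net(x) approximates sqrt(pi)/2 * erf(x).
```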
Machine Learning based refinement strategies for polyhedral grids with applications to Virtual Element and polyhedral Discontinuous Galerkin methods
Antonietti, P. F., Dassi, F., Manuzzi, E.
We propose two new strategies based on Machine Learning techniques to handle polyhedral grid refinement, to be possibly employed within an adaptive framework. The first one employs the k-means clustering algorithm to partition the points of the polyhedron to be refined. This strategy is a variation of the well-known Centroidal Voronoi Tessellation. The second one employs Convolutional Neural Networks to classify the "shape" of an element so that "ad-hoc" refinement criteria can be defined. This strategy can be used to enhance existing refinement strategies, including the k-means strategy, at a low online computational cost. We test the proposed algorithms considering two families of finite element methods that support arbitrarily shaped polyhedral elements, namely the Virtual Element Method (VEM) and the Polygonal Discontinuous Galerkin (PolyDG) method. We demonstrate that these strategies do preserve the structure and the quality of the underlying grids, reducing the overall computational cost and mesh complexity.
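A hedged sketch of the first (k-means) strategy in isolation: cluster the points of an element so that each cluster can seed a refined sub-element, in the spirit of a Centroidal Voronoi Tessellation. The 2D L-shaped point cloud, the number of children, and the helper name kmeans_partition are illustrative; the paper's CNN-based shape classification is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_partition(points, n_children=4, seed=0):
    """Partition the points of an element into n_children groups; each group
    would seed one refined sub-element."""
    km = KMeans(n_clusters=n_children, n_init=10, random_state=seed).fit(points)
    return km.labels_, km.cluster_centers_

# Example: points sampled inside an L-shaped polygonal element.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(400, 2))
pts = pts[~((pts[:, 0] > 0.5) & (pts[:, 1] > 0.5))]   # carve out a corner -> L-shape
labels, centers = kmeans_partition(pts)
```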
Methods to Recover Unknown Processes in Partial Differential Equations Using Data
Chen, Zhen, Wu, Kailiang, Xiu, Dongbin
We study the problem of identifying unknown processes embedded in a time-dependent partial differential equation (PDE) using observational data, with an application to advection-diffusion type PDEs. We first conduct theoretical analysis and derive conditions to ensure the solvability of the problem. We then present a set of numerical approaches, including a Galerkin-type algorithm and a collocation-type algorithm. Analysis of the algorithms is presented, along with their implementation details. The Galerkin algorithm is more suitable for practical situations, particularly those with noisy data, as it avoids using derivative/gradient data. Various numerical examples are then presented to demonstrate the performance and properties of the numerical methods.
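A minimal illustration of why a Galerkin formulation avoids differentiating the data: projecting the PDE onto test functions and integrating by parts moves all spatial derivatives onto the analytically known test functions, so only the snapshots themselves enter. The sketch below recovers a constant diffusivity from synthetic heat-equation data; the test functions, grid, and least-squares fit are assumptions, not the paper's algorithm.

```python
import numpy as np

# Recover an unknown constant diffusivity "a" in u_t = a u_xx from snapshot data,
# using a Galerkin projection so no spatial derivatives of the data are needed:
#   d/dt <u, phi_k> = a <u, phi_k''>   (periodic domain, integration by parts twice).
L, nx, nt, dt = 2 * np.pi, 256, 50, 1e-3
x = np.linspace(0, L, nx, endpoint=False)
a_true = 0.7
t = dt * np.arange(nt)
# Synthetic data: an exact solution of the heat equation on the periodic domain.
u = (np.exp(-a_true * t)[:, None] * np.sin(x)[None, :]
     + np.exp(-4 * a_true * t)[:, None] * np.cos(2 * x)[None, :])

phi = np.stack([np.sin(x), np.cos(2 * x)])            # test functions
phi_xx = np.stack([-np.sin(x), -4 * np.cos(2 * x)])   # their second derivatives (analytic)
dx = L / nx
proj = u @ phi.T * dx                                  # <u, phi_k> at each time
proj_xx = u @ phi_xx.T * dx                            # <u, phi_k''> at each time
dproj_dt = np.gradient(proj, dt, axis=0)               # time derivative of the projections

# Least-squares fit of dproj_dt ≈ a * proj_xx, stacked over test functions and times.
a_hat = np.sum(dproj_dt * proj_xx) / np.sum(proj_xx**2)
print(a_true, a_hat)
```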