Supplementary Materials: A composable machine-learning approach for steady-state simulations on high-resolution grids

Neural Information Processing Systems

Finally, we expand on the computational performance of CoMLSim in Section E and provide details of reproducibility in Section F. In this section, we will provide details about the typical network architectures used in CoMLSim, followed by the training mechanics. CNN-based encoders and decoders are employed here to achieve this compression because subdomains consist of structured data representations. In the encoder network, we use a series of convolution and max-pooling layers to extract global features from the solution. If the PDE conditions are uniform, the magnitude can simply be considered as an encoding for a given subdomain. Since latent vectors don't have a spatial representation, DNN-based encoders and decoders are employed to compress them. The domain is discretized into a finite number of computational elements, using techniques such as the Finite Difference Method (FDM), the Finite Volume Method (FVM), and the Finite Element Method (FEM). Similar to traditional PDE solvers, the first step in CoMLSim is to decompose the computational domain into smaller subdomains.
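The subdomain compression described above can be sketched in one dimension: a convolution followed by max-pooling turns a subdomain's solution field into a shorter latent vector. This is a minimal toy, not the paper's multi-layer 2D/3D CNN; all names are illustrative.

```python
import numpy as np

def encode_subdomain(field, kernel, pool=2):
    """Toy 1D analogue of a CNN encoder: one valid convolution
    followed by non-overlapping max-pooling compresses a subdomain
    solution field into a shorter latent vector."""
    n, k = len(field), len(kernel)
    # valid convolution (no padding)
    conv = np.array([np.dot(field[i:i + k], kernel) for i in range(n - k + 1)])
    # max-pooling over non-overlapping windows of size `pool`
    m = len(conv) // pool
    return conv[:m * pool].reshape(m, pool).max(axis=1)

field = np.sin(np.linspace(0, np.pi, 16))   # a smooth subdomain solution
latent = encode_subdomain(field, kernel=np.array([0.25, 0.5, 0.25]))
print(latent.shape)  # (7,) — 16 samples compressed to 7 features
```

A matching decoder would invert the compression (e.g. transposed convolutions); the actual CoMLSim networks stack several such layers.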


Multi-patch isogeometric neural solver for partial differential equations on computer-aided design domains

von Tresckow, Moritz, Ion, Ion Gabriel, Loukrezis, Dimitrios

arXiv.org Artificial Intelligence

This work develops a computational framework that combines physics-informed neural networks with multi-patch isogeometric analysis to solve partial differential equations on complex computer-aided design geometries. The method utilizes patch-local neural networks that operate on the reference domain of isogeometric analysis. A custom output layer enables the strong imposition of Dirichlet boundary conditions. Solution conformity across interfaces between non-uniform rational B-spline patches is enforced using dedicated interface neural networks. Training is performed using the variational framework by minimizing the energy functional derived from the weak form of the partial differential equation. The effectiveness of the suggested method is demonstrated on two highly non-trivial and practically relevant use-cases, namely, a 2D magnetostatics model of a quadrupole magnet and a 3D nonlinear solid and contact mechanics model of a mechanical holder. The results show excellent agreement with reference solutions obtained with high-fidelity finite element solvers, thus highlighting the potential of the suggested neural solver to tackle complex engineering problems given the corresponding computer-aided design models.
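The "custom output layer" idea for strongly imposing Dirichlet conditions can be sketched with the standard construction u(x) = g + d(x)·N(x), where d vanishes on the boundary. This is a generic 1D sketch under that assumption; the paper's layer is tailored to NURBS patches.

```python
import numpy as np

def hard_dirichlet(x, net, g=0.0):
    """Strong imposition of homogeneous-style Dirichlet BCs on [0, 1]:
    u(x) = g + d(x) * N(x), with d(x) = x(1 - x) vanishing at both
    ends, so u equals g on the boundary regardless of the network N.
    `net` stands in for a trained neural network (hypothetical here)."""
    return g + x * (1.0 - x) * net(x)

u = hard_dirichlet(np.array([0.0, 0.5, 1.0]), net=lambda x: np.ones_like(x))
print(u)  # [0.   0.25 0.  ] — boundary values pinned exactly
```

Because the boundary condition holds by construction, no boundary penalty term is needed in the loss.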




B-PL-PINN: Stabilizing PINN Training with Bayesian Pseudo Labeling

Innerebner, Kevin, Rohrhofer, Franz M., Geiger, Bernhard C.

arXiv.org Artificial Intelligence

Training physics-informed neural networks (PINNs) for forward problems often suffers from severe convergence issues, hindering the propagation of information from regions where the desired solution is well-defined. Haitsiukevich and Ilin (2023) proposed an ensemble approach that extends the active training domain of each PINN based on i) ensemble consensus and ii) vicinity to (pseudo-)labeled points, thus ensuring that the information from the initial condition successfully propagates to the interior of the computational domain. In this work, we suggest replacing the ensemble by a Bayesian PINN, and consensus by an evaluation of the PINN's posterior variance. Our experiments show that this mathematically principled approach outperforms the ensemble on a set of benchmark problems and is competitive with PINN ensembles trained with combinations of Adam and LBFGS.
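The selection criterion described above — trust collocation points where the Bayesian PINN's posterior variance is small, and turn the posterior mean there into pseudo-labels — can be sketched as follows. Names and the threshold form are illustrative, not taken from the paper's code.

```python
import numpy as np

def select_pseudo_labels(posterior_samples, tol=1e-3):
    """Given posterior draws of the PINN's prediction at candidate
    points (shape: n_draws x n_points), keep points whose posterior
    variance is below `tol` and use the posterior mean there as a
    pseudo-label for continued training."""
    mean = posterior_samples.mean(axis=0)
    var = posterior_samples.var(axis=0)
    keep = var < tol
    return mean[keep], keep

# two draws agree everywhere except the third point
draws = np.vstack([np.zeros(4), np.array([0.0, 0.0, 1.0, 0.0])])
labels, mask = select_pseudo_labels(draws, tol=0.1)
print(mask)  # [ True  True False  True]
```

In the ensemble variant of Haitsiukevich and Ilin, the same role is played by agreement across ensemble members instead of posterior variance.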


DoMINO: A Decomposable Multi-scale Iterative Neural Operator for Modeling Large Scale Engineering Simulations

Ranade, Rishikesh, Nabian, Mohammad Amin, Tangsali, Kaustubh, Kamenev, Alexey, Hennigh, Oliver, Cherukuri, Ram, Choudhry, Sanjay

arXiv.org Artificial Intelligence

Numerical simulations play a critical role in the design and development of engineering products and processes. Traditional computational methods, such as CFD, can provide accurate predictions but are computationally expensive, particularly for complex geometries. Several machine learning (ML) models have been proposed in the literature to significantly reduce computation time while maintaining acceptable accuracy. However, ML models often face limitations in terms of accuracy and scalability and depend on significant mesh downsampling, which can negatively affect prediction accuracy and generalization. In this work, we propose a novel ML model architecture, DoMINO (Decomposable Multi-scale Iterative Neural Operator), developed in NVIDIA Modulus to address the various challenges of machine learning-based surrogate modeling of engineering simulations. DoMINO is a point cloud-based ML model that uses local geometric information to predict flow fields on discrete points. The DoMINO model is validated for the automotive aerodynamics use case using the DrivAerML dataset. Through our experiments we demonstrate the scalability, performance, accuracy and generalization of our model to both in-distribution and out-of-distribution testing samples. Moreover, the results are analyzed using a range of engineering-specific metrics important for validating numerical simulations.
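The core idea of predicting from local geometric information can be sketched as a nearest-neighbour feature gather: each query point is described by the offsets to its k closest surface points. This toy stands in for DoMINO's far richer encoder; function and argument names are illustrative, not from the NVIDIA Modulus implementation.

```python
import numpy as np

def local_geometry(surface_pts, queries, k=2):
    """For each query point, return the offsets to its k nearest
    surface points — a minimal stand-in for a local geometric
    feature used to predict fields at discrete points."""
    # pairwise distances: (n_queries, n_surface)
    d = np.linalg.norm(queries[:, None, :] - surface_pts[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]              # k nearest neighbours
    return surface_pts[idx] - queries[:, None, :]   # (n_queries, k, dim)

surf = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
feats = local_geometry(surf, np.array([[0.1, 0.0]]), k=2)
print(feats.shape)  # (1, 2, 2)
```

A downstream network would map such local features (plus flow conditions) to the solution at each point, which is what makes the approach mesh-resolution-agnostic.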


The Finite Element Neural Network Method: One Dimensional Study

Abda, Mohammed, Piollet, Elsa, Blake, Christopher, Gosselin, Frédérick P.

arXiv.org Artificial Intelligence

The potential of neural networks (NN) in engineering is rooted in their capacity to understand intricate patterns and complex systems, leveraging their universal nonlinear approximation capabilities and high expressivity. Meanwhile, conventional numerical methods, backed by years of meticulous refinement, continue to be the standard for accuracy and dependability. Bridging these paradigms, this research introduces the finite element neural network method (FENNM) within the framework of the Petrov-Galerkin method using convolution operations to approximate the weighted residual of the differential equations. The NN generates the global trial solution, while the test functions belong to the Lagrange test function space. FENNM introduces several key advantages. Notably, the weak-form of the differential equations introduces flux terms that contribute information to the loss function compared to VPINN, hp-VPINN, and cv-PINN. This enables the integration of forcing terms and natural boundary conditions into the loss function similar to conventional finite element method (FEM) solvers, facilitating its optimization, and extending its applicability to more complex problems, which will ease industrial adoption. This study will elaborate on the derivation of FENNM, highlighting its similarities with FEM. Additionally, it will provide insights into optimal utilization strategies and user guidelines to ensure cost-efficiency. Finally, the study illustrates the robustness and accuracy of FENNM by presenting multiple numerical case studies and applying adaptive mesh refinement techniques.
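The weak-form residual at the heart of the method above can be illustrated for -u'' = f: after one integration by parts (with a test function vanishing at the boundary, so the flux terms drop), R(w) = ∫ u'w' dx − ∫ fw dx. This numpy sketch is purely illustrative; FENNM evaluates such residuals with convolution operations over Lagrange test functions.

```python
import numpy as np

def trap(y, x):
    # composite trapezoid rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def weighted_residual(u, f, x, w):
    """Weak-form residual of -u'' = f against test function w
    (assumed to vanish at the endpoints): R = ∫ u'w' dx − ∫ fw dx."""
    du, dw = np.gradient(u, x), np.gradient(w, x)
    return trap(du * dw, x) - trap(f * w, x)

x = np.linspace(0.0, 1.0, 201)
u = x * (1.0 - x)                       # exact solution of -u'' = 2, u(0)=u(1)=0
R = weighted_residual(u, 2.0 * np.ones_like(x), x, np.sin(np.pi * x))
print(abs(R) < 1e-3)  # True — the residual vanishes for the exact solution
```

For a test function that does not vanish at the boundary, the flux terms [u'w] reappear, which is exactly how FENNM injects natural boundary conditions into the loss.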


A Variational Computational-based Framework for Unsteady Incompressible Flows

Sababha, H., Elmaradny, A., Taha, H., Daqaq, M.

arXiv.org Artificial Intelligence

Advancements in computational fluid mechanics have largely relied on Newtonian frameworks, particularly through the direct simulation of the Navier-Stokes equations. In this work, we propose an alternative computational framework that employs variational methods, specifically by leveraging the principle of minimum pressure gradient, which turns the fluid mechanics problem into a minimization problem whose solution can be used to predict the flow field in unsteady incompressible viscous flows. This method exhibits two particularly intriguing properties. First, it circumvents the chronic issues of pressure-velocity coupling in incompressible flows, which often dominates the computational cost in computational fluid dynamics (CFD). Second, this method eliminates the reliance on unphysical assumptions at the outflow boundary, addressing another longstanding challenge in CFD. We apply this framework to three benchmark examples across a range of Reynolds numbers: (i) unsteady flow field in a lid-driven cavity, (ii) Poiseuille flow, and (iii) flow past a circular cylinder. The minimization framework is carried out using a physics-informed neural network (PINN), which integrates the underlying physical principles directly into the training of the model. The results from the proposed method are validated against high-fidelity CFD simulations, showing excellent agreement. Comparison of the proposed variational method to the conventional approach, wherein a PINN is directly applied to solve the Navier-Stokes equations, reveals that the proposed method outperforms conventional PINNs in terms of both convergence rate and time, demonstrating its potential for solving complex fluid mechanics problems.


Shape-informed surrogate models based on signed distance function domain encoding

Zhang, Linying, Pagani, Stefano, Zhang, Jun, Regazzoni, Francesco

arXiv.org Artificial Intelligence

We propose a non-intrusive method to build surrogate models that approximate the solution of parameterized partial differential equations (PDEs), capable of taking into account the dependence of the solution on the shape of the computational domain. Our approach is based on the combination of two neural networks (NNs). The first NN, conditioned on a latent code, provides an implicit representation of geometry variability through signed distance functions. This automated shape encoding technique generates compact, low-dimensional representations of geometries within a latent space, without requiring the explicit construction of an encoder. The second NN reconstructs the output physical fields independently for each spatial point, thus avoiding the computational burden typically associated with high-dimensional discretizations like computational meshes. Furthermore, we show that accuracy in geometrical characterization can be further enhanced by employing Fourier feature mapping as input features to the NN. The meshless nature of the proposed method, combined with the dimensionality reduction achieved through automatic feature extraction in latent space, makes it highly flexible and computationally efficient. This strategy eliminates the need for manual intervention in extracting geometric parameters, and can even be applied in cases where geometries undergo changes in their topology. Numerical tests in the field of fluid dynamics and solid mechanics demonstrate the effectiveness of the proposed method in accurately predicting the solution of PDEs in domains of arbitrary shape. Remarkably, the results show that it achieves accuracy comparable to the best-case scenarios where an explicit parametrization of the computational domain is available.
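Two ingredients of the approach above can be sketched in miniature: a signed distance function conditioned on a latent shape code, and a Fourier feature mapping of the input coordinates. Both constructions below are standard illustrations, not the paper's networks; the "latent code" here is just a circle radius.

```python
import numpy as np

def fourier_features(x, n_freq=3):
    """Fourier feature mapping of a 1D coordinate: [sin(2^j pi x),
    cos(2^j pi x)] for j = 0..n_freq-1 (a common choice of
    frequencies; the paper's exact mapping may differ)."""
    freqs = (2.0 ** np.arange(n_freq)) * np.pi
    ang = np.outer(np.atleast_1d(x), freqs)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=1)

def circle_sdf(pts, latent_radius):
    """Toy implicit shape representation: the SDF of a circle whose
    radius plays the role of a one-dimensional latent shape code.
    Negative inside, zero on the boundary, positive outside."""
    return np.linalg.norm(pts, axis=-1) - latent_radius

print(fourier_features(0.25).shape)                         # (1, 6)
print(circle_sdf(np.array([[3.0, 4.0]]), latent_radius=5.0))  # [0.]
```

In the actual method the SDF is itself a neural network and the latent code is optimized per geometry, which is what removes the need for an explicit encoder.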


A domain decomposition-based autoregressive deep learning model for unsteady and nonlinear partial differential equations

Nidhan, Sheel, Jiang, Haoliang, Ghule, Lalit, Umphrey, Clancy, Ranade, Rishikesh, Pathak, Jay

arXiv.org Artificial Intelligence

In this paper, we propose a domain-decomposition-based deep learning (DL) framework, named transient-CoMLSim, for accurately modeling unsteady and nonlinear partial differential equations (PDEs). The framework consists of two key components: (a) a convolutional neural network (CNN)-based autoencoder architecture and (b) an autoregressive model composed of fully connected layers. Unlike existing state-of-the-art methods that operate on the entire computational domain, our CNN-based autoencoder computes a lower-dimensional basis for solution and condition fields represented on subdomains. Timestepping is performed entirely in the latent space, generating embeddings of the solution variables from the time history of embeddings of solution and condition variables. This approach not only reduces computational complexity but also enhances scalability, making it well-suited for large-scale simulations. Furthermore, to improve the stability of our rollouts, we employ a curriculum learning (CL) approach during the training of the autoregressive model. The domain-decomposition strategy enables scaling to out-of-distribution domain sizes while maintaining the accuracy of predictions -- a feature not easily integrated into popular DL-based approaches for physics simulations. We benchmark our model against two widely-used DL architectures, Fourier Neural Operator (FNO) and U-Net, and demonstrate that our framework outperforms them in terms of accuracy, extrapolation to unseen timesteps, and stability for a wide range of use cases.
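The latent-space time-stepping described above can be sketched as an autoregressive rollout: the next solution embedding is predicted from the time history of embeddings, never returning to physical space until decoding. The linear "model" below is a hypothetical stand-in for the trained fully connected network.

```python
import numpy as np

def rollout(step, z_hist, n_steps):
    """Autoregressive rollout entirely in latent space: each new
    embedding is produced from the concatenated history of the two
    most recent embeddings. `step` stands in for the trained model."""
    hist = list(z_hist)
    for _ in range(n_steps):
        hist.append(step(np.concatenate(hist[-2:])))
    return np.array(hist)

# toy "model": damped average of the last two embeddings
A = 0.45 * np.hstack([np.eye(2), np.eye(2)])
traj = rollout(lambda h: A @ h, [np.ones(2), np.ones(2)], n_steps=3)
print(traj.shape)  # (5, 2) — 2 initial embeddings + 3 predicted steps
```

Curriculum learning, as used in the paper, would expose the model to progressively longer such rollouts during training to stabilize error accumulation.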