Collaborating Authors

 Peyvan, Ahmad


Fusion DeepONet: A Data-Efficient Neural Operator for Geometry-Dependent Hypersonic Flows on Arbitrary Grids

arXiv.org Artificial Intelligence

Designing re-entry vehicles requires accurate predictions of the hypersonic flow around their geometries. Rapid prediction of such flows can revolutionize vehicle design, particularly for morphing geometries. We evaluate advanced neural operator models, namely the Deep Operator Network (DeepONet), a parameter-conditioned U-Net, the Fourier Neural Operator (FNO), and MeshGraphNet, to address the challenge of learning geometry-dependent hypersonic flow fields from limited data. Specifically, we compare the performance of these models on two grid types: uniform Cartesian and irregular grids. To train them, we generate high-fidelity simulations of 36 unique elliptic geometries with a high-order, entropy-stable DGSEM solver, underscoring the challenge of working with a scarce dataset. We evaluate and compare the four operator-based models on their efficacy in predicting the hypersonic flow field around an elliptic body. Moreover, we develop a novel framework, called Fusion DeepONet, which leverages neural field concepts and generalizes effectively across varying geometries. Despite the scarcity of training data, Fusion DeepONet matches the performance of the parameter-conditioned U-Net on uniform grids and outperforms MeshGraphNet and the vanilla DeepONet on irregular, arbitrary grids. Fusion DeepONet also requires significantly fewer trainable parameters than U-Net, MeshGraphNet, and FNO, making it computationally efficient. Finally, we analyze the basis functions of the Fusion DeepONet model using singular value decomposition. This analysis shows that Fusion DeepONet generalizes to unseen solutions and adapts to varying geometries and grid points, demonstrating its robustness with limited training data.
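The abstract does not spell out the architecture, but the fusion idea, conditioning the coordinate (trunk) network on geometry-dependent branch features at every layer in the spirit of neural fields, can be sketched as follows. The class name, layer widths, and the element-wise gating rule are illustrative assumptions, not the paper's exact design:

    import torch
    import torch.nn as nn

    class FusionDeepONet(nn.Module):
        # Sketch: the branch encodes geometry parameters (e.g., ellipse
        # semi-axes); its hidden features modulate every trunk layer
        # (neural-field-style conditioning) instead of interacting with
        # the trunk only through a final dot product.
        def __init__(self, geom_dim=2, coord_dim=2, width=64, depth=4, n_out=1):
            super().__init__()
            self.branch = nn.ModuleList(
                [nn.Linear(geom_dim if i == 0 else width, width) for i in range(depth)])
            self.trunk = nn.ModuleList(
                [nn.Linear(coord_dim if i == 0 else width, width) for i in range(depth)])
            self.head = nn.Linear(width, n_out)

        def forward(self, geom, coords):
            # geom: (batch, geom_dim); coords: (batch, n_pts, coord_dim)
            b, t = geom, coords
            for blayer, tlayer in zip(self.branch, self.trunk):
                b = torch.tanh(blayer(b))
                t = torch.tanh(tlayer(t)) * b.unsqueeze(1)  # fuse at each layer
            return self.head(t)  # per-point flow-field prediction

Because the trunk consumes raw coordinates, the same trained network can be queried on a uniform Cartesian grid or on an irregular one, which is consistent with the grid independence the abstract reports.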


Transformers as Neural Operators for Solutions of Differential Equations with Finite Regularity

arXiv.org Artificial Intelligence

Neural operator learning models have emerged as very effective surrogates in data-driven methods for partial differential equations (PDEs) across different applications in computational science and engineering. Such operator learning models not only predict particular instances of a physical or biological system in real time but also forecast classes of solutions corresponding to a distribution of initial and boundary conditions or forcing terms. DeepONet is the first neural operator model and has been tested extensively for a broad class of solutions, including Riemann problems. Transformers have not been used in that capacity, and specifically, they have not been tested on solutions of PDEs with low regularity. In this work, we first establish the theoretical groundwork that transformers possess the universal approximation property as operator learning models. We then apply transformers to forecast finite-regularity solutions of diverse dynamical systems for a variety of initial conditions and forcing terms. In particular, we consider three examples: the Izhikevich neuron model, the tempered fractional-order Leaky Integrate-and-Fire (LIF) model, and the one-dimensional Euler equation Riemann problem. For the latter problem, we also compare with variants of DeepONet, and we find that transformers outperform DeepONet in accuracy but are computationally more expensive.
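As a concrete illustration of treating a transformer as an operator-learning model, one minimal setup tokenizes each sensor sample of the input function together with its location and reads the output function off the attended tokens. This tokenization scheme and the layer sizes are assumptions of the sketch, not the paper's architecture:

    import torch
    import torch.nn as nn

    class OperatorTransformer(nn.Module):
        # Sketch: each sensor reading of the input function (value + location)
        # becomes a token; self-attention mixes the tokens; a linear head
        # reads off the output function at the same locations.
        def __init__(self, d_model=64, n_heads=4, n_layers=3):
            super().__init__()
            self.embed = nn.Linear(2, d_model)  # (u(x_i), x_i) -> token
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, 1)

        def forward(self, u, x):
            # u, x: (batch, n_sensors) samples of the input function and grid
            tokens = self.embed(torch.stack([u, x], dim=-1))
            return self.head(self.encoder(tokens)).squeeze(-1)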


RiemannONets: Interpretable Neural Operators for Riemann Problems

arXiv.org Artificial Intelligence

Developing the proper representations for simulating high-speed flows with strong shock waves, rarefactions, and contact discontinuities has been a long-standing question in numerical analysis. Herein, we employ neural operators to solve Riemann problems encountered in compressible flows with extreme pressure jumps (up to a $10^{10}$ pressure ratio). In particular, we first consider a DeepONet trained in a two-stage process, following the recent work of Lee and Shin: in the first stage, a basis is extracted from the trunk net and orthonormalized; in the second stage, this basis is used to train the branch net. This simple modification of DeepONet has a profound effect on its accuracy, efficiency, and robustness, and leads to very accurate solutions of Riemann problems compared to the vanilla version. It also enables us to interpret the results physically, as the hierarchical, data-driven basis reflects all the flow features that would otherwise have to be introduced through ad hoc feature-expansion layers. We also compare the results with another neural operator based on U-Net for low, intermediate, and very high pressure ratios; it is very accurate for Riemann problems, especially at large pressure ratios, owing to its multiscale nature, but is computationally more expensive. Overall, our study demonstrates that simple neural network architectures, if properly pre-trained, can achieve very accurate solutions of Riemann problems for real-time forecasting.
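The two-stage procedure can be sketched end to end in a few lines. The MLP sizes, the QR-based orthonormalization, and the synthetic one-parameter data below are assumptions of this sketch; only the staging (train, orthonormalize the trunk basis, then refit the branch) follows the description above:

    import torch
    import torch.nn as nn

    def mlp(d_in, d_out, width=64):
        return nn.Sequential(nn.Linear(d_in, width), nn.Tanh(),
                             nn.Linear(width, d_out))

    x = torch.linspace(0, 1, 128).unsqueeze(-1)  # shared output grid
    a = torch.rand(32, 1)                        # input parameters (e.g., pressure ratio)
    U = torch.sin(3.0 * a * x.T)                 # synthetic solution snapshots (32, 128)

    p = 16
    trunk, branch = mlp(1, p), mlp(1, p)

    # Stage 1: fit the trunk (jointly with a throwaway branch) to the data,
    # then freeze it and orthonormalize its output columns via QR.
    opt = torch.optim.Adam(list(trunk.parameters()) + list(branch.parameters()), lr=1e-3)
    for _ in range(2000):
        loss = ((branch(a) @ trunk(x).T - U) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    Q, _ = torch.linalg.qr(trunk(x).detach())    # (128, p) orthonormal basis

    # Stage 2: retrain only the branch to predict the projection coefficients
    # of each solution onto the fixed orthonormal basis.
    branch2 = mlp(1, p)
    opt2 = torch.optim.Adam(branch2.parameters(), lr=1e-3)
    coeffs = U @ Q                               # (32, p) coefficient targets
    for _ in range(2000):
        loss = ((branch2(a) - coeffs) ** 2).mean()
        opt2.zero_grad(); loss.backward(); opt2.step()

    u_pred = branch2(a) @ Q.T                    # reconstructed solutions

The orthonormal columns of Q are the interpretable, hierarchical basis the abstract refers to: they can be plotted and inspected directly for shock, rarefaction, and contact features.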


Real-time Inference and Extrapolation via a Diffusion-inspired Temporal Transformer Operator (DiTTO)

arXiv.org Artificial Intelligence

Extrapolation remains a grand challenge in deep neural networks across all application domains. We propose an operator learning method to solve time-dependent partial differential equations (PDEs) continuously in time, with extrapolation and without any temporal discretization. The proposed method, named the Diffusion-inspired Temporal Transformer Operator (DiTTO), is inspired by latent diffusion models and their conditioning mechanism, which we use to incorporate the temporal evolution of the PDE, combined with elements of the transformer architecture to improve its capabilities. Once trained, DiTTO can make inferences in real time. We demonstrate its extrapolation capability on a climate problem, by estimating the temperature around the globe over several years, and in modeling hypersonic flows around a double cone. We propose different training strategies involving temporal bundling and sub-sampling, and demonstrate performance improvements on several benchmarks, performing extrapolation over long time intervals as well as zero-shot super-resolution in time.
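The diffusion-inspired ingredient is the conditioning: a continuous time t is embedded and used to modulate hidden features, so no temporal discretization is ever fixed and any query time can be supplied at inference. The embedding frequencies and the scale/shift modulation rule below are illustrative assumptions in the style of latent diffusion models, not the paper's exact layers:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TimeConditionedBlock(nn.Module):
        # Sketch: a sinusoidal embedding of continuous time t is mapped to a
        # per-channel scale and shift that modulates the hidden features,
        # in the manner of diffusion-model conditioning.
        def __init__(self, width=64, t_dim=32):
            super().__init__()
            self.t_dim = t_dim
            self.to_scale_shift = nn.Linear(t_dim, 2 * width)
            self.mix = nn.Linear(width, width)

        def time_embedding(self, t):
            # t: (batch,) continuous query times -> (batch, t_dim)
            freqs = torch.exp(torch.linspace(0, 4, self.t_dim // 2))
            ang = t.unsqueeze(-1) * freqs
            return torch.cat([ang.sin(), ang.cos()], dim=-1)

        def forward(self, h, t):
            # h: (batch, n_pts, width) spatial features
            scale, shift = self.to_scale_shift(self.time_embedding(t)).chunk(2, dim=-1)
            h = h * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
            return F.gelu(self.mix(h))

Because t enters only through this embedding, the same block can be evaluated at training times, at finer intermediate times (super-resolution in time), or beyond the training horizon (extrapolation).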


Deep neural operators can serve as accurate surrogates for shape optimization: A case study for airfoils

arXiv.org Artificial Intelligence

Neural networks that solve regression problems map input data to output data, whereas neural operators map functions to functions. This recent paradigm shift in perspective, starting with the original paper on the deep operator network (DeepONet) [1, 2], provides a new modeling capability that is very useful in engineering: the ability to replace very complex, computationally taxing multiphysics systems with neural operators that provide functional outputs in real time. Specifically, unlike physics-informed neural networks (PINNs) [3], which require optimization during training and testing, a DeepONet requires no optimization during inference, so it can be used for real-time forecasting, including design, autonomy, and control. An architectural diagram of a DeepONet, with the commonly used nomenclature for its components, is shown in Figure 1. DeepONets can take multi-fidelity or multi-modal inputs [4, 5, 6, 7, 8] in the branch network and can use an independent network as the trunk, a network that represents the output space, e.g., in space-time coordinates or in parametric space, in a continuous fashion. In some sense, DeepONets can be used as surrogates much like reduced-order models (ROMs) [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. Unlike ROMs, however, they are over-parametrized, which leads to a generalizability and robustness to noise that are not possible with ROMs; see the recent work of [20].
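A minimal version of the branch/trunk construction described above can be written in a few lines. The widths, depths, and dot-product combination below follow the textbook form of DeepONet rather than any specific configuration from the paper:

    import torch
    import torch.nn as nn

    class DeepONet(nn.Module):
        # Sketch: the branch net encodes the input function from samples at
        # fixed sensor locations, the trunk net encodes an output coordinate,
        # and their dot product gives the output value at that coordinate.
        def __init__(self, n_sensors, coord_dim=1, p=64):
            super().__init__()
            self.branch = nn.Sequential(nn.Linear(n_sensors, 128), nn.Tanh(),
                                        nn.Linear(128, p))
            self.trunk = nn.Sequential(nn.Linear(coord_dim, 128), nn.Tanh(),
                                       nn.Linear(128, p))

        def forward(self, u, y):
            # u: (batch, n_sensors) input-function samples; y: (n_pts, coord_dim)
            return self.branch(u) @ self.trunk(y).T  # (batch, n_pts)

Inference here is a single forward pass: new inputs u and new query points y require no further optimization, which is the real-time property the paragraph emphasizes.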