Deep neural operators can serve as accurate surrogates for shape optimization: A case study for airfoils

Shukla, Khemraj, Oommen, Vivek, Peyvan, Ahmad, Penwarden, Michael, Bravo, Luis, Ghoshal, Anindya, Kirby, Robert M., Karniadakis, George Em

arXiv.org Artificial Intelligence 

Neural networks that solve regression problems map input data to output data, whereas neural operators map functions to functions. This recent shift in perspective, starting with the original paper on the deep operator network, or DeepONet [1, 2], provides a new modeling capability that is very useful in engineering: the ability to replace complex, computationally expensive multiphysics systems with neural operators that provide functional outputs in real time. Specifically, unlike physics-informed neural networks (PINNs) [3], which require optimization during both training and testing, a DeepONet requires no optimization during inference, so it can be used for real-time forecasting in design, autonomy, control, and related applications. An architectural diagram of a DeepONet with the commonly used nomenclature for its components is shown in Figure 1. A DeepONet can take multi-fidelity or multi-modal inputs [4, 5, 6, 7, 8] in its branch network and uses an independent trunk network to represent the output space in a continuous fashion, e.g., in space-time coordinates or in a parametric space. In some sense, DeepONets can serve as surrogates much like reduced-order models (ROMs) [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. However, unlike ROMs, they are over-parametrized, which yields a generalizability and robustness to noise not attainable with ROMs; see the recent work of [20].
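The branch/trunk structure described above can be illustrated with a minimal sketch of a DeepONet forward pass: the branch network encodes the input function sampled at fixed sensor locations, the trunk network encodes a query coordinate, and the operator output is their inner product. The layer sizes, sensor count, and random (untrained) weights below are hypothetical choices for illustration only, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Create random-weight MLP parameters (illustrative, untrained)."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Apply the MLP; tanh on hidden layers, linear output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

m, p = 50, 32                 # hypothetical: m sensors, latent dimension p
branch = mlp([m, 64, p])      # branch net: encodes the sampled input function
trunk = mlp([1, 64, p])       # trunk net: encodes a query coordinate y

u = np.sin(np.linspace(0, np.pi, m))        # input function at m sensor points
y = np.linspace(0.0, 1.0, 100).reshape(-1, 1)  # 100 query locations

b_out = forward(branch, u[None, :])   # shape (1, p)
t_out = forward(trunk, y)             # shape (100, p)
G_u_y = t_out @ b_out.T               # operator output G(u)(y), shape (100, 1)
print(G_u_y.shape)
```

Because the trunk is evaluated independently of the branch, once trained the surrogate can be queried at arbitrary new coordinates y with a single forward pass, which is the property that enables real-time inference without any further optimization.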
