Learning in latent spaces improves the predictive accuracy of deep neural operators
Katiana Kontolati, Somdatta Goswami, George Em Karniadakis, Michael D. Shields
arXiv.org Artificial Intelligence
Operator regression provides a powerful means of constructing discretization-invariant emulators for partial differential equations (PDEs) describing physical systems. Neural operators specifically employ deep neural networks to approximate mappings between infinite-dimensional Banach spaces. As data-driven models, neural operators require labeled observations; for complex high-fidelity models, these form high-dimensional datasets containing redundant and noisy features, which can hinder gradient-based optimization. Mapping such high-dimensional datasets to a low-dimensional latent space of salient features makes the data easier to handle and also enhances learning. In this work, we investigate the latent deep operator network (L-DeepONet), an extension of the standard DeepONet that leverages latent representations of high-dimensional PDE input and output functions identified with suitable autoencoders. We show that L-DeepONet outperforms the standard approach in both accuracy and computational efficiency across diverse time-dependent PDEs, e.g., modeling fracture growth in brittle materials, convective fluid flows, and large-scale atmospheric flows exhibiting multiscale dynamical features.
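The idea described in the abstract can be sketched in a few lines: an autoencoder compresses the discretized PDE fields into a low-dimensional latent space, and a DeepONet (branch net for the latent input function, trunk net for the query coordinate, e.g. time) learns the operator in that latent space before decoding back to the full field. The PyTorch sketch below is illustrative only; the layer widths, latent size, and fully connected architecture are assumptions, not the authors' exact L-DeepONet configuration.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Compress a discretized field (n_full points) to n_latent features."""
    def __init__(self, n_full, n_latent):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_full, 64), nn.ReLU(), nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.ReLU(), nn.Linear(64, n_full))

    def forward(self, x):
        return self.decoder(self.encoder(x))

class LatentDeepONet(nn.Module):
    """DeepONet acting on latent vectors: the branch net encodes the latent
    input function, the trunk net encodes the query coordinate (e.g. time),
    and their inner product over p basis modes gives the latent output."""
    def __init__(self, n_latent, n_coord, p=32):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(n_latent, 64), nn.Tanh(), nn.Linear(64, p * n_latent))
        self.trunk = nn.Sequential(
            nn.Linear(n_coord, 64), nn.Tanh(), nn.Linear(64, p))
        self.p, self.n_latent = p, n_latent

    def forward(self, z, y):
        b = self.branch(z).view(-1, self.n_latent, self.p)  # (batch, n_latent, p)
        t = self.trunk(y)                                   # (batch, p)
        return torch.einsum('blp,bp->bl', b, t)             # (batch, n_latent)

# Usage: encode the full field, predict its latent evolution, decode back.
n_full, n_latent = 256, 16                 # illustrative sizes
ae = Autoencoder(n_full, n_latent)
op = LatentDeepONet(n_latent, n_coord=1)
u0 = torch.randn(8, n_full)                # batch of discretized input functions
t = torch.rand(8, 1)                       # query times
z_pred = op(ae.encoder(u0), t)             # operator applied in latent space
u_pred = ae.decoder(z_pred)                # prediction in the full space
```

In practice the autoencoder is trained first on the high-dimensional snapshots, and the DeepONet is then trained purely on the (much smaller) latent representations, which is where the reported accuracy and efficiency gains come from.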
Apr-15-2023