Anant-Net: Breaking the Curse of Dimensionality with Scalable and Interpretable Neural Surrogate for High-Dimensional PDEs

Menon, Sidharth S., Jagtap, Ameya D.

arXiv.org Artificial Intelligence 

Physics-informed deep learning (PIDL) is a rapidly advancing framework that integrates known governing physical laws, typically formulated as partial differential equations (PDEs), into the training of deep neural networks. In contrast to conventional data-driven models that rely solely on observational data, PIDL uses physical constraints to guide learning, thereby improving generalization, reducing data requirements, and enhancing interpretability. This synthesis of physics and deep learning has demonstrated broad applicability to forward and inverse problems across scientific and engineering domains, particularly in scenarios with limited, noisy, or incomplete data.

Key methodologies under the PIDL umbrella include physics-informed neural networks (PINNs) [1, 2, 3, 4], which embed PDE constraints via automatic differentiation; sparse identification of nonlinear dynamics (SINDy) [5, 6], which infers governing equations by promoting sparsity in learned representations; and physics-informed neural operators [7, 8, 9, 10, 11], which approximate solution operators between function spaces and can therefore model entire families of PDEs. These approaches are particularly well suited to high-dimensional problems, where traditional numerical solvers suffer from the curse of dimensionality.

High-dimensional PDEs are integral to many scientific and engineering domains, including quantum mechanics, financial mathematics, and optimal control. Their solutions provide crucial insight into complex, multi-scale phenomena that cannot be accurately captured by lower-dimensional approximations. Solving these equations efficiently, however, remains a significant challenge because of the curse of dimensionality: the exponential growth in computational complexity and data requirements as the number of dimensions increases.
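To make the PINN idea concrete, the physics-informed loss is simply the mean squared PDE residual evaluated at collocation points, with derivatives of the trial solution obtained by automatic differentiation. The following is a minimal self-contained sketch, not the paper's implementation: it uses hand-rolled forward-mode dual numbers in place of a full AD library, an analytic trial function in place of a neural network, and the illustrative toy equation u'(x) + u(x) = 0 rather than any PDE studied in the paper.

```python
import math

class Dual:
    """Forward-mode dual number a + b*eps (eps^2 = 0): carries value and derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__
    def __neg__(self):
        return Dual(-self.val, -self.der)

def dexp(x):
    """exp extended to dual numbers: d/dx exp(x) = exp(x)."""
    e = math.exp(x.val)
    return Dual(e, e * x.der)

def pde_residual(u, x):
    """Residual of the toy equation u'(x) + u(x) = 0, computed via AD."""
    out = u(Dual(x, 1.0))   # seed derivative 1.0 to differentiate w.r.t. x
    return out.der + out.val

def physics_loss(u, collocation_pts):
    """Mean squared residual over collocation points: the PINN physics loss term."""
    return sum(pde_residual(u, x) ** 2 for x in collocation_pts) / len(collocation_pts)

# The exact solution u(x) = exp(-x) drives the residual loss to zero,
# whereas an incorrect trial function leaves a nonzero loss to minimize.
u_exact = lambda x: dexp(-x)
u_wrong = lambda x: dexp(x)
pts = [0.1 * i for i in range(10)]
print(physics_loss(u_exact, pts))
print(physics_loss(u_wrong, pts))
```

In an actual PINN, `u` would be a neural network, the residual would be formed from the governing PDE's differential operator, and the physics loss would be minimized jointly with boundary/initial-condition and data losses by gradient descent.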