Neural Spectral Methods: Self-supervised learning in the spectral domain

Yiheng Du, Nithin Chalapathi, Aditi Krishnapriyan

arXiv.org Artificial Intelligence 

We present Neural Spectral Methods, a technique to solve parametric Partial Differential Equations (PDEs), grounded in classical spectral methods. Our method uses orthogonal bases to learn PDE solutions as mappings between spectral coefficients. In contrast to current machine learning approaches, which enforce PDE constraints by minimizing the numerical quadrature of the residuals in the spatiotemporal domain, we leverage Parseval's identity and introduce a new training strategy through a spectral loss. Our spectral loss enables more efficient differentiation through the neural network and substantially reduces training complexity. At inference time, the computational cost of our method remains constant, regardless of the spatiotemporal resolution of the domain. Our experimental results demonstrate that our method significantly outperforms previous machine learning approaches in both speed and accuracy by one to two orders of magnitude on multiple different problems, including reaction-diffusion systems and forced and unforced Navier-Stokes equations. When compared to numerical solvers of the same accuracy, our method demonstrates a 10× increase in performance speed.

Partial differential equations (PDEs) are fundamental for describing complex systems like turbulent flow (Temam, 2001), diffusive processes (Friedman, 2008), and thermodynamics (Van Kampen, 1992). Due to their complexity, these systems frequently lack closed-form analytical solutions, prompting the use of numerical methods. These numerical techniques discretize the spatiotemporal domain of interest and solve a set of discrete equations to approximate the system's behavior. Spectral methods are one such class of numerical techniques, and are widely recognized for their effectiveness (Boyd, 2001; Gottlieb & Orszag, 1977).
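The key identity behind the spectral loss is Parseval's: the L2 norm of a residual function over the domain equals a weighted sum of squares of its coefficients in an orthogonal basis, so the residual can be penalized directly in coefficient space without spatial quadrature. The following is a minimal, hypothetical sketch of that equivalence using a Fourier basis and NumPy; it is not the authors' implementation, and all names in it are illustrative.

```python
import numpy as np

# Stand-in for a PDE residual r(x) sampled on a uniform grid over [0, 1).
rng = np.random.default_rng(0)
n = 64
residual = rng.standard_normal(n)

# Physical-domain loss: numerical quadrature of |r(x)|^2 on the grid
# (what resolution-dependent residual training typically minimizes).
physical_loss = np.mean(residual**2)

# Spectral-domain loss: by Parseval's identity, the same quantity is a
# plain sum of squared (normalized) Fourier coefficients, with no
# spatial quadrature and no dependence on the sampling grid.
coeffs = np.fft.fft(residual) / n
spectral_loss = np.sum(np.abs(coeffs)**2)

# The two losses agree to floating-point precision.
assert np.allclose(physical_loss, spectral_loss)
```

The same equivalence holds for other orthogonal bases (e.g. Chebyshev, with the appropriate weight), which is what lets a network that maps spectral coefficients to spectral coefficients be trained entirely in the spectral domain.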