Gradient Alignment in Physics-informed Neural Networks: A Second-Order Optimization Perspective

Sifan Wang, Ananyae Kumar Bhartari, Bowen Li, Paris Perdikaris

arXiv.org Artificial Intelligence 

Multi-task learning through composite loss functions has become a cornerstone of modern deep learning, from computer vision to scientific computing. However, when different loss terms compete for model capacity, they can generate conflicting gradients that impede optimization and degrade performance. While this fundamental challenge is well known in the multi-task learning literature [1-3], several questions remain open, particularly in settings where objectives are tightly coupled through complex physical constraints. In this work, we examine gradient conflicts through the lens of physics-informed neural networks (PINNs), where the challenge manifests acutely due to the inherent coupling between physical constraints and data-fitting objectives. To motivate the setting, consider training a PINN to solve the Navier-Stokes equations: the model must simultaneously satisfy boundary conditions, conservation laws, and empirical measurements, objectives that often push the network's parameters in opposing directions. Our key insight is that while first-order optimization methods struggle with such competing objectives, appropriate preconditioning can naturally align gradients and enable efficient optimization. Although our findings on gradient alignment and second-order preconditioning have broad implications for multi-task learning, we focus on PINNs because they provide an ideal testbed: their physically-constrained objectives are mathematically precise, their solutions can be rigorously verified, and their performance bottlenecks are well documented. Through theoretical analysis and extensive experiments on challenging partial differential equations (PDEs), we demonstrate substantial improvements on problems ranging from basic wave propagation to turbulent flows.
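The central quantity here is the alignment between the gradients of the individual loss terms. As a concrete illustration, the sketch below (not the authors' implementation; the toy 1D Poisson problem, the tiny network, and all names such as `residual_loss` and `boundary_loss` are assumptions for exposition) uses JAX to compute the cosine similarity between the gradient of a PDE residual loss and that of a boundary loss for a small PINN. A negative cosine signals the gradient conflict described above.

```python
# Minimal sketch: measuring gradient conflict between two PINN loss terms.
# Toy problem (an assumption, not from the paper): u''(x) = -pi^2 sin(pi x)
# on (0, 1) with u(0) = u(1) = 0, whose solution is u(x) = sin(pi x).
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def u(params, x):
    """Scalar network u(x; theta): a small tanh MLP."""
    h = jnp.array([x])
    for W, b in params[:-1]:
        h = jnp.tanh(W @ h + b)
    W, b = params[-1]
    return (W @ h + b)[0]

# Second derivative u_xx via nested autodiff in the input x.
u_x = jax.grad(u, argnums=1)
u_xx = jax.grad(u_x, argnums=1)

def residual_loss(params, xs):
    """Mean squared PDE residual u'' + pi^2 sin(pi x) at interior points."""
    r = jax.vmap(u_xx, in_axes=(None, 0))(params, xs) \
        + jnp.pi**2 * jnp.sin(jnp.pi * xs)
    return jnp.mean(r**2)

def boundary_loss(params):
    """Dirichlet conditions u(0) = u(1) = 0."""
    return u(params, 0.0)**2 + u(params, 1.0)**2

def init_params(key, sizes=(1, 16, 16, 1)):
    """Random dense layers; weights scaled by 1/sqrt(fan_in)."""
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (n, m)) / jnp.sqrt(m),
                       jnp.zeros(n)))
    return params

params = init_params(jax.random.PRNGKey(0))
xs = jnp.linspace(0.05, 0.95, 64)  # interior collocation points

# Per-objective gradients, flattened into single parameter-space vectors.
g_r, _ = ravel_pytree(jax.grad(residual_loss)(params, xs))
g_b, _ = ravel_pytree(jax.grad(boundary_loss)(params))

# cos(theta) < 0 means the two objectives pull the parameters apart.
cos = jnp.dot(g_r, g_b) / (jnp.linalg.norm(g_r) * jnp.linalg.norm(g_b))
print(f"gradient alignment cos(theta) = {cos:.3f}")
```

In these terms, the abstract's claim reads as follows: a first-order optimizer steps along the raw sum of `g_r` and `g_b` and stalls when their cosine is negative, whereas a suitable second-order preconditioner, applied to both gradients, tends to increase their alignment and restore efficient optimization.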