Low-Rank Adaptation of Evolutionary Deep Neural Networks for Efficient Learning of Time-Dependent PDEs

Jiahao Zhang, Shiheng Zhang, Guang Lin

arXiv.org Machine Learning 

ABSTRACT

We study the Evolutionary Deep Neural Network (EDNN) framework for accelerating numerical solvers of time-dependent partial differential equations (PDEs). We introduce a Low-Rank Evolutionary Deep Neural Network (LR-EDNN), which constrains parameter evolution to a low-rank subspace, thereby reducing the effective dimensionality of training while preserving solution accuracy. The low-rank tangent subspace is defined layer-wise by the singular value decomposition (SVD) of the current network weights, and the resulting update is obtained by solving a well-posed, tractable linear system within this subspace. We evaluate LR-EDNN on representative PDE problems and compare it against corresponding baselines. Across cases, LR-EDNN achieves comparable accuracy with substantially fewer trainable parameters and reduced computational cost. These results indicate that low-rank constraints on parameter velocities, rather than full-space updates, provide a practical path toward scalable, efficient, and reproducible scientific machine learning for PDEs.

Introduction

The application of deep learning to solving partial differential equations (PDEs) has emerged as an active and promising research area, providing a powerful alternative to traditional numerical methods. Unlike classical approaches such as finite difference, finite element, or spectral methods, which rely on discretization and iterative solvers, deep learning methods leverage neural networks to approximate solutions directly, often bypassing the need for meshes and offering flexibility in handling irregular domains and high-dimensional problems [1, 2]. Early efforts in this domain focused primarily on two paradigms.
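The layer-wise construction described in the abstract, restricting the parameter velocity to the subspace defined by the SVD of the current weights and solving a small linear system there, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `low_rank_update`, the Jacobian `J` (mapping a flattened weight perturbation to residuals at collocation points), and the right-hand side `f` are all assumed placeholders.

```python
import numpy as np

def low_rank_update(W, J, f, rank):
    """Sketch of a rank-constrained parameter velocity for one layer.

    The update is restricted to dW = Ur @ A @ Vr.T, where Ur, Vr span the
    leading singular subspaces of the current weights W, so only the
    rank**2 entries of A are solved for. All names are illustrative.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    Ur, Vr = U[:, :rank], Vt[:rank, :].T  # leading left/right singular vectors

    # Columns of B are the flattened basis matrices outer(Ur[:,i], Vr[:,j]).
    B = np.stack([np.outer(Ur[:, i], Vr[:, j]).ravel()
                  for i in range(rank) for j in range(rank)], axis=1)

    # Reduced least-squares system: far fewer unknowns than W.size.
    coef, *_ = np.linalg.lstsq(J @ B, f, rcond=None)
    return (B @ coef).reshape(W.shape)  # low-rank velocity dW
```

By construction the returned velocity has rank at most `rank`, and the least-squares system has only `rank**2` unknowns instead of `W.size`, which is the source of the reported reduction in trainable parameters.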