PDE residual


Beyond Loss Guidance: Using PDE Residuals as Spectral Attention in Diffusion Neural Operators

Sawhney, Medha, Neog, Abhilash, Khurana, Mridul, Karpatne, Anuj

arXiv.org Machine Learning

Diffusion-based solvers for partial differential equations (PDEs) are often bottlenecked by slow gradient-based test-time optimization routines that use PDE residuals for loss guidance. They additionally suffer from optimization instabilities and cannot dynamically adapt their inference scheme in the presence of noisy PDE residuals. To address these limitations, we introduce PRISMA (PDE Residual Informed Spectral Modulation with Attention), a conditional diffusion neural operator that embeds PDE residuals directly into the model's architecture via attention mechanisms in the spectral domain, enabling gradient-descent-free inference. We show that PRISMA achieves competitive accuracy at substantially lower inference cost compared to previous methods across five benchmark PDEs, especially with noisy observations, while using 10x to 100x fewer denoising steps, leading to 15x to 250x faster inference.

Given the ubiquitous presence of partial differential equations (PDEs) in almost every scientific discipline, there is a rapidly growing literature on using neural networks for solving PDEs (Raissi et al., 2019a; Lu et al., 2019). This includes seminal works in operator learning, such as the Fourier Neural Operator (FNO) (Li et al., 2020), which learns resolution-independent mappings between function spaces of input parameters a and solution fields u. However, a major limitation of these methods is their reliance on complete and clean observations of either a or u, a condition rarely met in real-world applications, where data is inherently noisy and sparse. The rise of generative models has inspired another class of methods for solving PDEs that model the joint distribution of a and u using diffusion-based backbones (Huang et al., 2024; Yao et al., 2025; Lim et al., 2023; Shu et al., 2023; Bastek et al., 2024; Jacobsen et al., 2025).
These methods offer two key advantages over operator learning methods: (i) they generate full posterior distributions of a and/or u, enabling principled uncertainty quantification crucial for ill-posed inverse problems, and (ii) they naturally accommodate sparse observations during inference using likelihood-based and PDE residual-based loss guidance, termed diffusion posterior sampling or test-time optimization.
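To make the loss-guidance baseline concrete, the following toy sketch shows how PDE-residual gradients are typically injected into a denoising loop, i.e., the gradient-based test-time optimization that PRISMA is designed to avoid. The `denoiser`, the finite-difference residual for u'' = 0, and the guidance scale are all illustrative assumptions, not PRISMA's actual components.

```python
import numpy as np

def denoiser(x, t):
    # Stand-in for a trained denoising model: shrink the sample toward 0.
    return x * (1.0 - 0.1 * t)

def pde_residual(u):
    # Toy residual enforcing u'' = 0 on a 1-D grid (second-difference stencil).
    return u[2:] - 2.0 * u[1:-1] + u[:-2]

def residual_grad(u):
    # Gradient of 0.5 * ||residual||^2 w.r.t. u (adjoint of the stencil).
    r = pde_residual(u)
    g = np.zeros_like(u)
    g[2:] += r
    g[1:-1] += -2.0 * r
    g[:-2] += r
    return g

def guided_sampling(x, steps=50, guidance_scale=0.05):
    for t in np.linspace(1.0, 0.0, steps):
        x = denoiser(x, t)                          # standard denoising update
        x = x - guidance_scale * residual_grad(x)   # PDE-residual loss guidance
    return x

rng = np.random.default_rng(0)
x0 = rng.normal(size=64)
u = guided_sampling(x0)
```

Each denoising step is followed by a gradient step on the squared residual norm; PRISMA instead conditions the model on the residual through spectral attention, removing this inner gradient computation.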


AB-PINNs: Adaptive-Basis Physics-Informed Neural Networks for Residual-Driven Domain Decomposition

Botvinick-Greenhouse, Jonah, Ali, Wael H., Benosman, Mouhacine, Mowlavi, Saviz

arXiv.org Artificial Intelligence

We introduce adaptive-basis physics-informed neural networks (AB-PINNs), a novel approach to domain decomposition for training PINNs in which existing subdomains dynamically adapt to the intrinsic features of the unknown solution. Drawing inspiration from classical mesh refinement techniques, we also modify the domain decomposition on-the-fly throughout training by introducing new subdomains in regions of high residual loss, thereby providing additional expressive power where the solution of the differential equation is challenging to represent. Our flexible approach to domain decomposition is well-suited for multiscale problems, as different subdomains can learn to capture different scales of the underlying solution. Moreover, the ability to introduce new subdomains during training helps prevent convergence to unwanted local minima and can reduce the need for extensive hyperparameter tuning compared to static domain decomposition approaches. Throughout, we present comprehensive numerical results which demonstrate the effectiveness of AB-PINNs at solving a variety of complex multiscale partial differential equations.
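The residual-driven refinement idea can be sketched in a few lines: monitor the PDE residual on a probe grid and, when it exceeds a threshold somewhere, register a new subdomain (here a Gaussian window) centered at the worst point. The toy residual field, the threshold, and the window width are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np

def residual(x):
    # Toy residual field with a sharp feature near x = 0.7.
    return np.exp(-((x - 0.7) / 0.05) ** 2)

def refine_subdomains(centers, widths, xs, threshold=0.5, new_width=0.1):
    # Introduce a new subdomain where the residual loss is highest.
    r = residual(xs)
    if r.max() > threshold:
        worst = xs[np.argmax(r)]
        centers = np.append(centers, worst)  # new subdomain center
        widths = np.append(widths, new_width)
    return centers, widths

xs = np.linspace(0.0, 1.0, 201)
centers, widths = refine_subdomains(np.array([0.5]), np.array([0.5]), xs)
```

In an actual AB-PINN training loop this check would run periodically, so that expressive power accumulates exactly where the solution is hard to represent.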


Provably Accurate Adaptive Sampling for Collocation Points in Physics-informed Neural Networks

Caradot, Antoine, Emonet, Rémi, Habrard, Amaury, Mezidi, Abdel-Rahim, Sebban, Marc

arXiv.org Artificial Intelligence

Despite considerable scientific advances in numerical simulation, efficiently solving PDEs remains a complex and often expensive problem. Physics-informed Neural Networks (PINNs) have emerged as an efficient way to learn surrogate solvers by embedding the PDE in the loss function and minimizing its residuals, computed via automatic differentiation at so-called collocation points. Originally sampled uniformly, these points have since been the subject of recent advances leading to adaptive sampling refinements for PINNs. In this paper, leveraging a new quadrature method for approximating definite integrals, we introduce a provably accurate sampling method for collocation points based on the Hessian of the PDE residuals. Comparative experiments conducted on a set of 1D and 2D PDEs demonstrate the benefits of our method.
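As a baseline illustration of residual-driven sampling, the sketch below draws collocation points with probability proportional to a power of the residual magnitude, the common self-adaptive scheme; the paper's provably accurate method instead builds its sampling density from the Hessian of the PDE residuals, which this sketch does not implement. The toy residual field and the exponent k are assumptions.

```python
import numpy as np

def residual_magnitude(x):
    # Toy |residual| field, sharply peaked near x = 0.25.
    return 1e-3 + np.exp(-((x - 0.25) / 0.1) ** 2)

def adaptive_sample(n_points, n_candidates=10_000, k=2.0, seed=0):
    # Draw candidates uniformly, then resample them with
    # probability proportional to |residual|^k.
    rng = np.random.default_rng(seed)
    cand = rng.uniform(0.0, 1.0, n_candidates)
    w = residual_magnitude(cand) ** k
    return rng.choice(cand, size=n_points, replace=True, p=w / w.sum())

pts = adaptive_sample(2000)
```

With k = 2 the sampled collocation points concentrate heavily around the residual peak, which is the behavior adaptive refinement schemes exploit.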


Evidential Physics-Informed Neural Networks

Tan, Hai Siong, Wang, Kuancheng, McBeth, Rafe

arXiv.org Artificial Intelligence

We present a novel class of Physics-Informed Neural Networks that is formulated based on the principles of Evidential Deep Learning, where the model incorporates uncertainty quantification by learning parameters of a higher-order distribution. The dependent and trainable variables of the PDE residual loss and data-fitting loss terms are recast as functions of the hyperparameters of an evidential prior distribution. Our model is equipped with an information-theoretic regularizer that contains the Kullback-Leibler divergence between two inverse-gamma distributions characterizing predictive uncertainty. Relative to Bayesian Physics-Informed Neural Networks, our framework appeared to exhibit higher sensitivity to data noise, preserve boundary conditions more faithfully and yield empirical coverage probabilities closer to nominal ones. Toward examining its relevance for data mining in scientific discoveries, we demonstrate how to apply our model to inverse problems involving 1D and 2D nonlinear differential equations.


Physics-Informed Graph-Mesh Networks for PDEs: A hybrid approach for complex problems

Chenaud, Marien, Magoulès, Frédéric, Alves, José

arXiv.org Artificial Intelligence

The recent rise of deep learning has led to numerous applications, including solving partial differential equations using Physics-Informed Neural Networks. This approach has proven highly effective in several academic cases. However, their lack of physical invariances, coupled with other significant weaknesses such as an inability to handle complex geometries and limited generalization capabilities, makes them unable to compete with classical numerical solvers in industrial settings. In this work, a limitation regarding the use of automatic differentiation in the context of physics-informed learning is highlighted. A hybrid approach combining physics-informed graph neural networks with numerical kernels from finite elements is introduced. After studying the theoretical properties of our model, we apply it to complex geometries in two and three dimensions. Our choices are supported by an ablation study, and we evaluate the generalization capacity of the proposed approach.


Physics-Informed Graph Convolutional Networks: Towards a generalized framework for complex geometries

Chenaud, Marien, Alves, José, Magoulès, Frédéric

arXiv.org Artificial Intelligence

Since the seminal work of [9] and their Physics-Informed Neural Networks (PINNs), many efforts have been conducted towards solving partial differential equations (PDEs) with deep learning models. However, some challenges remain, for instance the extension of such models to complex three-dimensional geometries, and the study of how such approaches could be combined with classical numerical solvers. In this work, we justify the use of graph neural networks for these problems, based on the similarity between these architectures and the meshes used in traditional numerical techniques for solving partial differential equations. After demonstrating an issue with the Physics-Informed framework for complex geometries, arising during the computation of PDE residuals, an alternative procedure is proposed that combines classical numerical solvers with the Physics-Informed framework. Finally, we propose an implementation of this approach, which we test on a three-dimensional problem on an irregular geometry.


A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks

Finzi, Marc, Potapczynski, Andres, Choptuik, Matthew, Wilson, Andrew Gordon

arXiv.org Machine Learning

Unlike conventional grid and mesh based methods for solving partial differential equations (PDEs), neural networks have the potential to break the curse of dimensionality, providing approximate solutions to problems where using classical solvers is difficult or impossible. While global minimization of the PDE residual over the network parameters works well for boundary value problems, catastrophic forgetting impairs the applicability of this approach to initial value problems (IVPs). In an alternative local-in-time approach, the optimization problem can be converted into an ordinary differential equation (ODE) on the network parameters and the solution propagated forward in time; however, we demonstrate that current methods based on this approach suffer from two key issues. First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors. Second, as the ODE methods scale cubically with the number of model parameters, they are restricted to small neural networks, significantly limiting their ability to represent intricate PDE initial conditions and solutions. Building on these insights, we develop Neural IVP, an ODE based IVP solver which prevents the network from getting ill-conditioned and runs in time linear in the number of parameters, enabling us to evolve the dynamics of challenging PDEs with neural networks. Partial differential equations (PDEs) are needed to describe many phenomena in the natural sciences.
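The local-in-time idea can be illustrated on the heat equation u_t = u_xx: write the solution as u(x) = Φ(x)θ and obtain θ̇ by solving the least-squares system J θ̇ ≈ u_xx, where J = ∂u/∂θ. For clarity this sketch uses a model that is linear in θ (a fixed sine basis), so J is constant; Neural IVP addresses the nonlinear-network case, along with the conditioning and scaling issues that this toy deliberately ignores.

```python
import numpy as np

# "Network" u(x) = Phi(x) @ theta on a sine basis over [0, pi].
xs = np.linspace(0.0, np.pi, 128)
ks = np.arange(1, 9)                 # basis frequencies
Phi = np.sin(np.outer(xs, ks))       # J = d u / d theta (constant here)
Phi_xx = -(ks ** 2) * Phi            # second spatial derivative of each basis fn

def theta_dot(theta):
    # Convert u_t = u_xx into an ODE on parameters:
    # solve J @ theta_dot ≈ u_xx in the least-squares sense.
    rhs = Phi_xx @ theta
    return np.linalg.lstsq(Phi, rhs, rcond=None)[0]

theta = np.zeros(len(ks))
theta[0] = 1.0                       # initial condition u(x, 0) = sin(x)
dt, n_steps = 1e-3, 100
for _ in range(n_steps):             # forward Euler in time
    theta = theta + dt * theta_dot(theta)

u = Phi @ theta
```

For the initial condition sin(x), the exact coefficient decays as e^(-t), so after t = 0.1 the evolved θ[0] should sit close to e^(-0.1); the cubic parameter scaling criticized in the abstract comes from solving such least-squares systems for large nonlinear networks.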


Diffusion model based data generation for partial differential equations

Apte, Rucha, Nidhan, Sheel, Ranade, Rishikesh, Pathak, Jay

arXiv.org Artificial Intelligence

In a preliminary attempt to address the problem of data scarcity in physics-based machine learning, we introduce a novel methodology for data generation in physics-based simulations. Our motivation is to overcome the limitations posed by the limited availability of numerical data. To achieve this, we leverage a diffusion model that allows us to generate synthetic data samples and test them for two canonical cases: (a) the steady 2-D Poisson equation, and (b) the forced unsteady 2-D Navier-Stokes (NS) vorticity-transport equation in a confined box. By comparing the generated data samples against outputs from classical solvers, we assess their accuracy and examine their adherence to the underlying physics laws. In this way, we emphasize the importance of not only satisfying visual and statistical comparisons with solver data but also ensuring the generated data's conformity to physics laws, thus enabling their effective utilization in downstream tasks.


Efficient Training of Physics-Informed Neural Networks with Direct Grid Refinement Algorithm

Nilabh, Shikhar, Grandia, Fidel

arXiv.org Artificial Intelligence

This research presents an innovative algorithm tailored for the adaptive sampling of residual points within the framework of Physics-Informed Neural Networks (PINNs). By addressing the limitations inherent in existing adaptive sampling techniques, our proposed methodology introduces a direct mesh refinement approach that ensures both computational efficiency and adaptive point placement. Verification studies were conducted to evaluate the performance of our algorithm, showing reasonable agreement between the model based on our novel approach and benchmark results. Comparative analyses with established adaptive resampling techniques demonstrated the superior performance of our approach, particularly when implemented with a higher refinement factor. Overall, our findings highlight the enhancement of simulation accuracy achievable through our adaptive sampling algorithm for Physics-Informed Neural Networks.


Mitigating Propagation Failures in Physics-informed Neural Networks using Retain-Resample-Release (R3) Sampling

Daw, Arka, Bu, Jie, Wang, Sifan, Perdikaris, Paris, Karpatne, Anuj

arXiv.org Artificial Intelligence

Despite the success of PINNs, it is known that PINNs sometimes fail to converge to the correct solution in problems involving complicated PDEs, as reflected in several recent studies on characterizing the "failure modes" of PINNs, although a thorough understanding of the connection between PINN failure modes and sampling strategies is missing. In this paper, we provide a novel perspective of failure modes of PINNs by hypothesizing that training PINNs relies on successful "propagation" of the solution from initial and/or boundary condition points to interior points. We show that PINNs with poor sampling strategies can get stuck at trivial solutions if there are propagation failures, characterized by highly imbalanced PDE residual fields. To mitigate propagation failures, we propose a novel Retain-Resample-Release (R3) sampling algorithm that can incrementally accumulate collocation points in regions of high PDE residuals with little to no computational overhead.

This is reflected in several recent studies on characterizing the "failure modes" of PINNs (Wang et al., 2021; 2022c; Krishnapriyan et al., 2021). Many of these failure modes are related to the susceptibility of PINNs to getting stuck at trivial solutions acting as poor local minima, due to the unique optimization challenges of PINNs. In particular, training PINNs is different from conventional deep learning problems: we only have access to the correct solution on the initial and/or boundary points, while for all interior points we can only compute PDE residuals. Also, minimizing PDE residuals does not guarantee convergence to a correct solution, since there are many trivial solutions of commonly observed PDEs that show 0 residuals. While previous studies have mainly focused on modifying network architectures or balancing loss
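The R3 loop described above admits a compact sketch: retain collocation points whose residual exceeds the current population mean, release the rest, and resample uniformly to keep the pool size fixed. The toy residual field below stands in for a PINN's PDE residual and is an assumption.

```python
import numpy as np

def residual(x):
    # Toy |PDE residual| field: oscillatory, concentrated around x = 0.5.
    return np.abs(np.sin(8 * np.pi * x)) * np.exp(-((x - 0.5) / 0.2) ** 2)

def r3_step(points, rng):
    r = residual(points)
    retained = points[r > r.mean()]           # Retain: high-residual points
    n_new = len(points) - len(retained)       # Release: drop the rest ...
    fresh = rng.uniform(0.0, 1.0, n_new)      # Resample: refill uniformly
    return np.concatenate([retained, fresh])

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, 1000)
for _ in range(10):
    pts = r3_step(pts, rng)
```

After a few iterations the pool's mean residual rises well above that of a uniform sample, because points accumulate in high-residual regions while the uniform refill keeps exploring the domain at no extra gradient cost.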