An analysis of the derivative-free loss method for solving PDEs
Neural networks are well known for their flexibility in representing complicated functions in high-dimensional spaces [3, 9]. In recent years, this expressive power has naturally led to representing solutions of partial differential equations (PDEs) with neural networks. Physics-informed neural networks [16] and the Deep Galerkin method [17] use the strong form of the PDE to define the training loss, while the Deep Ritz method [4] uses a weak (variational) formulation of the PDE to train the network. In addition, a class of methods uses a stochastic representation of the PDE to train the neural network [5, 8]. All these methods have shown successful results in a wide range of problems in science and engineering, particularly for high-dimensional problems where standard numerical PDE methods face limitations [5, 17, 2]. The goal of the current study is an analysis of the derivative-free loss method (DFLM) [8]. DFLM employs a stochastic representation of the solution for a certain class of PDEs, averaging over stochastic samples through a generalized Feynman-Kac formulation. The DFLM loss directly guides the neural network to learn the point-to-neighborhood relation of the solution. To compute the target value of the network at a point, DFLM adopts bootstrapping in the spirit of reinforcement learning: the target is evaluated from the network's current state through the point-to-neighborhood relation.
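For concreteness, the following sketch, written in our own notation for a model problem (the exact PDE class and formulation in [8] may differ), illustrates the point-to-neighborhood relation and the bootstrapped loss. For $\Delta u(x) = f(x, u(x))$ with $X_t$ a standard Brownian motion started at $x$, Itô's formula yields the generalized Feynman-Kac representation

\[
u(x) = \mathbb{E}\!\left[\, u(X_{\Delta t}) - \frac{1}{2}\int_0^{\Delta t} f\bigl(X_s, u(X_s)\bigr)\,\mathrm{d}s \,\middle|\, X_0 = x \right],
\]

which relates the value of the solution at $x$ to its values in a neighborhood of $x$ without involving any derivatives of $u$. Bootstrapping then freezes the current network parameters $\tilde{\theta}$ to build the target for $u_\theta$:

\[
\mathcal{L}(\theta) = \mathbb{E}_x\!\left[ \left( u_\theta(x) - \mathbb{E}\!\left[\, u_{\tilde{\theta}}(X_{\Delta t}) - \frac{1}{2}\int_0^{\Delta t} f\bigl(X_s, u_{\tilde{\theta}}(X_s)\bigr)\,\mathrm{d}s \,\middle|\, X_0 = x \right] \right)^{2} \right].
\]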
September 28, 2023