Radial Basis Operator Networks

Kurz, Jason, Oughton, Sean, Liu, Shitao

arXiv.org Artificial Intelligence 

Scientific computing has benefited from using operator networks to enhance or replace numerical computation for simulation and forecasting across a wide array of applications, including computational fluid dynamics and weather forecasting [3]. The two primary neural operators that demonstrated immediate success are the deep operator network (DeepONet) [4], based on the universal approximation theorem of [5], and the Fourier neural operator (FNO) [6]. The basic DeepONet approximates an operator by applying a weighted sum to the products of the transformed outputs of two FNN sub-networks. The upper sub-network, or branch net, is applied to the input functions, while the lower trunk net is applied to the query locations of the output function.

In contrast, the FNO is a particular type of Neural Operator network [7], which accepts only input functions (not query locations for the output) and applies a global transformation to the input function via a more intricate architecture. Motivated by fundamental solutions to partial differential equations (PDEs), the FNO sums the output of an integral kernel transformation of the input function with the output of a linear transformation; the sum is then passed through a nonlinear activation function. To accelerate the integral kernel transformation, the FNO applies a Fourier transform (FT) to the input data, with the FT of the integral kernel treated as trainable parameters.
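The two forward passes described above can be sketched in a few lines of NumPy. This is a minimal illustration, not either paper's implementation: the network sizes, sensor count, and randomly initialized (untrained) weights are all assumptions chosen for brevity, and the FNO layer is reduced to a single spectral block with real-valued Fourier multipliers standing in for the trainable kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Tiny fully connected net: tanh hidden layers, linear output."""
    for W, b in weights[:-1]:
        x = np.tanh(x @ W + b)
    W, b = weights[-1]
    return x @ W + b

def init_mlp(sizes):
    """Random (untrained) weights -- for illustration only."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

# --- DeepONet sketch: G(u)(y) ~ sum_k branch_k(u) * trunk_k(y) ---
m, p = 16, 8                                  # sensor count, latent width (assumed)
branch = init_mlp([m, 32, p])                 # branch net: acts on u sampled at m sensors
trunk = init_mlp([1, 32, p])                  # trunk net: acts on a query location y

u_sensors = np.sin(np.linspace(0, np.pi, m))  # example input function samples
y = np.array([[0.3]])                         # one query location
# Weighted sum of products of the two sub-network outputs:
G_uy = mlp(u_sensors[None, :], branch) @ mlp(y, trunk).T
print(G_uy.shape)  # (1, 1): one input function evaluated at one query point

# --- FNO-style spectral layer sketch ---
n = 64
v = np.sin(2 * np.pi * np.arange(n) / n)      # input channel on a uniform grid
W = 0.5                                       # pointwise linear transform
R = rng.standard_normal(n // 2 + 1)           # stands in for the FT of the kernel
# Integral kernel transform in Fourier space, plus linear term, then activation:
v_new = np.maximum(0.0, W * v + np.fft.irfft(R * np.fft.rfft(v), n=n))
print(v_new.shape)  # (64,)
```

The key structural contrast is visible in the two blocks: the DeepONet output depends on both an input function and a query point through an inner product of two sub-network outputs, while the FNO layer maps a gridded function to a gridded function through a pointwise term plus a Fourier-space multiplication.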