Locally adaptive activation functions with slope recovery term for deep and physics-informed neural networks
Jagtap, Ameya D., Kawaguchi, Kenji, Karniadakis, George Em
Ameya D. Jagtap 1, Kenji Kawaguchi 2 and George Em Karniadakis 1,3

1 Division of Applied Mathematics, Brown University, 182 George Street, Providence, RI 02912, USA.
2 Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA.
3 Pacific Northwest National Laboratory, Richland, WA 99354, USA.

Abstract

We propose two approaches to locally adaptive activation functions, namely layer-wise and neuron-wise locally adaptive activation functions, which improve the performance of deep and physics-informed neural networks. The local adaptation of the activation function is achieved by introducing scalable hyper-parameters in each layer (layer-wise) or for every neuron separately (neuron-wise), and then optimizing them with the stochastic gradient descent algorithm. The neuron-wise adaptation acts as a vector activation function, as opposed to the traditional scalar activation function given by fixed, global, and layer-wise activations. To further increase the training speed, a slope recovery term based on the activation slope is added to the loss function, which accelerates convergence and thereby reduces the training cost. In the numerical experiments, a nonlinear discontinuous function is approximated using a deep neural network with layer-wise and neuron-wise locally adaptive activation functions, with and without the slope recovery term, and compared with its global counterpart. Moreover, the solution of the nonlinear Burgers equation, which exhibits steep gradients, is also obtained using the proposed methods. On the theoretical side, we prove that with the proposed method the gradient descent algorithms are not attracted to sub-optimal critical points or local minima under practical conditions on the initialization and the learning rate. Furthermore, the proposed adaptive activation functions with the slope recovery term are shown to accelerate the training process on standard deep learning benchmarks using the CIFAR-10, CIFAR-100, SVHN, MNIST, KMNIST, Fashion-MNIST, and Semeion data sets, with and without data augmentation.

Keywords: Machine learning, bad minima, stochastic gradients, accelerated training, PINN, deep learning benchmarks.

1. Introduction

In recent years, research on neural networks (NNs) has intensified around the world owing to their successful applications in many diverse fields such as speech recognition [13], computer vision [16], and natural language translation [25]. A NN is trained on data sets before being used in actual applications.
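As a concrete illustration of the adaptive-activation idea summarized in the abstract, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes a trainable neuron-wise slope parameter a that scales the pre-activation before a tanh nonlinearity, a fixed scale factor n, and a slope-recovery penalty of the form 1 / mean_k(exp(mean(a_k))) added to the data-fitting loss; the precise scaling and penalty used in the paper may differ, and the target function here is only a placeholder.

```python
import torch
import torch.nn as nn

class AdaptiveTanhLayer(nn.Module):
    """Fully connected layer with a neuron-wise adaptive tanh activation.

    Each output neuron has its own trainable slope parameter `a` that
    scales the pre-activation; `n` is a fixed scale factor (assumed form).
    """
    def __init__(self, in_features, out_features, n=10.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Trainable neuron-wise slopes, initialized so that n * a = 1.
        self.a = nn.Parameter(torch.full((out_features,), 1.0 / n))
        self.n = n

    def forward(self, x):
        return torch.tanh(self.n * self.a * self.linear(x))

def slope_recovery(layers):
    """Illustrative slope-recovery penalty (assumed form):
    reciprocal of the mean over hidden layers of exp(mean(a_k)),
    which decreases as the mean slopes grow."""
    s = torch.stack([torch.exp(layer.a.mean()) for layer in layers]).mean()
    return 1.0 / s

# Usage sketch: two adaptive hidden layers and a linear output layer.
hidden = [AdaptiveTanhLayer(1, 50), AdaptiveTanhLayer(50, 50)]
model = nn.Sequential(*hidden, nn.Linear(50, 1))

x = torch.linspace(-3.0, 3.0, 200).unsqueeze(1)
y = torch.sign(x)  # placeholder discontinuous target for illustration

opt = torch.optim.SGD(model.parameters(), lr=2e-3)
for _ in range(100):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean() + slope_recovery(hidden)
    loss.backward()
    opt.step()
```

The slope parameters are optimized jointly with the weights and biases by the same stochastic gradient descent step, which is what distinguishes the layer-wise and neuron-wise adaptive activations from a fixed global activation.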