Preventing Gradient Explosions in Gated Recurrent Units
Sekitoshi Kanai, Yasuhiro Fujiwara, Sotetsu Iwamura
Neural Information Processing Systems
A gated recurrent unit (GRU) is a successful recurrent neural network architecture for time-series data. GRUs are typically trained with gradient-based methods, which are subject to the exploding gradient problem, in which the gradient increases sharply during training. This problem is caused by an abrupt change in the dynamics of the GRU arising from a small variation in its parameters. In this paper, we find a condition under which the dynamics of the GRU change drastically, and we propose a learning method that addresses the exploding gradient problem by constraining the dynamics of the GRU so that they do not change drastically.