

Breaking the Activation Function Bottleneck through Adaptive Parameterization

Flennerhag, Sebastian, Yin, Hujun, Keane, John, Elliot, Mark

Neural Information Processing Systems

Standard neural network architectures are non-linear only by virtue of a simple element-wise activation function, making them both brittle and excessively large. In this paper, we consider methods for making the feed-forward layer more flexible while preserving its basic structure. We develop simple drop-in replacements that learn to adapt their parameterization conditional on the input, thereby increasing statistical efficiency significantly. We present an adaptive LSTM that advances the state of the art for the Penn Treebank and WikiText-2 word-modeling tasks while using fewer parameters and converging in half as many iterations.
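
The abstract describes feed-forward layers that adapt their parameterization conditional on the input. A minimal sketch of that general idea in PyTorch follows; the multiplicative gating scheme and all names (`AdaptiveLinear`, `policy`) are illustrative assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdaptiveLinear(nn.Module):
    """Feed-forward layer whose effective parameters are modulated
    conditional on the input (illustrative sketch, not the paper's code)."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        # A small "policy" network mapping each input to a
        # per-example rescaling of the output units.
        self.policy = nn.Sequential(
            nn.Linear(in_features, out_features),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Static transformation, rescaled by an input-conditioned gate:
        # the layer's effective parameterization varies with x.
        return self.policy(x) * self.base(x)

# Drop-in usage: replaces nn.Linear in an otherwise standard block.
layer = AdaptiveLinear(128, 256)
y = layer(torch.randn(32, 128))
print(y.shape)  # torch.Size([32, 256])
```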



Reviews: Breaking the Activation Function Bottleneck through Adaptive Parameterization

Neural Information Processing Systems

Notably, an effective adaptive parameterization of the LSTM is proposed. However, no detailed analysis is given to justify the effectiveness of the proposed method, leaving the source of that effectiveness unclear.




Breaking the Activation Function Bottleneck through Adaptive Parameterization

Flennerhag, Sebastian, Yin, Hujun, Keane, John, Elliot, Mark

arXiv.org Machine Learning

Standard neural network architectures are non-linear only by virtue of a simple element-wise activation function, making them both brittle and excessively large. In this paper, we consider methods for making the feed-forward layer more flexible while preserving its basic structure. We develop simple drop-in replacements that learn to adapt their parameterization conditional on the input, thereby increasing statistical efficiency significantly. We present an adaptive LSTM that advances the state of the art for the Penn Treebank and WikiText-2 word-modeling tasks while using fewer parameters and converging in less than half as many iterations.
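
As a rough illustration of how such an adaptive layer could be dropped into a recurrent model, the sketch below rescales an LSTM cell's input conditional on that input before the standard update; this is a simplification under assumed design choices, not the paper's adaptive LSTM, and the names (`AdaptiveLSTMCell`, `policy`) are hypothetical.

```python
import torch
import torch.nn as nn

class AdaptiveLSTMCell(nn.Module):
    """LSTM cell whose input projection is rescaled by an
    input-conditioned gate (illustrative, not the authors' model)."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        # Hypothetical adaptation policy: one scale per input feature.
        self.policy = nn.Sequential(
            nn.Linear(input_size, input_size),
            nn.Sigmoid(),
        )

    def forward(self, x, state):
        # Adapt the input before the standard LSTM update.
        return self.cell(self.policy(x) * x, state)

cell = AdaptiveLSTMCell(64, 128)
h = torch.zeros(8, 128)
c = torch.zeros(8, 128)
h, c = cell(torch.randn(8, 64), (h, c))
print(h.shape)  # torch.Size([8, 128])
```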