Bayesian Compression for Natural Language Processing
Nadezhda Chirkova, Ekaterina Lobacheva, Dmitry Vetrov
In natural language processing, many tasks are successfully solved with recurrent neural networks, but such models have a huge number of parameters. The majority of these parameters are often concentrated in the embedding layer, whose size grows proportionally to the vocabulary length. We propose a Bayesian sparsification technique for RNNs that allows compressing the RNN by dozens or hundreds of times without time-consuming hyperparameter tuning. We also generalize the model to vocabulary sparsification, filtering out unnecessary words and compressing the RNN even further. We show that the choice of retained words is interpretable.

1 Introduction

Recurrent neural networks (RNNs) are among the most powerful models for natural language processing, speech recognition, and question-answering systems (Chan et al., 2016; Ha et al., 2017; Wu et al., 2016; Ren et al., 2015).
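As a rough illustration of the kind of pruning rule such Bayesian sparsification builds on (this is a hedged sketch of the sparse variational dropout criterion of Molchanov et al., 2017, not the authors' exact method; the function name, threshold value, and toy numbers are all hypothetical):

```python
import numpy as np

def prune_mask(mu: np.ndarray, log_sigma2: np.ndarray, thresh: float = 3.0) -> np.ndarray:
    """Sparse-variational-dropout-style pruning decision (illustrative).

    Each weight has a learned posterior mean `mu` and log-variance
    `log_sigma2`. The per-weight dropout rate is alpha = sigma^2 / mu^2;
    weights with large log(alpha) are effectively noise and can be pruned.
    Returns a boolean mask: True = keep the weight, False = prune it.
    """
    log_alpha = log_sigma2 - np.log(mu ** 2 + 1e-8)  # log(sigma^2 / mu^2)
    return log_alpha < thresh

# Toy example: weights whose means are tiny relative to their learned
# variance get pruned, yielding the compression ratio.
mu = np.array([1.0, 0.01, -0.5, 1e-4])
log_sigma2 = np.array([-2.0, -2.0, -2.0, -2.0])
mask = prune_mask(mu, log_sigma2)
compression = mu.size / max(int(mask.sum()), 1)
```

With the toy values above, the two weights whose magnitudes are negligible next to their variance are dropped, halving the layer; in the paper's setting the same style of criterion, applied network-wide (and to whole embedding rows for vocabulary sparsification), is what yields the dozens-to-hundreds-of-times compression.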
Oct-25-2018