Lottery Tickets in Linear Models: An Analysis of Iterative Magnitude Pruning
Bryn Elesedy, Varun Kanade, Yee Whye Teh
The lottery ticket hypothesis [Frankle and Carbin, 2019] asserts that a randomly initialised, densely connected feed-forward neural network contains a sparse sub-network that, when trained in isolation, attains accuracy equal to or higher than that of the full network. The method used to find these sub-networks is iterative magnitude pruning (IMP): the network is given a random initialisation, trained by some form of gradient descent for a specified number of iterations, and then a proportion of its smallest weights (by absolute magnitude) is deleted. The remaining weights are reset to their initialised values and the network is retrained. This procedure can be repeated multiple times, yielding a sequence of increasingly sparse yet still trainable sub-networks.
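The prune-and-rewind loop described in the abstract can be stated concretely. The following is a minimal sketch of IMP for a linear model with squared-error loss, assuming plain NumPy and full-batch gradient descent; the function names (`train`, `imp`), learning rate, pruning fraction, number of rounds, and synthetic data are illustrative assumptions, not details taken from the paper.

```python
# Minimal IMP sketch on a linear model (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def train(w, mask, X, y, lr=0.1, steps=500):
    """Full-batch gradient descent on squared error; pruned weights stay inactive."""
    for _ in range(steps):
        grad = X.T @ (X @ (w * mask) - y) / len(y)
        w = w - lr * grad * mask
    return w

def imp(X, y, rounds=2, prune_frac=0.5):
    d = X.shape[1]
    w_init = rng.normal(size=d)   # random initialisation, kept for rewinding
    mask = np.ones(d)
    for _ in range(rounds):
        # Rewind surviving weights to their initial values, then retrain.
        w = train(w_init.copy(), mask, X, y)
        # Prune a fraction of the smallest surviving weights by magnitude.
        alive = np.flatnonzero(mask)
        k = int(np.ceil(prune_frac * len(alive)))
        smallest = alive[np.argsort(np.abs(w[alive]))[:k]]
        mask[smallest] = 0.0
    return mask, w_init

# Usage example: noiseless data from a sparse linear model.
X = rng.normal(size=(200, 20))
w_true = np.zeros(20)
w_true[:5] = rng.normal(size=5)
y = X @ w_true
mask, w0 = imp(X, y)
print("surviving coordinates:", np.flatnonzero(mask))
```

Each round retrains from the same random initialisation (the "rewind"), so the returned mask together with `w_init` defines the sparse sub-network that would be trained in isolation.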
Aug-6-2020