Proving the Lottery Ticket Hypothesis: Pruning is All You Need
Malach, Eran, Yehudai, Gilad, Shalev-Shwartz, Shai, Shamir, Ohad
Neural network pruning is a popular method for reducing the size of a trained model, allowing efficient computation at inference time with minimal loss in accuracy. However, this method still requires training an over-parameterized network first, as training a pruned network from scratch seems to fail (see [10]). Recently, a work by Frankle and Carbin [10] presented a surprising phenomenon: pruned neural networks can be trained to achieve good performance when their weights are reset to their initial values. Hence, the authors state the lottery ticket hypothesis: a randomly-initialized neural network contains a subnetwork that, when trained in isolation, can match the performance of the original network. This observation has attracted great interest, with various follow-up works trying to understand this intriguing phenomenon. Specifically, very recent works by Zhou et al. [37] and Ramanujan et al. [27] presented algorithms to find subnetworks that already achieve good performance, without any training.
Feb-3-2020
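The prune-and-reset procedure described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it magnitude-prunes a single weight matrix after a simulated "training" step, then resets the surviving weights to their initial values to obtain the "winning ticket". The function name, pruning fraction, and the stand-in for training are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lottery_ticket(w_init, w_trained, prune_frac=0.8):
    """Magnitude-prune the trained weights, then reset the surviving
    weights to their initial values (the 'winning ticket')."""
    k = int(prune_frac * w_trained.size)          # number of weights to prune
    flat = np.abs(w_trained).ravel()
    cutoff = np.partition(flat, k)[k]             # magnitude of smallest survivor
    mask = (np.abs(w_trained) >= cutoff).astype(w_init.dtype)
    return mask, w_init * mask                    # sparse subnetwork at its original init

w_init = rng.normal(size=(4, 4))
# Stand-in for training: in practice w_trained comes from SGD on a real task.
w_trained = w_init + rng.normal(scale=0.5, size=(4, 4))

mask, ticket = lottery_ticket(w_init, w_trained)
print(f"kept {mask.sum() / mask.size:.0%} of the weights")
```

In the actual lottery ticket experiments the mask is computed from a fully trained network (often over several iterative prune-retrain rounds), and the masked subnetwork is then retrained from the original initialization; this sketch only shows the masking and reset step.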