lottery ticket initialization
One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers
The success of lottery ticket initializations (Frankle and Carbin, 2019) suggests that small, sparsified networks can be trained so long as the network is initialized appropriately. Unfortunately, finding these "winning ticket" initializations is computationally expensive. One potential solution is to reuse the same winning tickets across a variety of datasets and optimizers. However, the generality of winning ticket initializations remains unclear. Here, we attempt to answer this question by generating winning tickets for one training configuration (optimizer and dataset) and evaluating their performance on another configuration. Perhaps surprisingly, we found that, within the natural images domain, winning ticket initializations generalized across a variety of datasets, including Fashion MNIST, SVHN, CIFAR-10/100, ImageNet, and Places365, often achieving performance close to that of winning tickets generated on the same dataset. Moreover, winning tickets generated using larger datasets consistently transferred better than those generated using smaller datasets. We also found that winning ticket initializations generalize across optimizers with high performance. These results suggest that winning ticket initializations generated by sufficiently large datasets contain inductive biases generic to neural networks more broadly, which improve training across many settings and provide hope for the development of better initialization methods.
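For readers unfamiliar with the transfer setup the abstract describes, the core procedure can be sketched as follows. This is a minimal illustration, not the authors' code: `winning_ticket_mask`, the magnitude-pruning criterion, the 80% sparsity level, and the weight shapes are all illustrative assumptions. A winning ticket is the pruning mask (found by training on a source dataset) applied to the network's *original* initialization; transfer means reusing that masked init on a different dataset or optimizer.

```python
import numpy as np

def winning_ticket_mask(trained_weights, sparsity):
    """Magnitude pruning: keep the largest-|w| fraction (1 - sparsity) of weights.

    This one-shot criterion is an illustrative stand-in; the lottery ticket
    literature typically uses iterative magnitude pruning.
    """
    flat = np.abs(trained_weights).ravel()
    k = int(flat.size * sparsity)
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k)[k] if k > 0 else -np.inf
    return (np.abs(trained_weights) > threshold).astype(np.float32)

rng = np.random.default_rng(0)

# Original (pre-training) initialization of one layer -- hypothetical shape.
init = rng.normal(size=(256, 128)).astype(np.float32)

# Stand-in for weights after training on the SOURCE dataset; in practice
# these come from actually training the dense network.
trained = init + rng.normal(scale=0.1, size=init.shape).astype(np.float32)

# The "winning ticket": the mask found on the source config, applied to the
# ORIGINAL init (not the trained weights). This masked init is what gets
# transferred to the target dataset/optimizer and trained from scratch.
mask = winning_ticket_mask(trained, sparsity=0.8)
ticket = mask * init
```

The key point the paper probes is whether `mask` (found on one dataset/optimizer) still yields a trainable sparse network when `ticket` is retrained under a different configuration.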
Reviews: One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers
I think that the finding that LT can generalise (I use the word "can" because it does not seem that this is true consistently) is an interesting one, and with some changes, this paper would deserve publication at a top venue like NeurIPS. However, I think we still see things differently on two points. Firstly, I do not believe that comparison to existing algorithms is orthogonal to the topic of this paper. You claim that "... we may be able to generate new initialization schemes which can substantially improve training of neural networks from scratch" and I agree, but the point I am making is that there are other ways of obtaining a better initialisation (e.g., unsupervised pretraining and/or layer-wise pretraining) which are known to improve performance and speed up convergence, some of them using less computation than is required to generate a lottery ticket. I view your algorithm as yet another way of generating a good init using some data, which yields good performance, potentially with other benefits like compression, after some amount of fine-tuning (the fact that LT is trained from scratch and thus requires more fine-tuning than using trained weights seems like a drawback, not an advantage, from this viewpoint).