swapout
Reviews: Swapout: Learning an ensemble of deep architectures
Why can't you estimate the test-time statistics empirically on a validation set? I really appreciate the tidbits on why dropout and swapout interact poorly with batch normalization. It's useful to know that you don't have to average over very many sampled dropouts (swapouts). I think this is a neat additional analysis and rather useful to the community. Why do the authors first do exactly 196 and then 224 epochs before decaying the learning rate? Normally such specific choices would arouse suspicion, but in this case I expect it doesn't make much difference (e.g. between 196 and a round number like 200).
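The averaging the review refers to can be illustrated with a minimal numpy sketch: run several stochastic forward passes with a Bernoulli keep-mask (the dropout-style mechanism swapout builds on) and average the results, rather than using the deterministic expected-value scaling. The function names and shapes here are hypothetical, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, keep_prob=0.8):
    # One stochastic pass: apply an independent Bernoulli keep-mask per unit,
    # as done during dropout/swapout training.
    mask = rng.random(x.shape) < keep_prob
    return mask * x

def mc_inference(x, n_samples=30, keep_prob=0.8):
    # Monte Carlo inference: average a handful of stochastic forward passes.
    # The review notes that surprisingly few samples suffice in practice.
    samples = [stochastic_forward(x, keep_prob) for _ in range(n_samples)]
    return np.mean(samples, axis=0)

x = np.ones(1000)
est = mc_inference(x)   # sampling-based estimate
det = 0.8 * x           # deterministic expected-value scaling, for comparison
```

For a single linear mask like this, the Monte Carlo estimate converges to the deterministic scaling; the paper's point is that with batch normalization in the pipeline the deterministic approximation breaks down, so sampling is the safer route.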
Swapout: Learning an ensemble of deep architectures
Singh, Saurabh, Hoiem, Derek, Forsyth, David
We describe Swapout, a new stochastic training method that outperforms ResNets of identical network structure, yielding impressive results on CIFAR-10 and CIFAR-100. Swapout samples from a rich set of architectures including dropout, stochastic depth and residual architectures as special cases. When viewed as a regularization method, swapout inhibits co-adaptation of units not only within a layer, similar to dropout, but also across network layers. We conjecture that swapout achieves strong regularization by implicitly tying the parameters across layers. When viewed as an ensemble training method, it samples a much richer set of architectures than existing methods such as dropout or stochastic depth. We propose a parameterization that reveals connections to existing architectures and suggests a much richer set of architectures to be explored. We show that our formulation suggests an efficient training method and validate our conclusions on CIFAR-10 and CIFAR-100, matching state-of-the-art accuracy. Remarkably, our 32 layer wider model performs similarly to a 1001 layer ResNet model.
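The "rich set of architectures" claim can be sketched concretely. Swapout computes, per unit, y = m1 ⊙ x + m2 ⊙ F(x) with independent Bernoulli masks m1, m2; the choice of mask probabilities recovers the special cases the abstract names. This is a hedged numpy illustration of that formulation, not the authors' reference implementation, and `swapout_unit` is a hypothetical helper name.

```python
import numpy as np

rng = np.random.default_rng(0)

def swapout_unit(x, f_x, theta1, theta2):
    # y = m1 * x + m2 * F(x), with independent per-unit Bernoulli masks.
    # theta1, theta2 are the keep probabilities for the identity and
    # residual-branch masks respectively.
    m1 = rng.random(x.shape) < theta1
    m2 = rng.random(x.shape) < theta2
    return m1 * x + m2 * f_x

x = np.ones(4)
f_x = 2 * np.ones(4)  # stand-in for the residual branch output F(x)

residual = swapout_unit(x, f_x, theta1=1.0, theta2=1.0)  # plain ResNet unit
identity = swapout_unit(x, f_x, theta1=1.0, theta2=0.0)  # layer skipped (stochastic depth)
dropped_f = swapout_unit(x, f_x, theta1=0.0, theta2=1.0)  # dropout applied to F(x) only
```

Intermediate probabilities mix all four per-unit behaviors (keep both, keep either one, or drop both) independently at every unit, which is why swapout samples a strictly richer architecture family than dropout or stochastic depth alone.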
[1605.06465] Swapout: Learning an ensemble of deep architectures (samples from dropout, ResNets, stochastic depth) • /r/MachineLearning
"Remarkably, our 32 layer wider model performs similar to a 1001 layer ResNet model."

Could this mean potentially ResNet-like accuracy on less advanced infrastructure (e.g. …)?

This is dependent on parameter count, and ResNets have few parameters. The authors show swapout is competitive with ResNets using approximately the same number of parameters, though.

It is poor advertising for a good paper.