Fast Benchmarking of Accuracy vs. Training Time with Cyclic Learning Rates
Jacob Portes, Davis Blalock, Cory Stephenson, Jonathan Frankle
Benchmarking the tradeoff between neural network accuracy and training time is computationally expensive. Here we show how a multiplicative cyclic learning rate schedule can be used to construct a tradeoff curve in a single training run. We generate cyclic tradeoff curves for combinations of training methods such as Blurpool, Channels Last, Label Smoothing, and MixUp, and highlight how these cyclic tradeoff curves can be used to efficiently evaluate the effects of algorithmic choices on network training.

To make meaningful improvements in neural network training efficiency, ML practitioners must be able to compare different choices of network architectures, hyperparameters, and training algorithms. One straightforward way to do this is to characterize the tradeoff between accuracy and training time with a "tradeoff curve." Tradeoff curves can be generated by varying the length of training for each model configuration; longer training runs take more time but tend to reach higher quality (Figure 1C). For a fixed model and task configuration, this method of generating tradeoff curves yields an estimate of the theoretical Pareto frontier, i.e. the set of best possible tradeoffs between training time and accuracy, where any further attempt to improve one of these metrics worsens the other.
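The exact schedule used in the paper is not specified in this excerpt. As a rough illustration of the idea, the sketch below uses PyTorch's `CosineAnnealingWarmRestarts` (in the spirit of SGDR-style warm restarts) with multiplicatively growing cycle lengths, so the learning rate is annealed to a low value at the end of every cycle and a (training time, accuracy) point can be recorded at each cycle boundary to trace a tradeoff curve in one run. The model, training step, and evaluation here are placeholders, not the paper's setup.

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

# Toy stand-ins; any model, training loop, and eval routine could be substituted.
model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Cosine annealing with warm restarts: cycle lengths grow by T_mult at each
# restart (T_0, T_0*T_mult, T_0*T_mult**2, ... epochs), so the network reaches
# a low learning rate, and hence a "converged" state, at every cycle boundary.
T_0, T_mult = 4, 2
sched = CosineAnnealingWarmRestarts(opt, T_0=T_0, T_mult=T_mult)

tradeoff_curve = []  # (cumulative epochs, accuracy) pairs, one per cycle
epoch = 0
for cycle in range(4):
    for _ in range(T_0 * T_mult ** cycle):  # epochs in this cycle
        opt.step()    # placeholder for one epoch of training
        sched.step()  # advance the cyclic learning rate schedule
        epoch += 1
    acc = 0.0  # placeholder: evaluate(model) on held-out data
    tradeoff_curve.append((epoch, acc))
```

Each entry in `tradeoff_curve` then plays the role of one point on the accuracy-vs-training-time curve that would otherwise require a separate training run of that length.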
arXiv.org Artificial Intelligence
Nov-10-2022