Demystifying Learning Rate Policies for High Accuracy Training of Deep Neural Networks

Yanzhao Wu, Ling Liu, Juhyun Bae, Ka-Ho Chow, Arun Iyengar, Calton Pu, Wenqi Wei, Lei Yu, Qi Zhang

arXiv.org Machine Learning 

IBM T. J. Watson Research Center, Yorktown Heights, NY, USA

Abstract -- Learning Rate (LR) is an important hyper-parameter to tune for effective training of deep neural networks (DNNs). Even for the baseline of a constant learning rate, it is nontrivial to choose a good constant value for training a DNN. Dynamic learning rates involve multi-step tuning of LR values at various stages of the training process and offer high accuracy and fast convergence. However, they are much harder to tune. In this paper, we present a comprehensive study of 13 learning rate functions and their associated LR policies by examining their range parameters, step parameters, and value update parameters. We propose a set of metrics for evaluating and selecting LR policies, including the classification confidence, variance, cost, and robustness, and implement them in LRBench, an LR benchmarking system. LRBench can assist end-users and DNN developers in selecting good LR policies and avoiding bad ones for training their DNNs. We tested LRBench on Caffe, an open-source deep learning framework, to showcase the tuning optimization of LR policies. Through extensive experiments, we demystify the tuning of LR policies by identifying good LR policies with effective LR value ranges and step sizes for LR update schedules.

INTRODUCTION

Deep neural networks (DNNs) are widely employed to mine Big Data and gain deep insights from it, with applications ranging from image classification and voice recognition to text mining and natural language processing (NLP). One of the most important performance optimizations for DNNs is to train a deep learning model capable of achieving high test accuracy.
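To make the abstract's parameter taxonomy concrete, the sketch below expresses three representative LR policies in Python, grouped by the three kinds of parameters the paper studies: the value range the LR may take, the step schedule that decides when it changes, and the update rule that decides how it changes. This is a hypothetical illustration, not LRBench's or Caffe's actual API; the names k0, k1, gamma, and l are our own labels for the range, update, and step parameters.

```python
import math

# Hypothetical sketch of three representative LR policies (not LRBench's
# or Caffe's actual API), parameterized by range (k0, k1), step size (l),
# and value-update factor (gamma).

def fixed_lr(t, k0=0.01):
    """Baseline: a constant LR at every iteration t."""
    return k0

def step_lr(t, k0=0.01, gamma=0.1, l=30):
    """Step decay: scale the LR by gamma after every l iterations."""
    return k0 * gamma ** (t // l)

def triangular_lr(t, k0=0.001, k1=0.006, l=2000):
    """Cyclic (triangular) LR: oscillate linearly between k0 and k1
    with a half-cycle of l iterations."""
    cycle = math.floor(1 + t / (2 * l))
    x = abs(t / l - 2 * cycle + 1)
    return k0 + (k1 - k0) * max(0.0, 1.0 - x)

# Example: the step policy drops the LR by 10x at t = 30.
print(step_lr(29), step_lr(30))  # 0.01, then ~0.001
```

Tuning an LR policy then amounts to searching over such (k0, k1, l, gamma) choices for each function, which is why comparative metrics such as accuracy, cost, and robustness are needed to rank candidate policies.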
