Demystifying Learning Rate Policies for High Accuracy Training of Deep Neural Networks

Yanzhao Wu+, Ling Liu+, Juhyun Bae+, Ka-Ho Chow+, Arun Iyengar∗, Calton Pu+, Wenqi Wei+, Lei Yu∗, Qi Zhang∗
+School of Computer Science, Georgia Institute of Technology, Atlanta, GA, USA
∗IBM Thomas J. Watson Research, Yorktown Heights, NY, USA

Abstract—Learning Rate (LR) is an important hyperparameter to tune for effective training of deep neural networks (DNNs). Even for the baseline of a constant learning rate, it is non-trivial to choose a good constant value for training a DNN. Dynamic learning rates involve multi-step tuning of LR values at various stages of the training process and offer high accuracy and fast convergence. However, they are much harder to tune. In this paper, we present a comprehensive study of 13 learning rate functions and their associated LR policies by examining their range parameters, step parameters, and value update parameters. We propose a set of metrics for evaluating and selecting LR policies, including the classification confidence, variance, cost, and robustness, and implement them in LRBench, an LR benchmarking system. LRBench can assist end-users and DNN developers in selecting good LR policies and avoiding bad LR policies for training their DNNs. We tested LRBench on Caffe, an open-source deep learning framework, to showcase the tuning optimization of LR policies. Through extensive experiments, we attempt to demystify the tuning of LR policies by identifying good LR policies with effective LR value ranges and step sizes for LR update schedules.

Index Terms—Neural Networks, Learning Rates, Deep Learning, Training

I. INTRODUCTION

Deep neural networks (DNNs) are widely employed to mine Big Data and gain deep insight into Big Data, ranging from image classification and voice recognition to text mining and Natural Language Processing (NLP). One of the most important performance optimizations for DNNs is to train a deep learning model capable of achieving high test accuracy.

Training a deep neural network (DNN) is an iterative global optimization problem. During each iteration, a loss function is employed to measure the deviation of the model's predictions from the ground truth and to update the model parameters θ, aiming to minimize this loss function L_θ. Gradient Descent (GD) is a popular class of non-linear optimization algorithms that iteratively learn to minimize such a loss function [1]–[5]. Example GD optimizers include Stochastic GD (SGD) [6], Momentum [7], Nesterov [8], Adagrad [9], AdaDelta [10], Adam [11], and so forth. We consider SGD for updating the parameters θ of a DNN by Formula (1):

θ_t = θ_{t−1} − η_t ∇L_θ    (1)

where L_θ is the loss function, ∇L_θ represents its gradients, t is the current iteration, and η_t is the learning rate, which controls the extent of the update to the parameters θ at iteration t. Choosing a proper learning rate can be difficult. A too-small learning rate may lead to slow convergence, while a too-large learning rate can deter convergence and cause the loss function to fluctuate, get stuck in a local minimum, or even diverge [12]–[14].
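To make Formula (1) concrete, here is a minimal NumPy sketch of the SGD update. It is our own illustration (the function name sgd_step and the toy quadratic loss are not from the paper), not code from the paper or from LRBench:

```python
import numpy as np

def sgd_step(theta, grad, lr):
    """One SGD update per Formula (1): theta_t = theta_{t-1} - eta_t * grad(L_theta)."""
    return theta - lr * grad

# Toy example: minimize L(theta) = 0.5 * ||theta||^2, whose gradient is theta itself.
theta = np.array([1.0, -2.0])
for t in range(5):
    grad = theta                              # gradient of the toy loss at the current theta
    theta = sgd_step(theta, grad, lr=0.1)     # constant learning rate eta_t = 0.1
    print(t, theta)
```

With a constant lr, the same step scale is used throughout training; the LR policies studied in this paper replace this constant with a schedule for η_t.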
Due to the difficulty of determining a good LR policy, the constant LR is the baseline default LR policy for training DNNs in several deep learning (DL) frameworks (e.g., Caffe, TensorFlow, Torch). Numerous efforts have been engaged to enhance the constant LR by incorporating a multi-step dynamic learning rate schedule, which attempts to adjust the learning rate during different stages of DNN training by using some type of annealing technique. However, good LR schedules need to adapt to the characteristics of different datasets and/or different neural network models [15]–[18]. In practice, empirical approaches are used to manually find good LR values and update schedules through trial and error. Moreover, due to the lack of relevant systematic studies and analyses, the large search space of LR parameters often results in huge costs for this hand-tuning process, impairing the efficiency and performance of DNN training.

In this paper, we present a comprehensive study of 13 learning rate functions and their associated LR policies by examining their range parameters, step parameters, and value update parameters. We propose a set of metrics for evaluating, comparing and selecting good LR policies in terms of LR utility, robustness and cost, and for avoiding bad LR policies. We develop an LR benchmarking system, called LRBench, to support automated or semi-automated evaluation and selection of good LR policies for DNN developers and end-users. To demystify LR tuning, we implement all 13 LR functions and the LR policy evaluation metrics in LRBench, provide analytical and experimental guidelines for identifying and selecting good policies for each LR function, and illustrate why some LR policies are not recommended. We conduct comprehensive experiments using MNIST and CIFAR-10 and three popular neural networks, LeNet [1], a 3-layer CNN (CNN3) [19] and ResNet [20], to demonstrate the accuracy improvement obtained by choosing good LR policies and avoiding bad ones, as well as the impact of dataset- and model-specific characteristics on LR tuning.

TABLE I
DEFINITION OF THE 13 LEARNING RATE FUNCTIONS

| Category | Learning Rate | abbr. | k0 | k1 | g(t) | Schedule | #Param |
| Fixed | fixed | FIX | k0 | k0 | N/A | fixed | 1 |
| Decaying LR | fixed step size | STEP | k0 | 0 | γ^floor(t/l) | t, l | 3 |
| Decaying LR | variable step sizes | NSTEP | k0 | 0 | γ^i s.t. l_{i−1} ≤ t < l_i, given n step sizes l_0, l_1, ..., l_{n−1} | t, l_i | n + 2 |
| Decaying LR | exponential function | EXP | k0 | 0 | γ^t | t | 2 |
| Decaying LR | inverse time | INV | k0 | 0 | 1/(1 + tγ)^p | t | 3 |
| Decaying LR | polynomial function | POLY | k0 | 0 | (1 − t/l)^p, l = max_iter | t, l | 3 |
| CLR | triangular | TRI | k0 | k1 | TRI(t) = (2/π)·abs(arcsin(sin(πt/(2l)))) | t, l | 3 |
| CLR | triangular2 | TRI2 | k0 | k1 | TRI(t)/2^floor(t/(2l)) | t, l | 3 |
| CLR | triangular exp | TRIEXP | k0 | k1 | γ^t·TRI(t) | t, l | 4 |
| CLR | sin | SIN | k0 | k1 | SIN(t) = abs(sin(πt/(2l))) | t, l | 3 |
| CLR | sin2 | SIN2 | k0 | k1 | SIN(t)/2^floor(t/(2l)) | t, l | 3 |
| CLR | sin exp | SINEXP | k0 | k1 | γ^t·SIN(t) | t, l | 4 |
| CLR | cosine | COS | k0 | k1 | COS(t) = (1/2)(1 + cos(πt/l)) | t, l | 3 |
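To complement Table I, the Python sketch below transcribes a representative subset of the g(t) definitions into code. This is our own illustration written directly from the table (it is not the LRBench implementation), and the parameter values in the usage example are illustrative rather than taken from the paper's experiments. For decaying LRs the table's convention gives η(t) = k0·g(t); for the cyclic functions only g(t) itself is shown, since it oscillates in [0, 1] over a half-cycle of length l.

```python
import math

# Decaying LR functions from Table I; the LR at iteration t is eta(t) = k0 * g(t).
def step_g(t, gamma, l):
    """STEP: multiply by gamma every l iterations."""
    return gamma ** math.floor(t / l)

def exp_g(t, gamma):
    """EXP: exponential decay."""
    return gamma ** t

def inv_g(t, gamma, p):
    """INV: inverse-time decay."""
    return 1.0 / (1.0 + t * gamma) ** p

def poly_g(t, p, max_iter):
    """POLY: polynomial decay, reaching 0 at t = max_iter."""
    return (1.0 - t / max_iter) ** p

# Two cyclic LR functions from Table I; g(t) oscillates in [0, 1] with half-cycle l.
def tri_g(t, l):
    """TRI: triangular wave, 0 at t = 0, peaking at 1 when t = l."""
    return (2.0 / math.pi) * abs(math.asin(math.sin(math.pi * t / (2.0 * l))))

def cos_g(t, l):
    """COS: cosine wave, 1 at t = 0, reaching 0 at t = l."""
    return 0.5 * (1.0 + math.cos(math.pi * t / l))

# Illustrative STEP policy: k0 = 0.1, gamma = 0.1, step size l = 10000 (values are ours).
k0 = 0.1
for t in (0, 5000, 10000, 20000):
    print(t, k0 * step_g(t, gamma=0.1, l=10000))
```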
II. CHARACTERIZATION OF LEARNING RATES

The learning rate (η), as a global hyperparameter, determines the size of the steps that a GD optimizer takes along the direction of the slope of the surface derived from the loss function, moving downhill until reaching a (local) minimum (valley). Small LRs slow down the training progress, but are also more desirable when the loss keeps getting worse or oscillates wildly. Large LRs accelerate the training progress and are beneficial when the loss is dropping fairly consistently over several consecutive iterations. However, a large LR value may also cause the training to get stuck at a local minimum, preventing the model from improving and resulting in either low accuracy or failure to converge. Ideally, one wishes to have a suitable LR policy that gradually improves the model and leads to high accuracy at the end of training. However, finding such an optimal LR policy is difficult: typically, numerous experiments need to be performed through trial and error to find the best values. Thus, manual tuning to find a good LR policy is rare in practice, and using a very small constant learning rate (e.g., 0.01 for MNIST and 0.001 for CIFAR-10) becomes a not-bad default LR policy.

Several orthogonal and yet complementary efforts have been engaged to address the problem of finding good LR values. Table I provides the definitions of the 13 LR functions used in our study. Each LR function consists of one or more parameters, together with an LR update function g(t) ∈ [0, 1] that defines the specific rule for updating the LR values. We classify the 13 LR functions into three categories: fixed, decaying and cyclic. (1) The fixed LR is defined by η = k0 with k1 = k0, and it has only one parameter, k0, with a constant value from the domain (0, 1). Although using a relatively small fixed LR value is a conservative approach to avoiding bad LR policies, it has the high cost of slow convergence and long training time, combined with little guarantee of high accuracy; numerous efforts have therefore been engaged in developing decaying LRs and cyclic LRs. (2) Decaying Learning Rates use different forms of decay functions to improve the convergence speed of DNN training by LR annealing. The STEP function defines the LR annealing schedule with a fixed step size, such as an update every l iterations (l > 1). It starts with a relatively large LR value, defined by k0, and decays the LR value over time, for example by an exponential factor γ, as defined by the STEP function g(t). Instead of a fixed step size, the NSTEP function uses variable step sizes, say l_0, l_1, l_2, ..., l_{n−1}. The LR value is initialized to k0 (when i = 0) and computed as k0·γ^i (when i > 0 and l_{i−1} ≤ t < l_i) (recall Table I). Note that the variable step sizes are pre-defined annealing schedules for decaying the LR value over time. For the other decaying LRs, g(t) is a decaying function, such as the exponential function, the inverse time function, or the polynomial function. All decaying LRs can be formulated simply as η(t) = k0·g(t).
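As a worked example of the NSTEP rule above (LR initialized to k0 for t < l_0, then k0·γ^i while l_{i−1} ≤ t < l_i), the following sketch is our own illustration; the step boundaries and the γ value are made up for the example, not taken from the paper's experiments:

```python
def nstep_lr(t, k0, gamma, steps):
    """NSTEP (Table I): eta(t) = k0 * gamma^i, where i is the number of
    pre-defined step boundaries l_0 < l_1 < ... < l_{n-1} already passed."""
    i = sum(1 for l in steps if t >= l)   # i = 0 while t < l_0
    return k0 * (gamma ** i)

# Hypothetical schedule: drop the LR by 10x at iterations 30000 and 45000.
steps = [30000, 45000]
for t in (0, 15000, 30000, 40000, 45000, 60000):
    print(t, nstep_lr(t, k0=0.1, gamma=0.1, steps=steps))
```

Printing the values shows the piecewise-constant decay 0.1 → 0.01 → 0.001 as t crosses the two boundaries.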
