
International Journal on Advanced Science, Engineering and Information Technology, Vol. 7 (2017) No. 5, ISSN: 2088-5334

An Optimized Back Propagation Learning Algorithm with Adaptive Learning Rate

Nazri Mohd Nawi#1, Faridah Hamzah#, Norhamreeza Abdul Hamid#, Muhammad Zubair Rehman*, Mohammad Aamir#, Azizul Azhar Ramli#

#Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia, 86400, Johor, Malaysia
*Department of Computer Science and Information Technology, University of Lahore, Islamabad Campus, Pakistan
1E-mail: [email protected]

Abstract — Back Propagation (BP) is a commonly used algorithm for training multilayer feed-forward artificial neural networks. However, BP is inherently slow in learning and sometimes gets trapped at local minima. These problems occur mainly due to a constant, non-optimum learning rate (a fixed step size), whose value is set before training and kept fixed from the input layer to the output layer. This fixed learning rate often leads the BP network towards failure during steepest descent. To overcome these limitations of BP, this paper introduces an improvement to back propagation gradient descent with adaptive learning rate (BPGD-AL) by changing the value of the learning rate locally during the learning process. Simulation results on selected benchmark datasets show that the adaptive learning rate significantly improves the learning efficiency of the Back Propagation algorithm.

Keywords— back propagation; classification; momentum; adaptive learning rate; local minima; gradient descent

I. INTRODUCTION

Research on artificial neural networks (ANN) is very popular nowadays and has made considerable progress in recent years. Among the most popular tasks using ANN are pattern recognition, forecasting, and regression problems [1-2]. ANNs are diagnostic techniques modelled on the neurological functions of the human brain. Basically, ANNs work by processing information like the biological neurons in the brain, and they consist of small processing units known as artificial neurons. Artificial neurons can be trained to perform complex calculations; an artificial neuron can also be trained to store, recognize, estimate and adapt to new patterns without prior information about the function it receives. This ability to learn and adapt has made ANNs superior to conventional methods [7].

The basic structure of an ANN consists of an input layer, one or more hidden layers and an output layer of neurons, where every node in a layer is connected to every node in the adjacent layer. The most popular algorithm used in ANN is the Back Propagation (BP) algorithm [6]. The BP algorithm learns by calculating the errors of the output layer to find the errors in the hidden layers. This qualitative ability is the main reason why BP is highly suitable for problems in which no relationship is found between the output and the input.

Despite its popularity and its successful solutions, BP is also known for some drawbacks. The main drawback of BP is that it uses gradient descent (GD) learning, which requires careful selection of parameters such as the network topology, the initial weights and biases, the learning rate, the activation function and the value of the gain in the activation function [7]. Despite all those drawbacks, the popularity and the ability of back propagation learning are still increasing, essentially because it is robust and suitable for problems in which no relationship is found between the output and the inputs. Moreover, until today many researchers are still focusing on improving the BP algorithm; this includes optimizing parameters such as the momentum, the activation function and the learning rate, as done in the past [3,4,5].

In the standard BP learning process, when the algorithm successfully computes the correct values of the weights, it can converge faster to the solution; otherwise, convergence might be slower or the process might even diverge. To prevent this problem, the step of gradient descent (GD) is controlled by a parameter called the learning rate, which determines the length of the step taken by the gradient to move along the error surface [8]. Moreover, to avoid the oscillation problem that might happen around a steep valley, a fraction of the last weight update is added to the current weight update, and its magnitude is adjusted by a parameter called the momentum.
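As a concrete illustration of these two parameters (not an excerpt from the paper), a single weight update under gradient descent with a learning rate and a momentum term can be sketched in Python as follows; the gradient is assumed to have already been computed by back propagation, and the default values 0.1 and 0.9 are illustrative only:

    # Minimal sketch: one gradient-descent weight update with momentum.
    # grad is the error derivative dE/dw obtained from back propagation;
    # lr (learning rate) scales the step, momentum reuses the previous update.
    def update_weight(w, grad, prev_delta, lr=0.1, momentum=0.9):
        delta = -lr * grad + momentum * prev_delta  # step along the error surface
        return w + delta, delta                     # updated weight and stored step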
In this paper, instead of using a fixed learning rate for the whole learning process, the learning rate is changed adaptively in order to speed up the learning process of the neural network. The paper is organized as follows: Section II discusses the standard back propagation (BP) algorithm and some of the improvements to it that have been introduced by researchers; this section also introduces the proposed method for improving BP's training efficiency. In Section III, the robustness of the proposed algorithm is evaluated by comparing the convergence rates of Back Propagation Gradient Descent (BPGD) and Back Propagation Gradient Descent with Adaptive Learning Rate (BPGD-LR) on several benchmark datasets. The paper is concluded in Section IV.

II. MATERIAL AND METHOD

One of the most popular learning algorithms for ANN is the back propagation algorithm. A back-propagation algorithm belongs to the error-correction learning type, and its learning process can be broken into two parts: input feed-forward propagation and error feed-back propagation. The error is propagated backward when it appears between the actual and the expected output during the feed-forward process. The best part of the BP algorithm is that during back-propagation the connection weight values between each layer of neurons are corrected and gradually adjusted until the minimum output error is reached. In general, the procedure can be summarized in the following steps:

Step 1: Randomly initialize weights and offsets. Set all weights and node offsets to small random values.
Step 2: Load an input and the desired output, and set the desired output to 1. The input could be new on each trial or could be samples from a training set.
Step 3: Calculate the actual outputs by using the sigmoid nonlinearity formulas.
Step 4: Adjust and adapt the weights.
Step 5: Repeat the process by going to Step 2.
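The sketch below puts the five steps together for a small network with one hidden layer and sigmoid activations. It is a minimal illustration rather than the authors' implementation; the XOR data, the layer sizes and the fixed learning rate of 0.5 are assumptions made only for this example.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training inputs
    T = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

    # Step 1: set all weights and node offsets (biases) to small random values.
    W1 = rng.uniform(-0.5, 0.5, (2, 4)); b1 = np.zeros(4)
    W2 = rng.uniform(-0.5, 0.5, (4, 1)); b2 = np.zeros(1)
    lr = 0.5  # fixed learning rate, as in standard BPGD

    for epoch in range(5000):                      # Step 5: repeat from Step 2
        # Step 2: load inputs and desired outputs (here, the whole training set).
        # Step 3: feed-forward pass through the sigmoid nonlinearity.
        H = sigmoid(X @ W1 + b1)                   # hidden layer outputs
        Y = sigmoid(H @ W2 + b2)                   # actual network outputs
        # Step 4: adjust the weights by propagating the output error backward.
        d_out = (Y - T) * Y * (1 - Y)              # error term at the output layer
        d_hid = (d_out @ W2.T) * H * (1 - H)       # error term at the hidden layer
        W2 -= lr * (H.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_hid); b1 -= lr * d_hid.sum(axis=0)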
Despite its popularity, back-propagation neural networks (BPNN) have several limitations, such as a slow rate of convergence. In addition, training a BPNN consumes considerable computation time, and there are local minimum points in the goal function of BPNN. Since the convergence rate of BPNN is very low, the network easily becomes unstable and is not suitable for problems with large data sets. Furthermore, the convergence behaviour of BPNN also depends on the choice of the initial values of the connection weights and of other parameters used in the algorithm, such as the learning rate and the momentum term [10]. Thus, BPNN needs improvement to perform well and overcome these drawbacks.

The gradient descent (GD) technique is expected to bring the network closer to the minimum error without taking the convergence rate of the network for granted. Most gradient-based optimization methods use the following gradient descent rule:

\Delta w_{ij}^{(n)} = -\eta^{(n)} \frac{\partial E^{(n)}}{\partial w_{ij}^{(n)}}    (1)

where \eta^{(n)} is the learning rate value at step n and the gradient-based search direction at step n is:

d^{(n)} = -\frac{\partial E^{(n)}}{\partial w_{ij}^{(n)}} = g^{(n)}    (2)

The learning rate parameter is introduced to generate the slope that moves downwards along the error surface in search of the minimum point. The slow rates of convergence are due to the existence of local minima; furthermore, the convergence rate is relatively slow for a network with more than one hidden layer. Apart from the gradient and the learning rate, there are some other factors that play an important role in assigning the proper change to a weight, specifically in terms of its sign. These factors are the momentum, the activation function and the gain of the activation function. The momentum, in particular, is used to overcome the oscillation problem.

One of the most effective parameters for accelerating the convergence of back propagation learning is the learning rate, whose value lies in the interval [0,1]. Controlling the learning rate value has become a crucial factor for a neural network learning algorithm, beside the adjustment of the neuron weights at each iteration of the training process, because it affects the convergence rate. The learning rate parameter determines how fast the BP method converges to the minimum solution [11]. The larger the learning rate, the bigger the step and the faster the convergence. However, if the learning rate is made too large, the algorithm becomes unstable; on the other hand, if it is set too small, the algorithm takes a long time to converge [12]. Many researchers [13,14,15,16] have used different strategies to speed up the convergence time by varying the learning rate. The best strategy in gradient descent BP is to use a larger learning rate when the neural network model is far from the solution and a smaller learning rate when it is near the solution [17]. Some researchers have also demonstrated that an adaptive learning rate attempts to keep the learning step size as large as possible while keeping learning stable. This is done by making the learning rate responsive to the complexity of the local error surface.

Another possible way to improve the convergence rate is to add another parameter, called the momentum, to the adjustment expression. This can be accomplished by adding a fraction of the previous weight change to the current weight change. Some researchers [18] demonstrated that a low momentum can cause weight oscillations and instability, thus preventing the network from learning.
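The "larger steps when far from the solution, smaller steps when near it" strategy is often realised with a simple heuristic of the following kind: grow the learning rate while the error keeps decreasing and shrink it when the error rises. The sketch below is one such rule, shown for illustration only; the growth and decay factors (1.05 and 0.5) are assumed values, and the rule is not necessarily the exact BPGD-AL adjustment proposed in this paper.

    # Illustrative adaptive learning-rate heuristic:
    # called once per epoch with the training error before and after the update.
    def adapt_learning_rate(lr, prev_error, curr_error, grow=1.05, shrink=0.5):
        if curr_error < prev_error:
            return lr * grow    # training is stable: keep the step as large as possible
        return lr * shrink      # error increased: reduce the step to restore stability

Combined with the momentum term discussed above, such a rule lets the network take long steps across flat regions of the error surface while backing off near steep valleys.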