
A Survey on Activation Functions and their relation with Xavier and He Normal Initialization

Leonid Datta
Delft University of Technology

Abstract

In an artificial neural network, the activation function and the weight initialization method play important roles in the training and performance of the network. The question that arises is which properties of a function are important or necessary for it to be a well-performing activation function. Moreover, the most widely used weight initialization methods, Xavier and He normal initialization, have a fundamental connection with the activation function. This survey discusses the important/necessary properties of activation functions and the most widely used activation functions (sigmoid, tanh, ReLU, LReLU and PReLU). It also explores the relationship between these activation functions and the two weight initialization methods, Xavier and He normal initialization.

1 Introduction

Artificial intelligence has long aimed to build intelligent machines [1]. Artificial neural networks have played an important role in achieving this goal [2] [3]. When an artificial neural network is built to execute a task, it is programmed to perceive a pattern. The main task of an artificial neural network is to learn this pattern from data [1].

An artificial neural network is composed of a large number of interconnected working units known as perceptrons or neurons. A perceptron is composed of four components: the input node, the weight vector, the activation function and the output node. The first component of a perceptron is the input node: it receives the input vector. I assume an m-dimensional input vector $x = [x_1 \; x_2 \; \ldots \; x_m]$. The second component is the weight vector, which has the same dimension as the input vector; here, the weight vector is $w = [w_1 \; w_2 \; \ldots \; w_m]$. From the input vector and the weight vector, an inner product is calculated as $x^\top w$. While calculating the inner product, an extra term 1 is added to the input vector with an initial weight value (here $w_0$). This is known as the bias. The bias term acts as an intercept so that the inner product can easily be shifted using the weight $w_0$. After the bias has been added, the input vector and the weight vector become $x = [1 \; x_1 \; x_2 \; \ldots \; x_m]$ and $w = [w_0 \; w_1 \; w_2 \; \ldots \; w_m]$, so that $x^\top w = w_0 + w_1 x_1 + w_2 x_2 + \ldots + w_m x_m$.

The inner product $x^\top w$ is used as input to a function known as the activation function (here symbolized as $f$). This activation function is the third component of the perceptron. The output from the activation function is sent to the output node, which is the fourth component. This is how an input vector flows from the input node to the output node; it is known as forward propagation. In mathematical terms, the output $y(x_1, \ldots, x_m)$ can be expressed as

$$y(x_1, \ldots, x_m) = f(w_0 + w_1 x_1 + w_2 x_2 + \ldots + w_m x_m) \qquad (1)$$

Figure 1: Structure of a single layer perceptron

The perceptron explained above is a single layer perceptron, as it has only one layer of weight vector and activation function. This structure is shown in fig 1. Neural networks typically have more than one layer: they receive the input and pass it through a series of weight vectors and activation functions before it reaches the output layer. Since the layers between the input layer and the output layer are not directly visible from outside, they are called the hidden layers. A neural network with one hidden layer is known as a shallow network.
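As an illustration of equation (1), the following is a minimal sketch of forward propagation through a single layer perceptron. It assumes NumPy and a sigmoid activation; the function names and the numeric values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation: maps any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w, f=sigmoid):
    """Forward propagation of a single layer perceptron, equation (1).

    x : input vector of length m (without the bias term)
    w : weight vector of length m + 1, where w[0] is the bias weight w_0
    f : activation function
    """
    x_b = np.concatenate(([1.0], x))   # prepend the constant 1 for the bias
    z = x_b @ w                        # inner product x^T w
    return f(z)                        # y = f(x^T w)

# Illustrative usage with m = 3 inputs.
x = np.array([0.5, -1.2, 2.0])
w = np.array([0.1, 0.4, -0.3, 0.2])   # w_0, w_1, w_2, w_3
print(forward(x, w))
```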
When the output is produced at the output layer, the neural network calculates the loss. The loss quantifies the difference between the obtained output and the desired output. Let $L$ be the loss. The goal of the neural network is to update the weight vectors of the perceptrons so as to reduce the loss $L$. To perform this task, gradient descent is used. In gradient descent, the weights are updated by moving them in the opposite direction of the gradient of the loss with respect to the weights. The gradient of the loss with respect to the weights, $\frac{\partial L}{\partial w}$, is expressed as

$$\frac{\partial L}{\partial w} = \frac{\partial L}{\partial y} \cdot \frac{\partial y}{\partial w} = \frac{\partial L}{\partial y} \cdot \frac{\partial y}{\partial z} \cdot \frac{\partial z}{\partial w} \qquad (2)$$

where $z$ is the inner product $z = x^\top w$ and $y = f(z)$. This is known as the chain rule. The update is done in the opposite direction of the forward propagation: starting from the last layer and then gradually moving to the first layer. This process is known as backward propagation or backpropagation. One forward propagation and one backward propagation of all the training examples or training data is termed an epoch.

Since the weight vector gets updated during training, it needs to be assigned initial values before the training starts. Assigning the initial values to the weight vector is known as weight initialization. Once the weight vector is updated, the input is again passed through the network in the forward direction to generate the output and calculate the loss. This process continues until the loss reaches a satisfactory minimum value. A network is said to converge when the loss achieves this satisfactory minimum value, and this process is called the training of a neural network. When an artificial neural network aims to perform a task, the neural network is first trained.

One of the most interesting characteristics of artificial neural networks is the possibility to adapt their behaviour to the changing characteristics of the task. The activation function has an important role in this behaviour adaptation and learning [4]. The choice of the activation function at different layers in a neural network is important, as it decides how the data will be presented to the next layer. It also controls how bounded or unbounded the data passed to the next layer or the output layer will be (that is, the range of the data). When the network is sufficiently deep, learning the pattern becomes less difficult for common nonlinear functions [5] [6]. But when the network is not complex and deep, the choice of activation function has a more significant effect on the learning pattern and the performance of the neural network than it has for complex and deep networks [4].

Weight initialization plays an important role in the speed of training a neural network [7]. More precisely, it controls the speed of backpropagation, because the weights are updated during backpropagation [8]. If the weight initialization is not proper, it can lead to such poor weight updates that the weights get saturated at an early stage of training [9]. This can cause the training process to fail. Among the different weight initialization methods for artificial neural networks, the Xavier and He normal weight initialization methods have gained popularity since they were proposed [10] [11] [12] [13] [14] [15].
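To make the chain rule of equation (2) concrete, the following is a minimal sketch of one gradient descent update for a single perceptron. It assumes NumPy, a sigmoid activation and a squared-error loss; the loss choice, learning rate and numeric values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_step(x, w, target, lr=0.1):
    """One gradient descent update of w via the chain rule, equation (2).

    Assumes squared-error loss L = 0.5 * (y - target)^2 and y = sigmoid(z).
    """
    x_b = np.concatenate(([1.0], x))    # input with bias term prepended
    z = x_b @ w                         # z = x^T w
    y = sigmoid(z)                      # y = f(z)

    dL_dy = y - target                  # dL/dy for the squared-error loss
    dy_dz = y * (1.0 - y)               # dy/dz, derivative of the sigmoid
    dz_dw = x_b                         # dz/dw = x
    grad = dL_dy * dy_dz * dz_dw        # chain rule: dL/dw

    return w - lr * grad                # move against the gradient

# Illustrative usage: one update step.
x = np.array([0.5, -1.2, 2.0])
w = np.array([0.1, 0.4, -0.3, 0.2])
w_new = gradient_step(x, w, target=1.0)
print(w_new)
```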
1.1 Contribution

The main contributions of this survey are:

1. This survey discusses the necessary/important properties an activation function is expected to have and explores why they are necessary or important.

2. This survey discusses the five widely used functions - sigmoid, tanh, ReLU, LReLU and PReLU - and why they are so widely used. It also discusses the problems faced by these activation functions and why they face these problems.

3. This survey discusses the Xavier and He normal weight initialization methods and their relation with the mentioned activation functions.

1.2 Organisation of the report

In this report, section 2 discusses the activation function in detail. Section 2.1 talks about the necessary/important properties an activation function is expected to have. Section 2.2 talks about the two main problems (the vanishing gradient problem and the dead neuron problem) faced by activation functions. Section 2.3 discusses the five different activation functions (sigmoid, tanh, ReLU, LReLU and PReLU), their properties and the problems faced by them. Section 2.4 shows, in a tabular format, which properties hold or do not hold for the mentioned activation functions and which problems they do or do not face. Section 3 focuses on the weight initialization methods: section 3.1 discusses the Xavier initialization method and section 3.2 discusses the He normal weight initialization method [10] [11]. Section 4 gives an overview of the insights from the survey and finally the conclusions of the survey are noted in section 5.

2 Activation function

The activation function, also known as the transfer function, is the nonlinear function applied on the inner product $x^\top w$ in an artificial neural network. Before discussing the properties of activation functions, it is important to know what a sigmoidal activation function is. A function is called sigmoidal in nature if it has all of the following properties:

1. it is continuous
2. it is bounded between a finite lower and upper bound
3. it is nonlinear
4. its domain contains all real numbers
5. its output value monotonically increases when the input value increases and, as a result of this, it has an 'S' shaped curve.

A feed-forward network having a single hidden layer with a finite number of neurons and a sigmoidal activation function can approximate any continuous boundary or function. This is known as the Cybenko theorem [16].

2.1 Properties of activation function

The properties activation functions are expected to have are:

1. Nonlinear: There are two reasons why an activation function should be nonlinear. They are:

(a) The boundaries or patterns in real-world problems cannot always be expected to be linear. A nonlinear function can easily approximate a linear boundary, whereas a linear function cannot approximate a nonlinear boundary, as the sketch below illustrates.
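As an illustration of this first reason, the XOR labelling of four points is a classic nonlinear pattern: no purely linear (affine) model can reproduce it, while a tiny network with a nonlinear activation can. The following is a minimal NumPy sketch; the construction and values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

# The four corners of the XOR problem and their labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
xor_labels = np.array([0, 1, 1, 0], dtype=float)

# A two-layer network WITHOUT a nonlinearity collapses to one affine map:
# W2 (W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2), so its boundary stays linear.
# Any affine map f satisfies f(1,1) - f(1,0) - f(0,1) + f(0,0) = 0, while the
# XOR labels give 0 - 1 - 1 + 0 = -2, so no linear model can fit this pattern.

# With a nonlinear activation (here ReLU), two hidden units are enough:
# y = ReLU(x1 + x2) - 2 * ReLU(x1 + x2 - 1) reproduces XOR exactly.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
w2 = np.array([1.0, -2.0])

y = relu(X @ W1.T + b1) @ w2
print(y)   # -> [0. 1. 1. 0.], matching xor_labels
```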