
Notes on Backpropagation

Peter Sadowski
Department of Computer Science
University of California Irvine
Irvine, CA 92697
[email protected]

Abstract

This document derives backpropagation for some common neural networks.

1 Cross Entropy Error with Logistic Activation

In a classification task with two classes, it is standard to use a neural network architecture with a single logistic output unit and the cross-entropy loss function (as opposed to, for example, the sum-of-squared error loss function). With this combination, the output prediction is always between zero and one and is interpreted as a probability. Training corresponds to maximizing the conditional log-likelihood of the data, and as we will see, the gradient calculation simplifies nicely with this combination. We can generalize this slightly to the case where we have multiple, independent, two-class classification tasks. In order to demonstrate the calculations involved in backpropagation, we consider a network with a single hidden layer of logistic units, multiple logistic output units, and a total error for a given example that is simply the cross-entropy error summed over the output units.

[Figure: network architecture with input units $x_k$, a hidden layer of logistic units $h_j$, logistic output units $y_1, y_2, y_3$, and targets $t_1, t_2, t_3$.]

The cross-entropy error for a single example with $n_{out}$ independent targets is given by the sum

$$E = -\sum_{i=1}^{n_{out}} \big( t_i \log(y_i) + (1 - t_i) \log(1 - y_i) \big) \qquad (1)$$

where $t$ is the target vector and $y$ is the output vector. In this architecture, the outputs are computed by applying the logistic function to the weighted sums of the hidden layer activations, $s$,

$$y_i = \frac{1}{1 + e^{-s_i}} \qquad (2)$$

$$s_i = \sum_{j} h_j w_{ji}. \qquad (3)$$
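As a concrete illustration, here is a minimal NumPy sketch of the forward pass and error in equations (1)-(3). The function and variable names (forward, cross_entropy, W_hidden, W_out) are hypothetical and not part of the original notes, and the toy values are arbitrary.

import numpy as np

def forward(x, W_hidden, W_out):
    # Weighted input sums at the hidden layer, followed by logistic activations h_j.
    h = 1.0 / (1.0 + np.exp(-(x @ W_hidden)))
    # Weighted sums at the output layer (eq. 3) and logistic outputs y_i (eq. 2).
    y = 1.0 / (1.0 + np.exp(-(h @ W_out)))
    return h, y

def cross_entropy(y, t):
    # Cross-entropy error summed over independent logistic outputs (eq. 1).
    return -np.sum(t * np.log(y) + (1.0 - t) * np.log(1.0 - y))

# Example usage with arbitrary toy values.
x = np.array([0.5, -1.0, 2.0])        # inputs x_k
W_hidden = np.full((3, 4), 0.1)       # input-to-hidden weights w_kj
W_out = np.full((4, 2), 0.2)          # hidden-to-output weights w_ji
t = np.array([1.0, 0.0])              # targets t_i
h, y = forward(x, W_hidden, W_out)
print(cross_entropy(y, t))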

We can compute the gradient of the error with respect to each weight connecting the hidden units to the output units using the chain rule.

$$\frac{\partial E}{\partial w_{ji}} = \frac{\partial E}{\partial y_i} \frac{\partial y_i}{\partial s_i} \frac{\partial s_i}{\partial w_{ji}} \qquad (4)$$

Examining each factor in turn,

$$\frac{\partial E}{\partial y_i} = \frac{-t_i}{y_i} + \frac{1 - t_i}{1 - y_i} \qquad (5)$$
$$= \frac{y_i - t_i}{y_i (1 - y_i)}, \qquad (6)$$
$$\frac{\partial y_i}{\partial s_i} = y_i (1 - y_i), \qquad (7)$$
$$\frac{\partial s_i}{\partial w_{ji}} = h_j, \qquad (8)$$

where $h_j$ is the activation of the $j$th node in the hidden layer. Combining things back together,

$$\frac{\partial E}{\partial s_i} = y_i - t_i \qquad (9)$$

and

$$\frac{\partial E}{\partial w_{ji}} = (y_i - t_i) h_j. \qquad (10)$$

The above gives us the gradients of the error with respect to the weights in the last layer of the network, but computing the gradients with respect to the weights in lower layers of the network (i.e. those connecting the inputs to the hidden layer units) requires another application of the chain rule. This is the backpropagation algorithm.

Here it is useful to calculate the quantity $\frac{\partial E}{\partial s_j^1}$, where $j$ indexes the hidden units, $s_j^1$ is the weighted input sum at hidden unit $j$, and $h_j = \frac{1}{1 + e^{-s_j^1}}$ is the activation at unit $j$.

$$\frac{\partial E}{\partial s_j^1} = \sum_{i=1}^{n_{out}} \frac{\partial E}{\partial s_i} \frac{\partial s_i}{\partial h_j} \frac{\partial h_j}{\partial s_j^1} \qquad (11)$$
$$= \sum_{i=1}^{n_{out}} (y_i - t_i)(w_{ji})(h_j (1 - h_j)) \qquad (12)$$

The gradient with respect to the hidden activation $h_j$ alone is

$$\frac{\partial E}{\partial h_j} = \sum_{i=1}^{n_{out}} \frac{\partial E}{\partial y_i} \frac{\partial y_i}{\partial s_i} \frac{\partial s_i}{\partial h_j} \qquad (13)$$
$$= \sum_{i} \frac{\partial E}{\partial y_i} y_i (1 - y_i) w_{ji}. \qquad (14)$$

Then a weight $w_{kj}$ connecting input unit $k$ to hidden unit $j$ has gradient

$$\frac{\partial E}{\partial w_{kj}} = \frac{\partial E}{\partial s_j^1} \frac{\partial s_j^1}{\partial w_{kj}} \qquad (15)$$
$$= \sum_{i=1}^{n_{out}} (y_i - t_i)(w_{ji})(h_j (1 - h_j))(x_k) \qquad (16)$$

By recursively computing the gradient of the error with respect to the activity of each neuron, we can compute the gradients for all weights in a network.
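The derivation can be checked numerically. The following sketch, assuming NumPy and the arbitrary toy shapes chosen below, implements equations (9), (10), (12), and (16) and compares the analytic output-layer gradient against a central finite-difference estimate; an analogous loop checks the input-to-hidden gradient.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)            # inputs x_k
t = np.array([1.0, 0.0])          # targets t_i
W1 = rng.normal(size=(3, 4))      # input-to-hidden weights w_kj
W2 = rng.normal(size=(4, 2))      # hidden-to-output weights w_ji

def forward(W1, W2):
    h = 1 / (1 + np.exp(-(x @ W1)))   # hidden activations h_j
    y = 1 / (1 + np.exp(-(h @ W2)))   # logistic outputs y_i, eq. (2)
    return h, y

def error(W1, W2):
    _, y = forward(W1, W2)
    return -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))  # eq. (1)

# Analytic gradients from the derivation above.
h, y = forward(W1, W2)
delta_out = y - t                                  # dE/ds_i, eq. (9)
grad_W2 = np.outer(h, delta_out)                   # dE/dw_ji = (y_i - t_i) h_j, eq. (10)
delta_hidden = (delta_out @ W2.T) * h * (1 - h)    # dE/ds_j^1, eq. (12)
grad_W1 = np.outer(x, delta_hidden)                # dE/dw_kj, eq. (16)

# Central finite-difference check of the output-layer gradient.
eps = 1e-6
num = np.zeros_like(W2)
for j in range(W2.shape[0]):
    for i in range(W2.shape[1]):
        Wp = W2.copy(); Wp[j, i] += eps
        Wm = W2.copy(); Wm[j, i] -= eps
        num[j, i] = (error(W1, Wp) - error(W1, Wm)) / (2 * eps)
print(np.allclose(grad_W2, num, atol=1e-6))   # should print True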

2 Classification with Softmax Transfer and Cross Entropy Error

When a classification task has more than two classes, it is standard to use a softmax output layer. The softmax function provides a way of predicting a discrete probability distribution over the classes. We again use the cross-entropy error function, but it takes a slightly different form. The softmax activation of the $i$th output unit is

$$y_i = \frac{e^{s_i}}{\sum_{c=1}^{n_{class}} e^{s_c}} \qquad (17)$$

and the cross-entropy error function for multi-class output is

$$E = -\sum_{i=1}^{n_{class}} t_i \log(y_i). \qquad (18)$$

Thus, computing the gradient yields

$$\frac{\partial E}{\partial y_i} = -\frac{t_i}{y_i} \qquad (19)$$

$$\frac{\partial y_i}{\partial s_k} = \begin{cases} \dfrac{e^{s_i}}{\sum_{c=1}^{n_{class}} e^{s_c}} - \left( \dfrac{e^{s_i}}{\sum_{c=1}^{n_{class}} e^{s_c}} \right)^{2} & i = k \\[2ex] -\dfrac{e^{s_i} e^{s_k}}{\left( \sum_{c=1}^{n_{class}} e^{s_c} \right)^{2}} & i \neq k \end{cases} \qquad (20)$$

$$= \begin{cases} y_i (1 - y_i) & i = k \\ -y_i y_k & i \neq k \end{cases} \qquad (21)$$

$$\frac{\partial E}{\partial s_i} = \sum_{k=1}^{n_{class}} \frac{\partial E}{\partial y_k} \frac{\partial y_k}{\partial s_i} \qquad (22)$$
$$= \frac{\partial E}{\partial y_i} \frac{\partial y_i}{\partial s_i} + \sum_{k \neq i} \frac{\partial E}{\partial y_k} \frac{\partial y_k}{\partial s_i} \qquad (23)$$
$$= -t_i (1 - y_i) + \sum_{k \neq i} t_k y_i \qquad (24)$$
$$= -t_i + y_i \sum_{k} t_k \qquad (25)$$
$$= y_i - t_i, \qquad (26)$$

where the last step uses the fact that the targets sum to one, $\sum_k t_k = 1$.

Note that this is the same formula as in the case with the logistic output units! The values themselves will be different, because the predictions y will take on different values depending on whether the output is logistic or softmax, but this is an elegant simplification. The gradient for weights in the top layer is again
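As a quick numerical sanity check of this simplification (a sketch with arbitrary toy values, not anything from the original text), the analytic gradient $y - t$ of the softmax cross-entropy error should match a finite-difference estimate:

import numpy as np

rng = np.random.default_rng(1)
s = rng.normal(size=5)           # output weighted sums s_i
t = np.eye(5)[2]                 # one-hot target vector (sums to one)
eps = 1e-6

def softmax(s):
    e = np.exp(s - s.max())      # shifting by a constant leaves y unchanged (see Section 4)
    return e / e.sum()

def error(s):
    return -np.sum(t * np.log(softmax(s)))   # cross-entropy error, eq. (18)

analytic = softmax(s) - t        # eq. (26): dE/ds_i = y_i - t_i
numeric = np.array([(error(s + eps * np.eye(5)[i]) - error(s - eps * np.eye(5)[i])) / (2 * eps)
                    for i in range(5)])
print(np.allclose(analytic, numeric, atol=1e-6))   # should print True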

$$\frac{\partial E}{\partial w_{ji}} = \sum_{i} \frac{\partial E}{\partial s_i} \frac{\partial s_i}{\partial w_{ji}} \qquad (28)$$

$$= (y_i - t_i) h_j \qquad (29)$$

and for units in the hidden layer, indexed by $j$,

$$\frac{\partial E}{\partial s_j^1} = \sum_{i=1}^{n_{class}} \frac{\partial E}{\partial s_i} \frac{\partial s_i}{\partial h_j} \frac{\partial h_j}{\partial s_j^1} \qquad (30)$$
$$= \sum_{i=1}^{n_{class}} (y_i - t_i)(w_{ji})(h_j (1 - h_j)) \qquad (31)$$

3 Regression with Linear Output and Mean Squared Error

Note that performing regression with a linear output unit and the mean squared error loss function also leads to the same form of the gradient at the output layer, $\frac{\partial E}{\partial s_i} = y_i - t_i$. For example, with $E = \frac{1}{2} \sum_i (y_i - t_i)^2$ and $y_i = s_i$, differentiating gives $\frac{\partial E}{\partial s_i} = y_i - t_i$ directly.
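A small sketch of this case, assuming the conventional one-half factor in the squared error so that the gradient is exactly $y_i - t_i$ (the values and names below are illustrative):

import numpy as np

s = np.array([0.3, -1.2, 2.0])      # linear outputs: y_i = s_i
t = np.array([0.0, -1.0, 1.5])      # regression targets
eps = 1e-6

def error(s):
    return 0.5 * np.sum((s - t) ** 2)   # one-half sum-of-squares error

analytic = s - t                        # dE/ds_i = y_i - t_i
numeric = np.array([(error(s + eps * np.eye(3)[i]) - error(s - eps * np.eye(3)[i])) / (2 * eps)
                    for i in range(3)])
print(np.allclose(analytic, numeric))   # should print True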

4 Algebraic trick for cross-entropy calculations

There are some tricks for reducing the computation in cross-entropy error calculations when training a neural network. Here are a couple. For a single output neuron with logistic activation, the cross-entropy error is given by

$$E = -\big( t \log y + (1 - t) \log(1 - y) \big) \qquad (32)$$
$$= -\left( t \log\!\left( \frac{y}{1 - y} \right) + \log(1 - y) \right) \qquad (33)$$
$$= -\left( t \log\!\left( \frac{\frac{1}{1 + e^{-s}}}{1 - \frac{1}{1 + e^{-s}}} \right) + \log\!\left( 1 - \frac{1}{1 + e^{-s}} \right) \right) \qquad (34)$$
$$= -\left( t s + \log\!\left( \frac{1}{1 + e^{s}} \right) \right) \qquad (35)$$
$$= -t s + \log(1 + e^{s}) \qquad (36)$$
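Equation (36) is also a numerically well-behaved way to evaluate this error. A short sketch, assuming NumPy; np.logaddexp(0, s) computes $\log(1 + e^{s})$ without overflow:

import numpy as np

def cross_entropy_naive(s, t):
    # Direct evaluation of eq. (32) via the logistic output y.
    y = 1.0 / (1.0 + np.exp(-s))
    return -(t * np.log(y) + (1.0 - t) * np.log(1.0 - y))

def cross_entropy_trick(s, t):
    # Eq. (36): E = -t*s + log(1 + e^s), evaluated stably with np.logaddexp.
    return -t * s + np.logaddexp(0.0, s)

print(cross_entropy_naive(2.3, 1.0), cross_entropy_trick(2.3, 1.0))   # the two agree
print(cross_entropy_trick(1000.0, 0.0))   # stays finite, whereas the naive form returns inf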

For a softmax output, the cross-entropy error is given by

$$E = -\sum_i t_i \log\!\left( \frac{e^{s_i}}{\sum_j e^{s_j}} \right) \qquad (37)$$

$$= -\sum_i t_i \left( s_i - \log \sum_j e^{s_j} \right) \qquad (38)$$

Also note that in this softmax calculation, a constant can be added to each row of the output with no effect on the error function.
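This shift invariance is what makes the usual log-sum-exp trick work: subtracting $\max_j s_j$ from every $s_j$ before exponentiating leaves the error unchanged but avoids overflow. A minimal NumPy sketch (function name and values are illustrative):

import numpy as np

def softmax_cross_entropy(s, t):
    # Eq. (38) with the maximum subtracted from each s_j; the constant shift
    # leaves the error unchanged but prevents overflow in the exponentials.
    shifted = s - np.max(s)
    log_z = np.log(np.sum(np.exp(shifted)))
    return -np.sum(t * (shifted - log_z))

s = np.array([1000.0, 1001.0, 999.0])   # values that would overflow a naive exp(s)
t = np.array([0.0, 1.0, 0.0])
print(softmax_cross_entropy(s, t))      # approximately 0.408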
