Introduction to Natural Computation
Lecture 9
Multilayer Perceptrons and Backpropagation
Peter Lewis
Overview of the Lecture
Why multilayer perceptrons?
Some applications of multilayer perceptrons.
Learning with multilayer perceptrons:
Minimising error using gradient descent.
How to update weights.
The backpropagation learning algorithm.
Recap: Perceptrons
[Figure: a single perceptron. Inputs x_1, ..., x_n with weights w_1, ..., w_n, plus a bias input of −1 with weight w_0. The input function computes x = Σ_{i=0..n} w_i x_i, and the step activation outputs y = 1 if x ≥ 0, else 0.]
A key limitation of the single perceptron: it only works for linearly separable data.
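As an illustration, a minimal sketch of a step-activated perceptron (the function and variable names here are mine, not from the lecture), applied to the linearly separable AND function:

```python
# Illustrative sketch: a single perceptron with a step activation.
# x[0] is the bias input, fixed at -1, matching the diagram above.

def perceptron(weights, x):
    """y = 1 if sum_i w_i * x_i >= 0, else 0."""
    u = sum(w * xi for w, xi in zip(weights, x))
    return 1 if u >= 0 else 0

# Weights that realise AND: u = -1.5 + x1 + x2 >= 0 only when x1 = x2 = 1.
w_and = [1.5, 1.0, 1.0]   # w0 (bias weight), w1, w2
outputs = [perceptron(w_and, [-1, x1, x2])
           for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# outputs == [0, 0, 0, 1]
```

No choice of weights makes a single such unit compute a non-linearly-separable function like XOR, which is what motivates the next slide.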
Multilayer Perceptrons (MLPs)
[Figure: an MLP with an input layer, a hidden layer, and an output layer.]
MLPs are feed-forward neural networks.
Organised in layers:
One input layer of distribution points.
One or more hidden layers of artificial neurons (nodes).
One output layer of artificial neurons (nodes).
Each node in a layer is connected to every node in the next layer. Each connection has a weight (remember that a weight can be zero).
MLPs are universal approximators!
MLPs: Universal approximators
Figure: Partitioning of the input space by linear threshold nodes in an MLP with two hidden layers and one output node in the output layer, with examples of separable decision regions. Source: http://www.neural-forecasting.com/mlp_neural_nets.htm.
Activation Functions
[Figure: a neuron summing its weighted inputs x_1 w_1, ..., x_n w_n into u = Σ_i w_i x_i, passed through an activation function; plots of a step function and a sigmoid function.]
Step function: outputs 0 or 1.
Sigmoid function: outputs a real value between 0 and 1.
With sigmoid activation functions. . . we would get smooth transitions instead of hard-edged decision boundaries (e.g. in the previous slide’s example).
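A minimal sketch of the sigmoid function shown above (the steepness parameter `a` matches the constant used later in the lecture; the function name is mine):

```python
# Logistic sigmoid: smoothly maps any real input into (0, 1),
# replacing the step function's hard jump at 0.
import math

def sigmoid(u, a=1.0):
    """Outputs a real value between 0 and 1; a sets the steepness."""
    return 1.0 / (1.0 + math.exp(-a * u))

# sigmoid(0) == 0.5; large positive u gives values near 1,
# large negative u gives values near 0.
```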
Sigmoids. . .
Applications of MLPs
MLPs have been applied to a very wide range of problems. . .
Classification problems. Regression problems.
Examples:
Patients’ lengths of stay.
Stock price forecasting.
Image recognition.
Car driving control.
...
Learning with MLPs
[Figure: an MLP with an input layer, a hidden layer, and an output layer.]
As with single perceptrons, finding the right weights is very hard!
Solution technique: learning!
As with perceptrons, learning means adjusting the weights based on training examples.
Supervised Learning
General idea
1. Send the MLP an input pattern, x, from the training set.
2. Get the output from the MLP, y.
3. Compare y with the “right answer”, or target t, to get the error quantity.
4. Use the error quantity to modify the weights, so next time y will be closer to t.
5. Repeat with another x from the training set.
Of course, when updating weights after seeing x, the network doesn’t just change the way it deals with x, but other inputs too. . .
Inputs it has not seen yet!
The ability to deal accurately with unseen inputs is called generalisation.
Learning and Error Minimisation
Recall the perceptron learning rule: try to minimise the difference between the actual and desired outputs:
w_i' = w_i + α(t − y) x_i
We define an error function to represent such a difference over a set of inputs.
Typical error function: mean squared error (MSE). For example, the mean squared error can be defined as:
E(w) = (1/2N) Σ_{p=1..N} (t_p − o_p)²
where N is the number of patterns, t_p is the target output for pattern p, and o_p is the output obtained for pattern p. The 2 in the denominator makes little difference, but is inserted to make life easier later on!
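The MSE formula above transcribes directly into code (names are mine):

```python
# Mean squared error with the lecture's 1/(2N) scaling:
# E = (1 / 2N) * sum over patterns p of (t_p - o_p)^2

def mean_squared_error(targets, outputs):
    n = len(targets)
    return sum((t - o) ** 2 for t, o in zip(targets, outputs)) / (2 * n)
```

A perfect network gives an error of 0; e.g. a single pattern with target 1 and output 0 gives (1/2)·1² = 0.5.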
This tells us how “good” the neural network is. Learning aims to minimise the error E(w) by adjusting the weights w.
Gradient Descent
One technique that can be used for minimising functions is gradient descent. Can we use this on our error function E?
We would like a learning rule that tells us how to update weights, like this:
w_ij' = w_ij + Δw_ij
But what should Δw_ij be?
Summary So Far...
We learnt what a multilayer perceptron is and why we might need them. We know the sort of learning rule we would like, to update weights in order to minimise the error. However, we don’t yet know what this learning rule should be. We need to learn a bit about gradient and derivatives to figure it out! After that, we can come back to the learning rule.
Gradient and Derivatives: The idea
The derivative is a measure of the rate of change of a function, as its input changes.
Consider a function y = f(x). The derivative dy/dx indicates how much y changes in response to changes in x.
If x and y are real numbers, and if the graph of y is plotted against x, the derivative measures the slope or gradient of the line at each point, i.e., it describes the steepness or incline.
Gradient and Derivatives: The idea
dy/dx > 0 implies that y increases as x increases. If we want to find the minimum y, we should reduce x.
dy/dx < 0 implies that y decreases as x increases. If we want to find the minimum y, we should increase x.
dy/dx = 0 implies that we are at a minimum, a maximum, or a plateau.
To get closer to the minimum:
x_new = x_old − η dy/dx
where η indicates how much we would like to reduce or increase x, and dy/dx tells us the correct direction to go.
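The update rule above can be sketched as a short loop; the function being minimised, the step size, and the step count below are illustrative choices of mine:

```python
# One-dimensional gradient descent: repeatedly apply
# x_new = x_old - eta * dy/dx.

def gradient_descent_1d(dy_dx, x, eta=0.1, steps=100):
    for _ in range(steps):
        x = x - eta * dy_dx(x)
    return x

# Minimise y = (x - 3)^2, whose derivative is dy/dx = 2(x - 3):
x_min = gradient_descent_1d(lambda x: 2 * (x - 3), x=0.0)
# x_min is very close to 3, the true minimum
```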
Gradient and Derivatives: The idea
OK... so we know how to use derivatives to adjust one input value.
But we have several weights to adjust!
We need to use partial derivatives.
A partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant.
Example: if y = f(x1, x2), then we can have ∂y/∂x1 and ∂y/∂x2.
In our learning rule case, if we can work out the partial derivatives, we can use this rule to update the weights:
Learning rule
w_ij' = w_ij + Δw_ij
where Δw_ij = −η ∂E/∂w_ij.
Summary So Far...
We learnt what a multilayer perceptron is and why we might need them.
We know a learning rule for updating weights in order to minimise the error:
w_ij' = w_ij + Δw_ij, where Δw_ij = −η ∂E/∂w_ij.
Δw_ij tells us in which direction and how much we should change each weight to roll down the slope (descend the gradient) of the error function E.
So, how do we calculate ∂E/∂w_ij?
Using Gradient Descent to Minimise the Error
Recall the mean squared error function E, which we want to minimise:
E(w) = (1/2N) Σ_{p=1..N} (t_p − o_p)²
If we use a sigmoid activation function f, then the output of neuron i for pattern p is
o_i^p = f(u_i) = 1 / (1 + e^(−a·u_i))
where a is a pre-defined constant.
And u_i is the result of the input function in neuron i: u_i = Σ_j w_ij x_ij
For the pth pattern and the ith neuron, we use gradient descent on the error function:
Δw_ij = −η ∂E^p/∂w_ij = η (t_i^p − o_i^p) f'(u_i) x_ij
where f'(u_i) = df/du_i is the derivative of f with respect to u_i. If f is the sigmoid function, f'(u_i) = a·f(u_i)(1 − f(u_i)).
Using Gradient Descent to Minimise the Error
So, we can update weights after processing each pattern, using this rule:
Δw_ij = η (t_i^p − o_i^p) f'(u_i) x_ij = η δ_i^p x_ij
This is known as the generalised delta rule.
Note that we need to use the derivative of the activation function f. So, f must be differentiable!
The threshold activation function is not continuous, thus not differentiable!
The sigmoid activation function has a derivative which is “easy” to calculate.
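A sketch of one generalised-delta-rule update for a single sigmoid neuron, following the formulas above (function names and the example numbers used below are illustrative, not from the lecture):

```python
import math

def sigmoid(u, a=1.0):
    return 1.0 / (1.0 + math.exp(-a * u))

def sigmoid_deriv(u, a=1.0):
    # f'(u) = a * f(u) * (1 - f(u)): "easy" to calculate, as the slide says
    s = sigmoid(u, a)
    return a * s * (1.0 - s)

def delta_rule_step(w, x, t, eta=0.5):
    """One pattern's update: w_ij += eta * delta * x_ij."""
    u = sum(wi * xi for wi, xi in zip(w, x))
    delta = sigmoid_deriv(u) * (t - sigmoid(u))
    return [wi + eta * delta * xi for wi, xi in zip(w, x)]
```

After one step, the neuron's output for that pattern moves closer to the target.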
Updating Output vs Hidden Neurons
We can update output neurons using the generalised delta rule:
Δw_ij = η δ_i^p x_ij, where δ_i^p = f'(u_i)(t_i^p − o_i^p).
This δ_i^p is only good for the output neurons, since it relies on the target outputs. But we don’t have target outputs for the hidden nodes! What can we use instead?
δ_i^p = f'(u_i) Σ_k w_ki δ_k
This rule propagates error back from output nodes to hidden nodes. In effect, it blames hidden nodes according to how much influence they had.
So, now we have rules for updating both output and hidden neurons!
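The hidden-neuron rule can be sketched directly (names are mine; the worked numbers below are an illustrative check, not from the lecture):

```python
# Hidden-neuron delta: blame neuron i in proportion to the weights w_ki
# connecting it forward to the next layer's deltas delta_k.
import math

def sigmoid_deriv(u, a=1.0):
    s = 1.0 / (1.0 + math.exp(-a * u))
    return a * s * (1.0 - s)

def hidden_delta(u_i, weights_from_i, deltas_next):
    """delta_i = f'(u_i) * sum_k w_ki * delta_k."""
    return sigmoid_deriv(u_i) * sum(
        w * d for w, d in zip(weights_from_i, deltas_next))

# With u_i = 0 (so f'(u_i) = 0.25), forward weights [1.0, 1.0] and
# next-layer deltas [0.2, -0.1], the hidden delta is 0.25 * 0.1 = 0.025.
```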
Backpropagation
Forward propagation of the input signals
[Figure: an MLP with an input layer, a hidden layer, and an output layer.]
Backpropagation of the error
Backpropagation updates an MLP’s weights based on gradient descent of the error function.
Online/Stochastic Backpropagation
1: Initialise all weights to small random values.
2: repeat
3:   for each training example do
4:     Forward propagate the input features of the example to determine the MLP’s outputs.
5:     Back propagate the error to generate Δw_ij for all weights w_ij.
6:     Update the weights using Δw_ij.
7:   end for
8: until stopping criteria reached.
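The algorithm above can be sketched end-to-end for a tiny MLP with one hidden layer and a single sigmoid output. Everything here (function names, network size, learning rate, epoch count, and the OR training task) is an illustrative choice of mine, not fixed by the lecture:

```python
# Sketch of online/stochastic backpropagation for a small MLP.
import math
import random

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def forward(w_hidden, w_out, x):
    """x includes a bias input of -1 at position 0."""
    h_u = [sum(w * xi for w, xi in zip(ws, x)) for ws in w_hidden]
    h = [-1.0] + [sigmoid(u) for u in h_u]     # bias input for output layer
    o = sigmoid(sum(w * hi for w, hi in zip(w_out, h)))
    return h, o

def train_online(patterns, n_hidden=3, eta=0.5, epochs=2000, seed=0):
    rnd = random.Random(seed)
    n_in = len(patterns[0][0])
    # 1: initialise all weights to small random values
    w_hidden = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in)]
                for _ in range(n_hidden)]
    w_out = [rnd.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
    for _ in range(epochs):                     # 2: repeat ...
        for x, t in patterns:                   # 3: for each example
            h, o = forward(w_hidden, w_out, x)  # 4: forward propagate
            # 5: backpropagate, using f'(u) = f(u)(1 - f(u))
            delta_o = o * (1 - o) * (t - o)
            delta_h = [h[i + 1] * (1 - h[i + 1]) * w_out[i + 1] * delta_o
                       for i in range(n_hidden)]
            # 6: update the weights immediately (this makes it "online")
            w_out = [w + eta * delta_o * hi for w, hi in zip(w_out, h)]
            w_hidden = [[w + eta * d * xi for w, xi in zip(ws, x)]
                        for ws, d in zip(w_hidden, delta_h)]
    return w_hidden, w_out

# Learn the OR function (bias input -1 prepended to each pattern):
patterns = [([-1, 0, 0], 0), ([-1, 0, 1], 1),
            ([-1, 1, 0], 1), ([-1, 1, 1], 1)]
w_hidden, w_out = train_online(patterns)
```

Moving step 6 out of the inner loop and applying the accumulated Δw_ij once per pass gives the batch variant.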
Batch Backpropagation
1: Initialise all weights to small random values.
2: repeat
3:   for each training example do
4:     Forward propagate the input features of the example to determine the MLP’s outputs.
5:     Back propagate the error to generate Δw_ij for all weights w_ij.
6:   end for
7:   Update the weights based on the accumulated values Δw_ij.
8: until stopping criteria reached.
Summary
We learnt what a multilayer perceptron is and why we might need them.
We have some intuition about using gradient descent on an error function.
We know a learning rule for updating weights in order to minimise the error: Δw_ij = −η ∂E/∂w_ij.
If we use the squared error, we get the generalised delta rule Δw_ij = η δ_i^p x_ij.
We know how to calculate δ_i^p for both output and hidden layers.
We can use this rule to learn an MLP’s weights using the backpropagation algorithm.
Further Reading
Gurney K. Introduction to Neural Networks. 3rd ed. Taylor and Francis; 2003.