
Introduction to the Multilayer Perceptron in Python

Ko, Youngjoong

Dept. of Computer Engineering, Dong-A University

Contents

1. Non-linear Classification

2. XOR problem

3. Architecture of Multilayer Perceptron (MLP)

4. Forward Computation

5. Activation Functions

6. Learning in MLP with Back-propagation Algorithm

7. Python Code and Practice


Non-linear Classification

 Many cases are impossible to classify linearly
 Linear model: perceptron
 Non-linear models: decision trees, nearest-neighbor models

 Goal: extend the perceptron into a non-linear learning model

XOR Problem

 Limitation of the perceptron's performance on the XOR problem
 At most 75% accuracy
 Overcome this limitation by using two perceptrons

XOR Problem

 Two steps of the solution
 Map the original feature space into a new space
 Classify in the new space

XOR Problem

 Example of Multilayer Perceptron as a solution

Architecture of Multilayer Perceptron

 Multilayer Perceptron (MLP) as a neural network
 Chains together a collection of perceptrons

 Two layers (not three)
 Do not count the inputs as a real layer
 Two layers of trained weights

 Each edge corresponds to a different weight
 Input -> hidden and hidden -> output

Architecture of Multilayer Perceptron

 Multilayer Perceptron (MLP) as a neural network
 Input layer, hidden layer, and output layer
 Weights: u and v

Forward Computation

 Functions in MLP
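The equations for this slide did not survive extraction. As a minimal sketch, assuming a tanh hidden layer and a linear output, and using the weight names u (input -> hidden) and v (hidden -> output) from the architecture slide, the forward computation of a two-layer MLP can be written as:

```latex
% hidden unit j: weighted sum of the inputs, passed through the activation function
a_j = \sum_i u_{ji} x_i, \qquad h_j = \tanh(a_j)

% output unit k: weighted sum of the hidden activations (linear output assumed)
\hat{y}_k = \sum_j v_{kj} h_j
```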

Forward Computation

 Another view of MLP forward propagation: matrix form
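A minimal NumPy sketch of this matrix view of forward propagation (the names U, V, and the choice of tanh are assumptions, not taken from the slide):

```python
import numpy as np

def forward(x, U, V):
    """Forward propagation of a two-layer MLP in matrix form.

    x: input vector of shape (d,)
    U: input-to-hidden weight matrix of shape (h, d)
    V: hidden-to-output weight matrix of shape (k, h)
    """
    hidden = np.tanh(np.dot(U, x))   # hidden-layer activations
    output = np.dot(V, hidden)       # linear output layer
    return hidden, output
```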

Activation Functions

 Hyperbolic tangent (tanh)
 A popular link (activation) function
 Differentiable: its derivative is 1 - tanh²(x) (see the code sketch below)

 Sigmoid functions
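A small sketch of the two activation functions above and their derivatives (function names are my own):

```python
import numpy as np

def tanh(x):
    return np.tanh(x)

def dtanh(x):
    return 1.0 - np.tanh(x) ** 2     # derivative: 1 - tanh^2(x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dsigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)             # derivative: sigmoid(x) * (1 - sigmoid(x))
```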

Activation Functions

 Simple Two-layer MLP

Activation Functions

 Small Two-layer Perceptron to solve the XOR problem

(Figure: a small two-layer network solving XOR; the weight/bias values shown include -1, -1, and 1.)
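The exact weights in the figure are only partially recoverable, so the sketch below uses one standard hand-set solution (an OR unit and a NAND unit combined by an AND output); it illustrates the idea of the slide rather than reproducing its figure:

```python
def step(s):
    # threshold (step) activation of a perceptron
    return 1 if s >= 0 else 0

def xor_mlp(x1, x2):
    h1 = step(1 * x1 + 1 * x2 - 0.5)     # hidden unit 1: OR
    h2 = step(-1 * x1 - 1 * x2 + 1.5)    # hidden unit 2: NAND
    return step(h1 + h2 - 1.5)           # output unit: AND of the two -> XOR

for a in (0, 1):
    for b in (0, 1):
        print((a, b, xor_mlp(a, b)))     # (0,0)->0, (0,1)->1, (1,0)->1, (1,1)->0
```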

Learning in MLP

 Back-propagation Algorithm
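The algorithm listing for this slide did not survive extraction, so the later references to its line numbers (Lines 8-13) cannot be matched up. As a hedged sketch only, one back-propagation update for a two-layer MLP with a tanh hidden layer, a linear output, and squared-error loss could look like this (all names are my own, not the slide's listing):

```python
import numpy as np

def backprop_step(x, y, U, V, lr=0.1):
    """One stochastic gradient-descent update via back-propagation."""
    # forward pass
    a = np.dot(U, x)                      # hidden pre-activations
    h = np.tanh(a)                        # hidden activations
    y_hat = np.dot(V, h)                  # linear output

    # backward pass
    e_out = y_hat - y                              # error at the output units
    e_hid = np.dot(V.T, e_out) * (1.0 - h ** 2)    # error propagated back through V, times tanh'

    # weight updates
    V -= lr * np.outer(e_out, h)
    U -= lr * np.outer(e_hid, x)
    return U, V
```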

Learning in MLP

 Understanding back-propagation on a simple example
 A two-layer MLP with no activation function in the output layer

(Figure: the example network, with inputs x1 and x2, units f1(s) through f5(s), and weights w1, w2, w3, v1, and v2.)

Learning in MLP

 Understanding back-propagation on a simple example

Learning in MLP

 Understanding back-propagation on a simple example

e3 = v13·e1 + v23·e2
(Figure: the errors e1 and e2 are combined through the weights v13 and v23 to give the error e3 at unit f3.)

Learning in MLP

 Understanding back-propagation on a simple example

e4 = v14·e1 + v24·e2
(Figure: similarly, the errors e1 and e2 are combined through the weights v14 and v24 to give the error e4 at unit f4.)

Learning in MLP

 Understanding back-propagation on a simple example

(Figure: after propagation, every unit f1 through f5 has an associated error e1 through e5.)

Learning in MLP

 Understanding back-propagation on a simple example


Learning in MLP

 Back-propagation Algorithm

Learning in MLP

 Simple Example in MLP Learning
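A hypothetical usage of the backprop_step sketch above, training a small network on the XOR data (the hidden-layer size, learning rate, and iteration count are arbitrary choices and may need tuning to converge):

```python
import numpy as np

rng = np.random.RandomState(0)
U = rng.uniform(-0.1, 0.1, (3, 2))     # 3 hidden units, 2 inputs
V = rng.uniform(-0.1, 0.1, (1, 3))     # 1 output unit
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])

for epoch in range(10000):
    for x, y in zip(X, Y):
        U, V = backprop_step(x, y, U, V, lr=0.5)
```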

Learning in MLP

 Back-propagation
 Line 8:

 Line 9:

 Line 10:

Learning in MLP

 Back-propagation
 Line 11:

 Line 12:
 Line 13:

More Considerations

 Typical complaints: many hyperparameters to choose
 # of layers
 # of hidden units per layer (significant)
 The initialization
 The stopping iteration or weight regularization

 Random initialization
 Small random weights (say, uniform between -0.1 and 0.1)
 By training a collection of networks, each with a different random initialization, we can often obtain better solutions
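A minimal sketch of this initialization scheme (the network shapes are illustrative):

```python
import numpy as np

def init_weights(n_in, n_hidden, n_out, low=-0.1, high=0.1, seed=None):
    # small random weights, uniform in [low, high]
    rng = np.random.RandomState(seed)
    U = rng.uniform(low, high, (n_hidden, n_in))
    V = rng.uniform(low, high, (n_out, n_hidden))
    return U, V

# train a collection of networks, each from a different random initialization,
# and keep the one with the best held-out performance
candidate_inits = [init_weights(2, 3, 1, seed=s) for s in range(5)]
```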

More Considerations

 Initialization Tip


More Considerations

 What is the proper number of iterations for early stopping?
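One common answer is to monitor the error on a held-out validation set and stop once it no longer improves. A minimal sketch, assuming hypothetical helpers train_one_epoch and validation_error:

```python
import copy

def train_with_early_stopping(model, patience=10, max_epochs=1000):
    best_err, best_model, waited = float('inf'), None, 0
    for epoch in range(max_epochs):
        train_one_epoch(model)            # hypothetical: one pass over the training data
        err = validation_error(model)     # hypothetical: error on held-out validation data
        if err < best_err:
            best_err, best_model, waited = err, copy.deepcopy(model), 0
        else:
            waited += 1
            if waited >= patience:        # stop once validation error has not improved
                break
    return best_model
```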

Python Code and Practice

 You should install Python 2.7 and NumPy

 Download from: http://nlpmlir.blogspot.kr/2016/02/multilayer-perceptron.html

 Homework

References

• Il-Seok Oh. Pattern Recognition (패턴인식). Kyobo Book Centre.

• Sangkeun Jung. “Introduction to .” Natural Language Processing Tutorial, 2015.

• http://ciml.info/


Thank you for your attention!
http://web.donga.ac.kr/yjko/
Ko, Youngjoong