Perceptron.Pdf

CS 472 - Perceptron

**Basic Neuron** (figure)

**Expanded Neuron** (figure)

**Perceptron Learning Algorithm**

- First neural network learning model, from the 1960s
- Simple and limited (single-layer model)
- Basic concepts are similar for multi-layer models, so this is a good learning tool
- Still used in some current applications (large business problems where intelligibility is needed, etc.)

**Perceptron Node – Threshold Logic Unit**

Inputs $x_1, \dots, x_n$ arrive through weights $w_1, \dots, w_n$ at a node with threshold $\theta$:

$$z = \begin{cases} 1 & \text{if } \sum_{i=1}^{n} x_i w_i \ge \theta \\ 0 & \text{if } \sum_{i=1}^{n} x_i w_i < \theta \end{cases}$$

- Learn weights such that an objective function is maximized
- What objective function should we use?
- What learning algorithm should we use?

**Perceptron Learning Algorithm** (example)

A two-input node with weights $w_1 = .4$, $w_2 = -.2$, threshold $\theta = .1$, and training set:

| x1 | x2 | t |
|----|----|---|
| .8 | .3 | 1 |
| .4 | .1 | 0 |

**First Training Instance**

net = .8(.4) + .3(-.2) = .26, so z = 1; the target is t = 1, so no weight change is needed.

**Second Training Instance**

net = .4(.4) + .1(-.2) = .14, so z = 1, but the target is t = 0, so the weights must change:

$$\Delta w_i = (t - z) \cdot c \cdot x_i$$

**Perceptron Rule Learning**

$$\Delta w_i = c(t - z)x_i$$

- where $w_i$ is the weight from input $i$ to the perceptron node, $c$ is the learning rate, $t$ is the target for the current instance, $z$ is the current output, and $x_i$ is the $i$th input
- Least perturbation principle
  - only change weights if there is an error
  - use a small $c$ rather than changing the weights enough to make the current pattern correct
  - scale by $x_i$
- Create a perceptron node with n inputs
- Iteratively apply a pattern from the training set and apply the perceptron rule
- Each iteration through the training set is an epoch
- Continue training until the total training-set error ceases to
improve

- Perceptron Convergence Theorem: guaranteed to find a solution in finite time if a solution exists

**Augmented Pattern Vectors**

1 0 1 -> 0
1 0 0 -> 1

Augmented version:

1 0 1 1 -> 0
1 0 0 1 -> 1

- Treat the threshold like any other weight. No special case. Call it a bias, since it biases the output up or down.
- Since we start with random weights anyway, we can ignore the $-\theta$ notion and just think of the bias as an extra available weight. (Note that the author uses a -1 input.)
- Always use a bias weight

**Perceptron Rule Example**

- Assume a 3-input perceptron plus bias (it outputs 1 if net > 0, else 0)
- Assume a learning rate $c$ of 1 and initial weights all 0: $\Delta w_i = c(t - z)x_i$
- Training set:
  0 0 1 -> 0
  1 1 1 -> 1
  1 0 1 -> 1
  0 1 1 -> 0

Filling in the table one pattern at a time (each pattern is augmented with a trailing 1 for the bias):

| Pattern | Target (t) | Weight Vector (wi) | Net | Output (z) | ΔW |
|---------|------------|--------------------|-----|------------|----|
| 0 0 1 1 | 0 | 0 0 0 0 | 0 | 0 | 0 0 0 0 |
| 1 1 1 1 | 1 | 0 0 0 0 | 0 | 0 | 1 1 1 1 |
| 1 0 1 1 | 1 | 1 1 1 1 |     |            |    |

**Peer Instruction**

- I pose a challenge question (often multiple choice), which will help solidify understanding of topics we have studied
  - there might not be just one correct answer
- You each get some time (1-2 minutes) to come up with your answer and vote – use Mentimeter (anonymous)
- Then you get some time to convince your group (neighbors) why you think you are right (2-3
minutes)
  - learn from and teach each other!
- You vote again; you may change your vote if you want
- We discuss the different responses together, show the votes, give you the opportunity to justify your thinking, and give you further insights

**Peer Instruction (PI) – Why**

- Studies show this approach improves learning
- Learn by doing, discussing, and teaching each other
  - curse of knowledge / expert blind spot
  - compared with a lecturer, a peer who just figured it out can explain it in your own jargon
  - you never really know something until you can teach it to someone else
  - more improved learning!
- Learn to reason about your thinking and answers
- More enjoyable – you are involved and active in the learning

**How Groups Interact**

- Best if group members have different initial answers
- 3 is the "magic" group number
  - you can self-organize "on the fly" or sit together specifically to be a group
  - can go 2-4 on a given day to make sure everyone is involved
- Teach and learn from each other: discuss, reason, articulate
- If you know the answer, listen first to where your colleagues are coming from, then be a great, humble teacher; you will also learn by doing that, and you'll be on the other side in the future
  - I can't do that as well, because every small group has different misunderstandings, and you get to focus on your particular questions
- Be ready to justify your vote and reasoning to the class!

**Challenge Question** – Perceptron

- Assume a 3-input perceptron plus bias (it outputs 1 if net > 0, else 0)
- Assume a learning rate $c$ of 1 and initial weights all 0: $\Delta w_i = c(t - z)x_i$
- Training set:
  0 0 1 -> 0
  1 1 1 -> 1
  1 0 1 -> 1
  0 1 1 -> 0

| Pattern | Target (t) | Weight Vector (wi) | Net | Output (z) | ΔW |
|---------|------------|--------------------|-----|------------|----|
| 0 0 1 1 | 0 | 0 0 0 0 | 0 | 0 | 0 0 0 0 |
| 1 1 1 1 | 1 | 0 0 0 0 | 0 | 0 | 1 1 1 1 |
| 1 0 1 1 | 1 | 1 1 1 1 |     |            |    |

Once it converges, the final weight vector will be:

A. 1 1 1 1
B. -1 0 1 0
C. 0 0 0 0
D. 1 0 0 0
E.
None of the above

**Example** (continued)

With the same setup and training set, completing the table through a second epoch:

| Pattern | Target (t) | Weight Vector (wi) | Net | Output (z) | ΔW |
|---------|------------|--------------------|-----|------------|----|
| 0 0 1 1 | 0 | 0 0 0 0 | 0 | 0 | 0 0 0 0 |
| 1 1 1 1 | 1 | 0 0 0 0 | 0 | 0 | 1 1 1 1 |
| 1 0 1 1 | 1 | 1 1 1 1 | 3 | 1 | 0 0 0 0 |
| 0 1 1 1 | 0 | 1 1 1 1 | 3 | 1 | 0 -1 -1 -1 |
| 0 0 1 1 | 0 | 1 0 0 0 | 0 | 0 | 0 0 0 0 |
| 1 1 1 1 | 1 | 1 0 0 0 | 1 | 1 | 0 0 0 0 |
| 1 0 1 1 | 1 | 1 0 0 0 | 1 | 1 | 0 0 0 0 |
| 0 1 1 1 | 0 | 1 0 0 0 | 0 | 0 | 0 0 0 0 |

The weights do not change during the second epoch, so training has converged with final weight vector 1 0 0 0.

**Perceptron Homework**

- Assume a 3-input perceptron plus bias (it outputs 1 if net > 0, else 0)
- Assume a learning rate $c$ of 1 and initial weights all 1: $\Delta w_i = c(t - z)x_i$
- Show the weights after each pattern for just one epoch
- Training set:
  1 0 1 -> 0
  1 .5 0 -> 0
  1 -.4 1 -> 1
  0 1 .5 -> 1

| Pattern | Target (t) | Weight Vector (wi) | Net | Output (z) | ΔW |
|---------|------------|--------------------|-----|------------|----|
|         |            | 1 1 1 1 |     |            |    |

**Training Sets and Noise**

- Assume a probability of error at each input and output value each time a pattern is trained on
- 0 0 1 0 1 1 0 0 1 1 0 -> 0 1 1 0
- i.e.
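The perceptron-rule examples worked through in the tables above can be checked with a short script. This is a minimal sketch, assuming the rule exactly as stated on the slides (output 1 if net > 0, update $\Delta w_i = c(t-z)x_i$, bias folded in as an extra weight on a constant 1 input); the function name `train_perceptron` is illustrative, not from the slides.

```python
# Minimal sketch of the perceptron rule from the slides:
# z = 1 if net > 0 else 0, and delta_w_i = c * (t - z) * x_i.
# Each pattern is augmented with a trailing 1 so the bias is just
# another weight.

def train_perceptron(patterns, weights, c=1.0, max_epochs=100):
    """Train until an epoch produces no weight changes (or max_epochs)."""
    w = list(weights)
    for _ in range(max_epochs):
        changed = False
        for x, t in patterns:
            net = sum(wi * xi for wi, xi in zip(w, x))
            z = 1 if net > 0 else 0
            if z != t:  # least perturbation: only update on error
                w = [wi + c * (t - z) * xi for wi, xi in zip(w, x)]
                changed = True
        if not changed:  # converged: a full epoch with no errors
            break
    return w

# The example training set above, already augmented with the bias input.
data = [([0, 0, 1, 1], 0), ([1, 1, 1, 1], 1),
        ([1, 0, 1, 1], 1), ([0, 1, 1, 1], 0)]
print(train_perceptron(data, [0, 0, 0, 0]))  # -> [1.0, 0.0, 0.0, 0.0]
```

The script reproduces the table: one update on pattern 1 1 1 1, one on 0 1 1 1, then a clean second epoch, matching challenge answer D.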
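The noise slide is cut off in this extract, but its stated assumption (an independent probability of error at each input and output value every time a pattern is trained on) can be sketched as follows; the function name `corrupt` and its interface are illustrative assumptions, not from the slides.

```python
import random

def corrupt(inputs, targets, p_error):
    """Flip each binary input and target value independently with
    probability p_error -- the per-value noise model from the slides."""
    noisy_x = [b if random.random() >= p_error else 1 - b for b in inputs]
    noisy_t = [b if random.random() >= p_error else 1 - b for b in targets]
    return noisy_x, noisy_t

# The slide's example pattern: 11 binary inputs mapped to 4 outputs.
x, t = corrupt([0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0], [0, 1, 1, 0],
               p_error=0.1)
```

Training on patterns corrupted this way tests how robust the perceptron rule is to label and input noise.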
