Perceptron & Neural Networks

School of Computer Science
10-701 Introduction to Machine Learning
Matt Gormley, Lecture 12, October 17, 2016
Readings: Bishop Ch. 4.1.7, Ch. 5; Murphy Ch. 16.5, Ch. 28; Mitchell Ch. 4

Reminders
• Homework 3: due 10/24/16

Outline
• Discriminative vs. Generative
• Perceptron
• Neural Networks
• Backpropagation

DISCRIMINATIVE AND GENERATIVE CLASSIFIERS

Generative vs. Discriminative
• Generative Classifiers:
  – Example: Naïve Bayes
  – Define a joint model of the observations x and the labels y: p(x, y)
  – Learning maximizes the (joint) likelihood
  – Use Bayes' Rule to classify based on the posterior: p(y|x) = p(x|y) p(y) / p(x)
• Discriminative Classifiers:
  – Example: Logistic Regression
  – Directly model the conditional: p(y|x)
  – Learning maximizes the conditional likelihood

Generative vs. Discriminative: Finite Sample Analysis (Ng & Jordan, 2002)
[Assume that we are learning from a finite training dataset.]
• If the model assumptions are correct: Naïve Bayes is a more efficient learner (requires fewer samples) than Logistic Regression.
• If the model assumptions are incorrect: Logistic Regression has lower asymptotic error, and does better than Naïve Bayes.

[Figure: learning curves on several datasets; solid: NB, dashed: LR. Slides courtesy of William Cohen.]
Naïve Bayes makes stronger assumptions about the data, but needs fewer examples to estimate the parameters.
"On Discriminative vs. Generative Classifiers: ..." Andrew Ng and Michael Jordan, NIPS 2001.

Generative vs. Discriminative: Learning (Parameter Estimation)
• Naïve Bayes: parameters are decoupled → closed-form solution for the MLE.
• Logistic Regression: parameters are coupled → no closed-form solution; must use iterative optimization techniques instead.

Naïve Bayes vs. Logistic Reg.: Learning (MAP Estimation of Parameters)
• Bernoulli Naïve Bayes: parameters are probabilities → a Beta prior (usually) pushes the probabilities away from the zero/one extremes.
• Logistic Regression: parameters are not probabilities → a Gaussian prior encourages the parameters to be close to zero (which effectively pushes the probabilities away from the zero/one extremes).

Naïve Bayes vs. Logistic Reg.: Features
• Naïve Bayes: the features x are assumed to be conditionally independent given y (the Naïve Bayes assumption).
• Logistic Regression: no assumptions are made about the form of the features x; they can be dependent and correlated in any fashion.

THE PERCEPTRON ALGORITHM

Recall: Background: Hyperplanes
Why don't we drop the generative model and try to learn this hyperplane directly?

Hyperplane (Definition 1): H = { x : w^T x = b }
Hyperplane (Definition 2): H = { x : w^T x = 0 and x_1 = 1 }
Half-spaces: H+ = { x : w^T x > 0 and x_1 = 1 },  H- = { x : w^T x < 0 and x_1 = 1 }

Directly modeling the hyperplane would use a decision function
  h(t) = sign(θ^T t)
for y ∈ {−1, +1}.

Online Learning Model
Setup:
• We receive an example (x, y)
• Make a prediction h(x)
• Check for correctness: is h(x) = y?
Goal:
• Minimize the number of mistakes

Recall: Geometric Margins
• Definition: the margin of an example x w.r.t. a linear separator w is the distance from x to the plane w · x = 0 (or the negative of that distance if x lies on the wrong side).
• Definition: the margin γ_w of a set of examples S w.r.t. a linear separator w is the smallest margin over points x ∈ S.
• Definition: the margin γ of a set of examples S is the maximum γ_w over all linear separators w.
[Figure: positively and negatively labeled points on either side of a separator w, with the margin band marked. Slide from Nina Balcan.]
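To make the margin definitions concrete, here is a minimal sketch (not from the slides) that computes the margin γ_w of a labeled set with respect to a given separator w: each example's signed distance to the plane w · x = 0, negated when the example falls on the wrong side, then the minimum over the set. The function name and the toy data are illustrative choices.

```python
import numpy as np

def margin_wrt_separator(X, y, w):
    """Margin gamma_w of the labeled set (X, y) w.r.t. separator w.

    For each example x, the signed distance to the plane w . x = 0 is
    y * (w . x) / ||w||; it is negative when x is on the wrong side.
    The set's margin is the smallest such value.
    """
    w = np.asarray(w, dtype=float)
    distances = y * (X @ w) / np.linalg.norm(w)
    return distances.min()

# Toy example: points in R^2, labels in {-1, +1}.
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -0.5]])
y = np.array([+1, +1, -1, -1])
w = np.array([1.0, 1.0])   # candidate separator (direction only)

print(margin_wrt_separator(X, y, w))   # positive, so w separates the data
```

The margin γ of the set itself would be the maximum of this quantity over all unit-norm separators w.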
Perceptron Algorithm
Data: Inputs are continuous vectors of length K; outputs are discrete:
  D = { (t^(i), y^(i)) }_{i=1}^N  where  t ∈ R^K  and  y ∈ {+1, −1}
Prediction: Output is determined by a hyperplane:
  ŷ = h_θ(x) = sign(θ^T x),   where sign(a) = 1 if a ≥ 0, and −1 otherwise
Learning: Iterative procedure:
• while not converged
  – receive the next example (x, y)
  – predict y' = h(x)
  – if positive mistake: add x to the parameters
  – if negative mistake: subtract x from the parameters

Algorithm 1: Perceptron Learning Algorithm (Batch)
1: procedure PERCEPTRON(D = { (t^(1), y^(1)), ..., (t^(N), y^(N)) })
2:   θ ← 0                           Initialize parameters
3:   while not converged do
4:     for i ∈ {1, 2, ..., N} do     For each example
5:       ŷ ← sign(θ^T t^(i))         Predict
6:       if ŷ ≠ y^(i) then           If mistake
7:         θ ← θ + y^(i) t^(i)       Update parameters
8:   return θ
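The pseudocode translates almost line-for-line into code. Below is a minimal Python/NumPy sketch of the batch perceptron (not part of the original slides); the function name, the convergence check, and the max_epochs cap are choices made here for illustration.

```python
import numpy as np

def perceptron_train(T, y, max_epochs=100):
    """Batch perceptron: T is an (N, K) array of inputs, y has labels in {-1, +1}.

    Mirrors Algorithm 1: predict with sign(theta^T t); on a mistake, add
    y^(i) t^(i) to the parameters. Stops after a full pass with no mistakes
    (converged) or after max_epochs passes.
    """
    N, K = T.shape
    theta = np.zeros(K)                              # line 2: initialize parameters
    for _ in range(max_epochs):                      # line 3: while not converged
        mistakes = 0
        for i in range(N):                           # line 4: for each example
            y_hat = 1 if theta @ T[i] >= 0 else -1   # line 5: predict
            if y_hat != y[i]:                        # line 6: if mistake
                theta += y[i] * T[i]                 # line 7: update parameters
                mistakes += 1
        if mistakes == 0:
            break
    return theta                                     # line 8

# Usage on a linearly separable toy set (first coordinate fixed to 1 as a bias).
T = np.array([[1.0, 2.0, 1.0], [1.0, 1.0, 3.0], [1.0, -1.0, -2.0], [1.0, -2.0, -0.5]])
y = np.array([+1, +1, -1, -1])
theta = perceptron_train(T, y)
print(np.sign(T @ theta))    # matches y on separable data
```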
Analysis: Perceptron Mistake Bound
Guarantee: If the data has margin γ and all points lie inside a ball of radius R, then the Perceptron makes ≤ (R/γ)² mistakes.
(Normalized margin: multiplying all points by 100, or dividing all points by 100, doesn't change the number of mistakes; the algorithm is invariant to scaling.)
[Figure: separable data with margin γ around the separator θ*, all points inside a ball of radius R. Slide from Nina Balcan.]

Theorem 0.1 (Block (1962), Novikoff (1962)).
Given dataset: D = { (t^(i), y^(i)) }_{i=1}^N.
Suppose:
1. Finite-size inputs: ||t^(i)|| ≤ R
2. Linearly separable data: ∃ θ* s.t. ||θ*|| = 1 and y^(i) (θ* · t^(i)) ≥ γ, ∀i
Then: the number of mistakes made by the Perceptron algorithm on this dataset is
  k ≤ (R/γ)²

The proof analyzes the online version of the algorithm, where θ^(k) denotes the parameters after the kth mistake:

Algorithm 1: Perceptron Learning Algorithm (Online)
1: procedure PERCEPTRON(D = { (t^(1), y^(1)), (t^(2), y^(2)), ... })
2:   θ ← 0, k = 1                          Initialize parameters
3:   for i ∈ {1, 2, ...} do                For each example
4:     if y^(i) (θ^(k) · t^(i)) ≤ 0 then   If mistake
5:       θ^(k+1) ← θ^(k) + y^(i) t^(i)     Update parameters
6:       k ← k + 1
7:   return θ

Whiteboard: Proof of Perceptron Mistake Bound

Proof of Perceptron Mistake Bound:
We will show that there exist constants A and B s.t.
  A k ≤ ||θ^(k+1)|| ≤ B √k

Part 1 (lower bound): for some A, A k ≤ ||θ^(k+1)||.
  θ^(k+1) · θ* = (θ^(k) + y^(i) t^(i)) · θ*        by the Perceptron update
               = θ^(k) · θ* + y^(i) (θ* · t^(i))
               ≥ θ^(k) · θ* + γ                    by assumption (margin γ)
  ⇒ θ^(k+1) · θ* ≥ kγ                              by induction on k, since θ^(1) = 0
  ⇒ ||θ^(k+1)|| ≥ kγ                               since ||r|| ||m|| ≥ r · m (Cauchy-Schwarz) and ||θ*|| = 1

Part 2 (upper bound): for some B, ||θ^(k+1)|| ≤ B √k.
  ||θ^(k+1)||² = ||θ^(k) + y^(i) t^(i)||²                          by the Perceptron update
               = ||θ^(k)||² + (y^(i))² ||t^(i)||² + 2 y^(i) (θ^(k) · t^(i))
               ≤ ||θ^(k)||² + (y^(i))² ||t^(i)||²                  since the kth mistake ⇒ y^(i) (θ^(k) · t^(i)) ≤ 0
               ≤ ||θ^(k)||² + R²                                   since (y^(i))² ||t^(i)||² = ||t^(i)||² ≤ R² by assumption, and (y^(i))² = 1
  ⇒ ||θ^(k+1)||² ≤ k R²                                            by induction on k, since ||θ^(1)||² = 0
  ⇒ ||θ^(k+1)|| ≤ √k R

Part 3: Combining the bounds finishes the proof:
  kγ ≤ ||θ^(k+1)|| ≤ √k R   ⇒   k ≤ (R/γ)²
The total number of mistakes k can be no larger than this.

Extensions of Perceptron
• Kernel Perceptron
  – Choose a kernel K(x', x)
  – Apply the kernel trick to the Perceptron
  – The resulting algorithm is still very simple
• Structured Perceptron
  – The basic idea can also be applied when y ranges over an exponentially large set
  – The mistake bound does not depend on the size of that set

Summary: Perceptron
• The Perceptron is a simple linear classifier
• Simple learning algorithm: when a mistake is made, add / subtract the features
• For linearly separable and inseparable data, we can bound the number of mistakes (geometric argument)
• Extensions support nonlinear separators and structured prediction

RECALL: LOGISTIC REGRESSION

Using gradient ascent for linear classifiers
Key idea behind today's lecture:
1. Define a linear classifier (logistic regression)
2. Define an objective function (likelihood)
3. Optimize it with gradient descent to learn the parameters
4. Predict the class with the highest probability under the model

This decision function isn't differentiable:
  h(t) = sign(θ^T t)
Use a differentiable function instead:
  p_θ(y = 1 | t) = 1 / (1 + exp(−θ^T t)),   where logistic(u) ≡ 1 / (1 + e^{−u})

Logistic Regression
Data: Inputs are continuous vectors of length K; outputs are discrete:
  D = { (t^(i), y^(i)) }_{i=1}^N  where  t ∈ R^K  and  y ∈ {0, 1}
Model: Logistic function applied to the dot product of the parameters with the input vector:
  p_θ(y = 1 | t) = 1 / (1 + exp(−θ^T t))
Learning: Find the parameters that minimize some objective function:
  θ* = argmin_θ J(θ)
Prediction: Output the most probable class:
  ŷ = argmax_{y ∈ {0,1}} p_θ(y | t)
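Matching the four-step recipe above (define a linear classifier, define a likelihood objective, optimize with gradient descent, predict the most probable class), here is a minimal logistic-regression sketch in NumPy. It is not from the slides; the learning rate, iteration count, and toy data are illustrative, and J(θ) is taken here to be the average negative conditional log-likelihood.

```python
import numpy as np

def logistic(u):
    """logistic(u) = 1 / (1 + e^{-u})"""
    return 1.0 / (1.0 + np.exp(-u))

def train_logistic_regression(T, y, lr=0.1, num_iters=1000):
    """Minimize J(theta) = -(1/N) sum_i log p_theta(y^(i) | t^(i)) by gradient descent.

    T: (N, K) inputs (include a constant 1 column as a bias feature);
    y: labels in {0, 1}.
    """
    N, K = T.shape
    theta = np.zeros(K)
    for _ in range(num_iters):
        p = logistic(T @ theta)      # p_theta(y = 1 | t) for every example
        grad = T.T @ (p - y) / N     # gradient of the negative log-likelihood
        theta -= lr * grad           # gradient-descent step
    return theta

def predict(T, theta):
    """Most probable class: 1 if p_theta(y = 1 | t) >= 0.5, else 0."""
    return (logistic(T @ theta) >= 0.5).astype(int)

# Toy usage: the same separable points as before, with {0, 1} labels.
T = np.array([[1.0, 2.0, 1.0], [1.0, 1.0, 3.0], [1.0, -1.0, -2.0], [1.0, -2.0, -0.5]])
y = np.array([1, 1, 0, 0])
theta = train_logistic_regression(T, y)
print(predict(T, theta))             # recovers the training labels
```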
NEURAL NETWORKS

Learning highly non-linear functions f: X → Y
• f might be a non-linear function
• X: (vector of) continuous and/or discrete variables
• Y: (vector of) continuous and/or discrete variables
Examples: the XOR gate; speech recognition.
© Eric Xing @ CMU, 2006-2011

Perceptron and Neural Nets
• From biological neuron to artificial neuron (perceptron)
[Figure: a biological neuron (dendrites, soma, axon, synapses) alongside an artificial neuron: inputs x_1, x_2 with weights w_1, w_2, a linear combiner Σ, and a hard limiter with threshold θ producing output Y.]
• Activation function: the neuron computes the weighted sum of its inputs and thresholds it:
  X = Σ_{i=1}^n x_i w_i,    Y = +1 if X ≥ θ, and −1 if X < θ
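The XOR gate mentioned above is the classic function that a single threshold unit cannot represent, while a small two-layer network of the same units can. Below is a minimal sketch (not from the slides) of that construction, using the hard-limiter activation just described; the specific weights and thresholds are one standard hand-picked choice, not learned.

```python
import numpy as np

def hard_limiter(x, w, theta):
    """Artificial neuron: Y = +1 if sum_i x_i w_i >= theta, else -1."""
    return 1 if np.dot(w, x) >= theta else -1

def xor_net(x1, x2):
    """Two-layer network of hard-limiter units computing XOR on {-1, +1} inputs.

    Hidden layer: h1 acts like OR, h2 like NAND; the output unit is AND(h1, h2).
    """
    x = np.array([x1, x2])
    h1 = hard_limiter(x, np.array([1.0, 1.0]), theta=0.0)     # OR
    h2 = hard_limiter(x, np.array([-1.0, -1.0]), theta=0.0)   # NAND
    return hard_limiter(np.array([h1, h2]), np.array([1.0, 1.0]), theta=2.0)  # AND

for x1 in (-1, 1):
    for x2 in (-1, 1):
        print(x1, x2, "->", xor_net(x1, x2))   # +1 exactly when x1 != x2
```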
