
CIS 520: Machine Learning                                                  Spring 2019: Lecture 5

Support Vector Machines for Classification and Regression

Lecturer: Shivani Agarwal

Disclaimer: These notes are designed to be a supplement to the lecture. They may or may not cover all the material discussed in the lecture (and vice versa).

Outline
• Linearly separable data: Hard margin SVMs
• Non-linearly separable data: Soft margin SVMs
• Loss minimization view
• Support vector regression (SVR)

1 Linearly Separable Data: Hard Margin SVMs

In this lecture we consider linear support vector machines (SVMs); we will consider nonlinear extensions in the next lecture. Let $\mathcal{X} = \mathbb{R}^d$, and consider a binary classification task with $\mathcal{Y} = \hat{\mathcal{Y}} = \{\pm 1\}$. A training sample $S = ((x_1, y_1), \ldots, (x_m, y_m)) \in (\mathbb{R}^d \times \{\pm 1\})^m$ is said to be linearly separable if there exists a linear classifier $h_{w,b}(x) = \mathrm{sign}(w^\top x + b)$ which classifies all examples in $S$ correctly, i.e. for which $y_i(w^\top x_i + b) > 0 \;\; \forall i \in [m]$. For example, Figure 1 (left) shows a training sample in $\mathbb{R}^2$ that is linearly separable, together with two possible linear classifiers that separate the data correctly (note that the decision surface of a linear classifier in 2 dimensions is a line, and more generally in $d > 2$ dimensions is a hyperplane). Which of the two classifiers is likely to give better generalization performance?

Figure 1: Left: A linearly separable data set, with two possible linear classifiers that separate the data. Blue circles represent class label 1 and red crosses −1; the arrow represents the direction of positive classification. Right: The same data set and classifiers, with margin of separation shown.

Although both classifiers separate the data, the distance or margin with which the separation is achieved is different; this is shown in Figure 1 (right). For the rest of this section, assume that the training sample $S = ((x_1, y_1), \ldots, (x_m, y_m))$ is linearly separable; in this setting, the SVM algorithm selects the maximum margin linear classifier, i.e. the linear classifier that separates the training data with the largest margin.

More precisely, define the (geometric) margin of a linear classifier $h_{w,b}(x) = \mathrm{sign}(w^\top x + b)$ on an example $(x_i, y_i) \in \mathbb{R}^d \times \{\pm 1\}$ as
\[
\gamma_i = \frac{y_i(w^\top x_i + b)}{\|w\|_2} \,. \tag{1}
\]
Note that the distance of $x_i$ from the hyperplane $w^\top x + b = 0$ is given by $\frac{|w^\top x_i + b|}{\|w\|_2}$; therefore the above margin on $(x_i, y_i)$ is simply a signed version of this distance, with a positive sign if the example is classified correctly and negative otherwise. The (geometric) margin of $h_{w,b}$ on the sample $S = ((x_1, y_1), \ldots, (x_m, y_m))$ is then defined as the minimal margin on examples in $S$:
\[
\gamma = \min_{i \in [m]} \gamma_i \,. \tag{2}
\]

Given a linearly separable training sample $S = ((x_1, y_1), \ldots, (x_m, y_m)) \in (\mathbb{R}^d \times \{\pm 1\})^m$, the hard margin SVM algorithm finds a linear classifier that maximizes the above margin on $S$. In particular, any linear classifier that separates $S$ correctly will have margin $\gamma > 0$; without loss of generality, we can represent any such classifier by some $(w, b)$ such that
\[
\min_{i \in [m]} y_i(w^\top x_i + b) = 1 \,. \tag{3}
\]
The margin of such a classifier on $S$ then becomes simply
\[
\gamma = \min_{i \in [m]} \frac{y_i(w^\top x_i + b)}{\|w\|_2} = \frac{1}{\|w\|_2} \,. \tag{4}
\]
Thus, maximizing the margin becomes equivalent to minimizing the norm $\|w\|_2$ subject to the constraints in Eq. (3), which can be written as the following optimization problem:
\[
\min_{w,b} \;\; \frac{1}{2}\|w\|_2^2 \tag{5}
\]
\[
\text{subject to} \;\; y_i(w^\top x_i + b) \geq 1 \,, \quad i = 1, \ldots, m. \tag{6}
\]
This is a convex quadratic program (QP) and can in principle be solved directly.
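To make this concrete, here is a minimal sketch that solves the hard margin primal QP in Eqs. (5)-(6) on a small toy data set. The CVXPY modeling library and the toy data are assumptions made purely for illustration; they are not part of the lecture notes, and any QP solver would do.

```python
# A minimal sketch (not from the notes): solve the hard margin primal QP in
# Eqs. (5)-(6) with CVXPY, assumed here purely for illustration.
import numpy as np
import cvxpy as cp

def hard_margin_svm(X, y):
    """X: (m, d) array of inputs; y: (m,) array of labels in {-1, +1}."""
    m, d = X.shape
    w = cp.Variable(d)
    b = cp.Variable()
    objective = cp.Minimize(0.5 * cp.sum_squares(w))      # (1/2) ||w||_2^2
    constraints = [cp.multiply(y, X @ w + b) >= 1]        # y_i (w^T x_i + b) >= 1
    cp.Problem(objective, constraints).solve()
    return w.value, b.value

# Hypothetical toy data in R^2 that is linearly separable.
X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -2.0], [-1.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w_hat, b_hat = hard_margin_svm(X, y)
print(np.sign(X @ w_hat + b_hat))   # should reproduce y
```

Solving the primal directly like this works fine for small problems, but as the notes explain next, the dual formulation reveals more structure and is the route to nonlinear extensions.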
However, it is useful to consider the dual of the above problem, which sheds light on the structure of the solution and also facilitates the extension to nonlinear classifiers which we will see in the next lecture. Note that by our assumption that the data is linearly separable, the above problem satisfies Slater's condition, and so strong duality holds. Therefore solving the dual problem is equivalent to solving the above primal problem. Introducing dual variables (or Lagrange multipliers) $\alpha_i \geq 0$ ($i = 1, \ldots, m$) for the inequality constraints above gives the Lagrangian function
\[
L(w, b, \alpha) = \frac{1}{2}\|w\|_2^2 + \sum_{i=1}^m \alpha_i \big(1 - y_i(w^\top x_i + b)\big) \,. \tag{7}
\]
The (Lagrange) dual function is then given by
\[
\phi(\alpha) = \inf_{w \in \mathbb{R}^d,\, b \in \mathbb{R}} L(w, b, \alpha) \,.
\]
To compute the dual function, we set the derivatives of $L(w, b, \alpha)$ w.r.t. $w$ and $b$ to zero; this gives the following:
\[
w = \sum_{i=1}^m \alpha_i y_i x_i \tag{8}
\]
\[
\sum_{i=1}^m \alpha_i y_i = 0 \,. \tag{9}
\]
Substituting these back into $L(w, b, \alpha)$, we have the following dual function:
\[
\phi(\alpha) = -\frac{1}{2} \sum_{i=1}^m \sum_{j=1}^m \alpha_i \alpha_j y_i y_j (x_i^\top x_j) + \sum_{i=1}^m \alpha_i \,;
\]
this dual function is defined over the domain $\{\alpha \in \mathbb{R}^m : \sum_{i=1}^m \alpha_i y_i = 0\}$. This leads to the following dual problem:
\[
\max_{\alpha} \;\; -\frac{1}{2} \sum_{i=1}^m \sum_{j=1}^m \alpha_i \alpha_j y_i y_j (x_i^\top x_j) + \sum_{i=1}^m \alpha_i \tag{10}
\]
\[
\text{subject to} \;\; \sum_{i=1}^m \alpha_i y_i = 0 \tag{11}
\]
\[
\alpha_i \geq 0 \,, \quad i = 1, \ldots, m. \tag{12}
\]
This is again a convex QP (in the $m$ variables $\alpha_i$) and can be solved efficiently using numerical optimization methods. On obtaining the solution $\hat{\alpha}$ to the above dual problem, the weight vector $\hat{w}$ corresponding to the maximal margin classifier can be obtained via Eq. (8):
\[
\hat{w} = \sum_{i=1}^m \hat{\alpha}_i y_i x_i \,.
\]
Now, by the complementary slackness condition in the KKT conditions, we have for each $i \in [m]$,
\[
\hat{\alpha}_i \big(1 - y_i(\hat{w}^\top x_i + \hat{b})\big) = 0 \,.
\]
This gives
\[
\hat{\alpha}_i > 0 \;\implies\; 1 - y_i(\hat{w}^\top x_i + \hat{b}) = 0 \,.
\]
In other words, $\hat{\alpha}_i$ is positive only for training points $x_i$ that lie on the margin, i.e. that are closest to the separating hyperplane; these points are called the support vectors. For all other training points $x_i$, we have $\hat{\alpha}_i = 0$. Thus the solution for $\hat{w}$ can be written as a linear combination of just the support vectors; specifically, if we define $\mathrm{SV} = \{i \in [m] : \hat{\alpha}_i > 0\}$, then we have
\[
\hat{w} = \sum_{i \in \mathrm{SV}} \hat{\alpha}_i y_i x_i \,.
\]
Moreover, for all $i \in \mathrm{SV}$, we have
\[
1 - y_i(\hat{w}^\top x_i + \hat{b}) = 0 \,, \quad \text{or equivalently (since } y_i^2 = 1\text{),} \quad y_i - (\hat{w}^\top x_i + \hat{b}) = 0 \,.
\]
This allows us to obtain $\hat{b}$ from any of the support vectors; in practice, for numerical stability, one generally averages over all the support vectors, giving
\[
\hat{b} = \frac{1}{|\mathrm{SV}|} \sum_{i \in \mathrm{SV}} (y_i - \hat{w}^\top x_i) \,.
\]
In order to classify a new point $x \in \mathbb{R}^d$ using the learned classifier, one then computes
\[
h_{\hat{w},\hat{b}}(x) = \mathrm{sign}(\hat{w}^\top x + \hat{b}) = \mathrm{sign}\Big(\sum_{i \in \mathrm{SV}} \hat{\alpha}_i y_i (x_i^\top x) + \hat{b}\Big) \,. \tag{13}
\]
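The dual route can also be sketched in code: solve the QP in Eqs. (10)-(12), identify the support vectors, and recover $(\hat{w}, \hat{b})$ as above. As before, CVXPY and the numerical cutoff `tol` are assumptions made for illustration, not part of the notes.

```python
# A sketch (not from the notes): solve the dual QP in Eqs. (10)-(12), then
# recover (w_hat, b_hat) from the support vectors as described above.
import numpy as np
import cvxpy as cp

def hard_margin_svm_dual(X, y, tol=1e-6):
    """X: (m, d) inputs; y: (m,) labels in {-1, +1}; tol: cutoff for alpha_i > 0."""
    m = X.shape[0]
    alpha = cp.Variable(m)
    # The quadratic term -1/2 sum_ij alpha_i alpha_j y_i y_j (x_i^T x_j)
    # equals -1/2 || X^T (alpha * y) ||_2^2, which CVXPY accepts directly.
    objective = cp.Maximize(cp.sum(alpha)
                            - 0.5 * cp.sum_squares(X.T @ cp.multiply(alpha, y)))
    constraints = [alpha >= 0, y @ alpha == 0]   # Eqs. (11)-(12)
    cp.Problem(objective, constraints).solve()

    a = alpha.value
    sv = a > tol                               # support vectors: alpha_i > 0 (up to tolerance)
    w_hat = (a[sv] * y[sv]) @ X[sv]            # Eq. (8), restricted to the support vectors
    b_hat = np.mean(y[sv] - X[sv] @ w_hat)     # average of y_i - w^T x_i over support vectors
    return w_hat, b_hat, np.where(sv)[0]

# A new point x is then classified as in Eq. (13): np.sign(x @ w_hat + b_hat).
```

On linearly separable data, strong duality means this should recover the same maximum margin classifier as the primal sketch given earlier.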
2 Non-Linearly Separable Data: Soft Margin SVMs

The above derivation assumed the existence of a linear classifier that can correctly classify all examples in a given training sample $S = ((x_1, y_1), \ldots, (x_m, y_m))$. But what if the sample is not linearly separable? In this case, one needs to allow for the possibility of errors in classification. This is usually done by relaxing the constraints in Eq. (6) through the introduction of slack variables $\xi_i \geq 0$ ($i = 1, \ldots, m$), and requiring only that
\[
y_i(w^\top x_i + b) \geq 1 - \xi_i \,, \quad i = 1, \ldots, m. \tag{14}
\]
An extra cost for errors can be assigned as follows:
\[
\min_{w, b, \xi} \;\; \frac{1}{2}\|w\|_2^2 + C \sum_{i=1}^m \xi_i \tag{15}
\]
\[
\text{subject to} \;\; y_i(w^\top x_i + b) \geq 1 - \xi_i \,, \quad i = 1, \ldots, m \tag{16}
\]
\[
\xi_i \geq 0 \,, \quad i = 1, \ldots, m. \tag{17}
\]
Thus, whenever $y_i(w^\top x_i + b) < 1$, we pay an associated cost of $C\xi_i = C(1 - y_i(w^\top x_i + b))$ in the objective function; a classification error occurs when $y_i(w^\top x_i + b) \leq 0$, or equivalently when $\xi_i \geq 1$. The parameter $C > 0$ controls the tradeoff between increasing the margin (minimizing $\|w\|_2$) and reducing the errors (minimizing $\sum_i \xi_i$): a large value of $C$ keeps the errors small at the cost of a reduced margin; a small value of $C$ allows for more errors while increasing the margin on the remaining examples.

Forming the dual of the above problem as before leads to the same convex QP as in the linearly separable case, except that the constraints in Eq. (12) are replaced by¹
\[
0 \leq \alpha_i \leq C \,, \quad i = 1, \ldots, m. \tag{18}
\]
The solution for $\hat{w}$ is obtained similarly to the linearly separable case:
\[
\hat{w} = \sum_{i=1}^m \hat{\alpha}_i y_i x_i \,.
\]
In this case, the complementary slackness conditions yield for each $i \in [m]$:²
\[
\hat{\alpha}_i \big(1 - \hat{\xi}_i - y_i(\hat{w}^\top x_i + \hat{b})\big) = 0
\]
\[
(C - \hat{\alpha}_i)\, \hat{\xi}_i = 0 \,.
\]
This gives
\[
\hat{\alpha}_i > 0 \;\implies\; 1 - \hat{\xi}_i - y_i(\hat{w}^\top x_i + \hat{b}) = 0
\]
\[
\hat{\alpha}_i < C \;\implies\; \hat{\xi}_i = 0 \,.
\]
In particular, this gives
\[
0 < \hat{\alpha}_i < C \;\implies\; 1 - y_i(\hat{w}^\top x_i + \hat{b}) = 0 \,;
\]
these are the points on the margin. Thus here we have three types of support vectors with $\hat{\alpha}_i > 0$ (see Figure 2): points with $0 < \hat{\alpha}_i < C$, which lie exactly on the margin ($\hat{\xi}_i = 0$); points with $\hat{\alpha}_i = C$ and $0 < \hat{\xi}_i < 1$, which lie inside the margin but are still correctly classified; and points with $\hat{\alpha}_i = C$ and $\hat{\xi}_i \geq 1$, which are misclassified.

¹ To see this, note that in this case there are $2m$ dual variables, say $\{\alpha_i\}$ for the first set of inequality constraints and $\{\beta_i\}$ for the second set of inequality constraints $\xi_i \geq 0$. Setting the derivative of the Lagrangian $L(w, b, \xi, \alpha, \beta)$ w.r.t. $\xi_i$ to zero gives $\alpha_i + \beta_i = C$, allowing one to replace $\beta_i$ with $C - \alpha_i$ throughout; the constraint $\beta_i \geq 0$ then becomes $\alpha_i \leq C$.

² Again, the second set of complementary slackness conditions here is obtained by replacing the dual variables $\beta_i$ (for the inequality constraints $\xi_i \geq 0$) with $C - \alpha_i$ throughout; see also Footnote 1.
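The soft margin primal in Eqs. (15)-(17) is again a convex QP, so the same illustrative approach applies. The sketch below assumes CVXPY and an arbitrary default value of $C$; both are assumptions for illustration only.

```python
# A sketch (not from the notes): the soft margin primal QP in Eqs. (15)-(17).
import numpy as np
import cvxpy as cp

def soft_margin_svm(X, y, C=1.0):
    """X: (m, d) inputs; y: (m,) labels in {-1, +1}; C: margin/slack tradeoff."""
    m, d = X.shape
    w, b, xi = cp.Variable(d), cp.Variable(), cp.Variable(m)
    objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi))  # Eq. (15)
    constraints = [cp.multiply(y, X @ w + b) >= 1 - xi,                # Eq. (16)
                   xi >= 0]                                            # Eq. (17)
    cp.Problem(objective, constraints).solve()
    return w.value, b.value, xi.value

# After solving: points with xi_i >= 1 are classification errors, points with
# 0 < xi_i < 1 lie inside the margin, and xi_i = 0 holds for the rest.
```

Sweeping $C$ illustrates the tradeoff described above: large $C$ drives the slacks toward zero (approaching the hard margin solution on separable data), while small $C$ tolerates more violations in exchange for a wider margin.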