Support Vector Machine (with Python) Tutorial 3, Yang
1 Outline

Through this tutorial, you will get to know:
- What a Support Vector Machine is
- The SVM in scikit-learn: C-Support Vector Classification (SVC)
- The method used to train the SVM: the SMO algorithm
- The parameters of SVC
- How to use the scikit-learn SVM
- Other SVMs in scikit-learn

2 Linear model

Support vector machine: the margin is the smallest distance between the decision boundary and any of the samples. Maximizing the margin yields a particular decision boundary, whose location is determined by the support vectors.

[Figure: classes A and B separated by the hyperplane $w^T x + b = 0$, with the margin boundaries $w^T x + b = 1$ and $w^T x + b = -1$ passing through the support vectors.]

For linearly separable data, the canonical representation is

$$\arg\min_{w,b}\ \frac{1}{2}\lVert w\rVert^2, \quad \text{s.t. } t_n\,(w^T x_n + b) \ge 1,\ n = 1, 2, \dots, N.$$

By the Lagrangian, its dual form (a QP problem) is

$$\min_{a}\ \psi(a) = \min_{a}\ \frac{1}{2}\sum_{n=1}^{N}\sum_{m=1}^{N} t_n t_m (x_n \cdot x_m)\, a_n a_m - \sum_{n=1}^{N} a_n,$$
$$\text{s.t. } a_n \ge 0,\ n = 1, 2, \dots, N, \qquad \sum_{n=1}^{N} a_n t_n = 0.$$

3 Nonlinear model

Soft margin: introduce slack variables $\xi_n \ge 0$, $n = 1, \dots, N$, and maximize the margin while softly penalizing incorrectly placed points:

$$\arg\min_{w,b}\ \frac{1}{2}\lVert w\rVert^2 + C\sum_{n=1}^{N}\xi_n, \quad \text{s.t. } t_n\,(w^T x_n + b) \ge 1 - \xi_n,\ n = 1, \dots, N.$$

[Figure: points with $\xi = 0$ lie on or outside the margin; points with $0 < \xi < 1$ lie inside the margin but are correctly classified; points with $\xi > 1$ are misclassified.]

The corresponding dual form by the Lagrangian is

$$\min_{a}\ \psi(a) = \min_{a}\ \frac{1}{2}\sum_{n=1}^{N}\sum_{m=1}^{N} t_n t_m\, k(x_n, x_m)\, a_n a_m - \sum_{n=1}^{N} a_n,$$
$$\text{s.t. } 0 \le a_n \le C,\ n = 1, 2, \dots, N, \qquad \sum_{n=1}^{N} a_n t_n = 0,$$

where $C$ controls the trade-off between the slack-variable penalty and the margin.

4 Kernel method

The kernel trick (kernel substitution) maps the inputs into a high-dimensional feature space while avoiding the high complexity and computational cost of evaluating inner products there explicitly. A kernel function satisfies $k(X_i, X_j) = \phi(X_i) \cdot \phi(X_j)$.

Example. Define two vectors $x = (x_1, x_2, x_3)$ and $y = (y_1, y_2, y_3)$, the feature map
$$f(x) = (x_1 x_1, x_1 x_2, x_1 x_3, x_2 x_1, x_2 x_2, x_2 x_3, x_3 x_1, x_3 x_2, x_3 x_3),$$
and the kernel $K(x, y) = \langle x, y\rangle^2$. For $x = (1, 2, 3)$ and $y = (4, 5, 6)$:

$$f(x) = (1, 2, 3, 2, 4, 6, 3, 6, 9), \qquad f(y) = (16, 20, 24, 20, 25, 30, 24, 30, 36),$$
$$\langle f(x), f(y)\rangle = 16 + 40 + 72 + 40 + 100 + 180 + 72 + 180 + 324 = 1024,$$
$$K(x, y) = (4 + 10 + 18)^2 = 1024.$$

Evaluating the kernel directly is much simpler than computing the feature map.

5 sklearn.svm.SVC

C-Support Vector Classification:
- The implementation is based on libsvm. The fit time complexity is more than quadratic in the number of samples, which makes it hard to scale to datasets with more than a few tens of thousands of samples.
- Multiclass support is handled according to a one-vs-one scheme.

LIBSVM implements the SMO algorithm for kernelized support vector machines (SVMs), supporting classification and regression. [1]

[1] Chang, Chih-Chung; Lin, Chih-Jen (2011). "LIBSVM: A library for support vector machines". ACM Transactions on Intelligent Systems and Technology. 2 (3).

6 SMO algorithm

Sequential Minimal Optimization [2]:
- A fast algorithm for training support vector machines
- Quickly solves the SVM quadratic programming (QP) problem
- The main steps:

Repeat till convergence {
  1. Select some pair $a_i$ and $a_j$ to update next (using a heuristic that tries to pick the two that will allow the biggest progress towards the global maximum).
  2. Reoptimize $\Psi(a)$ with respect to $a_i$ and $a_j$, while holding all the other $a_k$ ($k \ne i, j$) fixed.
}

[2] Platt, John. "Sequential minimal optimization: A fast algorithm for training support vector machines." (1998).
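Looking back at the kernel-trick example in the kernel method section, a minimal NumPy check (not part of the original slides; the function names are illustrative) confirms that the explicit feature map and the kernel agree:

```python
import numpy as np

def feature_map(v):
    """Explicit feature map f(v): all pairwise products v_i * v_j."""
    return np.outer(v, v).ravel()

def poly_kernel(x, y):
    """Kernel K(x, y) = <x, y>^2, evaluated without the feature map."""
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

print(np.dot(feature_map(x), feature_map(y)))  # 1024.0
print(poly_kernel(x, y))                       # 1024.0
```

The kernel needs one 3-dimensional dot product and a square, while the explicit route builds two 9-dimensional vectors first; this gap grows quickly with the input dimension.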
7 Parameters of SVC

- C: penalty parameter of the error term; controls the trade-off between the penalty and the margin. Default 1.0.
- kernel: 'linear', 'poly', 'rbf', 'sigmoid', or 'precomputed'. Default 'rbf'.
- degree: degree of the polynomial kernel function.
- gamma: kernel coefficient for 'rbf', 'poly' and 'sigmoid'; gamma='auto' means 1/n_features.
- coef0: independent term in the kernel function; only significant for 'poly' and 'sigmoid'.
- probability: whether to enable probability estimates (True or False).
- shrinking: whether to use the shrinking heuristic.
- tol: tolerance for the stopping criterion.
- cache_size: size of the kernel cache.
- class_weight: set a different penalty for each data class via class weight values.
- verbose: enable verbose output; if enabled, it may not work properly in a multithreaded context.
- max_iter: hard limit on iterations within the solver, or -1 for no limit.
- decision_function_shape: for multiclass classification, 'ovo' for one-vs-one, 'ovr' for one-vs-rest.
- random_state: the seed of the pseudo-random number generator used when shuffling the data.

http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html

8 Kernel selection

Choose the kernel based on the accuracy obtained.

Linear kernel:
- linear: $u^T v$. Mainly for linear classification; it has fewer parameters and computes fast.

Nonlinear kernels:
- poly: $(\gamma\, u^T v + \mathrm{coef0})^{\mathrm{degree}}$
- rbf: $\exp(-\gamma\, \lVert u - v\rVert^2)$
- sigmoid: $\tanh(\gamma\, u^T v + \mathrm{coef0})$

Nonlinear kernels have more parameters, so they take more time to compute; however, they give better performance when the parameters are properly tuned.

- $C$: the penalty coefficient. A low $C$ makes the decision surface smooth; a high $C$ aims at classifying all training examples correctly.
- $\gamma$ (gamma): defines how far the influence of a single training example reaches; low values mean 'far' and high values mean 'close'.
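As a minimal sketch (not from the original tutorial) of how these options are passed, each kernel and parameter above maps directly to an SVC constructor argument:

```python
from sklearn.svm import SVC

# A smooth RBF decision surface: low C, moderate gamma.
smooth_rbf = SVC(C=0.1, kernel='rbf', gamma=0.5)

# A polynomial kernel (gamma*u'v + coef0)^degree with explicit degree and coef0.
cubic_poly = SVC(C=1.0, kernel='poly', degree=3, gamma='auto', coef0=1.0)

# A linear kernel with a hard limit on solver iterations.
linear = SVC(C=10.0, kernel='linear', max_iter=100000)
```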
9 Example: SVC

```python
import matplotlib.pyplot as plt
import numpy as np
import os
from sklearn.svm import SVC

path = os.getcwd()

X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
y = np.array([1, 1, 2, 2])

clf = SVC(kernel='linear')
clf.fit(X, y)
print(clf.fit(X, y))
print(clf.predict([[-0.8, -1]]))

# Plot the training points, marking the two classes differently.
y1 = y.copy()
a = np.hstack((X, y1.reshape(4, 1)))
for i in range(len(a)):
    if a[i, 2] == 1:
        plt.plot(a[i, 0], a[i, 1], 'b*')
    else:
        plt.plot(a[i, 0], a[i, 1], 'rx')
plt.plot(-0.8, -1, 'bo')

# Plot the separating hyperplane w[0]*x1 + w[1]*x2 + b = 0.
w = clf.coef_[0]  # only available for the linear kernel
xx = np.linspace(-2, 2)
yy = -(w[0] * xx + clf.intercept_[0]) / w[1]
plt.plot(xx, yy, 'k-', label='$hyperplane$')
plt.legend()
plt.savefig(os.path.join(path, 'SVM.png'))
plt.show()
```

10 Exercise 1: Linear model, tasks

- Load the training data and testing data of a linear example
- Create an SVM with SVC
- Train the SVM model on the data in the training file
- Classify the data in the test file
- Plot the figure of the data points and the hyperplane
- Change the parameter C and observe the effect

11 Exercise 1: Linear model (1)

```python
import numpy as np
import pandas as pd
from sklearn.svm import SVC
from sklearn import metrics
import matplotlib.pyplot as plt
import os

# Load data.
path = os.getcwd()
traindata = pd.read_csv(os.path.join(path, 'traindata.csv'))
train_x = traindata.iloc[:, :-1]
train_y = traindata.iloc[:, -1]
testdata = pd.read_csv(os.path.join(path, 'testdata.csv'))
test_x = testdata.iloc[:, :-1]
test_y = testdata.iloc[:, -1]

# Create and fit the SVC.
clf = SVC(C=10, kernel='linear')
clf.fit(train_x, train_y)
Test_y = pd.Series(clf.predict(test_x), name='Y')

print('Classification report for classifier %s:\n%s\n'
      % (clf, metrics.classification_report(test_y, Test_y)))
print("Confusion matrix:\n%s" % metrics.confusion_matrix(test_y, Test_y))
```

12 Exercise 1: Linear model (2)

```python
# Plot the training data.
label = train_y.copy()
label[label < 0] = 0
label = label.astype(int)
label = label.values
colormap = np.array(['r', 'b'])
plt.scatter(train_x.iloc[:, 0], train_x.iloc[:, 1], zorder=3, marker='o',
            c=colormap[label], label='traindata')

# Plot the support vectors.
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], zorder=2,
            facecolors='none', s=80, edgecolors='k', label='Support Vectors')

# Plot the hyperplane.
w = clf.coef_[0]
xx = np.linspace(-2, 2)
yy = -(w[0] * xx + clf.intercept_[0]) / w[1]
plt.axis([-2, 2, -2, 2])
plt.plot(xx, yy, 'k-', label='$hyperplane$')

# Calculate the vertical offset of the margins from the hyperplane.
margin = 1 / np.sqrt(np.sum(clf.coef_ ** 2))
yy_down = yy - np.sqrt(1 + (w[0] / w[1]) ** 2) * margin
yy_up = yy + np.sqrt(1 + (w[0] / w[1]) ** 2) * margin

# Plot the margins.
plt.plot(xx, yy_down, 'k--')
plt.plot(xx, yy_up, 'k--')
```

13 Exercise 1: Linear model (3)

```python
# Plot the test data set.
labelt = test_y.copy()
labelt[labelt < 0] = 0
labelt = labelt.astype(int)
labelt = labelt.values
plt.scatter(test_x.iloc[:, 0], test_x.iloc[:, 1], zorder=3, marker='+',
            c=colormap[labelt], label='testdata')
plt.legend(loc=[0.26, 0.01])
plt.savefig(os.path.join(path, 'svc-linear.png'))
plt.show()
```

14 Exercise 2: Nonlinear model, tasks

- Load the training data and testing data of a nonlinear example
- Create SVMs with SVC using three kernels
- Train the SVM models on the data in the training file
- Classify the data in the test file
- Plot the figure of the data points and the hyperplane
- Change the parameter C and observe the effect
- Change the parameter $\gamma$ and observe the effect

15 Exercise 2: Nonlinear model (1)

```python
import numpy as np
import pandas as pd
from sklearn.svm import SVC
import matplotlib.pyplot as plt
import os

# Load data.
path = os.getcwd()
traindata = pd.read_csv(os.path.join(path, 'traindata.csv'))
train_x = traindata.iloc[:, :-1]
train_y = traindata.iloc[:, -1]
testdata = pd.read_csv(os.path.join(path, 'testdata.csv'))
test_x = testdata.iloc[:, :-1]
test_y = testdata.iloc[:, -1]

# Create the SVC and fit the model, once per kernel.
for fig_n, kernel in enumerate(('linear', 'rbf', 'poly')):
    clf = SVC(C=1.0, kernel=kernel, gamma=10)
    clf.fit(train_x, train_y)
    print('Classification report: %s\nAccuracy rate:%s\n'
          % (clf, clf.score(test_x, test_y)))

    # Open a new window for the figure and clear the current figure.
    plt.figure(fig_n)
    plt.clf()
```
16 Exercise 2: Nonlinear model (2)

```python
    # Continuing inside the kernel loop: plot the training data.
    plt.scatter(train_x.iloc[:, 0], train_x.iloc[:, 1], c=train_y.iloc[:],
                cmap=plt.cm.Paired, edgecolor='k', zorder=10, s=20)

    # Plot the support vectors.
    plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
                s=80, facecolors='none', zorder=10, edgecolors='k')

    plt.axis('tight')
    x_min, x_max = train_x.iloc[:, 0].min() - 1, train_x.iloc[:, 0].max() + 1
    y_min, y_max = train_x.iloc[:, 1].min() - 1, train_x.iloc[:, 1].max() + 1

    # Create a mesh to plot in.
    XX, YY = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
    Z = clf.decision_function(np.c_[XX.ravel(), YY.ravel()])
```

17 Exercise 2: Nonlinear model (3)

The available text of the source ends here, at the start of this slide ("# Put the ...").
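Since the slide is cut off, here is a hedged sketch of a plausible continuation, assuming it followed the usual scikit-learn pattern of putting the decision-function values into a color plot; the contour levels and the plt.title(kernel) call are assumptions, not the original slide's code:

```python
    # Hypothetical completion: put the result into a color plot.
    Z = Z.reshape(XX.shape)
    plt.pcolormesh(XX, YY, Z > 0, cmap=plt.cm.Paired)
    # Draw the decision boundary (solid) and the margins (dashed).
    plt.contour(XX, YY, Z, colors=['k', 'k', 'k'],
                linestyles=['--', '-', '--'], levels=[-0.5, 0, 0.5])
    plt.title(kernel)

plt.show()
```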
