Support Vector and Kernel Machines


Nello Cristianini
BIOwulf Technologies
[email protected]
http://www.support-vector.net/tutorial.html
ICML 2001

A Little History
- SVMs were introduced in COLT-92 by Boser, Guyon and Vapnik, and have been greatly developed ever since.
- Initially popularized in the NIPS community, they are now an important and active field of all machine learning research.
- Special issues of the Machine Learning Journal and the Journal of Machine Learning Research have been devoted to them.
- Kernel machines: a large class of learning algorithms, of which SVMs are a particular instance.

A Little History (continued)
- Annual workshop at NIPS.
- Centralized website: www.kernel-machines.org
- Textbook (2000): see www.support-vector.net
- Now a large and diverse community, drawing on machine learning, optimization, statistics, neural networks, functional analysis, etc.
- Successful applications in many fields (bioinformatics, text, handwriting recognition, etc.).
- A fast expanding field: everybody welcome!

Preliminaries
- Task of this class of algorithms: detect and exploit complex patterns in data (e.g. by clustering, classifying, ranking or cleaning the data).
- Typical problems: how to represent complex patterns, and how to exclude spurious (unstable) patterns (= overfitting).
- The first is a computational problem; the second is a statistical problem.

Very Informal Reasoning
- The class of kernel methods implicitly defines the class of possible patterns by introducing a notion of similarity between data.
- Example: similarity between documents
  - by length
  - by topic
  - by language ...
- Choice of similarity ⇒ choice of relevant features.

More Formal Reasoning
- Kernel methods exploit information about the inner products between data items.
- Many standard algorithms can be rewritten so that they only require inner products between data items (inputs).
- Kernel functions = inner products in some feature space (potentially a very complex one).
- If the kernel is given, there is no need to specify which features of the data are being used.

Just in Case ...
- Inner product between vectors: ⟨x, z⟩ = ∑_i x_i z_i
- Hyperplane: ⟨w, x⟩ + b = 0
  [Figure: a separating hyperplane with normal vector w and offset b, positive points on one side and negative points on the other.]

Overview of the Tutorial
- Introduce basic concepts with an extended example: the kernel perceptron.
- Derive Support Vector Machines.
- Other kernel-based algorithms.
- Properties and limitations of kernels.
- On kernel alignment.
- On optimizing kernel alignment.

Parts I and II: Overview
- Linear Learning Machines (LLM)
- Kernel-Induced Feature Spaces
- Generalization Theory
- Optimization Theory
- Support Vector Machines (SVM)

Modularity (IMPORTANT CONCEPT)
- Any kernel-based learning algorithm is composed of two modules:
  - a general-purpose learning machine
  - a problem-specific kernel function
- Any kernel-based algorithm can be fitted with any kernel.
- Kernels themselves can be constructed in a modular way.
- Great for software engineering (and for analysis).
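To make the modularity idea concrete, here is a minimal Python sketch (my own illustration, not code from the tutorial): the general-purpose part is written once, and the problem-specific kernel is passed in as a function argument. The linear and Gaussian kernel formulas used here appear later in the tutorial; the data points and parameter values are made up.

```python
import numpy as np

# Minimal sketch of "modularity": a general-purpose routine plus a
# problem-specific kernel plugged in as an argument (illustrative only).

def linear_kernel(x, z):
    return np.dot(x, z)

def gaussian_kernel(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2))

def similarity_matrix(X, kernel):
    """Generic module: works unchanged for any kernel function."""
    return np.array([[kernel(xi, xj) for xj in X] for xi in X])

X = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 0.0]])  # made-up data
print(similarity_matrix(X, linear_kernel))
print(similarity_matrix(X, gaussian_kernel))
```

Swapping `gaussian_kernel` for `linear_kernel` changes the notion of similarity without touching the general-purpose routine.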
1. Linear Learning Machines
- Simplest case: classification. The decision function is a hyperplane in input space.
- The perceptron algorithm (Rosenblatt, 1957).
- It is useful to analyze the perceptron algorithm before looking at SVMs and kernel methods in general.

Basic Notation
- Input space: x ∈ X
- Output space: y ∈ Y = {-1, +1}
- Hypothesis: h ∈ H
- Real-valued function: f : X → R
- Training set: S = {(x_1, y_1), ..., (x_i, y_i), ...}
- Test error: ε
- Dot product: ⟨x, z⟩

Perceptron
- Linear separation of the input space:
  f(x) = ⟨w, x⟩ + b
  h(x) = sign(f(x))
  [Figure: points separated by the hyperplane ⟨w, x⟩ + b = 0.]

Perceptron Algorithm
- Update rule (ignoring the threshold): if y_i ⟨w_k, x_i⟩ ≤ 0 then
  w_{k+1} ← w_k + η y_i x_i
  k ← k + 1

Observations
- The solution is a linear combination of training points:
  w = ∑_i α_i y_i x_i,  with α_i ≥ 0
- Only informative points are used (the algorithm is mistake driven).
- The coefficient of a point in the combination reflects its 'difficulty'.

Observations - 2
- Mistake bound: M ≤ (R / γ)^2
  [Figure: a sample of radius R separated with margin γ.]
- The coefficients are non-negative.
- It is possible to rewrite the algorithm using this alternative representation.

Dual Representation (IMPORTANT CONCEPT)
The decision function can be rewritten as follows:
  f(x) = ⟨w, x⟩ + b = ∑_i α_i y_i ⟨x_i, x⟩ + b,  where w = ∑_i α_i y_i x_i

Dual Representation (continued)
- The update rule can also be rewritten as follows:
  if y_i ( ∑_j α_j y_j ⟨x_j, x_i⟩ + b ) ≤ 0 then α_i ← α_i + η
- Note: in the dual representation, the data appear only inside dot products.

Duality: First Property of SVMs
- Duality is the first feature of Support Vector Machines.
- SVMs are linear learning machines represented in a dual fashion:
  f(x) = ⟨w, x⟩ + b = ∑_i α_i y_i ⟨x_i, x⟩ + b
- The data appear only within dot products (both in the decision function and in the training algorithm).

Limitations of LLMs
Linear classifiers cannot deal with:
- non-linearly separable data
- noisy data
- in addition, this formulation only deals with vectorial data.

Non-Linear Classifiers
- One solution: create a network of simple linear classifiers (neurons), i.e. a neural network (problems: local minima, many parameters, heuristics needed for training, etc.).
- Another solution: map the data into a richer feature space including non-linear features, then use a linear classifier there.

Learning in the Feature Space
- Map the data into a feature space where they are linearly separable: x → φ(x)
  [Figure: a non-linear map φ from input space X to feature space F in which the two classes become linearly separable.]

Problems with Feature Space
- Working in high-dimensional feature spaces solves the problem of expressing complex functions, BUT:
- there is a computational problem (working with very large vectors), and
- a generalization theory problem (the curse of dimensionality).

Implicit Mapping to Feature Space
We will introduce kernels, which:
- solve the computational problem of working with many dimensions;
- can make it possible to use infinite dimensions efficiently in time and space;
- have other advantages, both practical and conceptual.
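Before kernels are introduced formally, the dual perceptron described above can be sketched in a few lines of Python (an illustrative sketch with made-up toy data and my own function names, not the tutorial's code). The data enter the algorithm only through the `kernel` argument, which here defaults to the plain dot product; substituting a kernel function at that single point is exactly the step taken in the next slides.

```python
import numpy as np

# Sketch of the perceptron in dual representation:
# if y_i (sum_j alpha_j y_j <x_j, x_i> + b) <= 0 then alpha_i <- alpha_i + eta.
# The threshold update b <- b + eta * y_i is the standard one and is included
# for completeness, although the slides' primal update rule ignores it.

def dual_perceptron(X, y, kernel=np.dot, eta=1.0, epochs=10):
    n = len(y)
    alpha = np.zeros(n)
    b = 0.0
    # Precompute the Gram matrix G[i, j] = kernel(x_i, x_j).
    G = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    for _ in range(epochs):
        for i in range(n):
            f_i = np.sum(alpha * y * G[:, i]) + b
            if y[i] * f_i <= 0:      # mistake on example i
                alpha[i] += eta      # dual update
                b += eta * y[i]      # threshold update
    return alpha, b

def predict(X_train, y_train, alpha, b, x, kernel=np.dot):
    # f(x) = sum_i alpha_i y_i kernel(x_i, x) + b
    s = sum(a * yi * kernel(xi, x) for a, yi, xi in zip(alpha, y_train, X_train))
    return np.sign(s + b)

# Tiny linearly separable toy set (made up for illustration).
X = np.array([[2.0, 2.0], [1.0, 3.0], [-2.0, -1.0], [-1.0, -3.0]])
y = np.array([1, 1, -1, -1])
alpha, b = dual_perceptron(X, y)
print([predict(X, y, alpha, b, xi) for xi in X])  # [1.0, 1.0, -1.0, -1.0]
```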
Kernel-Induced Feature Spaces
- In the dual representation, the data points only appear inside dot products:
  f(x) = ∑_i α_i y_i ⟨φ(x_i), φ(x)⟩ + b
- The dimensionality of the feature space F is not necessarily important; we may not even know the map φ explicitly.

Kernels (IMPORTANT CONCEPT)
- A kernel is a function that returns the value of the dot product between the images of its two arguments:
  K(x_1, x_2) = ⟨φ(x_1), φ(x_2)⟩
- Given a function K, it is possible to verify that it is a kernel.

Kernels
- One can use LLMs in a feature space by simply rewriting them in dual representation and replacing dot products with kernels:
  ⟨x_1, x_2⟩ ← K(x_1, x_2) = ⟨φ(x_1), φ(x_2)⟩

The Kernel Matrix (IMPORTANT CONCEPT)
(also known as the Gram matrix):

      | K(1,1)  K(1,2)  K(1,3)  ...  K(1,m) |
      | K(2,1)  K(2,2)  K(2,3)  ...  K(2,m) |
  K = |  ...     ...     ...    ...   ...   |
      | K(m,1)  K(m,2)  K(m,3)  ...  K(m,m) |

The Kernel Matrix
- The central structure in kernel machines.
- An information 'bottleneck': it contains all the information needed by the learning algorithm.
- It fuses information about the data AND the kernel.
- It has many interesting properties.

Mercer's Theorem
- The kernel matrix is symmetric positive definite.
- Any symmetric positive definite matrix can be regarded as a kernel matrix, that is, as an inner product matrix in some space.

More Formally: Mercer's Theorem
- Every (semi) positive definite, symmetric function is a kernel: i.e. there exists a mapping φ such that it is possible to write
  K(x_1, x_2) = ⟨φ(x_1), φ(x_2)⟩
- Positive definiteness:
  ∫ K(x, z) f(x) f(z) dx dz ≥ 0   for all f ∈ L_2

Mercer's Theorem (continued)
- Eigenvalue expansion of Mercer kernels:
  K(x_1, x_2) = ∑_i λ_i φ_i(x_1) φ_i(x_2)
- That is: the eigenfunctions act as features!

Examples of Kernels
- Simple examples of kernels are:
  K(x, z) = ⟨x, z⟩^d
  K(x, z) = e^( -‖x - z‖^2 / (2σ^2) )

Example: Polynomial Kernels
For x = (x_1, x_2) and z = (z_1, z_2):
  ⟨x, z⟩^2 = (x_1 z_1 + x_2 z_2)^2
           = x_1^2 z_1^2 + x_2^2 z_2^2 + 2 x_1 z_1 x_2 z_2
           = ⟨(x_1^2, x_2^2, √2 x_1 x_2), (z_1^2, z_2^2, √2 z_1 z_2)⟩
           = ⟨φ(x), φ(z)⟩

Example: Polynomial Kernels
[Figure: the effect of the polynomial feature map on the data.]

Example: The Two Spirals
- Separated by a hyperplane in feature space (Gaussian kernels).

Making Kernels (IMPORTANT CONCEPT)
- The set of kernels is closed under some operations. If K and K' are kernels, then:
  - K + K' is a kernel
  - cK is a kernel, if c > 0
  - aK + bK' is a kernel, for a, b > 0
  - etc.
- One can make complex kernels from simple ones: modularity!

Second Property of SVMs
SVMs are linear learning machines that:
- use a dual representation, AND
- operate in a kernel-induced feature space, that is,
  f(x) = ∑_i α_i y_i ⟨φ(x_i), φ(x)⟩ + b
  is a linear function in the feature space implicitly defined by K.

Kernels over General Structures
- Haussler, Watkins, etc.: kernels over sets, over sequences, over trees, etc.
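The polynomial-kernel identity and the positive (semi-)definiteness of the kernel matrix can be checked numerically. The following sketch (made-up data; function names are mine) verifies that ⟨x, z⟩^2 equals ⟨φ(x), φ(z)⟩ for φ(x) = (x_1^2, x_2^2, √2 x_1 x_2), that the resulting Gram matrix is symmetric with non-negative eigenvalues as Mercer's theorem requires, and that a positive combination of kernels remains positive semi-definite (the closure property above).

```python
import numpy as np

# Numerical checks of the kernel properties discussed above (illustrative).

def poly2_kernel(x, z):
    return np.dot(x, z) ** 2

def phi(x):
    # Explicit feature map for the degree-2 polynomial kernel in 2-D.
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))          # five random 2-D points (made up)

# Check 1: kernel value vs. explicit feature-space inner product.
for x, z in [(X[0], X[1]), (X[2], X[3])]:
    assert np.isclose(poly2_kernel(x, z), np.dot(phi(x), phi(z)))

# Check 2: the Gram matrix is symmetric positive semi-definite.
G = np.array([[poly2_kernel(xi, xj) for xj in X] for xi in X])
assert np.allclose(G, G.T)
print("smallest eigenvalue:", np.linalg.eigvalsh(G).min())  # >= 0 up to round-off

# Closure property: a positive combination of kernels is again a kernel,
# so its Gram matrix is also positive semi-definite.
G_lin = np.array([[np.dot(xi, xj) for xj in X] for xi in X])
G_sum = 2.0 * G + G_lin
print("still PSD:", np.linalg.eigvalsh(G_sum).min() >= -1e-10)
```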
