Statistical Learning Theory: Models, Concepts, and Results

Ulrike von Luxburg and Bernhard Schölkopf

1 INTRODUCTION

Statistical learning theory provides the theoretical basis for many of today's machine learning algorithms and is arguably one of the most beautifully developed branches of artificial intelligence in general. It originated in Russia in the 1960s and gained wide popularity in the 1990s following the development of the so-called Support Vector Machine (SVM), which has become a standard tool for pattern recognition in a variety of domains ranging from computer vision to computational biology. Providing the basis of new learning algorithms, however, was not the only motivation for developing statistical learning theory. It was just as much a philosophical one, attempting to answer the question of what it is that allows us to draw valid conclusions from empirical data.

In this article we attempt to give a gentle, non-technical overview of the key ideas and insights of statistical learning theory. We do not assume that the reader has a deep background in mathematics, statistics, or computer science. Given the nature of the subject matter, however, some familiarity with mathematical concepts and notations and some intuitive understanding of basic probability is required. There exist many excellent references to more technical surveys of the mathematics of statistical learning theory: the monographs by one of the founders of statistical learning theory ([Vapnik, 1995], [Vapnik, 1998]), a brief overview of statistical learning theory in Section 5 of [Schölkopf and Smola, 2002], more technical overview papers such as [Bousquet et al., 2003], [Mendelson, 2003], [Boucheron et al., 2005], [Herbrich and Williamson, 2002], and the monograph [Devroye et al., 1996].

2 THE STANDARD FRAMEWORK OF STATISTICAL LEARNING THEORY

2.1 Background

In our context, learning refers to the process of inferring general rules by observing examples.
Many living organisms show some ability to learn. For instance, children can learn what "a car" is just by being shown examples of objects that are cars and objects that are not cars. They do not need to be told any rules about what it is that makes an object a car; they can simply learn the concept "car" by observing examples.

[Handbook of the History of Logic. Volume 10: Inductive Logic. Volume editors: Dov M. Gabbay, Stephan Hartmann and John Woods. General editors: Dov M. Gabbay, Paul Thagard and John Woods. © 2009 Elsevier B.V. All rights reserved.]

The field of machine learning does not study the process of learning in living organisms, but instead studies the process of learning in the abstract. The question is how a machine, a computer, can "learn" specific tasks by following specified learning algorithms. To this end, the machine is shown particular examples of a specific task. Its goal is then to infer a general rule which can both explain the examples it has seen already and which can generalize to previously unseen, new examples. Machine learning has roots in artificial intelligence, statistics, and computer science, but by now has established itself as a scientific discipline in its own right. As opposed to artificial intelligence, it does not try to explain or generate "intelligent behavior"; its goal is more modest: it just wants to discover mechanisms by which very specific tasks can be "learned" by a computer. Once put into a formal framework, many of the problems studied in machine learning sound familiar from statistics or physics: regression, classification, clustering, and so on. However, machine learning looks at those problems with a different focus: the one of inductive inference and generalization ability.

The most well-studied problem in machine learning is the problem of classification.
Here we deal with two kinds of spaces: the input space X (also called the space of instances) and the output space (label space) Y. For example, if the task is to classify certain objects into a given, finite set of categories such as "car", "chair", "cow", then X consists of the space of all possible objects (instances) in a certain, fixed representation, while Y is the space of all available categories. In order to learn, an algorithm is given some training examples (X1, Y1), ..., (Xn, Yn), that is, pairs of objects with the corresponding category labels. The goal is then to find a mapping f : X → Y which makes "as few errors as possible". That is, among all the elements in X, the number of objects which are assigned to the wrong category is as small as possible. The mapping f : X → Y is called a classifier.

In general, we distinguish between two types of learning problems: supervised ones and unsupervised ones. Classification is an example of a supervised learning problem: the training examples consist both of instances Xi and of the correct labels Yi on those instances. The goal is to find a functional relationship between instances and outputs. This setting is called supervised because, at least on the training examples, the learner can evaluate whether an answer is correct; that is, the learner is being supervised. Contrary to this, the training data in the unsupervised setting only consists of instances Xi, without any further information about what kind of output is expected on those instances. In this setting, the question of learning is more about discovering some "structure" on the underlying space of instances. A standard example of such a setting is clustering. Given some input points X1, ..., Xn, the learner is requested to construct "meaningful groups" among the instances. For example, an online retailer might want to cluster his customers based on shopping profiles.
He collects all kinds of potentially meaningful information about his customers (this will lead to the input Xi for each customer) and then wants to discover groups of customers with similar behavior. As opposed to classification, however, it is not specified beforehand which customer should belong to which group — it is the task of the clustering algorithm to work that out.

Statistical learning theory (SLT) is a theoretical branch of machine learning and attempts to lay the mathematical foundations for the field. The questions asked by SLT are fundamental:

• Which learning tasks can be performed by computers in general (positive and negative results)?

• What kind of assumptions do we have to make such that machine learning can be successful?

• What are the key properties a learning algorithm needs to satisfy in order to be successful?

• Which performance guarantees can we give on the results of certain learning algorithms?

To answer those questions, SLT builds on a certain mathematical framework, which we are now going to introduce. In the following, we will focus on the case of supervised learning, more particularly on the case of binary classification. We made this choice because the theory for supervised learning, in particular classification, is rather mature, while the theory for many branches of unsupervised learning is still in its infancy.

2.2 The formal setup

In supervised learning, we deal with an input space (space of instances, space of objects) X and an output space (label space) Y. In the case of binary classification, we identify the label space with the set {−1, +1}. That is, each object can belong to one out of two classes, and by convention we denote those classes by −1 and +1. The question of learning is reduced to the question of estimating a functional relationship of the form f : X → Y, that is, a relationship between input and output. Such a mapping f is called a classifier.
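The formal setup above can be made concrete in a few lines of Python. This is only an illustrative sketch: the one-dimensional input space and the threshold rule below are our own choices for the example, not part of the theory.

```python
# A minimal sketch of the binary classification setup.
# Instances live in an input space X (here: the real line),
# labels in the output space Y = {-1, +1}.

def classifier(x, threshold=0.0):
    """A classifier is just a mapping f: X -> Y.
    Here f is a simple threshold rule (an illustrative choice)."""
    return +1 if x > threshold else -1

# Training data: pairs (X_i, Y_i) from X x Y.
training_data = [(-2.0, -1), (-0.5, -1), (0.3, +1), (1.7, +1)]

# Count how many training examples f gets wrong.
errors = sum(1 for x, y in training_data if classifier(x) != y)
print(errors)  # 0 on this toy sample
```

Any procedure that takes such training data and returns some function f of this form counts as a classification algorithm in the sense of the text.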
In order to do this, we get access to some training points (training examples, training data) (X1, Y1), ..., (Xn, Yn) ∈ X × Y. A classification algorithm (classification rule) is a procedure that takes the training data as input and outputs a classifier f. We do not make specific assumptions on the spaces X or Y, but we do make an assumption on the mechanism which generates those training points. Namely, we assume that there exists a joint probability distribution P on X × Y, and that the training examples (Xi, Yi) are sampled independently from this distribution P. This type of sampling is often denoted as iid sampling (independent and identically distributed). There are a few important facts to note here.

1. No assumptions on P. In the standard setting of SLT we do not make any assumption on the probability distribution P: it can be any distribution on X × Y. In this sense, statistical learning theory works in an agnostic setting, which is different from standard statistics, where one usually assumes that the probability distribution belongs to a certain family of distributions and the goal is to estimate the parameters of this distribution.

2. Non-deterministic labels due to label noise or overlapping classes. Note that P is a probability distribution not only over the instances X, but also over the labels Y. As a consequence, the labels Yi in the data are not necessarily just a deterministic function of the objects Xi, but can be random themselves. There are two main reasons why this can be the case. The first reason is that the data generating process can be subject to label noise. That is, it can happen that the label Yi we get as a training label in the learning process is actually wrong. This is an important and realistic assumption. For example, to generate training data for email spam detection, humans are required to label emails by hand into the classes "spam" and "not spam". All humans make mistakes from time to time.
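The data-generating mechanism just described — iid sampling from a joint distribution P with possibly noisy labels — can be sketched as follows. The particular choice of P (X uniform on [−1, 1], true label sign(x), labels flipped with a fixed probability) is an assumption made purely for illustration.

```python
import random

def sample_training_data(n, noise_rate=0.1, seed=0):
    """Draw n iid training examples (X_i, Y_i) from a joint
    distribution P on X x Y.  Illustrative choice of P:
    X is uniform on [-1, 1], the 'true' label is the sign of x,
    and each label is flipped with probability noise_rate,
    modeling label noise."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        y = 1 if x >= 0 else -1          # deterministic 'true' label
        if rng.random() < noise_rate:    # label noise: Y is random given X
            y = -y
        data.append((x, y))
    return data

sample = sample_training_data(5)
print(sample)
```

With noise_rate = 0 the label is a deterministic function of the instance; with a positive noise rate, Yi is genuinely random given Xi, which is exactly the situation point 2 above describes.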
