A Framework for Structural Risk Minimisation

John Shawe-Taylor, Computer Science Dept, Royal Holloway, University of London, Egham, TW20 0EX, UK, [email protected]
Peter L. Bartlett, Systems Engineering Dept, Australian National Univ, Canberra 0200, Australia, [email protected]
Robert C. Williamson, Engineering Dept, Australian National Univ, Canberra 0200, Australia, [email protected]
Martin Anthony, Mathematics Dept, London School of Economics, Houghton Street, London WC2A 2AE, UK, [email protected]

Abstract

The paper introduces a framework for studying structural risk minimisation. The model views structural risk minimisation in a PAC context. It then considers the more general case when the hierarchy of classes is chosen in response to the data. This theoretically explains the impressive performance of the maximal margin hyperplane algorithm of Vapnik. It may also provide a general technique for exploiting serendipitous simplicity in observed data to obtain better prediction accuracy from small training sets.

1 Introduction

The standard PAC model of learning considers a fixed hypothesis class $H$ together with a required accuracy $\epsilon$ and confidence $1 - \delta$. The theory characterises when a target function from $H$ can be learned from examples in terms of the Vapnik-Chervonenkis dimension, a measure of the flexibility of the class $H$, and specifies sample sizes required to deliver the required accuracy with the allowed confidence. In many cases of practical interest the precise class containing the target function to be learned may not be known in advance. The learner may only be given a hierarchy of classes

$$H_1 \subseteq H_2 \subseteq \cdots \subseteq H_d \subseteq \cdots$$

and be told that the target will lie in one of the sets $H_d$. Linial, Mansour and Rivest [5] studied learning in such a framework by allowing the learner to seek a consistent hypothesis in each subclass $H_d$ in turn, drawing enough extra examples at each stage to ensure the correct level of accuracy and confidence should a consistent hypothesis be found.

This paper addresses two shortcomings of the above approach. The first is the requirement to draw extra examples when seeking in a richer class $H_d$. It may be unrealistic to assume that examples can be obtained cheaply, and at the same time it would be foolish not to use as many examples as are available from the start. Hence, we suppose that a fixed number of examples is allowed and that the aim of the learner is to bound the expected generalisation error with high confidence. The second drawback of the Linial et al. approach is that it is not clear how it can be adapted to handle the case where errors are allowed on the training set. In this situation there is a need to trade off the number of errors with the complexity of the class, since taking a class which is too complex can result in a worse generalisation error (with a fixed number of examples) than allowing some extra errors in a more restricted class.

The model we consider will allow a precise bound on the error arising in different classes and hence a reliable way of applying the structural risk minimisation principle introduced by Vapnik [8, 10]. Indeed, the results reported in Sections 2 and 3 of this paper are implicit in the cited references, though we feel justified in presenting them in this framework as we believe they deserve greater attention and development. (We also make explicit some of the assumptions inherent in the presentations in [8, 10].) In Sections 4 and 5 we address a shortcoming of the SRM method which Vapnik [8] highlights: according to the SRM principle the structure has to be defined a priori, before the training data appear. An heuristic using maximal separation hyperplanes proposed by Vapnik and coworkers [3] violates this principle in that the hierarchy defined depends on the data. We introduce a framework which allows this dependency on the data and yet can still place rigorous bounds on the generalisation error. As an example, we show that the maximal margin hyperplane approach proposed by Vapnik falls into the framework, thus placing the approach on a firm theoretical foundation and solving a long standing open problem in statistical induction.

The approach can be interpreted as a way of encoding our bias, or prior assumptions, and possibly taking advantage of them if they happen to be correct. In the case of the fixed hierarchy, we expect the target (or a close approximation to it) to be in a class $H_d$ with small $d$. In the maximal separation case, we expect the target to be consistent with some classifying hyperplane that has a large separation from the examples. This corresponds to a collusion between the probability distribution and the target concept, which would be impossible to exploit in the standard PAC distribution-independent framework. If these assumptions happen to be correct for the training data, we can be confident we have an accurate hypothesis from a small data set (at the expense of some small penalty if they are incorrect).

2 Basic Example — No Training Errors

As an initial example we consider a hierarchy of classes

$$H_1 \subseteq H_2 \subseteq \cdots \subseteq H_d \subseteq \cdots,$$

where we will assume $\mathrm{VCdim}(H_d) = d$ for the rest of this section. Such a hierarchy of classes is termed a decomposable concept class by Linial et al. [5]. We will assume that a fixed number $m$ of labelled examples are given as a vector $z = (\mathbf{x}, t(\mathbf{x}))$ to the learner, where $\mathbf{x} = (x_1, \ldots, x_m)$ and $t(\mathbf{x}) = (t(x_1), \ldots, t(x_m))$, and that the target function $t$ lies in one of the subclasses $H_d$. The learner uses an algorithm to find a value of $d$ for which $H_d$ contains an hypothesis $h$ that is consistent with the sample $z$. What we require is a function $\epsilon(m, d, \delta)$ which will give the learner an upper bound on the generalisation error of $h$ with confidence $1 - \delta$. The following theorem gives a suitable function. We use $\mathrm{Er}_z(h) := |\{i : h(x_i) \neq t(x_i)\}|$ to denote the number of errors that $h$ makes on $z$, and $\mathrm{er}_P(h) := P\{x : h(x) \neq t(x)\}$ to denote the expected error of $h$, where the examples $x_1, \ldots, x_m$ are drawn independently according to $P$.

Theorem 1 Let $H_i$ be as above, and let $p_d$ be any set of positive numbers satisfying $\sum_{d=1}^{\infty} p_d = 1$. With probability $1 - \delta$ over $m$ independent identically distributed examples, for any $d$ for which a learner finds a consistent hypothesis $h$ in $H_d$, the generalisation error of $h$ is bounded by

$$\epsilon(m, d, \delta) = \frac{4}{m}\left(d \ln\frac{2em}{d} + \ln\frac{1}{p_d} + \ln\frac{4}{\delta}\right).$$

Proof: The proof uses the standard bound on generalisation error for each of the classes but divides the confidence between the classes, giving proportion $p_d \delta$ to class $H_d$. Hence, we show that

$$\Pr\{z : \exists d \ \exists h \in H_d, \ \mathrm{Er}_z(h) = 0 \text{ and } \mathrm{er}_P(h) > \epsilon(m, d, \delta)\} < \delta$$

by showing that for all $d$

$$\Pr\{z : \exists h \in H_d, \ \mathrm{Er}_z(h) = 0 \text{ and } \mathrm{er}_P(h) > \epsilon(m, d, \delta)\} < p_d \delta.$$

The probability on the left of the inequality is, however, bounded with the usual analysis via Sauer's lemma by

$$2\,\Pi_{H_d}(2m)\exp\left(-\frac{\epsilon(m, d, \delta)\, m}{4}\right),$$

where $\Pi_{H_d}(m)$ is the growth function: the maximum number of dichotomies implementable on $m$ points by $H_d$. Substituting the stated value of $\epsilon(m, d, \delta)$ and bounding $\Pi_{H_d}(2m)$ by $(2em/d)^d$ shows that this quantity is at most $p_d \delta$, as required.
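As an illustration of how the bound of Theorem 1 behaves, the following Python sketch evaluates $\epsilon(m, d, \delta)$ over a range of class indices. The prior $p_d = 2^{-d}$, the sample size $m = 1000$ and the confidence parameter $\delta = 0.05$ are illustrative choices only and are not taken from the theorem itself.

    from math import e, log

    def epsilon(m, d, delta, p_d):
        """Theorem 1 bound for a hypothesis consistent with m examples,
        found in a class of VC dimension d carrying prior weight p_d."""
        return (4.0 / m) * (d * log(2 * e * m / d) + log(1.0 / p_d) + log(4.0 / delta))

    m, delta = 1000, 0.05            # illustrative sample size and confidence
    for d in range(1, 11):
        p_d = 2.0 ** (-d)            # hypothetical prior: p_d = 2^{-d}, which sums to 1
        print(f"d = {d:2d}   bound = {epsilon(m, d, delta, p_d):.3f}")

Under this prior the term $\ln(1/p_d) = d \ln 2$ grows only linearly in $d$, so the cost of searching higher in the hierarchy is dominated by the $d \ln(2em/d)$ term.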
The bound $\epsilon(m, d, \delta)$ also depends on the choice of the numbers $p_d$. We have deliberately omitted this dependence as they have a different status in the learning framework. (Vapnik [7] implicitly assumes $p_i = 1/d$, $1 \leq i \leq d$.) The corresponding term

$$\frac{4}{m}\ln\frac{1}{p_d}$$

represents the overestimate of $\epsilon(m, d, \delta)$ arising from our prior uncertainty about which class to use. If we have any information about which classes are more likely to arise we can use it to improve our bias. For example, if we know that a class $H_d$ will occur then we choose $p_d = 1$ and recover the standard PAC model. If on the other hand the probabilities of the function falling in the different classes are known to be $q_d$, the expected overestimate in $\epsilon(m, d, \delta)$, or loss, will be

$$L(q, p) = \frac{4}{m}\sum_{d=1}^{\infty} q_d \ln\frac{1}{p_d}.$$

Hence, the difference between the loss of $p$ and that obtained by using the values $q$ as bias terms is

$$L(q, p) - L(q, q) = \frac{4}{m}\,\mathrm{KL}(q \,\|\, p),$$

where $\mathrm{KL}(q \,\|\, p) \geq 0$ is the Kullback-Leibler divergence between the two distributions. Hence, we have the following corollary.

Corollary 2 If we know the probabilities $q_d$ of the target falling into the classes $H_d$, then we minimise the expected overestimate of the generalisation error by choosing $p_d = q_d$. In this case the expected amount of the overestimate is

$$\frac{4}{m} H(q),$$

where $H(q)$ is the entropy of the distribution $q$.
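The identity above and Corollary 2 can be checked numerically. The following Python sketch uses an invented class distribution $q$ restricted to four classes and a uniform prior $p$ over those classes; these distributions and the sample size are assumptions made purely for illustration.

    from math import log

    def loss(q, p, m):
        """Expected overestimate L(q, p) = (4/m) * sum_d q_d * ln(1/p_d)."""
        return (4.0 / m) * sum(qd * log(1.0 / pd) for qd, pd in zip(q, p))

    def kl(q, p):
        """Kullback-Leibler divergence KL(q || p)."""
        return sum(qd * log(qd / pd) for qd, pd in zip(q, p) if qd > 0)

    def entropy(q):
        """Entropy H(q) of the distribution q."""
        return -sum(qd * log(qd) for qd in q if qd > 0)

    m = 1000
    q = [0.6, 0.25, 0.1, 0.05]    # invented probabilities of the target's class
    p = [0.25, 0.25, 0.25, 0.25]  # uniform prior over the same four classes
    print(loss(q, p, m) - loss(q, q, m))   # excess loss of the prior p over the choice p = q
    print(4.0 / m * kl(q, p))              # equals the excess loss, as in the identity above
    print(4.0 / m * entropy(q))            # minimum expected overestimate, attained at p = q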
3 Structural Risk Minimisation

Using the framework established in the previous section we now wish to consider the possibility of errors on the training sample. We will make use of the following result of Vapnik in a slightly improved version due to Anthony and Shawe-Taylor [2]. Note also that the result is expressed in terms of the quantity $\mathrm{Er}_z(h)$, which denotes the number of errors of the hypothesis $h$ on the sample $z$, rather than the usual proportion of errors.

Theorem 3 ([2]) Let $0 < \gamma < 1$ and $0 < \delta < 1$. Suppose $H$ is an hypothesis space of functions from an input space $X$ to $\{0, 1\}$, and let $P$ be any probability measure on $S = X \times \{0, 1\}$.
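To make the error-complexity trade-off described at the start of this section concrete, the following Python sketch selects a class index by minimising the observed error rate plus a complexity penalty of the same form as the Theorem 1 bound. The penalty form, the prior $p_d = 2^{-d}$ and the error counts are assumptions made for illustration only; this is a schematic sketch of the structural risk minimisation principle, not the bound that follows from Theorem 3.

    from math import e, log

    def penalty(m, d, delta, p_d):
        """Complexity term of the same form as the Theorem 1 bound
        (an assumed stand-in for the penalty used in this section)."""
        return (4.0 / m) * (d * log(2 * e * m / d) + log(1.0 / p_d) + log(4.0 / delta))

    def srm_select(train_errors, m, delta):
        """Return the class index d minimising training error rate plus penalty.
        train_errors[d-1] is the fewest sample errors achievable in H_d."""
        best_d, best_bound = None, float("inf")
        for d, errs in enumerate(train_errors, start=1):
            bound = errs / m + penalty(m, d, delta, p_d=2.0 ** (-d))
            if bound < best_bound:
                best_d, best_bound = d, bound
        return best_d, best_bound

    # Invented example: richer classes fit the sample better but pay a larger penalty.
    train_errors = [120, 40, 12, 5, 3, 3]   # errors of the best hypothesis in H_1, ..., H_6
    print(srm_select(train_errors, m=1000, delta=0.05))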
