Entropy Regularized LPBoost

Manfred K. Warmuth¹, Karen A. Glocer¹, and S.V.N. Vishwanathan²

¹ Computer Science Department, University of California, Santa Cruz, CA 95064, U.S.A.
  {manfred,kag}@cse.ucsc.edu
² NICTA, Locked Bag 8001, Canberra ACT 2601, Australia
  [email protected]

Abstract. In this paper we discuss boosting algorithms that maximize the soft margin of the produced linear combination of base hypotheses. LPBoost is the most straightforward boosting algorithm for doing this: it maximizes the soft margin by solving a linear programming problem. While it performs well on natural data, there are cases where its number of iterations is linear in the number of examples instead of logarithmic. By simply adding a relative entropy regularization to the linear objective of LPBoost, we arrive at the Entropy Regularized LPBoost algorithm, for which we prove a logarithmic iteration bound. A previous algorithm, called SoftBoost, has the same iteration bound, but its generalization error often decreases slowly in early iterations. Entropy Regularized LPBoost does not suffer from this problem and has a simpler, more natural motivation.

1 Introduction

In the boosting by sampling setting, we are given a training set of ±1-labeled examples and an oracle for producing base hypotheses that are typically not very good. A boosting algorithm is able to find a linear combination of hypotheses that is a better classifier than the individual base hypotheses. To achieve this, boosting algorithms maintain a distribution on the examples. At each iteration, this distribution is given to the oracle, which must return a base hypothesis that has a certain weak guarantee w.r.t. the current distribution on the examples. The algorithm then incorporates the obtained base hypothesis into the linear combination and redistributes the weight on the examples so that more weight is put on the harder examples. In the next iteration, the oracle again provides a base hypothesis with the same weak guarantee for the new distribution, and so forth. When the algorithm stops, it outputs a linear combination of the base hypotheses obtained in all iterations.

When the data is linearly separable, maximizing the margin is a provably effective proxy for low generalization error [20]; in the inseparable case, maximizing the soft margin is a more robust choice. The soft margin can be maximized directly with a linear program, and this is precisely the approach taken by a variant of LPBoost [9, 16, 3]. Unless otherwise specified, the name LPBoost stands for this variant. While this algorithm has been shown to perform well in practice, no iteration bounds are known for it. As a matter of fact, the number of iterations required by LPBoost can be linear in the number of examples [24]. So what is the key to designing boosting algorithms with good iteration bounds? Clearly, linear programming is not enough.

To answer this question, let us take a step back and observe that in boosting there are two sets of dual weights/variables: the weights on the fixed set of examples and the weights on the set of currently selected hypotheses. In our case, both sets of weights are probability distributions. It has been shown that AdaBoost determines the weights on the hypotheses by minimizing a sum of exponentials [7].
Many boosting algorithms are motivated by modifying this type of objective function for determining the weights of the hypotheses [4]. Alternatively, AdaBoost can be seen as minimizing a relative entropy to the current distribution subject to the linear constraint that the edge of the last hypothesis is nonnegative [11, 12]. All boosting algorithms whose iteration bounds are logarithmic in the number of examples are motivated by either optimizing a sum of exponentials in the hypothesis domain or a cost function that involves a relative entropy in the example domain. This line of research culminated in the boosting algorithm SoftBoost [24], which requires $O(\frac{1}{\epsilon^2} \ln \frac{N}{\nu})$ iterations to produce a linear combination of base hypotheses whose soft margin is within $\epsilon$ of the maximum minimum soft margin. Here $\nu \in [1, N]$ is the trade-off parameter for the soft margin.¹ More precisely, SoftBoost minimizes the relative entropy to the initial distribution subject to linear constraints on the edges of all the base hypotheses obtained so far. The upper bound on the edges is gradually decreased, and this leads to the main problem with SoftBoost: the generalization error decreases slowly in early iterations.

¹ An earlier version of this algorithm, called TotalBoost, dealt with the separable case and maximized the hard margin [23].

Once the relative entropy regularization was found to be crucial in the design of boosting algorithms, a natural algorithm emerged: simply add $\frac{1}{\eta}$ times the relative entropy to the initial distribution to the maximum soft edge objective of LPBoost and minimize the resulting sum. We call this algorithm Entropy Regularized LPBoost. A number of similar algorithms (such as ν-Arc [16]) were discussed in Gunnar Rätsch's Ph.D. thesis [14], but no iteration bounds were proven for them even though they were shown to have good experimental performance. Most recently, a similar boosting algorithm was considered by [18, 19], based on the Lagrangian dual of the optimization problem that motivates the Entropy Regularized LPBoost algorithm. However, the $O(\frac{1}{\epsilon^3} \ln N)$ iteration bound proven for the recent algorithm is rather weak. In this paper, we prove an $O(\frac{1}{\epsilon^2} \ln \frac{N}{\nu})$ iteration bound for Entropy Regularized LPBoost. This bound is similar to the lower bound of $\Omega(\frac{\ln N}{g^2})$ for boosting algorithms in the hard margin case [5], where $g$ is the minimum guaranteed edge of the weak learner. In related work, the same bound was proved for another algorithm that involves a tradeoff with the relative entropy [21]. This algorithm, however, belongs to the "corrective" family of boosting algorithms (which includes AdaBoost) that only update the weights on the examples based on the last hypothesis. LPBoost, SoftBoost, and the new Entropy Regularized LPBoost are "totally corrective" in the sense that they optimize their weights based on all past hypotheses. While the corrective algorithms are always simpler and faster on an iteration-by-iteration basis, the totally corrective versions require drastically fewer iterations and, in our experiments, beat the corrective variants based on total computation time. Also, the simple heuristic of cycling over the past hypotheses with a corrective algorithm to make it totally corrective is useful for quick prototyping, but in our experience this is typically less efficient than direct implementations of the totally corrective algorithms based on standard optimization techniques.
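To make the motivating construction concrete (adding $\frac{1}{\eta}$ times the relative entropy to LPBoost's maximum soft edge objective), the following is a minimal sketch of the resulting optimization problem over the example distribution $\mathbf{d}$ after $t$ hypotheses have been received. The notation is defined in Section 2, the precise problem is given with the algorithm in Section 3, and the cap $d_n \le \frac{1}{\nu}$ is assumed here to encode LPBoost's soft margin trade-off:

    $$\min_{\mathbf{d} \in \mathcal{S}^N,\; d_n \le \frac{1}{\nu}} \;\; \underbrace{\max_{1 \le q \le t} \sum_{n=1}^N d_n\, y_n h^q(x_n)}_{\text{maximum soft edge of LPBoost}} \;+\; \frac{1}{\eta}\, \underbrace{\sum_{n=1}^N d_n \ln \frac{d_n}{d_n^0}}_{\text{relative entropy } \Delta(\mathbf{d},\, \mathbf{d}^0)}$$

As $\eta \to \infty$ the regularizer vanishes and the LPBoost objective itself is recovered, which is the sense in which Entropy Regularized LPBoost "simply adds" the relative entropy to LPBoost.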
Outline: In Section 2 we discuss the boosting setting and motivate the Entropy Regularized LPBoost algorithm by adding a relative entropy regularizer to a linear program formulated in the weight domain on the examples. We give the algorithm in Section 3 and discuss its stopping criterion. The dual of Entropy Regularized LPBoost's minimization problem is given in Section 4, and our main result, the iteration bound, is covered in Section 5. Finally, Section 6 contains our experimental evaluation of Entropy Regularized LPBoost and its competitors. The paper concludes with an outlook and discussion in Section 7.

2 The Boosting Setting and LPBoost

In the boosting setting, we are given a set of $N$ labeled training examples $(x_n, y_n)$, $n = 1 \ldots N$, where the instances $x_n$ are in some domain $\mathcal{X}$ and the labels $y_n \in \{\pm 1\}$. Boosting algorithms maintain a distribution $\mathbf{d}$ on the $N$ examples, so $\mathbf{d}$ lies in the $N$-dimensional probability simplex $\mathcal{S}^N$. Intuitively, the examples that are hard to classify are given more weight. In each iteration $t = 1, 2, \ldots$ the algorithm gives the current distribution $\mathbf{d}^{t-1}$ to an oracle (a.k.a. the weak learning algorithm), which returns a new base hypothesis $h^t : \mathcal{X} \to [-1, 1]$ from some base hypothesis class $\mathcal{H}$. The hypothesis returned by the oracle comes with a certain guarantee of performance. This guarantee will be discussed in Section 3.

One measure of the performance of a base hypothesis $h^t$ w.r.t. the current distribution $\mathbf{d}^{t-1}$ is its edge, which is defined as $\sum_{n=1}^N d_n^{t-1} y_n h^t(x_n)$. When the range of $h^t$ is $\pm 1$ instead of the interval $[-1, 1]$, then the edge is just an affine transformation of the weighted error of the hypothesis $h^t$. A hypothesis that predicts perfectly has edge $1$ and a hypothesis that always predicts incorrectly has edge $-1$, while a random hypothesis has edge approximately $0$. The higher the edge, the more useful the hypothesis is for classifying the training examples. The edge of a set of hypotheses is defined as the maximum edge of the set.

It is convenient to define an $N$-dimensional vector $\mathbf{u}^t$ that combines the base hypothesis $h^t$ with the labels $y_n$ of the $N$ examples: $u_n^t := y_n h^t(x_n)$. With this notation, the edge of the hypothesis $h^t$ becomes $\mathbf{u}^t \cdot \mathbf{d}^{t-1}$. After a hypothesis $h^t$ is received, the algorithm must update its distribution $\mathbf{d}^{t-1}$ on the examples using $\mathbf{u}^t$. The final output of the boosting algorithm is always a convex combination of base hypotheses $f_{\mathbf{w}}(x_n) = \sum_{q=1}^T w_q h^q(x_n)$, where the coefficients $w_q$ form a probability distribution over the selected hypotheses (a short numerical illustration of this notation follows Algorithm 1's input below).

Algorithm 1. Entropy Regularized LPBoost
1. Input: $S = \{(x_1, y_1), \ldots, (x_N, y_N)\}$, accuracy parameter $\epsilon > 0$, regularization parameter $\eta > 0$, and smoothness parameter $\nu \in [1, N]$.
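As a purely illustrative sketch of the notation above (hypothetical NumPy code, not part of the paper; the arrays y, h_t, and d are made-up example values), the following computes the margin vector $\mathbf{u}^t$ and the edge $\mathbf{u}^t \cdot \mathbf{d}^{t-1}$ of a base hypothesis, and checks the standard form of the affine transformation, edge $= 1 - 2\epsilon$, relating the edge of a ±1-valued hypothesis to its weighted error $\epsilon$:

    import numpy as np

    # Hypothetical illustration of the boosting notation (not from the paper).
    # Labels y_n in {-1, +1}; base hypothesis predictions h^t(x_n) in [-1, 1].
    y   = np.array([+1.0, -1.0, +1.0, +1.0, -1.0])   # N = 5 labels
    h_t = np.array([+0.8, +0.3, -0.2, +1.0, -0.9])   # predictions of h^t on the 5 examples

    # Margin vector u^t with entries u^t_n := y_n * h^t(x_n).
    u_t = y * h_t

    # Example distribution d^{t-1} in the probability simplex S^N
    # (uniform here, which is the usual choice for the initial distribution d^0).
    d = np.full(len(y), 1.0 / len(y))

    # Edge of h^t w.r.t. d^{t-1}:  sum_n d_n y_n h^t(x_n) = u^t . d^{t-1}.
    edge = float(u_t @ d)
    print(edge)   # 0.44 here; a perfect hypothesis has edge 1, a random one roughly 0

    # For a {-1, +1}-valued hypothesis, the edge is an affine transformation of the
    # weighted error eps = sum_n d_n [h^t(x_n) != y_n], namely edge = 1 - 2 * eps.
    h_sign = np.sign(h_t)
    eps = float(d @ (h_sign != y))
    assert np.isclose(float((y * h_sign) @ d), 1.0 - 2.0 * eps)

These are exactly the quantities that the optimization problems discussed above operate on: LPBoost and Entropy Regularized LPBoost work with the edges $\mathbf{u}^q \cdot \mathbf{d}$ of all hypotheses obtained so far and with the capped distribution $\mathbf{d}$.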
