Coordinate Descent

Lecturer: Ryan Tibshirani
Convex Optimization 10-725/36-725

Last time: conditional gradient method

For the problem

    \min_x \; f(x) \quad \text{subject to} \quad x \in C

where $f$ is convex and smooth and $C$ is a convex set, the conditional gradient (Frank-Wolfe) method chooses an initial $x^{(0)}$ and repeats for $k = 1, 2, 3, \ldots$:

    s^{(k-1)} \in \operatorname*{argmin}_{s \in C} \; \nabla f(x^{(k-1)})^T s
    x^{(k)} = (1 - \gamma_k) x^{(k-1)} + \gamma_k s^{(k-1)}

Here $\gamma_k$ is a step size, either prespecified (as in $\gamma_k = 2/(k+1)$) or chosen by line search.

For many problems, linear minimization over $C$ is simpler or more efficient than projection onto $C$, hence the appeal of Frank-Wolfe.

Outline

Today:
• Coordinate descent
• Examples
• Implementation tricks
• Coordinate descent, literally
• Screening rules

Coordinate descent

We've seen some pretty sophisticated methods thus far. We now focus on a very simple technique that can be surprisingly efficient and scalable: coordinate descent, or more appropriately called coordinatewise minimization.

Q: Given convex, differentiable $f : \mathbb{R}^n \to \mathbb{R}$, if we are at a point $x$ such that $f(x)$ is minimized along each coordinate axis, have we found a global minimizer? That is, does $f(x + \delta e_i) \geq f(x)$ for all $\delta, i$ imply $f(x) = \min_z f(z)$?

(Here $e_i = (0, \ldots, 1, \ldots, 0) \in \mathbb{R}^n$ is the $i$th standard basis vector.)

[Figure: surface plot of a smooth convex $f$ over $(x_1, x_2)$]

A: Yes! Proof:

    \nabla f(x) = \left( \frac{\partial f}{\partial x_1}(x), \ldots, \frac{\partial f}{\partial x_n}(x) \right) = 0

Q: Same question, but now for $f$ convex and not differentiable?

[Figure: surface and contour plots of a nonsmooth convex $f$ over $(x_1, x_2)$, showing a point that is minimal along both coordinate axes but is not a global minimizer]

A: No! Look at the counterexample in the figure above.

Q: Same question again, but now $f(x) = g(x) + \sum_{i=1}^n h_i(x_i)$, with $g$ convex and differentiable and each $h_i$ convex? (Here the nonsmooth part is called separable.)

[Figure: surface and contour plots of such an $f$ over $(x_1, x_2)$, where the coordinatewise minimum is the global minimum]

A: Yes! Proof: for any $y$,

    f(y) - f(x) \geq \nabla g(x)^T (y - x) + \sum_{i=1}^n \big[ h_i(y_i) - h_i(x_i) \big]
                 = \sum_{i=1}^n \underbrace{\big[ \nabla_i g(x)(y_i - x_i) + h_i(y_i) - h_i(x_i) \big]}_{\geq 0} \; \geq \; 0

Coordinate descent

This suggests that for $f(x) = g(x) + \sum_{i=1}^n h_i(x_i)$, with $g$ convex and differentiable and each $h_i$ convex, we can use coordinate descent to find a minimizer: start with some initial guess $x^{(0)}$, and repeat

    x_1^{(k)} \in \operatorname*{argmin}_{x_1} \; f\big(x_1, x_2^{(k-1)}, x_3^{(k-1)}, \ldots, x_n^{(k-1)}\big)
    x_2^{(k)} \in \operatorname*{argmin}_{x_2} \; f\big(x_1^{(k)}, x_2, x_3^{(k-1)}, \ldots, x_n^{(k-1)}\big)
    x_3^{(k)} \in \operatorname*{argmin}_{x_3} \; f\big(x_1^{(k)}, x_2^{(k)}, x_3, \ldots, x_n^{(k-1)}\big)
    \ldots
    x_n^{(k)} \in \operatorname*{argmin}_{x_n} \; f\big(x_1^{(k)}, x_2^{(k)}, x_3^{(k)}, \ldots, x_n\big)

for $k = 1, 2, 3, \ldots$. Important: after solving for $x_i^{(k)}$, we use its new value from then on!

Tseng (2001) proves that for such $f$ (provided $f$ is continuous on the compact set $\{x : f(x) \leq f(x^{(0)})\}$ and $f$ attains its minimum), any limit point of $x^{(k)}$, $k = 1, 2, 3, \ldots$, is a minimizer of $f$ [1]

Notes:
• The order of the cycle through the coordinates is arbitrary: we can use any permutation of $\{1, 2, \ldots, n\}$
• We can everywhere replace individual coordinates with blocks of coordinates
• The "one-at-a-time" update scheme is critical; an "all-at-once" scheme does not necessarily converge
• The analogy for solving linear systems: Gauss-Seidel versus the Jacobi method

[1] Using real analysis, we know that $x^{(k)}$ has a subsequence converging to $x^\star$ (Bolzano-Weierstrass), and $f(x^{(k)})$ converges to $f^\star$ (monotone convergence).
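To make the cyclic scheme above concrete, here is a minimal sketch in Python of coordinatewise minimization on a smooth convex quadratic. The function name `coordinatewise_min`, the use of `scipy.optimize.minimize_scalar` as a stand-in for the exact one-dimensional minimization, and the random test problem are all illustrative assumptions, not part of the lecture.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def coordinatewise_min(f, x0, n_cycles=500):
    """Cyclic coordinatewise minimization: minimize f over one coordinate
    at a time, immediately reusing each new value ("one-at-a-time")."""
    x = np.array(x0, dtype=float)
    for _ in range(n_cycles):
        for i in range(len(x)):
            def f_i(z, i=i):
                x_try = x.copy()
                x_try[i] = z
                return f(x_try)
            # Exact minimization over coordinate i, all other coordinates held fixed
            x[i] = minimize_scalar(f_i).x
    return x

# Illustrative test problem: a strictly convex quadratic
# f(x) = (1/2) x^T Q x + b^T x, whose minimizer solves Q x = -b
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 5))
Q = A.T @ A + 0.1 * np.eye(5)    # positive definite
b = rng.standard_normal(5)
f = lambda x: 0.5 * x @ Q @ x + b @ x

x_cd = coordinatewise_min(f, np.zeros(5))
x_star = np.linalg.solve(Q, -b)
print(f(x_cd) - f(x_star))   # objective gap, shrinks toward 0 as cycles accumulate
```

Replacing the immediate update of `x[i]` with simultaneous updates of all coordinates computed from the same old point would give the "all-at-once" scheme, which, as noted above, need not converge.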
Example: linear regression

Given $y \in \mathbb{R}^n$ and $X \in \mathbb{R}^{n \times p}$ with columns $X_1, \ldots, X_p$, consider linear regression:

    \min_\beta \; \frac{1}{2} \|y - X\beta\|_2^2

Minimizing over $\beta_i$, with all $\beta_j$, $j \neq i$, fixed:

    0 = \nabla_i f(\beta) = X_i^T (X\beta - y) = X_i^T (X_i \beta_i + X_{-i} \beta_{-i} - y)

i.e., we take

    \beta_i = \frac{X_i^T (y - X_{-i} \beta_{-i})}{X_i^T X_i}

Coordinate descent repeats this update for $i = 1, 2, \ldots, p, 1, 2, \ldots$

[Figure: coordinate descent (CD) versus gradient descent (GD) for linear regression, 100 instances with $n = 100$, $p = 20$; suboptimality $f(k) - f^\star$ on a log scale versus iteration $k$, with CD converging much faster]

Is it fair to compare one cycle of coordinate descent to one iteration of gradient descent? Yes, if we're clever:
• Gradient descent: $\beta \leftarrow \beta + t X^T (y - X\beta)$, costs $O(np)$ flops
• Coordinate descent, one coordinate update:

    \beta_i \leftarrow \frac{X_i^T (y - X_{-i} \beta_{-i})}{X_i^T X_i} = \frac{X_i^T r}{\|X_i\|_2^2} + \beta_i

  where $r = y - X\beta$ is the current residual
• Each coordinate update costs $O(n)$ flops: $O(n)$ to update $r$, and $O(n)$ to compute $X_i^T r$
• One cycle of coordinate descent therefore costs $O(np)$ operations, the same as gradient descent

[Figure: the same example, now with accelerated gradient descent (AGD) added for comparison; CD still converges fastest]

Is this contradicting the optimality of accelerated gradient descent? No! Coordinate descent uses more than first-order information.

Example: lasso regression

Consider the lasso problem:

    \min_\beta \; \frac{1}{2} \|y - X\beta\|_2^2 + \lambda \|\beta\|_1

Note that the nonsmooth part is separable: $\|\beta\|_1 = \sum_{i=1}^p |\beta_i|$

Minimizing over $\beta_i$, with $\beta_j$, $j \neq i$, fixed:

    0 = X_i^T X_i \beta_i + X_i^T (X_{-i} \beta_{-i} - y) + \lambda s_i

where $s_i \in \partial |\beta_i|$. The solution is simply given by soft-thresholding:

    \beta_i = S_{\lambda / \|X_i\|_2^2} \left( \frac{X_i^T (y - X_{-i} \beta_{-i})}{X_i^T X_i} \right)

Repeat this for $i = 1, 2, \ldots, p, 1, 2, \ldots$

[Figure: proximal gradient (PG), accelerated proximal gradient (APG), and coordinate descent (CD) for lasso regression, 100 instances with $n = 100$, $p = 20$; CD converges fastest]

For the same reasons as before:
• All methods use $O(np)$ flops per iteration
• Coordinate descent uses much more information

Example: box-constrained QP

Given $b \in \mathbb{R}^n$ and $Q \in \mathbb{S}^n_+$, consider the box-constrained quadratic program

    \min_x \; \frac{1}{2} x^T Q x - b^T x \quad \text{subject to} \quad l \leq x \leq u

This fits into our framework, as $I\{l \leq x \leq u\} = \sum_{i=1}^n I\{l_i \leq x_i \leq u_i\}$

Minimizing over $x_i$, with all $x_j$, $j \neq i$, fixed: the same basic steps give

    x_i = T_{[l_i, u_i]} \left( \frac{b_i - \sum_{j \neq i} Q_{ij} x_j}{Q_{ii}} \right)

where $T_{[l_i, u_i]}$ is the truncation (projection) operator onto $[l_i, u_i]$:

    T_{[l_i, u_i]}(z) =
    \begin{cases}
      u_i & \text{if } z > u_i \\
      z   & \text{if } l_i \leq z \leq u_i \\
      l_i & \text{if } z < l_i
    \end{cases}

Example: support vector machines

A coordinate descent strategy can be applied to the SVM dual:

    \min_\alpha \; \frac{1}{2} \alpha^T \tilde{X} \tilde{X}^T \alpha - 1^T \alpha \quad \text{subject to} \quad 0 \leq \alpha \leq C1, \; \alpha^T y = 0

Sequential minimal optimization, or SMO (Platt 1998), is basically blockwise coordinate descent in blocks of 2. Instead of cycling, it chooses the next block greedily.

Recall the complementary slackness conditions

    \alpha_i \big( 1 - \xi_i - (\tilde{X}\beta)_i - y_i \beta_0 \big) = 0, \quad i = 1, \ldots, n \qquad (1)
    (C - \alpha_i) \, \xi_i = 0, \quad i = 1, \ldots, n \qquad (2)

where $\beta, \beta_0, \xi$ are the primal coefficients, intercept, and slacks. Recall that $\beta = \tilde{X}^T \alpha$, $\beta_0$ is computed from (1) using any $i$ such that $0 < \alpha_i < C$, and $\xi$ is computed from (1), (2).

SMO repeats the following two steps:
• Choose $\alpha_i, \alpha_j$ that do not satisfy complementary slackness, greedily (using heuristics)
• Minimize over $\alpha_i, \alpha_j$ exactly, keeping all other variables fixed; using the equality constraint, this reduces to minimizing a univariate quadratic over an interval (from Platt 1998)

Note that this does not meet the separability assumptions for convergence from Tseng (2001), and a different treatment is required.

Many further developments on coordinate descent for SVMs have been made; e.g., a recent one is Hsieh et al. (2008).
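As a concrete illustration of the lasso coordinate updates and the $O(n)$ residual trick described above, here is a minimal sketch in Python. The function names `soft_threshold` and `lasso_cd`, the fixed number of cycles, and the synthetic data are illustrative assumptions, not from the lecture.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator S_t(z)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_cycles=100):
    """Cyclic coordinate descent for (1/2)||y - X beta||_2^2 + lam ||beta||_1,
    maintaining the residual r = y - X beta so that each coordinate
    update costs O(n) flops."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                       # residual for beta = 0
    col_sq = np.sum(X ** 2, axis=0)    # ||X_i||_2^2, precomputed once
    for _ in range(n_cycles):
        for i in range(p):
            old = beta[i]
            # X_i^T (y - X_{-i} beta_{-i}) = X_i^T r + ||X_i||_2^2 * beta_i
            rho = X[:, i] @ r + col_sq[i] * old
            beta[i] = soft_threshold(rho, lam) / col_sq[i]
            r -= X[:, i] * (beta[i] - old)   # O(n) residual update
    return beta

# Illustrative usage on synthetic data
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
beta_true = np.concatenate([rng.standard_normal(5), np.zeros(15)])
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta_hat = lasso_cd(X, y, lam=10.0)
print(np.round(beta_hat, 2))   # mostly zero outside the first 5 coordinates
```

Precomputing $\|X_i\|_2^2$ once and updating $r$ in place is exactly what keeps each coordinate update at $O(n)$ flops, so one full cycle costs $O(np)$, matching a single gradient step.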
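In the same spirit, here is a minimal sketch of the coordinatewise update with truncation for the box-constrained QP above; the function name `box_qp_cd` and the random positive definite test problem are illustrative assumptions.

```python
import numpy as np

def box_qp_cd(Q, b, l, u, x0=None, n_cycles=200):
    """Cyclic coordinate descent for (1/2) x^T Q x - b^T x subject to l <= x <= u:
    each coordinate is set to its unconstrained minimizer, then truncated to [l_i, u_i]."""
    n = Q.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(n_cycles):
        for i in range(n):
            # Unconstrained coordinatewise minimizer: (b_i - sum_{j != i} Q_ij x_j) / Q_ii
            z = (b[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]
            x[i] = np.clip(z, l[i], u[i])      # truncation T_{[l_i, u_i]}
    return x

# Illustrative usage with a random positive definite Q
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 10))
Q = A.T @ A + 0.1 * np.eye(10)
b = rng.standard_normal(10)
l, u = -np.ones(10), np.ones(10)
print(np.round(box_qp_cd(Q, b, l, u), 3))      # entries lie in [-1, 1]
```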
Coordinate descent in statistics and ML

History in statistics:
• The idea appeared in Fu (1998), and again in Daubechies et al. (2004), but was inexplicably ignored
• Three papers around 2007, especially Friedman et al. (2007), really sparked interest in the statistics and ML communities

Why is it used?
• Very simple and easy to implement
• Careful implementations can be near state-of-the-art
• Scalable; e.g., we don't need to keep the full data in memory

Examples: lasso regression, lasso GLMs (under proximal Newton), SVMs, group lasso, graphical lasso (applied to the dual), additive modeling, matrix completion, regression with nonconvex penalties

Pathwise coordinate descent for lasso

Here is the basic outline of pathwise coordinate descent for the lasso, from Friedman et al. (2007) and Friedman et al. (2009); a sketch of the pathwise strategy appears at the end of these notes.

Outer loop (pathwise strategy):
• Compute the solution over a sequence $\lambda_1 > \lambda_2 > \ldots > \lambda_r$ of tuning parameter values
• For tuning parameter value $\lambda_k$, initialize the coordinate descent algorithm at the computed solution for $\lambda_{k-1}$ (warm start)

Inner loop (active set strategy):
• Perform one coordinate cycle (or a small number of cycles), and record the active set $A$ of coefficients that are nonzero
• Cycle over the coefficients in $A$ until convergence
• Check the KKT conditions over all coefficients; if they are not all satisfied, add the offending coefficients to $A$ and go back one step

Notes:
• Even when the solution is desired at only one $\lambda$, the pathwise strategy (solving over $\lambda_1 > \ldots > \lambda_r = \lambda$) is typically much more efficient than directly performing coordinate descent at $\lambda$
• The active set strategy takes advantage of sparsity; e.g., for very large problems, coordinate descent for the lasso is much faster than it is for ridge regression
• With these strategies in place (and a few more clever tricks), coordinate descent can be competitive with the fastest algorithms for $\ell_1$ penalized minimization problems
• glmnet is a fast Fortran implementation, easily linked to R or MATLAB

What's in a name?

The name coordinate descent is confusing.
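Returning to the pathwise strategy outlined above, here is a minimal self-contained sketch in Python of the outer loop with warm starts. The function names, the geometric $\lambda$ sequence starting at $\lambda_{\max} = \|X^T y\|_\infty$, and the synthetic data are illustrative assumptions; the active set strategy and KKT checks from the outline are omitted for brevity.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, beta0, n_cycles=50):
    """Cyclic coordinate descent for the lasso, started from beta0."""
    beta = beta0.copy()
    r = y - X @ beta
    col_sq = np.sum(X ** 2, axis=0)
    for _ in range(n_cycles):
        for i in range(X.shape[1]):
            old = beta[i]
            rho = X[:, i] @ r + col_sq[i] * old
            beta[i] = soft_threshold(rho, lam) / col_sq[i]
            r -= X[:, i] * (beta[i] - old)
    return beta

def lasso_path(X, y, n_lambda=50, eps=1e-3):
    """Pathwise strategy: solve over a decreasing sequence of lambdas,
    warm-starting each problem at the previous solution."""
    n, p = X.shape
    lam_max = np.max(np.abs(X.T @ y))              # smallest lambda with solution 0
    lambdas = lam_max * np.logspace(0, np.log10(eps), n_lambda)
    betas = np.zeros((n_lambda, p))
    beta = np.zeros(p)                              # solution at lam_max is 0
    for k, lam in enumerate(lambdas):
        beta = lasso_cd(X, y, lam, beta0=beta)      # warm start from previous lambda
        betas[k] = beta
    return lambdas, betas

# Illustrative usage
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 20))
y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(100)
lambdas, betas = lasso_path(X, y)
print((betas != 0).sum(axis=1))   # nonzero count typically grows as lambda decreases
```

Warm-starting each problem at the solution for the previous (larger) $\lambda$ means the inner coordinate descent typically needs only a few cycles per value of $\lambda$, which is a large part of why the pathwise strategy is so efficient in practice.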
