Randomized Block Cubic Newton Method

Nikita Doikov ¹   Peter Richtárik ²˒³˒⁴

¹National Research University Higher School of Economics, Moscow, Russia. ²King Abdullah University of Science and Technology, Thuwal, Saudi Arabia. ³University of Edinburgh, Edinburgh, United Kingdom. ⁴Moscow Institute of Physics and Technology, Dolgoprudny, Russia. Correspondence to: Nikita Doikov <[email protected]>, Peter Richtárik <[email protected], [email protected]>.

arXiv:1802.04084v2 [math.OC] 7 Aug 2018

Abstract

We study the problem of minimizing the sum of three convex functions (a differentiable, a twice-differentiable, and a nonsmooth term) in a high-dimensional setting. To this effect we propose and analyze a randomized block cubic Newton (RBCN) method, which in each iteration builds a model of the objective function formed as the sum of the natural models of its three components: a linear model with a quadratic regularizer for the differentiable term, a quadratic model with a cubic regularizer for the twice-differentiable term, and a perfect (proximal) model for the nonsmooth term. Our method in each iteration minimizes the model over a random subset of blocks of the search variable. RBCN is the first algorithm with these properties, generalizing several existing methods and matching the best known bounds in all special cases. We establish O(1/ε), O(1/√ε) and O(log(1/ε)) rates under different assumptions on the component functions. Lastly, we show numerically that our method outperforms the state of the art on a variety of machine learning problems, including cubically regularized least-squares, logistic regression with constraints, and Poisson regression.

1. Introduction

In this paper we develop an efficient randomized algorithm for solving an optimization problem of the form

    min_{x ∈ Q} F(x) := g(x) + φ(x) + ψ(x),    (1)

where Q ⊆ R^N is a closed convex set, and g, φ and ψ are convex functions with different smoothness and structural properties. Our aim is to capitalize on these different properties in the design of our algorithm. We assume that g has Lipschitz gradient¹, φ has Lipschitz Hessian, while ψ is allowed to be nonsmooth, albeit "simple".

    ¹Our assumption is a bit more general than this; see Assumptions 1 and 2 for details.

1.1. Block Structure

Moreover, we assume that the N coordinates of x are partitioned into n blocks of sizes N_1, ..., N_n, with Σ_i N_i = N, and we then write x = (x_(1), ..., x_(n)), where x_(i) ∈ R^{N_i}. This block structure is typically dictated by the particular application considered. Once the block structure is fixed, we further assume that φ and ψ are block separable. That is, φ(x) = Σ_{i=1}^n φ_i(x_(i)) and ψ(x) = Σ_{i=1}^n ψ_i(x_(i)), where the φ_i are twice differentiable with Lipschitz Hessians, and the ψ_i are closed convex (and possibly nonsmooth) functions.

Revealing this block structure, problem (1) takes the form

    min_{x ∈ Q} F(x) := g(x) + Σ_{i=1}^n φ_i(x_(i)) + Σ_{i=1}^n ψ_i(x_(i)).    (2)

We are specifically interested in the case when n is big, in which case it makes sense to update only a small number of the blocks in each iteration.

1.2. Related Work

There has been a substantial and growing volume of research related to second-order and block-coordinate optimization. In this part we briefly mention some of the papers most relevant to the present work.

A major leap in second-order optimization theory was made when the cubic Newton method was proposed by Griewank (1981) and independently rediscovered by Nesterov & Polyak (2006), who also provided global complexity guarantees.

Cubic regularization was equipped with acceleration by Nesterov (2008), with adaptive stepsizes by Cartis et al. (2011a;b), and was extended to a universal framework by Grapiglia & Nesterov (2017).
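To fix ideas, here is a small self-contained sketch (ours, not the paper's) of one possible instantiation of problem (2): a least-squares g coupling all coordinates, a coordinatewise softplus for the twice-differentiable φ_i, and the ℓ1 norm for the nonsmooth ψ_i. The block sizes and all function choices below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Block structure: N coordinates partitioned into n blocks of sizes N_1, ..., N_n.
block_sizes = [3, 2, 4]                    # illustrative choice of n = 3 blocks
offsets = np.cumsum([0] + block_sizes)     # block i occupies x[offsets[i]:offsets[i+1]]
N = int(offsets[-1])

A = rng.standard_normal((20, N))           # synthetic data for the smooth term g
b = rng.standard_normal(20)

def g(x):
    # Differentiable term with Lipschitz gradient: least squares (couples all blocks).
    r = A @ x - b
    return 0.5 * (r @ r)

def phi_i(x_i):
    # Twice-differentiable term with Lipschitz Hessian: coordinatewise softplus.
    return np.sum(np.logaddexp(0.0, x_i))

def psi_i(x_i):
    # "Simple" nonsmooth term: l1 norm (closed and convex; its prox is soft-thresholding).
    return np.sum(np.abs(x_i))

def F(x):
    # Objective (2): F(x) = g(x) + sum_i phi_i(x_(i)) + sum_i psi_i(x_(i)).
    blocks = (x[offsets[i]:offsets[i + 1]] for i in range(len(block_sizes)))
    return g(x) + sum(phi_i(xb) + psi_i(xb) for xb in blocks)

print(F(np.zeros(N)))   # evaluate at the origin
```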
The universal schemes can automatically adjust to the implicit smoothness level of the objective. Cubically regularized second-order schemes for solving systems of nonlinear equations were developed by Nesterov (2007), and randomized variants for stochastic optimization were considered by Tripuraneni et al. (2017); Ghadimi et al. (2017); Kohler & Lucchi (2017); Cartis & Scheinberg (2018).

Despite their attractive global iteration complexity guarantees, the weakness of second-order methods in general, and of cubic Newton in particular, is their high computational cost per iteration. This issue remains the subject of active research. For successful theoretical results related to the approximation of the cubic step we refer to Agarwal et al. (2016) and Carmon & Duchi (2016).

At the same time, there are many successful attempts to use block-coordinate randomization to accelerate first-order (Tseng & Yun, 2009; Richtárik & Takáč, 2014; 2016) and second-order (Qu et al., 2016; Mutný & Richtárik, 2018) methods.

In this work we address the issue of combining block-coordinate randomization with cubic regularization, to obtain a second-order method with proven global complexity guarantees and a low cost per iteration.

A powerful advance in convex optimization theory was the advent of composite, or proximal, first-order methods (see Nesterov (2013) as a modern reference). This technique has become available as an algorithmic tool in the block-coordinate setting as well (Richtárik & Takáč, 2014; Qu et al., 2016). Our aim in this work is the development of a composite cubically regularized second-order method.

1.3. Contributions

We propose a new randomized second-order proximal algorithm for solving convex optimization problems of the form (2). Our method, Randomized Block Cubic Newton (RBCN) (see Algorithm 1), treats the three functions appearing in (1) differently, according to their nature.

Our method is a randomized block method because in each iteration we update a random subset of the n blocks only. This facilitates faster convergence, and is suited to problems where n is very large. Our method is proximal because we keep the functions ψ_i in our model, which is minimized in each iteration, without any approximation. Our method is a cubic Newton method because we approximate each φ_i using a cubically regularized second-order model.

We are not aware of any method that can solve (2) using the most appropriate models of the three functions (quadratic with a constant Hessian for g, cubically regularized quadratic for φ, and no model for ψ), not even in the case n = 1.
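The core computational primitive behind such a cubically regularized model is the exact minimization of a cubically regularized quadratic. The sketch below is ours, not the paper's Algorithm 1 (which additionally carries the proximal terms ψ_i, the constraint set Q, and the sampled blocks): it implements the classical characterization of the minimizer of m(h) = ⟨c, h⟩ + ½⟨Bh, h⟩ + (H/6)‖h‖³ used since Nesterov & Polyak (2006), assuming B ⪰ 0 and H > 0.

```python
import numpy as np

def cubic_step(c, B, H, tol=1e-10):
    """Minimize m(h) = <c, h> + 0.5 <B h, h> + (H / 6) ||h||^3.

    Assumes B is symmetric positive semidefinite and H > 0. The minimizer
    satisfies the stationarity condition
        (B + (H r / 2) I) h = -c,  where r = ||h||,
    and r is found by bisection: ||(B + (H r / 2) I)^{-1} c|| is a
    decreasing function of r, so the fixed-point equation has a unique root.
    """
    lam, U = np.linalg.eigh(B)              # B = U diag(lam) U^T
    ct = U.T @ c                            # work in the eigenbasis of B

    def norm_h(r):                          # ||h(r)|| for the shifted linear system
        return np.linalg.norm(ct / (lam + 0.5 * H * r))

    lo, hi = 0.0, 1.0
    while norm_h(hi) > hi:                  # grow the bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol:                    # bisection on the monotone residual
        mid = 0.5 * (lo + hi)
        if norm_h(mid) > mid:
            lo = mid
        else:
            hi = mid
    r = 0.5 * (lo + hi)
    return U @ (-ct / (lam + 0.5 * H * r))  # h = -(B + (H r / 2) I)^{-1} c
```

Applying such a step block by block, with B assembled from the curvature of g and ∇²φ_i and with the ψ_i handled through a proximal subproblem rather than a closed-form shift, is, in spirit, what one RBCN iteration does; the actual method is given in Section 5.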
Our approach generalizes several existing results:

• In the case when n = 1, g = 0 and ψ = 0, RBCN reduces to the cubically regularized Newton method of Nesterov & Polyak (2006). Even when n = 1, RBCN can be seen as an extension of this method to composite optimization. For n > 1, RBCN provides an extension of the algorithm of Nesterov & Polyak (2006) to the randomized block-coordinate setting, popular for high-dimensional problems.

• In the special case when φ = 0 and N_i = 1 for all i, RBCN specializes to the stochastic Newton (SN) method of Qu et al. (2016). Applied to the empirical risk minimization problem (see Section 7), our method has a dual interpretation (see Algorithm 2). In this case, our method reduces to the stochastic dual Newton ascent (SDNA) method, also described in Qu et al. (2016). Hence, RBCN can be seen as an extension of SN and SDNA to blocks of arbitrary sizes, and to the inclusion of the twice-differentiable term φ.

• In the case when φ = 0 and the simplest upper approximation of g is assumed, 0 ⪯ ∇²g(x) ⪯ LI, the composite block-coordinate gradient method of Tseng & Yun (2009) can be applied to solve (1). Our method extends this in two directions: we add the twice-differentiable terms φ, and we use a tighter model for g, exploiting all global curvature information (if available).

We prove high-probability global convergence guarantees under several regimes, summarized next:

• Under no additional assumptions on g, φ and ψ beyond convexity (and either boundedness of Q, or boundedness of the level sets of F on Q), we prove the rate

    O( n / (τ ε) ),

where τ is the mini-batch size (see Theorem 1).

• Under certain conditions combining the properties of g with the way the random blocks are sampled, formalized by the assumption β > 0 (see (12) for the definition of β), we obtain the rate

    O( (n / (τ √ε)) · max{1, 1/β} )

(see Theorem 2). In the special case when n = 1, we necessarily have τ = 1 and β = µ/L (the reciprocal of the condition number of g), and we get the rate O( L / (µ √ε) ). If g is quadratic and τ = n, then β = 1 and the resulting complexity O(1/√ε) recovers the rate of cubic Newton established by Nesterov & Polyak (2006).

• Finally, if g is strongly convex, the above result can be improved (see Theorem 3) to

    O( (n / τ) · max{1, 1/β} · log(1/ε) ).

1.4. Contents

The rest of the paper is organized as follows. In Section 2 we introduce the notation and elementary identities needed to efficiently handle the block structure of our model. In Section 3 we make the various smoothness and convexity assumptions on g and φ_i formal. Section 4 is devoted to the description of the block sampling process used in our method, along with some useful identities. In Section 5 we describe formally our randomized block cubic Newton (RBCN) method. Section 6 is devoted to the statement and discussion of our main convergence results, summarized above.

2. Notation

Let U_i denote the N × N_i column submatrix of the N × N identity matrix corresponding to block i, so that x_(i) = U_i^T x. For a subset of blocks S ⊆ {1, ..., n} and a vector x ∈ R^N, let x_[S] ∈ R^N agree with x on the blocks in S and vanish elsewhere; likewise, for a matrix A ∈ R^{N×N}, let A_[S] retain the rows and columns belonging to the blocks in S and be zero elsewhere. Note that these definitions imply that

    ⟨A_[S] x, y⟩ = ⟨A x_[S], y_[S]⟩,    x, y ∈ R^N.

Next, we define the block-diagonal operator, which, up to a permutation of coordinates, retains the diagonal blocks and nullifies the off-diagonal blocks:

    blockdiag(A) := Σ_{i=1}^n U_i U_i^T A U_i U_i^T = Σ_{i=1}^n A_[{i}].

Finally, denote R^N_[S] := { x_[S] | x ∈ R^N }.
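As a numerical illustration of these operators (again ours, with an explicit contiguous-block layout and with U_i, x_[S] and A_[S] instantiated as described above), the following sketch checks the identity ⟨A_[S]x, y⟩ = ⟨Ax_[S], y_[S]⟩ and builds blockdiag(A):

```python
import numpy as np

block_sizes = [2, 3, 2]                        # illustrative block layout
offsets = np.cumsum([0] + block_sizes)
N = int(offsets[-1])

def U(i):
    """Columns of the N x N identity matrix corresponding to block i."""
    return np.eye(N)[:, offsets[i]:offsets[i + 1]]

def subvector(x, S):
    """x_[S]: keep the blocks in S, zero out the rest."""
    y = np.zeros_like(x)
    for i in S:
        y[offsets[i]:offsets[i + 1]] = x[offsets[i]:offsets[i + 1]]
    return y

def submatrix(A, S):
    """A_[S]: keep rows and columns of the blocks in S, zero elsewhere."""
    P = sum(U(i) @ U(i).T for i in S)          # projection onto the blocks in S
    return P @ A @ P

def blockdiag(A):
    """blockdiag(A) = sum_i U_i U_i^T A U_i U_i^T = sum_i A_[{i}]."""
    return sum(submatrix(A, [i]) for i in range(len(block_sizes)))

# Sanity check of the identity <A_[S] x, y> = <A x_[S], y_[S]>:
rng = np.random.default_rng(1)
A = rng.standard_normal((N, N))
x, y = rng.standard_normal(N), rng.standard_normal(N)
S = [0, 2]
assert np.isclose(submatrix(A, S) @ x @ y, (A @ subvector(x, S)) @ subvector(y, S))
```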
