
Alternating Randomized Block Coordinate Descent

Jelena Diakonikolas, Lorenzo Orecchia
Department of Computer Science, Boston University, Boston, MA, USA.
Correspondence to: Jelena Diakonikolas <[email protected]>, Lorenzo Orecchia <[email protected]>.
arXiv:1805.09185v2 [math.OC] 1 Jul 2019

Abstract

Block-coordinate descent algorithms and alternating minimization methods are fundamental optimization algorithms and an important primitive in large-scale optimization and machine learning. While various block-coordinate-descent-type methods have been studied extensively, only alternating minimization – which applies to the setting of only two blocks – is known to have convergence time that scales independently of the least smooth block. A natural question is then: is the setting of two blocks special? We show that the answer is "no" as long as the least smooth block can be optimized exactly – an assumption that is also needed in the setting of alternating minimization. We do so by introducing a novel algorithm AR-BCD, whose convergence time scales independently of the least smooth (possibly non-smooth) block. The basic algorithm generalizes both alternating minimization and randomized block coordinate (gradient) descent, and we also provide its accelerated version – AAR-BCD. As a special case of AAR-BCD, we obtain the first nontrivial accelerated alternating minimization algorithm.

1. Introduction

First-order methods for minimizing smooth convex functions are a cornerstone of large-scale optimization and machine learning. Given the size and heterogeneity of the data in these applications, there is a particular interest in designing iterative methods that, at each iteration, only optimize over a subset of the decision variables (Wright, 2015).

This paper focuses on two classes of methods that constitute important instantiations of this idea. The first class is that of block-coordinate descent methods, i.e., methods that partition the set of variables into $n \geq 2$ blocks and perform a gradient descent step on a single block at every iteration, while leaving the remaining variable blocks fixed. A paradigmatic example of this approach is the randomized Kaczmarz algorithm of (Strohmer & Vershynin, 2009) for linear systems and its generalization (Nesterov, 2012). The second class is that of alternating minimization methods, i.e., algorithms that partition the variable set into only $n = 2$ blocks and alternate between exactly optimizing one block or the other at each iteration (see, e.g., (Beck, 2015) and references therein).

Besides the computational advantage of only having to update a subset of the variables at each iteration, methods in these two classes are also able to better exploit the structure of the problem, which may, for instance, be computationally expensive only in a small number of variables. To formalize this statement, assume that the set of variables is partitioned into $n \leq N$ mutually disjoint blocks, where the $i$th block of the variable $x$ is denoted by $x^i$ and the gradient corresponding to the $i$th block is denoted by $\nabla_i f(x)$. Each block $i$ is associated with a smoothness parameter $L_i$; i.e., for all $x, y \in \mathbb{R}^N$:

    \|\nabla_i f(x + I_N^i y) - \nabla_i f(x)\|_* \le L_i \|y^i\|,        (1.1)

where $I_N^i$ is a diagonal matrix whose diagonal entries equal one for coordinates from block $i$, and are zero otherwise.
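As a concrete illustration of definition (1.1) (an example added here for clarity, assuming the Euclidean norm and a least-squares objective; the constants below are standard and not specific to this paper), the block smoothness parameters take a simple closed form:

```latex
% Illustrative example: block smoothness constants for least squares under the
% Euclidean norm, with A = [A_1, ..., A_n] partitioned column-wise by blocks.
\[
  f(x) = \tfrac{1}{2}\,\|Ax - b\|_2^2,
  \qquad
  \nabla_i f(x) = A_i^\top (Ax - b).
\]
% Since A\, I_N^i y = A_i y^i, definition (1.1) gives
\[
  \|\nabla_i f(x + I_N^i y) - \nabla_i f(x)\|_2
    = \|A_i^\top A_i\, y^i\|_2
    \le \|A_i\|_2^2\, \|y^i\|_2,
  \qquad \text{so one may take } L_i = \|A_i\|_2^2.
\]
```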
In this setting, the convergence time of standard randomized block-coordinate descent methods, such as those in (Nesterov, 2012), scales as $O\big(\frac{\sum_i L_i}{\epsilon}\big)$, where $\epsilon$ is the desired additive error. By contrast, when $n = 2$, the convergence time of the alternating minimization method (Beck, 2015) scales as $O\big(\frac{L_{\min}}{\epsilon}\big)$, where $L_{\min}$ is the minimum smoothness parameter of the two blocks. This means that one of the two blocks can have arbitrarily poor smoothness (including $\infty$), as long as it is easy to optimize over it. Some important examples with a non-smooth block (with smoothness parameter equal to infinity) can be found in (Beck, 2015). Additional examples of problems for which exact optimization over the least smooth block can be performed efficiently are provided in Appendix B.

In this paper, we address the following open question, which was implicitly raised by (Beck & Tetruashvili, 2013): can we design algorithms that combine the features of randomized block-coordinate descent and alternating minimization? In particular, assuming we can perform exact optimization on block $n$, can we construct a block-coordinate descent algorithm whose running time scales with $O(\sum_{i=1}^{n-1} L_i)$, i.e., independently of the smoothness $L_n$ of the $n$th block? This would generalize both existing block-coordinate descent methods, by allowing one block to be optimized exactly, and existing alternating minimization methods, by allowing $n$ to be larger than 2 and requiring exact optimization only on a single block.

We answer these questions in the affirmative by presenting a novel algorithm: alternating randomized block coordinate descent (AR-BCD). The algorithm alternates between an exact optimization over a fixed, possibly non-smooth block, and a gradient descent or exact optimization over a randomly selected block among the remaining blocks. For two blocks, the method reduces to standard alternating minimization, while when the non-smooth block is empty (not optimized over), we recover the randomized block coordinate descent method (RCDM) from (Nesterov, 2012).
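The Python sketch below spells out this alternation pattern for a generic objective. It is only a schematic illustration: the function and variable names are ours, and the step size $1/L_i$ and sampling probabilities proportional to $L_i$ are common block-coordinate-descent defaults rather than the specific choices analyzed later in the paper.

```python
import numpy as np


def ar_bcd_sketch(f_grad_block, exact_min_last, blocks, L, x0, num_iters, seed=0):
    """Schematic AR-BCD-style loop (illustration only, not the paper's pseudocode).

    f_grad_block(x, idx): gradient of f at x restricted to the coordinates idx.
    exact_min_last(x):    returns x with the last block replaced by its exact
                          minimizer, all other blocks held fixed.
    blocks:               list of index arrays; blocks[-1] is the (possibly
                          non-smooth) block that is always minimized exactly.
    L:                    smoothness constants of the remaining (smooth) blocks.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    probs = np.asarray(L, dtype=float) / np.sum(L)  # sample prop. to L_i (one common choice)
    for _ in range(num_iters):
        # 1) Exact minimization over the fixed, possibly non-smooth block.
        x = exact_min_last(x)
        # 2) Gradient step over one randomly selected smooth block.
        i = rng.choice(len(L), p=probs)
        idx = blocks[i]
        x[idx] = x[idx] - (1.0 / L[i]) * f_grad_block(x, idx)
    return x


# Tiny usage example: f(x) = 0.5 * ||A x - b||^2 with three blocks of size 3;
# the last block is minimized exactly via a least-squares solve.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A, b = rng.standard_normal((40, 9)), rng.standard_normal(40)
    blocks = [np.arange(0, 3), np.arange(3, 6), np.arange(6, 9)]

    def f_grad_block(x, idx):
        return A[:, idx].T @ (A @ x - b)

    def exact_min_last(x):
        idx, rest = blocks[-1], np.arange(0, 6)
        x = x.copy()
        residual = b - A[:, rest] @ x[rest]
        x[idx] = np.linalg.lstsq(A[:, idx], residual, rcond=None)[0]
        return x

    L = [np.linalg.norm(A[:, blk], 2) ** 2 for blk in blocks[:-1]]
    x = ar_bcd_sketch(f_grad_block, exact_min_last, blocks, L, np.zeros(9), num_iters=200)
    print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```

With two blocks, replacing the gradient step by an exact minimization over the sampled block recovers the alternating minimization pattern described above.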
Our second contribution is AAR-BCD, an accelerated version of AR-BCD, which achieves the accelerated rate of $1/k^2$ without incurring any dependence on the smoothness of block $n$. Furthermore, when the non-smooth block is empty, AAR-BCD recovers the fastest known convergence bounds for block-coordinate descent (Qu & Richtárik, 2016; Allen-Zhu et al., 2016; Nesterov, 2012; Lin et al., 2014; Nesterov & Stich, 2017). As a special case, AAR-BCD provides the first accelerated alternating minimization algorithm, obtained directly from AAR-BCD when the number of blocks equals two.¹ Another conceptual contribution is our extension of the approximate duality gap technique of (Diakonikolas & Orecchia, 2017), which leads to a general and more streamlined analysis.

Finally, to illustrate the results, we perform a preliminary experimental evaluation of our methods against existing block-coordinate algorithms and discuss how their performance depends on the smoothness and size of the blocks.

¹The remarks about accelerated alternating minimization were added in the second version of the paper, in July 2019, partly to clarify the relationship to the methods obtained in (Guminov et al., 2019), which was first posted to the arXiv in June 2019. At a technical level, nothing new is introduced compared to the first version of the paper – everything stated in Section 4.2 follows either as a special case or as a simple corollary of the results that appeared in the first version of the paper in May 2018.

Related Work

Alternating minimization and cyclic block coordinate descent are old and fundamental algorithms (Ortega & Rheinboldt, 1970) whose convergence (to a stationary point) has been studied even in the non-convex setting, in which they were shown to converge asymptotically under the additional assumptions that the blocks are optimized exactly and their minimizers are unique (Bertsekas, 1999). However, even in the non-smooth convex case, methods that perform exact minimization over a fixed set of blocks may converge arbitrarily slowly. This has led scholars to focus on the case of smooth convex minimization, for which nonasymptotic convergence rates were obtained recently in (Beck & Tetruashvili, 2013; Beck, 2015; Sun & Hong, 2015; Saha & Tewari, 2013). However, prior to our work, convergence bounds that are independent of the largest smoothness parameter were only known for the setting of two blocks.

Randomized coordinate descent methods, in which steps over coordinate blocks are taken in a non-cyclic randomized order (i.e., in each iteration one block is sampled with replacement), were originally analyzed in (Nesterov, 2012). The same paper (Nesterov, 2012) also provided an accelerated version of these methods. The results of (Nesterov, 2012) were subsequently improved and generalized to various other settings (such as, e.g., composite minimization) in (Lee & Sidford, 2013; Allen-Zhu et al., 2016; Nesterov & Stich, 2017; Richtárik & Takáč, 2014; Fercoq & Richtárik, 2015; Lin et al., 2014). The analysis of the different block coordinate descent methods under various sampling probabilities (which, unlike in our setting, are non-zero over all the blocks) was unified in (Qu & Richtárik, 2016) and extended to a more general class of steps within each block in (Gower & Richtárik, 2015; Qu et al., 2016).

Our results should be carefully compared to a number of proximal block-coordinate methods that rely on different assumptions (Tseng & Yun, 2009; Richtárik & Takáč, 2014; Lin et al., 2014; Fercoq & Richtárik, 2015). In this setting, the function $f$ is assumed to have the structure $f_0(x) + \Psi(x)$, where $f_0$ is smooth, the non-smooth convex function $\Psi$ is separable over the blocks, i.e., $\Psi(x) = \sum_{i=1}^{n} \Psi_i(x^i)$, and we can efficiently compute the proximal operator of each $\Psi_i$. This strong assumption allows these methods to make use of the standard proximal optimization framework. By contrast, in our paper, the convex objective can take an arbitrary form and the non-smoothness of a block need not be separable, though the function is assumed to be differentiable.

2. Preliminaries

We assume that we are given oracle access to the gradients of a continuously differentiable convex function $f : \mathbb{R}^N \to \mathbb{R}$, where computing gradients over only a subset of the coordinates is computationally much cheaper than computing the full gradient. We are interested in minimizing $f(\cdot)$ over $\mathbb{R}^N$, and we denote $x_* = \operatorname{argmin}_{x \in \mathbb{R}^N} f(x)$. We let $\|\cdot\|$ denote an arbitrary (but fixed) norm, and $\|\cdot\|_*$ denote its dual norm, defined in the standard way: $\|z\|_* = \max\{\langle z, x \rangle : \|x\| \le 1\}$.
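To illustrate this premise with a small example of our own (the paper only assumes such an oracle abstractly), for a least-squares objective one can cache the residual $r = Ax - b$ so that each block gradient $\nabla_i f(x) = A_i^\top r$ costs time proportional to the block size rather than the cost of a full gradient:

```python
import numpy as np


class LeastSquaresBlockOracle:
    """Illustrative block-gradient oracle for f(x) = 0.5 * ||A x - b||^2.

    Caching the residual r = A x - b makes a block gradient A_i^T r cost
    O(m * |block i|), versus O(m * N) for the full gradient A^T r.
    (Hypothetical helper for illustration; not part of the paper.)
    """

    def __init__(self, A, b, x0):
        self.A, self.b = A, b
        self.x = np.array(x0, dtype=float)
        self.r = A @ self.x - b  # cached residual, kept in sync by block_update

    def block_grad(self, idx):
        # nabla_i f(x) = A_i^T (A x - b), read off the cached residual.
        return self.A[:, idx].T @ self.r

    def block_update(self, idx, delta):
        # x^i <- x^i + delta, refreshing the residual at the same per-block cost.
        self.x[idx] += delta
        self.r += self.A[:, idx] @ delta
```

A loop such as the AR-BCD sketch above would then only call `block_grad` and `block_update`, never forming the full gradient.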