How to Escape Saddle Points Efficiently

Chi Jin 1, Rong Ge 2, Praneeth Netrapalli 3, Sham M. Kakade 4, Michael I. Jordan 1

1 University of California, Berkeley; 2 Duke University; 3 Microsoft Research India; 4 University of Washington. Correspondence to: Chi Jin <[email protected]>.

Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s).

Abstract

This paper shows that a perturbed form of gradient descent converges to a second-order stationary point in a number of iterations that depends only poly-logarithmically on dimension (i.e., it is almost "dimension-free"). The convergence rate of this procedure matches the well-known convergence rate of gradient descent to first-order stationary points, up to log factors. When all saddle points are non-degenerate, all second-order stationary points are local minima, and our result thus shows that perturbed gradient descent can escape saddle points almost for free. Our results can be directly applied to many machine learning applications, including deep learning. As a particular concrete example of such an application, we show that our results can be used directly to establish sharp global convergence rates for matrix factorization. Our results rely on a novel characterization of the geometry around saddle points, which may be of independent interest to the non-convex optimization community.

1. Introduction

Given a function $f: \mathbb{R}^d \to \mathbb{R}$, gradient descent aims to minimize the function via the iteration

$$x_{t+1} = x_t - \eta \nabla f(x_t),$$

where $\eta > 0$ is the step size. Gradient descent and its variants (e.g., stochastic gradient descent) are widely used in machine learning applications due to their favorable computational properties. This is notably true in the deep learning setting, where gradients can be computed efficiently via backpropagation (Rumelhart et al., 1988).
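To make this update concrete, the following is a minimal NumPy sketch of the iteration on a toy least-squares objective; the step size, stopping rule, and toy problem are illustrative choices rather than anything prescribed in the paper.

```python
import numpy as np

def gradient_descent(grad_f, x0, eta=0.1, max_iters=1000, tol=1e-6):
    """Run the iteration x_{t+1} = x_t - eta * grad_f(x_t)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        g = grad_f(x)
        if np.linalg.norm(g) <= tol:   # stop at an approximate first-order stationary point
            break
        x = x - eta * g
    return x

# Toy example: f(x) = 0.5 * ||A x - b||^2, whose gradient is A^T (A x - b).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])
grad_f = lambda x: A.T @ (A @ x - b)

print(gradient_descent(grad_f, x0=np.zeros(2), eta=0.2))  # converges to [0.5, -1.0]
```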
Gradient descent is especially useful in high-dimensional settings because the number of iterations required to reach a point with small gradient is independent of the dimension ("dimension-free"). More precisely, for a function that is $\ell$-gradient Lipschitz (see Definition 1), it is well known that gradient descent finds an $\epsilon$-first-order stationary point (i.e., a point $x$ with $\|\nabla f(x)\| \le \epsilon$) within $\ell(f(x_0) - f^\star)/\epsilon^2$ iterations (Nesterov, 1998), where $x_0$ is the initial point and $f^\star$ is the optimal value of $f$. This bound does not depend on the dimension of $x$. In convex optimization, finding an $\epsilon$-first-order stationary point is equivalent to finding an approximate global optimum.

In non-convex settings, however, convergence to first-order stationary points is not satisfactory. For non-convex functions, first-order stationary points can be global minima, local minima, saddle points, or even local maxima. Finding a global minimum can be hard, but fortunately, for many non-convex problems, it is sufficient to find a local minimum. Indeed, a line of recent results shows that, in many problems of interest, all local minima are global minima (e.g., in tensor decomposition (Ge et al., 2015), dictionary learning (Sun et al., 2016a), phase retrieval (Sun et al., 2016b), matrix sensing (Bhojanapalli et al., 2016; Park et al., 2016), matrix completion (Ge et al., 2016), and certain classes of deep neural networks (Kawaguchi, 2016)). Moreover, there are suggestions that in more general deep networks most of the local minima are as good as global minima (Choromanska et al., 2014).

On the other hand, saddle points (and local maxima) can correspond to highly suboptimal solutions in many problems (see, e.g., Jain et al., 2015; Sun et al., 2016b). Furthermore, Dauphin et al. (2014) argue that saddle points are ubiquitous in high-dimensional, non-convex optimization problems, and are thus the main bottleneck in training neural networks. Standard analyses of gradient descent cannot distinguish between saddle points and local minima, leaving open the possibility that gradient descent may get stuck at saddle points, either asymptotically or for so long that training never reaches a local minimum in any feasible amount of time. Ge et al. (2015) showed that by adding noise at each step, gradient descent can escape all saddle points in a polynomial number of iterations, provided that the objective function satisfies the strict saddle property (see Assumption A2). Lee et al. (2016) proved that under similar conditions, gradient descent with random initialization avoids saddle points even without added noise; however, this result does not bound the number of steps needed to reach a local minimum.

Previous work thus explains why gradient descent avoids saddle points in the non-convex setting, but not why it is efficient: all of these analyses carry runtime guarantees with high polynomial dependence on the dimension $d$. For instance, the number of iterations required in Ge et al. (2015) is at least $\Omega(d^4)$, which is prohibitive in high-dimensional settings such as deep learning (typically with millions of parameters). It is therefore natural to ask whether gradient-descent-type algorithms are fundamentally slow at escaping saddle points, or whether gradient descent is in fact efficient and the apparent slowness only reflects a gap in our theoretical understanding. This motivates the following question: Can gradient descent escape saddle points and converge to local minima in a number of iterations that is (almost) dimension-free?

In order to answer this question formally, this paper investigates the complexity of finding $\epsilon$-second-order stationary points. For $\rho$-Hessian Lipschitz functions (see Definition 5), these points are defined as (Nesterov & Polyak, 2006):

$$\|\nabla f(x)\| \le \epsilon, \quad \text{and} \quad \lambda_{\min}(\nabla^2 f(x)) \ge -\sqrt{\rho\epsilon}.$$

Under the assumption that all saddle points are strict (i.e., for any saddle point $x_s$, $\lambda_{\min}(\nabla^2 f(x_s)) < 0$), all second-order stationary points ($\epsilon = 0$) are local minima. Therefore, convergence to second-order stationary points is equivalent to convergence to local minima.
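To illustrate this definition, the sketch below numerically checks both conditions at a given point by computing the gradient norm and the smallest Hessian eigenvalue; the helper name and the toy saddle function $f(x) = x_1^2 - x_2^2$ are illustrative choices, not constructions from the paper.

```python
import numpy as np

def is_second_order_stationary(grad_f, hess_f, x, eps, rho):
    """Check ||grad f(x)|| <= eps and lambda_min(hess f(x)) >= -sqrt(rho * eps)."""
    grad_ok = np.linalg.norm(grad_f(x)) <= eps
    lam_min = np.linalg.eigvalsh(hess_f(x)).min()   # smallest eigenvalue of the Hessian
    return grad_ok and lam_min >= -np.sqrt(rho * eps)

# Toy example: f(x) = x_1^2 - x_2^2 has a strict saddle at the origin.
grad_f = lambda x: np.array([2.0 * x[0], -2.0 * x[1]])
hess_f = lambda x: np.array([[2.0, 0.0], [0.0, -2.0]])

# The gradient vanishes at the origin, so it is a first-order stationary point,
# but lambda_min = -2 < -sqrt(rho * eps), so it is not second-order stationary.
print(is_second_order_stationary(grad_f, hess_f, np.zeros(2), eps=1e-3, rho=1.0))  # False
```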
This paper studies a simple variant of gradient descent with phasic perturbations (see Algorithm 1).

Algorithm 1 Perturbed Gradient Descent (Meta-algorithm)
    for $t = 0, 1, \ldots$ do
        if perturbation condition holds then
            $x_t \leftarrow x_t + \xi_t$, with $\xi_t$ sampled uniformly from $B_0(r)$
        $x_{t+1} \leftarrow x_t - \eta \nabla f(x_t)$

For $\ell$-smooth functions that are also Hessian Lipschitz, we show that perturbed gradient descent converges to an $\epsilon$-second-order stationary point in $\tilde{O}(\ell(f(x_0) - f^\star)/\epsilon^2)$ iterations, where $\tilde{O}(\cdot)$ hides polylog factors. This guarantee is almost dimension-free (up to polylog$(d)$ factors), answering the highlighted question above affirmatively. Note that this rate is exactly the same as the well-known convergence rate of gradient descent to first-order stationary points (Nesterov, 1998), up to log factors. Furthermore, our analysis admits a maximal step size of up to $\Omega(1/\ell)$, which is the same as in analyses for first-order stationary points.
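The following is a minimal NumPy sketch of this meta-algorithm. The perturbation condition used here (perturb when the gradient is small and no perturbation was added within the last t_thres steps), as well as the parameters eta, r, g_thres, and t_thres, are simplified illustrative stand-ins; the paper specifies the precise perturbation condition and parameter choices, which are not reproduced here.

```python
import numpy as np

def perturbed_gradient_descent(grad_f, x0, eta, r, g_thres, t_thres,
                               max_iters=1000, rng=None):
    """Sketch of the Algorithm 1 meta-algorithm: gradient descent with
    occasional uniform perturbations.

    Simplified perturbation condition: the current gradient is small and at
    least t_thres iterations have passed since the last perturbation.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    t_last = -(t_thres + 1)                      # time of the last perturbation
    for t in range(max_iters):
        if np.linalg.norm(grad_f(x)) <= g_thres and t - t_last > t_thres:
            # Sample xi uniformly from the ball B_0(r): uniform direction,
            # radius r * U^(1/d) for U ~ Uniform(0, 1).
            xi = rng.standard_normal(x.shape)
            xi *= r * rng.uniform() ** (1.0 / x.size) / np.linalg.norm(xi)
            x = x + xi
            t_last = t
        x = x - eta * grad_f(x)                  # usual gradient step
    return x

# Toy strict saddle f(x) = x_1^2 - x_2^2: plain gradient descent started on the
# x_1-axis stays on the axis and converges to the saddle at the origin, while a
# single perturbation gives x_2 a nonzero component that the -x_2^2 curvature
# then amplifies, so the iterate escapes the saddle.
grad_f = lambda x: np.array([2.0 * x[0], -2.0 * x[1]])
x_T = perturbed_gradient_descent(grad_f, x0=np.array([1.0, 0.0]), eta=0.1,
                                 r=1e-2, g_thres=1e-3, t_thres=50, max_iters=80)
print(x_T)  # the second coordinate is no longer zero: the saddle has been escaped
```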

As many real learning problems exhibit strong local geometric properties, analogous to strong convexity in the global setting (see, e.g., Bhojanapalli et al., 2016; Sun & Luo, 2016; Zheng & Lafferty, 2016), it is important to note that our analysis naturally takes advantage of such local structure. We show that when local strong convexity is present, the $\epsilon$-dependence improves from a polynomial rate, $1/\epsilon^2$, to linear convergence, $\log(1/\epsilon)$. As an example, we show that sharp global convergence rates can be obtained for matrix factorization as a direct consequence of our analysis.

1.1. Our Contributions

This paper presents the first sharp analysis showing that (perturbed) gradient descent finds an approximate second-order stationary point with at most a polylog$(d)$ overhead in the number of iterations, thus escaping all saddle points efficiently. Our main technical contributions are as follows:

• For $\ell$-gradient Lipschitz, $\rho$-Hessian Lipschitz functions (possibly non-convex), gradient descent with appropriate perturbations finds an $\epsilon$-second-order stationary point in $\tilde{O}(\ell(f(x_0) - f^\star)/\epsilon^2)$ iterations. This rate matches the well-known convergence rate of gradient descent to first-order stationary points up to log factors.

• Under a strict-saddle condition (see Assumption A2), the same convergence result applies to local minima. This means that gradient descent can escape all saddle points with only logarithmic overhead in runtime.

• When the function has local structure, such as local strong convexity (see Assumption A3.a), the above results can be further improved to linear convergence. We give sharp rates that are comparable to previous problem-specific local analyses of gradient descent with smart initialization (see Section 1.2).

• All of the above results rely on a new characterization of the geometry around saddle points: the points from which gradient descent gets stuck at a saddle point constitute a thin "band." We develop novel techniques to bound the volume of this band. As a result, we can show that after a random perturbation the current point is very unlikely to lie in the "band"; hence, efficient escape from the saddle point is possible (see Section 5).

1.2. Related Work

Over the past few years, there have been many problem-specific convergence results for non-convex optimization. One line of work requires a smart initialization algorithm to provide a coarse estimate lying inside a local neighborhood, from which popular local search algorithms enjoy fast local convergence (see, e.g., Netrapalli et al., 2013; Candes et al., 2015; Sun & Luo, 2016; Bhojanapalli et al., 2016). While there are not many results that show global convergence for non-convex problems, Jain et al. (2015) show that gradient descent yields global convergence rates for matrix square-root problems. Although these results ...

... (which requires a Hessian-vector product oracle), it is possible to find an $\epsilon$-second-order stationary point in $O(\log d/\epsilon^2)$ iterations.

Table 1. Oracle models and iteration complexity for convergence to second-order stationary points.
