
Escaping from saddle points on Riemannian manifolds

Yue Sun†, Nicolas Flammarion‡, Maryam Fazel†

† Department of Electrical and Computer Engineering, University of Washington, Seattle
‡ School of Computer and Communication Sciences, EPFL, Lausanne, Switzerland

November 8, 2019

Manifold constrained optimization

We consider the problem

minimize_x f(x), subject to x ∈ M.

As in unconstrained optimization in Euclidean space, we generally cannot find the global optimum in polynomial time, so we aim to find an approximate local minimum.

[Figure: plot of a saddle in Euclidean space; contour of the function value on the sphere.]

Examples of manifolds

1. Sphere: {x ∈ R^d : Σ_{i=1}^d x_i² = r²}.
2. Stiefel manifold: {X ∈ R^{m×n} : XᵀX = I}.
3. Grassmannian manifold: Grass(p, n) is the set of p-dimensional subspaces of R^n.
4. Burer-Monteiro relaxation: {X ∈ R^{m×n} : diag(XXᵀ) = 1}.

(A small membership-check sketch in code follows.)
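As an illustration (not from the slides), here is a minimal numpy sketch that checks membership in three of these sets; the helper names and the tolerance are my own.

```python
import numpy as np

def on_sphere(x, r=1.0, tol=1e-8):
    """Check that x lies on the sphere {x : sum_i x_i^2 = r^2}."""
    return abs(np.sum(x**2) - r**2) < tol

def on_stiefel(X, tol=1e-8):
    """Check the Stiefel constraint X^T X = I."""
    return np.allclose(X.T @ X, np.eye(X.shape[1]), atol=tol)

def on_bm_set(X, tol=1e-8):
    """Check the Burer-Monteiro constraint diag(X X^T) = 1 (unit-norm rows)."""
    return np.allclose(np.sum(X**2, axis=1), 1.0, atol=tol)

x = np.array([0.6, 0.8])                     # a point on the unit sphere
Q, _ = np.linalg.qr(np.random.randn(5, 3))   # a point on the Stiefel manifold
print(on_sphere(x), on_stiefel(Q), on_bm_set(Q / np.linalg.norm(Q, axis=1, keepdims=True)))
```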

Curve

A curve is a continuous map γ : t → M. Usually t ∈ [0, 1], and γ(0) and γ(1) are the start and end points of the curve.

Tangent vector and tangent space

We use

γ̇(t) = lim_{τ→0} ( γ(t + τ) − γ(t) ) / τ

as the velocity of the curve; γ̇(t) is a tangent vector at γ(t) ∈ M. A point x ∈ M can be the start point of many curves, and the tangent space T_x M is the set of all tangent vectors at x. The tangent space is a metric space (it carries an inner product ⟨·,·⟩, the Riemannian metric).
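For the unit sphere embedded in R^d, the tangent space at x is {v : ⟨x, v⟩ = 0}, so a tangent vector can be obtained by projecting an ambient vector onto T_x M. A minimal sketch (my own illustration, not code from the slides):

```python
import numpy as np

def tangent_project(x, v):
    """Project v onto the tangent space T_x M of the unit sphere at x,
    i.e. remove the component of v along x."""
    return v - np.dot(x, v) * x

x = np.array([0.0, 0.0, 1.0])   # a point on the unit sphere
v = np.array([1.0, 2.0, 3.0])   # an arbitrary ambient vector
u = tangent_project(x, v)
print(np.dot(x, u))             # ~0: u is tangent to the sphere at x
```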

Gradient of a function

Let f : M → R be a function defined on M, and let γ be a curve on M. The directional derivative of f in the direction γ̇(0) is¹

γ̇(0)f = d f(γ(t)) / dt |_{t=0} = lim_{τ→0} ( f(γ(0 + τ)) − f(γ(0)) ) / τ.

Then we can define grad f(x) ∈ T_x M, which satisfies

⟨grad f(x), y⟩ = yf

for all y ∈ T_x M.

¹Usually γ̇ denotes the differential operator and γ′ denotes the tangent vector; they are closely related. To avoid confusion, we always use γ̇.

Vector field

The gradient of a function is a special case of a vector field on a manifold. A vector field is a map that assigns to each point of M a tangent vector at that point.
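On the unit sphere, for instance, the Riemannian gradient of a function defined on the ambient space is obtained by projecting its Euclidean gradient onto each tangent space, which yields a vector field on the sphere. A small sketch (my own illustration, using the standard embedded-sphere projection and a toy quadratic f(x) = xᵀAx):

```python
import numpy as np

A = np.diag([3.0, 1.0, -1.0])   # toy symmetric matrix defining f(x) = x^T A x

def egrad(x):
    """Euclidean gradient of f(x) = x^T A x."""
    return 2 * A @ x

def rgrad(x):
    """Riemannian gradient on the unit sphere: project egrad(x) onto T_x M."""
    g = egrad(x)
    return g - np.dot(x, g) * x

x = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
print(rgrad(x))                  # a tangent vector at x (orthogonal to x)
```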

Connection

Denote the set of smooth vector fields on M by X(M). A connection defines a directional derivative

∇ : X(M) × X(M) → X(M)

satisfying

∇_{fx+gy} u = f ∇_x u + g ∇_y u,

∇_x(au + bv) = a ∇_x u + b ∇_x v,

∇_x(fu) = (xf) u + f (∇_x u).

Note that

∇_{e_i} u = Σ_j ( ∂_i u^j e_j + u^j ∇_{e_i} e_j ).

A special connection is the Riemannian (Levi-Civita) connection.

Riemannian Hessian

A directional Hessian is defined as

H(x)[u] = ∇_u grad f(x)

for u ∈ T_x M.² Similarly to the gradient, we can define the Hessian from its directional version:

⟨H(x)u, v⟩ = ⟨∇_u grad f(x), v⟩, ∀ u, v ∈ T_x M.

It is a symmetric operator.
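As a concrete sketch (my own, using the standard formula for the unit sphere with the embedded metric), the directional Hessian of the ambient extension f(x) = xᵀAx can be evaluated as H(x)[u] = P_x(∇²f(x)u) − (xᵀ∇f(x)) u with P_x = I − xxᵀ:

```python
import numpy as np

A = np.diag([3.0, 1.0, -1.0])            # f(x) = x^T A x restricted to the sphere

def egrad(x):
    return 2 * A @ x                      # Euclidean gradient

def ehess(x, u):
    return 2 * A @ u                      # Euclidean Hessian-vector product

def rhess(x, u):
    """Directional Riemannian Hessian H(x)[u] on the unit sphere."""
    proj = lambda v: v - np.dot(x, v) * x          # projection onto T_x M
    return proj(ehess(x, u)) - np.dot(x, egrad(x)) * u

x = np.array([1.0, 0.0, 0.0])
u = np.array([0.0, 1.0, 0.0])             # u lies in T_x M
print(rhess(x, u))
```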

²In Riemannian geometry, one writes u_x to indicate u ∈ T_x M, and the directional Hessian is ∇_{u_x} grad f.

Geodesic

A geodesic is a special class of curve on the manifold, one which satisfies the zero-acceleration condition

∇_{γ̇(t)} γ̇ = 0.

Exponential map

For any x ∈ M and y ∈ T_x M, let γ be the geodesic defined by y, i.e.

γ(0) = x, γ̇(0) = y.

We call the mapping Exp_x : T_x M → M with Exp_x(y) = γ(1) the exponential map.

There is a neighborhood of radius I in T_x M such that, for all y ∈ T_x M with ‖y‖ ≤ I, the exponential map is a bijection (in fact, a diffeomorphism).
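On the unit sphere, the exponential map has a closed form: follow the great circle from x in the direction of y, i.e. Exp_x(y) = cos(‖y‖) x + sin(‖y‖) y/‖y‖. A minimal sketch of this special case (my own illustration):

```python
import numpy as np

def exp_map(x, y):
    """Exponential map on the unit sphere: follow the great-circle geodesic
    from x with initial velocity y for unit time."""
    n = np.linalg.norm(y)
    if n < 1e-12:
        return x
    return np.cos(n) * x + np.sin(n) * (y / n)

x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, np.pi / 2, 0.0])   # tangent vector at x (orthogonal to x)
print(exp_map(x, y))                   # ~[0, 1, 0]: a quarter turn along the great circle
```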

Parallel transport

The parallel transport Γ transports a tangent vector w along a curve γ, satisfying the zero acceleration condition

∇_{γ̇(t)} w_t = 0, w_t = Γ_{γ(0)}^{γ(t)} w.
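On the unit sphere, parallel transport along a geodesic also has a closed form: the component of w along the geodesic direction rotates in the plane spanned by x and that direction, while the orthogonal component is unchanged. A sketch of this special case (my own illustration):

```python
import numpy as np

def transport(x, v, w):
    """Parallel transport w in T_x M along the sphere geodesic t -> Exp_x(t v),
    evaluated at t = 1 (closed form for the unit sphere)."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return w
    u = v / t                                    # unit direction of the geodesic
    a = np.dot(u, w)                             # component of w along u
    return w + a * ((np.cos(t) - 1.0) * u - np.sin(t) * x)

x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, np.pi / 2, 0.0])              # geodesic ending at [0, 1, 0]
w = np.array([0.0, 1.0, 0.0])                    # tangent vector at x to transport
print(transport(x, v, w))                         # ~[-1, 0, 0], tangent at the endpoint
```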

Curvature tensor

The curvature tensor describes how curved the manifold is; it relates to the second-order structure of the manifold. A definition via the connection is

R(x, y)w = ∇_x ∇_y w − ∇_y ∇_x w − ∇_{[x,y]} w,

where x, y, w are in the tangent space at the same point.³

³[x, y] is the Lie bracket, defined by [x, y]f = xyf − yxf.

Curvature tensor

R(x, y)w = lim_{t,τ→0} ( Γ_{0,τy}^{0,0} Γ_{tx,τy}^{0,τy} Γ_{tx,0}^{tx,τy} Γ_{0,0}^{tx,0} w − w ) / (tτ),

i.e., w is parallel transported around the small parallelogram with corners (0,0), (tx,0), (tx,τy), (0,τy) (written in normal coordinates around the base point) and compared with itself.

Smooth function on a Riemannian manifold

We consider the manifold-constrained optimization problem

minimize_x f(x), subject to x ∈ M,

assuming the function and the manifold satisfy:

1. There is a finite constant β such that ‖grad f(y) − Γ_x^y grad f(x)‖ ≤ β d(x, y) for all x, y ∈ M.
2. There is a finite constant ρ such that ‖H(y) − Γ_x^y H(x) Γ_y^x‖₂ ≤ ρ d(x, y) for all x, y ∈ M.
3. There is a finite constant K such that |R(x)[u, v]| ≤ K for all x ∈ M and u, v ∈ T_x M.

f may not be convex.

Taylor expansion of a smooth function

For x, y ∈ E (Euclidean space),

f(y) − f(x) = ⟨ y − x, ∇f(x) + ½ ∇²f(x)(y − x) + ∫₀¹ ( ∇²f(x + τ(y − x)) − ∇²f(x) )(y − x) dτ ⟩.

For x, y ∈ M, let γ denote the geodesic with γ(0) = x and γ(1) = y; then

f(y) − f(x) = ⟨ γ̇(0), grad f(x) + ½ ∇_{γ̇(0)} grad f + ∆ ⟩,

where ∆ = ∫₀¹ ∆(γ(τ)) dτ and

∆(γ(τ)) = ∫₀¹ ( Γ_{γ(τ)}^x ∇_{γ̇(τ)} grad f − ∇_{γ̇(0)} grad f ) dτ.

Riemannian gradient descent

On a smooth manifold, there exists a step size η such that, if

x_{t+1} = Exp_{x_t}( −η grad f(x_t) ),

then

f(x_{t+1}) ≤ f(x_t) − (η/2) ‖grad f(x_t)‖².

The iterates converge to a first-order stationary point.
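A minimal sketch of Riemannian gradient descent on the unit sphere for the toy quadratic f(x) = xᵀAx (my own example; the step size η is an arbitrary illustrative choice):

```python
import numpy as np

A = np.diag([3.0, 1.0, -1.0])
f = lambda x: x @ A @ x

def rgrad(x):
    g = 2 * A @ x
    return g - (x @ g) * x                  # project onto T_x M

def exp_map(x, y):
    n = np.linalg.norm(y)
    return x if n < 1e-12 else np.cos(n) * x + np.sin(n) * (y / n)

eta = 0.1                                   # step size (illustrative)
x = np.ones(3) / np.sqrt(3.0)
for t in range(200):
    x = exp_map(x, -eta * rgrad(x))         # x_{t+1} = Exp_{x_t}(-eta * grad f(x_t))
print(f(x), x)                               # f -> -1, x near (0, 0, +-1)
```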

Proposed algorithm for escaping saddle points

We hope to escape from saddle points and converge to an approximate local minimum.

1. At iterate x, check the norm of the gradient.
2. If it is large: take the step x⁺ = Exp_x(−η grad f(x)) to decrease the function value.
3. If it is small: we are near either a saddle point or a local minimum. Perturb the iterate by adding appropriate noise and run a few iterations (a toy sketch of this loop follows below).
  3.1 If f decreases, the iterates escape the saddle point (and the algorithm continues).
  3.2 If f does not decrease, we are at an approximate local minimum (and the algorithm terminates).
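Below is a toy sketch of this perturb-and-check loop on the unit sphere for f(x) = xᵀAx, started exactly at a saddle point. The step size, thresholds, noise scale, and number of escape iterations are my own illustrative choices, not the quantified values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([3.0, 1.0, -1.0])
f = lambda x: x @ A @ x

def rgrad(x):
    g = 2 * A @ x
    return g - (x @ g) * x                        # Riemannian gradient on the sphere

def exp_map(x, y):
    n = np.linalg.norm(y)
    return x if n < 1e-12 else np.cos(n) * x + np.sin(n) * (y / n)

eta, g_tol, escape_steps = 0.1, 1e-6, 50          # illustrative parameters
x = np.array([0.0, 1.0, 0.0])                     # an exact saddle point of f on the sphere
for t in range(500):
    if np.linalg.norm(rgrad(x)) > g_tol:
        x = exp_map(x, -eta * rgrad(x))           # gradient is large: ordinary RGD step
    else:
        noise = rng.normal(size=3)
        noise -= (x @ noise) * x                  # perturb within the tangent space
        y = exp_map(x, 1e-3 * noise / np.linalg.norm(noise))
        for _ in range(escape_steps):             # run a few RGD steps after perturbation
            y = exp_map(y, -eta * rgrad(y))
        if f(y) < f(x) - 1e-9:                    # f decreased: saddle escaped, continue
            x = y
        else:                                     # f did not decrease: approximate local min
            break
print(f(x), x)                                    # ends near the minimizer (0, 0, +-1)
```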

Difficulty of the second-order analysis

1. We use linearization in the first-order analysis; for the second-order analysis, the manifold has second-order structure of its own.
2. Consider the power method in Euclidean space: we need to prove that the component of x along the eigenvector with the largest eigenvalue grows exponentially.
3. If the iteration is on the manifold variable, we have to compare gradients that live in different tangent spaces.
4. Some recent works require strong assumptions, such as a flat manifold or a product manifold.
5. Other recent works assume smoothness parameters of the composition of the function with a manifold operator, which are hard to check.

Useful lemmas

Let x ∈ M and y, a ∈ T_x M, and denote z = Exp_x(a). Then

d( Exp_x(y + a), Exp_z(Γ_x^z y) ) ≤ c(K) min{‖a‖, ‖y‖} (‖a‖ + ‖y‖)².

Useful lemmas

Holonomy.

‖Γ_z^x Γ_y^z Γ_x^y w − w‖ ≤ c(K) d(x, y) d(y, z) ‖w‖.

Similarly to the definition of the curvature tensor: if a vector is parallel transported around a closed curve, then the change is bounded by the area enclosed by the curve.

Useful lemmas

Euclidean: f(x) = xᵀHx ⇒ x⁺ = (I − ηH)x.

Exponential growth in a vector space.

If the function f is β-gradient Lipschitz and ρ-Hessian Lipschitz, the curvature constant is bounded by K, x is an (ε, −√(ρ̂ε))-saddle point, and we define u⁺ = Exp_u(−η grad f(u)) and w⁺ = Exp_w(−η grad f(w)), then in a small enough neighborhood⁴,

‖Exp_x⁻¹(w⁺) − Exp_x⁻¹(u⁺) − (I − ηH(x))(Exp_x⁻¹(w) − Exp_x⁻¹(u))‖ ≤ C(K, ρ, β) d(u, w) ( d(u, w) + d(u, x) + d(w, x) )

for some explicit constant C(K, ρ, β).

⁴Quantified in the paper.

Theorem

Theorem (Jin et al., Euclidean space). Perturbed GD converges to an (ε, −√(ρε))-stationary point of f in

O( (β(f(x₀) − f(x*)) / ε²) · log⁴( βd(f(x₀) − f(x*)) / (ε²δ) ) )

iterations.

We replace the Hessian Lipschitz constant ρ by ρ̂, a function of ρ and K, which we quantify in the paper.

Theorem (manifold). Perturbed RGD converges to an (ε, −√(ρ̂(ρ, K)ε))-stationary point of f in

O( (β(f(x₀) − f(x*)) / ε²) · log⁴( βd(f(x₀) − f(x*)) / (ε²δ) ) )

iterations.

Experiment

Burer-Monteiro factorization. Let A ∈ S^{d×d}. The problem

max_{X ∈ S^{d×d}} trace(AX), s.t. diag(X) = 1, X ⪰ 0, rank(X) ≤ r,

can be factorized as

max_{Y ∈ R^{d×p}} trace(AYYᵀ), s.t. diag(YYᵀ) = 1,

when r(r + 1)/2 ≤ d and p(p + 1)/2 ≥ d.

[Figure: function value versus iteration.]
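A minimal sketch of the factorized problem solved by Riemannian gradient ascent on {Y : diag(YYᵀ) = 1}, i.e. with each row of Y constrained to the unit sphere. For simplicity it uses row normalization as a retraction rather than the exact exponential map; the matrix A, the rank p, the step size, and the iteration count are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, p = 20, 4
A = rng.normal(size=(d, d)); A = (A + A.T) / 2     # random symmetric cost matrix

f = lambda Y: np.trace(A @ Y @ Y.T)

def rgrad(Y):
    """Riemannian gradient on {Y : diag(Y Y^T) = 1}: project each row of the
    Euclidean gradient onto the tangent space of that row's unit sphere."""
    G = 2 * A @ Y
    return G - np.sum(G * Y, axis=1, keepdims=True) * Y

def retract(Y):
    """Map back to the constraint set by normalizing each row."""
    return Y / np.linalg.norm(Y, axis=1, keepdims=True)

eta = 0.01                                          # step size (illustrative)
Y = retract(rng.normal(size=(d, p)))
for t in range(200):
    Y = retract(Y + eta * rgrad(Y))                 # ascent step (we maximize trace(A Y Y^T))
print(f(Y))                                          # the function value increases over iterations
```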
