Stochastic Approximation

Chapter 6
Stochastic approximation

This chapter is dedicated to stochastic approximation for large sums. Stochastic approximation has a long history, starting with the Robbins-Monro algorithm [53] and the ODE method of Ljung [38] and later extensions; see [11] for a complete exposition and [16]. The idea of using stochastic approximation in large scale settings gained significant interest in the machine learning literature, see for example [17]. We provide examples of non asymptotic convergence rate analyses for stochastic subgradient and stochastic proximal gradient methods for finite sums.

6.1 Motivation, large n

6.1.1 Lasso estimator

The Lasso estimator is given as follows:
$$\hat\theta \in \operatorname*{arg\,min}_{\theta\in\mathbb{R}^d} \; \frac{1}{2n}\|X\theta - Y\|_2^2 + \lambda\|\theta\|_1.$$
We have seen that this optimization problem has a favorable structure which allows to devise efficient algorithms. Another way to write the same optimization problem is
$$\hat\theta \in \operatorname*{arg\,min}_{\theta\in\mathbb{R}^d} \; \frac{1}{n}\sum_{i=1}^n \frac{1}{2}\big(x_i^T\theta - y_i\big)^2 + \lambda\|\theta\|_1,$$
which exhibits an additional sum structure. In this chapter we will be considering optimization problems of the form
$$\min_{x\in\mathbb{R}^d} \; F(x) := \frac{1}{n}\sum_{i=1}^n f_i(x) + g(x), \qquad (6.1)$$
where the $f_i$ and $g$ are convex lower semicontinuous functions.

6.1.2 Stochastic approximation

To solve problem (6.1), one may use first order methods such as the ones described in the previous chapter. Computing a subgradient in this case requires computing a subgradient of each $f_i$, $i=1,\dots,n$, and averaging them. The computational cost is of the order of $n$ subgradient computations and $n$ vector operations. When $n$ is very large, or even infinite, this can be prohibitive. Intuitively, if there is redundancy among the elements of the sum, one should be able to take advantage of it. For example, suppose that $f_i = f$ for all $i$; then blindly computing the gradient of $F$ has a cost of order $n \times d$, while $d$ operations (computing one gradient) would suffice.

More generally, one can rewrite the objective function in (6.1) in the following form:
$$F\colon x \mapsto \mathbb{E}[f_I(x)] + g(x),$$
where $I$ denotes a uniform random variable over $\{1,\dots,n\}$. Stochastic approximation, or stochastic optimization, algorithms allow to handle such objectives. The main algorithmic step is as follows: for any $x\in\mathbb{R}^d$,

• Sample $i$ uniformly at random in $\{1,\dots,n\}$.
• Perform an algorithmic step using only the value of $f_i(x)$ and $\nabla f_i(x)$, or possibly $v\in\partial f_i(x)$.

This simple example can be extended to more general random variables $I$: under proper integrability and domination conditions, one can exchange gradient (or subgradient) and expectation. Assuming $g=0$ for simplicity:

• If for each value of $I$, $f_I$ is continuously differentiable, then for any $x\in\mathbb{R}^d$,
$$\mathbb{E}[\nabla f_I(x)] = \nabla\,\mathbb{E}[f_I(x)] = \nabla F(x).$$
• Assume that $f_I$ is convex for all realizations of $I$, and that we have access to a random variable $v_I$ such that $v_I\in\partial f_I(x)$ almost surely. Then the expectation is convex and
$$\mathbb{E}[v_I] \in \partial\,\mathbb{E}[f_I(x)] = \partial F(x).$$

Hence the process of using a single element of the sum in an algorithm can be seen as performing optimization based on noisy unbiased estimates of the gradient, or subgradient, of the objective. This intuition is described more formally in the coming section.

6.2 Prototype stochastic approximation algorithm

This section describes the Robbins-Monro algorithm for stochastic approximation. Consider a Lipschitz map $h\colon\mathbb{R}^p\to\mathbb{R}^p$; the goal is to find a zero of $h$. The algorithm only has access to unbiased noisy estimates of $h$.
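To make the unbiased-estimate viewpoint concrete, here is a minimal numerical sketch (an illustration added here, not part of the original text; the least-squares instance, the sample sizes and the names A, b are assumptions made for this example). It checks that averaging many single-sample gradients $\nabla f_I(x)$ recovers the full gradient $\nabla F(x)$; in the optimization setting below one would take $h = -\nabla F$.

```python
import numpy as np

# Assumed illustrative instance (not from the text): f_i(x) = 0.5 * (a_i^T x - b_i)^2,
# so F(x) = (1/n) sum_i f_i(x) and grad F(x) = (1/n) A^T (A x - b).
rng = np.random.default_rng(0)
n, d = 1000, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def full_gradient(x):
    # Exact gradient of F: costs of the order of n * d operations.
    return A.T @ (A @ x - b) / n

def sampled_gradient(x):
    # grad f_I(x) for I uniform on {1, ..., n}: an unbiased estimate, costs O(d).
    i = rng.integers(n)
    return (A[i] @ x - b[i]) * A[i]

x = rng.standard_normal(d)
# Averaging many independent single-sample gradients recovers grad F(x),
# illustrating E[grad f_I(x)] = grad F(x).
avg = np.mean([sampled_gradient(x) for _ in range(200_000)], axis=0)
print(np.linalg.norm(avg - full_gradient(x)))  # small Monte Carlo error
```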
The Robbins-Monro algorithm is described as follows: $(X_k)_{k\in\mathbb{N}}$ is a sequence of random variables such that for any $k\in\mathbb{N}$,
$$X_{k+1} = X_k + \alpha_k\big(h(X_k) + M_{k+1}\big), \qquad (6.2)$$
where:

• $(\alpha_k)_{k\in\mathbb{N}}$ is a sequence of positive step sizes satisfying
$$\sum_{k\in\mathbb{N}} \alpha_k = +\infty, \qquad \sum_{k\in\mathbb{N}} \alpha_k^2 < +\infty.$$
• $(M_k)_{k\in\mathbb{N}}$ is a martingale difference sequence with respect to the increasing family of $\sigma$-fields
$$\mathcal{F}_k = \sigma(X_m, M_m,\; m\le k) = \sigma(X_0, M_1,\dots,M_k).$$
This means that $\mathbb{E}[M_{k+1}\mid\mathcal{F}_k] = 0$ for all $k\in\mathbb{N}$.
• In addition, we assume that there exists a positive constant $C$ such that
$$\sup_{k\in\mathbb{N}} \; \mathbb{E}\big[\|M_{k+1}\|_2^2 \mid \mathcal{F}_k\big] \le C.$$

The intuition here is that our hypotheses on the step sizes ensure that the quantity $\sum_{k=0}^{+\infty}\mathbb{E}\big[\alpha_k^2\|M_{k+1}\|^2\mid\mathcal{F}_k\big]$ is finite, and hence the zero mean martingale $\sum_{k=0}^{K}\alpha_k M_{k+1}$ has square summable increments and converges to a square integrable random variable $M_\infty$ in $\mathbb{R}^p$, both almost surely and in $L^2$ (see for example [29, Section 5.4]).

The long term behaviour of such recursions is at the heart of the field of stochastic approximation. The fact that the step sizes tend to $0$ and that the sum of perturbations stabilizes suggests that in the limit one obtains trajectories of a continuous time differential equation. This is formalized in the next section.

6.3 The ODE approach

For optimization we may choose $h = -\nabla F$, assuming that $F$ has Lipschitz gradient. We consider the Robbins-Monro algorithm in this setting. This idea dates back to Ljung [38], see also [11] for an advanced presentation. An accessible exposition of the following result can be found in [16].

Theorem 6.3.1. Conditioning on boundedness of $\{X_k\}_{k\in\mathbb{N}}$, almost surely, the (random) set of accumulation points of the sequence is compact, connected and invariant by the flow generated by the continuous time limit:
$$\dot x = h(x).$$

This theorem means that for any accumulation point $\bar x$ of the algorithm, the unique solution $x\colon\mathbb{R}\to\mathbb{R}^p$ of the continuous time ODE satisfying $x(0)=\bar x$ remains bounded for all $t\in\mathbb{R}$. This allows to conclude in the convex case.

Corollary 6.3.1. If $F$ is convex, differentiable and attains its minimum, setting $h=-\nabla F$ and conditioning on the event that $\sup_{k\in\mathbb{N}}\|X_k\|$ is finite, almost surely, all the accumulation points of $(X_k)_{k\in\mathbb{N}}$ are critical points of $F$.

Proof. Fix $\bar x\in\mathbb{R}^p$ such that $\nabla F(\bar x)\neq 0$; since $F$ is convex and attains its minimum, this means that $F(\bar x) - F^* > 0$. Invariance by the flow of $\dot x = -\nabla F(x)$ for all $t\in\mathbb{R}$ includes the time reversed dynamics, so consider the solution to
$$\dot x = \nabla F(x)$$
starting at $\bar x$. We have
$$\frac{\partial}{\partial t} F(x(t)) = \|\nabla F(x(t))\|_2^2 \ge 0,$$
$$\frac{\partial}{\partial t}\,\frac{1}{2}\|x(t)-x^*\|_2^2 = \big\langle \nabla F(x(t)),\, x(t)-x^*\big\rangle \ge F(x(t)) - F^* \ge F(\bar x) - F^* > 0.$$
We deduce that $F$ is nondecreasing along the trajectory and that $\|x(t)-x^*\|_2^2$ grows at least linearly, hence the solution escapes any compact set, which means that $\bar x$ does not belong to a compact invariant set.

The power of the ODE approach lies in the fact that it allows to treat much more complicated situations, beyond convexity and differentiability.
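As an illustration of the recursion (6.2) and of its ODE limit, here is a minimal simulation sketch (added for illustration; the quadratic objective, the step-size sequence and the noise model are assumptions made for this example, not taken from the text). With $h = -\nabla F$ and i.i.d. zero mean noise, the decreasing step sizes average out the perturbations and the iterates settle near the zero of $h$, as the flow $\dot x = h(x)$ would.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy problem (not from the text): F(x) = 0.5 * ||x||_2^2, so
# h(x) = -grad F(x) = -x and the flow x' = h(x) converges to the zero of h at 0.
def h(x):
    return -x

p = 3
X = 5.0 * rng.standard_normal(p)           # X_0
for k in range(100_000):
    alpha_k = 1.0 / (k + 1)                # sum alpha_k = +inf, sum alpha_k^2 < +inf
    M = rng.standard_normal(p)             # martingale difference: E[M_{k+1} | F_k] = 0
    X = X + alpha_k * (h(X) + M)           # Robbins-Monro update (6.2)

print(np.linalg.norm(X))                   # small: the iterates settle near the zero of h
```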
6.4 Rates for convex optimization

In the context of convex optimization problems of the form described in the introduction of this chapter, one can obtain precise convergence rate estimates using elementary arguments.

6.4.1 Stochastic subgradient descent

Proposition 6.4.1. Consider the problem
$$\min_{x\in\mathbb{R}^d} \; F(x) := \frac{1}{n}\sum_{i=1}^n f_i(x),$$
where each $f_i$ is convex and $L$-Lipschitz. Choose $x_0\in\mathbb{R}^d$, a sequence of random variables $(i_k)_{k\in\mathbb{N}}$ independently identically distributed uniformly on $\{1,\dots,n\}$, and a sequence of positive step sizes $(\alpha_k)_{k\in\mathbb{N}}$. Consider the recursion
$$x_{k+1} = x_k - \alpha_k v_k, \qquad (6.3)$$
$$v_k \in \partial f_{i_k}(x_k). \qquad (6.4)$$
Then for all $K\in\mathbb{N}$, $K\ge 1$,
$$\mathbb{E}[F(\bar x_K) - F^*] \le \frac{\|x_0-x^*\|_2^2 + L^2\sum_{k=0}^K\alpha_k^2}{2\sum_{k=0}^K\alpha_k},
\qquad\text{where}\quad \bar x_K = \frac{\sum_{k=0}^K\alpha_k x_k}{\sum_{k=0}^K\alpha_k}.$$

Proof. We fix $k\in\mathbb{N}$ and condition on $i_1,\dots,i_k$, so that $x_k$ and $x_{k+1}$ are fixed. We have for any $k\in\mathbb{N}$
$$\frac{1}{2}\|x_{k+1}-x^*\|_2^2 = \frac{1}{2}\|x_k - \alpha_k v_k - x^*\|_2^2 = \frac{1}{2}\|x_k-x^*\|_2^2 + \alpha_k v_k^T(x^*-x_k) + \frac{\alpha_k^2}{2}\|v_k\|_2^2 \le \frac{1}{2}\|x_k-x^*\|_2^2 + \alpha_k\big(f_{i_k}(x^*) - f_{i_k}(x_k)\big) + \frac{\alpha_k^2}{2}L^2,$$
using convexity of $f_{i_k}$ and the bound $\|v_k\|_2\le L$, which holds since $f_{i_k}$ is $L$-Lipschitz. Conditioning on $x_k$ and taking expectation with respect to $i_k$,
$$\mathbb{E}\Big[\tfrac{1}{2}\|x_{k+1}-x^*\|_2^2 \,\Big|\, x_k\Big] \le \frac{1}{2}\|x_k-x^*\|_2^2 + \frac{\alpha_k^2 L^2}{2} + \mathbb{E}\big[\alpha_k(f_{i_k}(x^*)-f_{i_k}(x_k)) \,\big|\, x_k\big] = \frac{1}{2}\|x_k-x^*\|_2^2 + \frac{\alpha_k^2 L^2}{2} + \alpha_k\big(F(x^*)-F(x_k)\big).$$
Taking expectation with respect to $x_k$ and using the tower property of conditional expectation, we have
$$\mathbb{E}\Big[\tfrac{1}{2}\|x_{k+1}-x^*\|_2^2\Big] \le \mathbb{E}\Big[\tfrac{1}{2}\|x_k-x^*\|_2^2\Big] + \frac{\alpha_k^2 L^2}{2} + \alpha_k\,\mathbb{E}\big[F(x^*)-F(x_k)\big].$$
By summing up, we obtain, for all $K\in\mathbb{N}$, $K\ge 1$,
$$\frac{\sum_{k=0}^K \alpha_k\,\mathbb{E}[F(x_k)-F^*]}{\sum_{k=0}^K\alpha_k} \le \frac{\|x_0-x^*\|_2^2 + L^2\sum_{k=0}^K\alpha_k^2}{2\sum_{k=0}^K\alpha_k},$$
and the result follows from convexity of $F$ (Jensen's inequality applied to the weighted average $\bar x_K$).

Corollary 6.4.1. Under the hypotheses of Proposition 6.4.1, we have the following.

• If $\alpha_k = \alpha$ is constant, we have
$$\mathbb{E}[F(\bar x_k) - F^*] \le \frac{\|x_0-x^*\|_2^2}{2(k+1)\alpha} + \frac{L^2\alpha}{2}.$$
• In particular, choosing the constant step $\alpha = \frac{\|x_0-x^*\|_2}{L\sqrt{k+1}}$ for a fixed horizon $k$, we have
$$\mathbb{E}[F(\bar x_k) - F^*] \le \frac{\|x_0-x^*\|_2\, L}{\sqrt{k+1}}.$$
• Choosing $\alpha_k = \|x_0-x^*\|_2/(L\sqrt{k})$ for all $k$, we obtain for all $k$
$$\mathbb{E}[F(\bar x_k) - F^*] = O\!\left(\frac{\|x_0-x^*\|_2\, L\,(1+\log(k))}{\sqrt{k}}\right).$$
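To complement these bounds, here is a minimal implementation sketch of the recursion (6.3)-(6.4) with the averaged iterate $\bar x_K$ and the constant step size of Corollary 6.4.1 (added for illustration; the least absolute deviations instance, the radius estimate R and all variable names are assumptions made for this example, not from the text).

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed illustrative instance (not from the text): f_i(x) = |a_i^T x - b_i|,
# so F(x) = (1/n) sum_i |a_i^T x - b_i| and each f_i is ||a_i||_2-Lipschitz.
n, d = 500, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
L = np.max(np.linalg.norm(A, axis=1))     # common Lipschitz constant for all f_i

def F(x):
    return np.mean(np.abs(A @ x - b))

def subgradient(i, x):
    # An element of the subdifferential of f_i at x.
    return np.sign(A[i] @ x - b[i]) * A[i]

K = 50_000
x = np.zeros(d)
R = 5.0                                   # assumed upper bound on ||x_0 - x*||_2
alpha = R / (L * np.sqrt(K + 1))          # constant step from Corollary 6.4.1
x_bar_num, x_bar_den = np.zeros(d), 0.0
for k in range(K + 1):
    x_bar_num += alpha * x                # accumulate alpha_k * x_k
    x_bar_den += alpha
    i = rng.integers(n)                   # i_k uniform on {1, ..., n}
    x = x - alpha * subgradient(i, x)     # update (6.3) with v_k as in (6.4)

x_bar = x_bar_num / x_bar_den             # weighted average iterate
print(F(x_bar))                           # expected gap of order R * L / sqrt(K + 1)
```

Since each $f_i(x) = |a_i^T x - b_i|$ is $\|a_i\|_2$-Lipschitz, the constant $L = \max_i\|a_i\|_2$ used in the step size satisfies the hypotheses of the proposition.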
