(Quasi-)Newton Methods


MS Maths Big Data — Alexandre Gramfort

1 Introduction

1.1 Newton method

Newton method is a method to find the zeros of a differentiable non-linear function g, i.e. the points x such that g(x) = 0, where g : R^n \to R^n. Given a starting point x_0, Newton method consists in iterating

    x_{k+1} = x_k - g'(x_k)^{-1} g(x_k),

where g'(x) is the derivative of g at point x. Applying this method to the optimization problem

    \min_{x \in R^n} f(x)

consists in setting g(x) = \nabla f(x), i.e. looking for stationary points. The iterations read

    x_{k+1} = x_k - \nabla^2 f(x_k)^{-1} \nabla f(x_k).

Newton method is particularly interesting as its convergence is locally quadratic around x^*, i.e.

    \|x_{k+1} - x^*\| \le \gamma \|x_k - x^*\|^2,   \gamma > 0.

However the convergence is guaranteed only if x_0 is sufficiently close to x^*. The method may diverge if the initial point is too far from x^* or if the Hessian is not positive definite. In order to address this issue of local convergence, Newton method can be combined with a line search along the direction

    d_k = -\nabla^2 f(x_k)^{-1} \nabla f(x_k).

Proof. We now prove the quadratic local convergence of Newton method. TODO

1.2 Variable metric methods

The idea behind variable metric methods consists in using iterations of the form

    d_k = -B_k g_k,
    x_{k+1} = x_k + \rho_k d_k,

where g_k = \nabla f(x_k) and B_k is a positive definite matrix. If B_k = I_n, this corresponds to gradient descent. Fixing B_k = B leads to the following remark.

Remark. When minimizing

    \min_{x \in R^n} f(x)

one can set x = Cy with C invertible (change of variable). Let us denote \tilde{f}(y) = f(Cy). This leads to

    \nabla \tilde{f}(y) = C^T \nabla f(Cy).

Gradient descent applied to \tilde{f}(y) reads

    y_{k+1} = y_k - \rho_k C^T \nabla f(C y_k),

which is equivalent to

    x_{k+1} = x_k - \rho_k C C^T \nabla f(x_k).

This amounts to using B = C C^T. In the case where f is a quadratic form, it is straightforward to observe that this can improve convergence.

Theorem 1. Let f be a positive definite quadratic form and B a positive definite matrix. The preconditioned gradient algorithm

    x_0 fixed,
    x_{k+1} = x_k - \rho_k B g_k,   \rho_k optimal,

has a linear convergence:

    \|x_{k+1} - x^*\| \le \gamma \|x_k - x^*\|,

where

    \gamma = \frac{\chi(BA) - 1}{\chi(BA) + 1} < 1,

A denoting the Hessian of f and \chi the condition number.

Remark. The lower the condition number of BA, the faster the algorithm. One cannot set B = A^{-1}, as it would imply having already solved the problem, but this suggests using a B that approximates A^{-1}. This is the idea behind quasi-Newton methods.

2 Quasi-Newton methods

2.1 Quasi-Newton relation

A quasi-Newton method reads

    d_k = -B_k g_k,
    x_{k+1} = x_k + \rho_k d_k,

or

    d_k = -H_k^{-1} g_k,
    x_{k+1} = x_k + \rho_k d_k,

where B_k (resp. H_k) is a matrix which aims to approximate the inverse of the Hessian (resp. the Hessian) of f at x_k. The question is how to achieve this. One can start with B_0 = I_n, but then how to update B_k at every iteration? The idea is the following: by a Taylor expansion of the gradient at x_k, the gradient and the Hessian satisfy

    g_{k+1} = g_k + \nabla^2 f(x_k)(x_{k+1} - x_k) + o(\|x_{k+1} - x_k\|).

If one assumes that the approximation is good enough, one has

    g_{k+1} - g_k \approx \nabla^2 f(x_k)(x_{k+1} - x_k),

which leads to the quasi-Newton relation.

Definition 1. Two matrices B_{k+1} and H_{k+1} verify the quasi-Newton relation if

    H_{k+1}(x_{k+1} - x_k) = \nabla f(x_{k+1}) - \nabla f(x_k)

or

    x_{k+1} - x_k = B_{k+1}(\nabla f(x_{k+1}) - \nabla f(x_k)).

The follow-up question is how to update B_k while making sure that it stays positive definite.
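Before turning to the update formulas, here is a minimal NumPy sketch (ours, not from the original notes) of the globalized Newton iteration of Section 1.1: the Newton direction combined with a backtracking (Armijo) line search. The function name damped_newton, the Armijo constants, and the Rosenbrock test problem are all illustrative choices:

```python
import numpy as np

def damped_newton(f, grad, hess, x0, tol=1e-8, max_iter=100):
    """Newton's method with a backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Newton direction: solve \nabla^2 f(x) d = -g rather than inverting.
        d = np.linalg.solve(hess(x), -g)
        if g @ d >= 0:   # Hessian not positive definite: fall back to -g
            d = -g
        rho = 1.0
        while f(x + rho * d) > f(x) + 1e-4 * rho * (g @ d):
            rho *= 0.5   # backtrack until sufficient decrease holds
        x = x + rho * d
    return x

# Illustrative test problem: the 2-D Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400 * (x[1] - 3 * x[0]**2), -400 * x[0]],
                           [-400 * x[0], 200.0]])

print(damped_newton(f, grad, hess, [-1.2, 1.0]))  # -> approximately [1, 1]
```

The line search is exactly the globalization device mentioned at the end of Section 1.1: far from x^*, the unit Newton step may increase f, so the step is shortened until a sufficient decrease holds.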
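The effect described in Theorem 1 can also be checked numerically. The sketch below (our construction; the matrix A, the regularized inverse used for B, and all names are assumptions) applies the preconditioned gradient iteration with optimal step size to an ill-conditioned quadratic, once with B = I_n and once with B close to A^{-1}:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Ill-conditioned quadratic f(x) = 1/2 x^T A x - b^T x, with chi(A) = 1e4.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.logspace(0, 4, n)) @ Q.T
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)

def preconditioned_gradient(B, n_iter=200):
    """x_{k+1} = x_k - rho_k B g_k with the optimal step rho_k."""
    x = np.zeros(n)
    for _ in range(n_iter):
        g = A @ x - b
        d = -B @ g
        x = x - (g @ d) / (d @ A @ d) * d   # rho_k = -(g^T d)/(d^T A d)
    return np.linalg.norm(x - x_star)

print("B = I_n    :", preconditioned_gradient(np.eye(n)))
print("B ~ A^{-1} :", preconditioned_gradient(np.linalg.inv(A + 0.1 * np.eye(n))))
```

With B = I_n the error barely decreases since \gamma \approx (10^4 - 1)/(10^4 + 1), whereas with B \approx A^{-1} the condition number \chi(BA) is close to 1 and the iteration converges in a handful of steps.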
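As a quick sanity check of the quasi-Newton relation of Definition 1, note that for a quadratic f the relation is exact: g_{k+1} - g_k = A(x_{k+1} - x_k). A minimal verification (hypothetical data, our names):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # positive definite Hessian
b = rng.standard_normal(n)
grad = lambda x: A @ x - b         # gradient of 1/2 x^T A x - b^T x

x_k  = rng.standard_normal(n)
x_k1 = x_k + rng.standard_normal(n)            # an arbitrary next iterate
y_k, s_k = grad(x_k1) - grad(x_k), x_k1 - x_k

print(np.allclose(y_k, A @ s_k))   # True: H_{k+1} s_k = y_k with H_{k+1} = A
```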
2.2 Update formula for the Hessian approximation

The update strategy, at iteration

    d_k = -B_k g_k,
    x_{k+1} = x_k + \rho_k d_k,

is to correct B_k with a symmetric matrix \Delta_k:

    B_{k+1} = B_k + \Delta_k,

such that the quasi-Newton relation holds:

    x_{k+1} - x_k = B_{k+1}(g_{k+1} - g_k),

with B_{k+1} positive definite, assuming B_k is positive definite. For simplicity we will write B_{k+1} > 0. We will now see different formulas to define \Delta_k, under the assumption that its rank is 1 or 2; we speak of rank 1 or rank 2 corrections.

2.3 Broyden formula

Broyden formula is a rank 1 correction. Let us write

    B_{k+1} = B_k + v v^T.

The vector v \in R^n should verify the quasi-Newton relation

    B_{k+1} y_k = s_k,

where y_k = g_{k+1} - g_k and s_k = x_{k+1} - x_k. It follows that

    B_k y_k + v v^T y_k = s_k,

which leads, by taking the dot product with y_k, to

    (v^T y_k)^2 = (s_k - B_k y_k)^T y_k.

Using the equality

    v v^T = \frac{(v v^T y_k)(v v^T y_k)^T}{(v^T y_k)^2},

one can write, after replacing v v^T y_k by s_k - B_k y_k and (v^T y_k)^2 by (s_k - B_k y_k)^T y_k, the correction formula

    B_{k+1} = B_k + \frac{(s_k - B_k y_k)(s_k - B_k y_k)^T}{(s_k - B_k y_k)^T y_k},

also known as Broyden formula.

Theorem 2. Let f be a positive definite quadratic form. Let us consider the method that, starting from x_0, iterates

    x_{k+1} = x_k + s_k,

where the vectors s_k are linearly independent. Then the sequence of matrices starting from B_0 and defined by

    B_{k+1} = B_k + \frac{(s_k - B_k y_k)(s_k - B_k y_k)^T}{(s_k - B_k y_k)^T y_k},

where y_k = \nabla f(x_{k+1}) - \nabla f(x_k), converges in at most n iterations towards A^{-1}, the inverse of the Hessian of f.

Proof. Since we consider here a quadratic function, the Hessian is constant and equal to A, so that

    y_i = \nabla f(x_{i+1}) - \nabla f(x_i) = A s_i,   \forall i.

We have seen that B_{k+1} is constructed such that

    B_{k+1} y_k = s_k.

Let us show that

    B_{k+1} y_i = s_i,   i = 0, \dots, k - 1.

We proceed by induction, assuming that it is true for B_k:

    B_k y_i = s_i,   i = 0, \dots, k - 1.

Let i \le k - 1. One has

    B_{k+1} y_i = B_k y_i + \frac{(s_k - B_k y_k)(s_k^T y_i - y_k^T B_k y_i)}{(s_k - B_k y_k)^T y_k}.   (1)

By hypothesis one has B_k y_i = s_i, which (B_k being symmetric) implies y_k^T B_k y_i = y_k^T s_i; but since A s_j = y_j for all j, it leads to

    y_k^T s_i = s_k^T A s_i = s_k^T y_i.

The numerator in (1) is therefore 0, which leads to B_{k+1} y_i = B_k y_i = s_i. One therefore has

    B_{k+1} y_i = s_i,   \forall i = 0, \dots, k.

After n iterations, one has

    B_n y_i = s_i,   \forall i = 0, \dots, n - 1,

but since y_i = A s_i, this last equation is equivalent to

    B_n A s_i = s_i,   \forall i = 0, \dots, n - 1.

As the s_i form a basis of R^n, this implies B_n A = I_n, i.e. B_n = A^{-1}.

The issue with Broyden's formula is that there is no guarantee that the matrices B_k are positive definite, even if the function f is quadratic and B_0 = I_n. It is nevertheless interesting to have B_n = A^{-1}.
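Theorem 2 is easy to verify numerically. The following sketch (ours, not from the notes; the quadratic test problem, the choice of canonical-basis steps, and the degeneracy guard are assumptions) applies n Broyden corrections and recovers the inverse Hessian:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # constant Hessian of the quadratic
b = rng.standard_normal(n)
grad = lambda x: A @ x - b

B = np.eye(n)                        # B_0 = I_n
x = rng.standard_normal(n)
for k in range(n):
    s = np.eye(n)[:, k]              # canonical basis: linearly independent steps
    y = grad(x + s) - grad(x)        # y_k = A s_k for a quadratic
    r = s - B @ y
    if abs(r @ y) > 1e-12:           # skip a degenerate (already satisfied) correction
        B = B + np.outer(r, r) / (r @ y)   # Broyden rank-1 update
    x = x + s

print(np.allclose(B, np.linalg.inv(A)))   # True: B_n = A^{-1} (Theorem 2)
```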
2.4 Davidon, Fletcher and Powell formula

The formula from Davidon, Fletcher and Powell is a rank 2 correction. It reads:

    B_{k+1} = B_k + \frac{s_k s_k^T}{s_k^T y_k} - \frac{B_k y_k y_k^T B_k}{y_k^T B_k y_k}.   (2)

The following theorem states that under certain conditions this formula guarantees that B_k stays positive definite.

Theorem 3. Let us consider the update

    d_k = -B_k g_k,
    x_{k+1} = x_k + \rho_k d_k,   \rho_k optimal,

where B_0 > 0 and x_0 are provided. Then the matrices B_k defined as in (2) are positive definite for all k > 0.

Proof. Let x \in R^n. One has:

    x^T B_{k+1} x = x^T B_k x + \frac{(s_k^T x)^2}{s_k^T y_k} - \frac{(y_k^T B_k x)^2}{y_k^T B_k y_k}
                  = \frac{y_k^T B_k y_k \, x^T B_k x - (y_k^T B_k x)^2}{y_k^T B_k y_k} + \frac{(s_k^T x)^2}{s_k^T y_k}.   (3)

If one defines the dot product \langle x, y \rangle as x^T B_k y, the equation above reads

    x^T B_{k+1} x = \frac{\langle y_k, y_k \rangle \langle x, x \rangle - \langle y_k, x \rangle^2}{\langle y_k, y_k \rangle} + \frac{(s_k^T x)^2}{s_k^T y_k}.

The first term is non-negative by the Cauchy-Schwarz inequality. Regarding the second term, as the step size is optimal one has

    g_{k+1}^T d_k = 0,

which implies

    s_k^T y_k = \rho_k (g_{k+1} - g_k)^T d_k = \rho_k g_k^T B_k g_k > 0,

and so x^T B_{k+1} x \ge 0. Both terms being non-negative, the sum is zero only if both terms are 0. For x \ne 0, the first term vanishes only if x = \lambda y_k with \lambda \ne 0, and in that case the second term cannot be zero since s_k^T x = \lambda s_k^T y_k \ne 0. This implies that B_{k+1} > 0.

Remark. In the proof we used the fact that an optimal step size is used. The result still holds if one uses an approximate line search strategy, for example Wolfe and Powell's rule. In this case the point x_{k+1} is such that

    \phi'(\rho_k) = \nabla f(x_{k+1})^T d_k \ge m_2 \nabla f(x_k)^T d_k,   0 < m_2 < 1,

which, since \nabla f(x_k)^T d_k < 0 and 0 < m_2 < 1, guarantees

    g_{k+1}^T \frac{x_{k+1} - x_k}{\rho_k} > g_k^T \frac{x_{k+1} - x_k}{\rho_k},

and therefore (g_{k+1} - g_k)^T (x_{k+1} - x_k) > 0.

2.5 Davidon-Fletcher-Powell algorithm

One can now use the above formula in Algorithm 1.

Algorithm 1 Davidon-Fletcher-Powell algorithm
Require: \varepsilon > 0 (tolerance), K (maximum number of iterations)
 1: x_0 \in R^n, B_0 > 0 (for example I_n)
 2: for k = 0 to K do
 3:   if \|g_k\| < \varepsilon then
 4:     break
 5:   end if
 6:   d_k = -B_k \nabla f(x_k)
 7:   Compute optimal step size \rho_k
 8:   x_{k+1} = x_k + \rho_k d_k
 9:   s_k = \rho_k d_k
10:   y_k = g_{k+1} - g_k
11:   B_{k+1} = B_k + \frac{s_k s_k^T}{s_k^T y_k} - \frac{B_k y_k y_k^T B_k}{y_k^T B_k y_k}
12: end for
13: return x_{k+1}
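To complement Algorithm 1, here is a minimal NumPy transcription (ours, not from the notes). The optimal step of line 7 is replaced by SciPy's Wolfe line search, which by the remark above preserves s_k^T y_k > 0; the fallback step and the explicit curvature guard are our additions:

```python
import numpy as np
from scipy.optimize import line_search

def dfp(f, grad, x0, eps=1e-6, K=500):
    """DFP quasi-Newton method (Algorithm 1, with a Wolfe line search)."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                       # B_0 = I_n
    g = grad(x)
    for _ in range(K):
        if np.linalg.norm(g) < eps:
            break
        d = -B @ g                           # d_k = -B_k g_k
        rho = line_search(f, grad, x, d, gfk=g)[0]
        if rho is None:                      # line search failed: small safe step
            rho = 1e-3
        s = rho * d                          # s_k = x_{k+1} - x_k
        x = x + s
        g_new = grad(x)
        y = g_new - g                        # y_k = g_{k+1} - g_k
        if s @ y > 0:                        # guaranteed by Wolfe's rule (see remark)
            By = B @ y
            B += np.outer(s, s) / (s @ y) - np.outer(By, By) / (y @ By)
        g = g_new
    return x

# Illustrative usage on the Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(dfp(f, grad, [-1.2, 1.0]))   # -> approximately [1, 1]
```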
