Numerical Linear Algebra

Rayleigh Quotient. Power Iteration. Inverse Iteration. Rayleigh Quotient Iteration. Arnoldi Iteration. Lanczos Iteration.

Rayleigh Quotient

For a given complex Hermitian matrix $M$ and nonzero vector $x$, the Rayleigh quotient $r(x)$ is defined as
\[
r(x) = \frac{x^* M x}{x^* x}.
\]
It addresses the question: given an $x$, what scalar $\alpha$ "acts most like the eigenvalue" for $x$, in the sense of minimizing $\|Mx - \alpha x\|$?

Calculating the gradient:
\[
\nabla r(x) = \frac{2}{x^* x}\bigl(Mx - r(x)\,x\bigr).
\]
This shows that the eigenvectors of $M$ are the stationary points of the function $r(x)$, and the eigenvalues of $M$ are the values of $r(x)$ at these stationary points. The Rayleigh quotient is a quadratically accurate estimate of the eigenvalue!
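As a minimal sketch of the definition above (the matrix `M` and vector `x` are hypothetical example data, not from the slides):

```python
import numpy as np

def rayleigh_quotient(M, x):
    """r(x) = (x* M x) / (x* x) for Hermitian M and nonzero x."""
    return (x.conj() @ M @ x) / (x.conj() @ x)

# Small Hermitian (here real symmetric) example.
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, 1.0])

r = rayleigh_quotient(M, x)

# r(x) is the scalar alpha minimizing ||Mx - alpha*x||:
# perturbing alpha away from r can only increase the residual.
res = lambda a: np.linalg.norm(M @ x - a * x)
assert all(res(r) <= res(r + d) for d in (-0.1, -0.01, 0.01, 0.1))
```

For this `M` and `x` the quotient is $(x^\top M x)/(x^\top x) = 7/2$, which is exactly the least-squares fit of $\alpha$ to the overdetermined system $\alpha x \approx Mx$.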
Power Iteration

Suppose $v^{(0)}$ is a normalized vector. The following is expected to produce a sequence $v^{(k)}$ that converges to an eigenvector corresponding to the largest eigenvalue of $A$.

Algorithm 1 Power Iteration
1: $v^{(0)}$ = some vector with $\|v^{(0)}\| = 1$
2: for $k = 1, 2, \ldots$ do
3:   $w = A v^{(k-1)}$
4:   $v^{(k)} = w / \|w\|$
5:   $\lambda^{(k)} = (v^{(k)})^T A v^{(k)}$
6: end for
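Algorithm 1 can be sketched in NumPy as follows (a minimal version assuming a symmetric `A`; the example matrix and step count are illustrative choices):

```python
import numpy as np

def power_iteration(A, steps=100):
    """Power iteration: repeatedly apply A and normalize.
    Returns the Rayleigh-quotient eigenvalue estimate and the vector."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)                 # ||v^(0)|| = 1
    for _ in range(steps):
        w = A @ v                          # w = A v^(k-1)
        v = w / np.linalg.norm(w)          # v^(k) = w / ||w||
    lam = v @ A @ v                        # lambda^(k) = v^T A v
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_iteration(A)
# lam approximates the largest-magnitude eigenvalue of A.
```

Each step only needs a matrix-vector product, which is why the method suits large sparse matrices; the price is convergence governed by the ratio $|\lambda_2/\lambda_1|$, as the analysis below shows.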
We can analyze the power iteration by expanding
\[
v^{(0)} = a_1 q_1 + a_2 q_2 + \cdots + a_m q_m,
\]
where the $q_i$ are orthonormal eigenvectors with corresponding eigenvalues satisfying $|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_m| \ge 0$.

Then one can write:
\[
v^{(k)} = c_k A^k v^{(0)}
        = c_k \lambda_1^k \bigl(a_1 q_1 + a_2 (\lambda_2/\lambda_1)^k q_2 + \cdots + a_m (\lambda_m/\lambda_1)^k q_m\bigr).
\]

In the limit $k \to \infty$,
\[
\|v^{(k)} - (\pm q_1)\| = O\!\left(\left|\frac{\lambda_2}{\lambda_1}\right|^{k}\right), \qquad
|\lambda^{(k)} - \lambda_1| = O\!\left(\left|\frac{\lambda_2}{\lambda_1}\right|^{2k}\right).
\]

Inverse Iteration

For any $\mu$ that is not an eigenvalue of $A$, the matrix $(A - \mu I)^{-1}$ has the same eigenvectors as $A$, and the corresponding eigenvalues are $(\lambda_j - \mu)^{-1}$. Then, by a judicious choice of $\mu$ close to $\lambda_J$, $(\lambda_J - \mu)^{-1}$ can be made much larger than $(\lambda_j - \mu)^{-1}$ for $j \ne J$. Applying the power iteration to $(A - \mu I)^{-1}$ should then converge to $q_J$.

Algorithm 2 Inverse Iteration
1: $v^{(0)}$ = some vector with $\|v^{(0)}\| = 1$
2: for $k = 1, 2, \ldots$ do
3:   Solve $(A - \mu I) w = v^{(k-1)}$ for $w$
4:   $v^{(k)} = w / \|w\|$
5:   $\lambda^{(k)} = (v^{(k)})^T A v^{(k)}$
6: end for
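A minimal sketch of Algorithm 2, assuming a symmetric `A` and a fixed shift `mu` (the example matrix and shift are illustrative, not from the slides). Note that applying $(A-\mu I)^{-1}$ is done by solving a linear system, never by forming the inverse:

```python
import numpy as np

def inverse_iteration(A, mu, steps=50):
    """Inverse iteration with fixed shift mu (mu must not be an
    eigenvalue of A); converges to the eigenpair closest to mu."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)            # ||v^(0)|| = 1
    shifted = A - mu * np.eye(n)
    for _ in range(steps):
        w = np.linalg.solve(shifted, v)    # w = (A - mu I)^{-1} v
        v = w / np.linalg.norm(w)          # v^(k) = w / ||w||
    lam = v @ A @ v                        # Rayleigh quotient estimate
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
# Shift near the smaller eigenvalue, so the iteration picks it out.
lam, v = inverse_iteration(A, mu=1.0)
```

In practice one would factor `shifted` once (e.g. an LU decomposition) and reuse it every step, since the shift does not change.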
Rayleigh Quotient Iteration

The Rayleigh quotient is a method for obtaining an eigenvalue estimate from an eigenvector estimate, while inverse iteration obtains an eigenvector estimate from an eigenvalue estimate. Rayleigh quotient iteration combines the two:

Algorithm 3 Rayleigh Quotient Iteration
1: $v^{(0)}$ = some vector with $\|v^{(0)}\| = 1$
2: $\lambda^{(0)} = (v^{(0)})^T A v^{(0)}$
3: for $k = 1, 2, \ldots$ do
4:   Solve $(A - \lambda^{(k-1)} I) w = v^{(k-1)}$ for $w$
5:   $v^{(k)} = w / \|w\|$
6:   $\lambda^{(k)} = (v^{(k)})^T A v^{(k)}$
7: end for
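A minimal sketch of Rayleigh quotient iteration, again assuming a symmetric `A` (example data is illustrative). Because the shift is updated every step, the shifted matrix becomes nearly singular as the iterate converges; the guard below simply stops if the solve fails outright:

```python
import numpy as np

def rayleigh_quotient_iteration(A, v0, steps=10):
    """Inverse iteration whose shift is the current Rayleigh quotient.
    For symmetric A the convergence is cubic near an eigenpair."""
    n = A.shape[0]
    v = v0 / np.linalg.norm(v0)            # ||v^(0)|| = 1
    lam = v @ A @ v                        # lambda^(0)
    for _ in range(steps):
        try:
            w = np.linalg.solve(A - lam * np.eye(n), v)
        except np.linalg.LinAlgError:
            break                          # lam is (numerically) an eigenvalue
        v = w / np.linalg.norm(w)          # v^(k) = w / ||w||
        lam = v @ A @ v                    # lambda^(k)
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = rayleigh_quotient_iteration(A, np.array([1.0, 0.0]))
```

Unlike plain inverse iteration, each step solves with a different matrix, so no factorization can be reused; the trade-off is far faster (cubic rather than linear) local convergence.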
Projection into Krylov Subspace

Given a matrix $A$ and a vector $b$, the associated sequence of vectors
\[
b,\; Ab,\; A^2 b,\; A^3 b,\; \ldots
\]
is called the Krylov sequence, and its span is a Krylov subspace. Iterative methods are based on projecting an $m$-dimensional problem onto a lower-dimensional Krylov subspace.
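The Krylov sequence can be generated with repeated matrix-vector products; a raw sketch (in practice the basis is orthonormalized as it is built, which is what the Arnoldi and Lanczos iterations listed in the outline do):

```python
import numpy as np

def krylov_matrix(A, b, k):
    """Matrix whose columns are b, Ab, A^2 b, ..., A^(k-1) b."""
    cols = [b]
    for _ in range(k - 1):
        cols.append(A @ cols[-1])          # next Krylov vector
    return np.column_stack(cols)

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 0.0])
K = krylov_matrix(A, b, 2)

# An orthonormal basis Q of the Krylov subspace, here obtained by QR
# after the fact; Arnoldi interleaves the orthogonalization instead.
Q, _ = np.linalg.qr(K)
```

The raw columns $A^k b$ align more and more with the dominant eigenvector (exactly the power-iteration effect), so they become nearly linearly dependent; this is why practical methods orthonormalize step by step rather than forming `K` directly.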
