Lecture Notes on Numerical Methods for Eigenvalues: Part III

Jiangguo (James) Liu
Department of Mathematics, Colorado State University, Fort Collins, CO 80523-1874, USA, [email protected]

May 4, 2020

This part discusses some advanced numerical methods for eigenvalues. We shall first discuss numerical eigensolvers for general matrices, then study numerical eigensolvers for symmetric (Hermitian) matrices.

1 Upper Hessenberg Form

Recall that the general Schur canonical form is an upper triangular matrix, whereas a real Schur canonical form allows 2 × 2 blocks on its diagonal. To unify them, we consider the upper Hessenberg form.

Definition (Upper Hessenberg form). A matrix H = [h_{ij}]_{n×n} is said to be in upper Hessenberg form provided that h_{i,j} = 0 whenever i > j + 1. That is, besides the upper triangle, only its first sub-diagonal entries are allowed to be nonzero.

Example. The matrix T_n obtained from the finite difference discretization of the 1-dim Poisson boundary value problem with a homogeneous Dirichlet condition is actually in upper Hessenberg form:

          [  2  -1   0   0  ...   0   0 ]
          [ -1   2  -1   0  ...   0   0 ]
    T_n = [  0  -1   2  -1  ...   0   0 ]
          [  .........................  ]
          [  0   0   0   0  ...   2  -1 ]
          [  0   0   0   0  ...  -1   2 ]

Actually it is also a symmetric tridiagonal matrix.

Definition (Unreduced upper Hessenberg form). An upper Hessenberg matrix is called unreduced provided that all the entries on its first sub-diagonal are nonzero.

Note that if an upper Hessenberg matrix H is not unreduced simply because h_{j+1,j} = 0, then we can decompose it into two unreduced upper Hessenberg matrices by considering H_1 = H(1:j, 1:j) and H_2 = H((j+1):n, (j+1):n). Each of H_1, H_2 is an unreduced upper Hessenberg matrix, and the eigenvalues of H are exactly the union of the eigenvalues of H_1 and H_2.

Our next job is to reduce a general matrix A to an upper Hessenberg matrix H through an orthogonal or unitary similarity transformation:

    Q A Q^T = H.                                            (1)

This can be accomplished by Householder reflections and is left as an exercise.
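As a quick numerical check of the reduction (1), here is a minimal sketch, assuming NumPy and SciPy are available (scipy.linalg.hessenberg returns H and Q with A = Q H Q^T for real A):

```python
import numpy as np
from scipy.linalg import hessenberg

# Reduce a general 5x5 matrix to upper Hessenberg form H = Q^T A Q.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H, Q = hessenberg(A, calc_q=True)

# Entries below the first sub-diagonal vanish (up to rounding) ...
assert np.allclose(np.tril(H, -2), 0.0)
# ... and the orthogonal similarity preserves the eigenvalues.
assert np.allclose(np.sort_complex(np.linalg.eigvals(H)),
                   np.sort_complex(np.linalg.eigvals(A)))
```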
The good news (or bad news) is that this has already been implemented:

  • as hess(A) in Matlab;
  • as hessenberg in the Python library scipy.linalg.

This implies that finding the eigenvalues of a general matrix reduces to finding the eigenvalues of an upper Hessenberg matrix. It can also be proved that when A is a real symmetric matrix, its upper Hessenberg canonical form becomes a symmetric tridiagonal matrix.

2 Francis QR-Iterations

The eigenvalues of an upper Hessenberg matrix can be obtained via Francis QR-iterations. But first we need to check certain properties.

Lemma 1. If A is upper Hessenberg and A = QR is the QR-factorization obtained via Householder reflections, then Q is also upper Hessenberg.

Remark. The lemma indicates that if we carry out a QR-factorization of a matrix in upper Hessenberg form, then we obtain an orthogonal matrix that is also in upper Hessenberg form.

Proof. Exercise.

Lemma 2. If R is an upper triangular matrix and Q is an orthogonal matrix in upper Hessenberg form, then B = RQ is also in upper Hessenberg form.

Proof. Exercise.

Francis QR-iteration algorithm. This is an iterative algorithm for eigenvalues based on QR-factorization. Shown below is a pseudo-code.

    Given a square matrix A;
    A1 = hess(A);             % Reduction to an upper Hessenberg form
    for k=1 to kmax do        % kmax = maximal number of iterations
      [Qk,Rk] = qr(Ak);       % QR-factorization
      Ak1 = Rk * Qk;
      Ak = Ak1;               % Refreshing
    end

Here are some remarks regarding the above iterative algorithm.

(i) Note that

        A_{k+1} = R_k Q_k = Q_k^T (Q_k R_k) Q_k = Q_k^T A_k Q_k.

    So A_{k+1} is orthogonally or unitarily similar to A_k, and hence all the way back to A_1 and finally A. So the eigenvalues are maintained in this process.

(ii) By Lemma 1, Q_k is upper Hessenberg since A_k is so;

(iii) By Lemma 2, A_{k+1} is upper Hessenberg since Q_k is so;

(iv) If conditions are favorable, then A_k converges to an upper triangular matrix whose diagonal entries are the eigenvalues of A.

Operations count for Francis QR-iteration.
Three aspects are to be considered:

(i) the reduction of a general order-n square matrix to upper Hessenberg form;
(ii) the operations needed for the QR-factorization (using Householder reflections);
(iii) the operations needed for the RQ multiplication.

QR-iteration with shift. The shift technique can be combined with the QR-iteration algorithm. Shown below is a pseudo-code.

    Given A;
    A1 = hess(A);
    do while until convergence
      Choose a shift sigmak near an eig.val. of Ak;
      [Qk,Rk] = qr(Ak - sigmak*I);
      Ak = Rk*Qk + sigmak*I;
      k = k+1;
    enddo

Lemma 3. The QR-iteration algorithm with shifts maintains the eigenvalues of the given matrix.

Proof. It can be verified that

    A_{k+1} = R_k Q_k + σ_k I
            = Q_k^T (Q_k R_k) Q_k + σ_k Q_k^T Q_k
            = Q_k^T (Q_k R_k + σ_k I) Q_k
            = Q_k^T A_k Q_k.

This is actually an orthogonal similarity transformation.

Shown below is a naive implementation of the Francis QR-iteration.

    function Am = FrancisQR(A,m)
    % Initial reduction to upper Hessenberg form
    Am = hess(A);
    for k=1:m
      [Q,R] = qr(Am);
      Am = R*Q;
    end
    % The diagonal entries of Am should be (approximations to) the eigenvalues of A
    return;

A sample Matlab session on the 5 × 5 Hilbert matrix:

    >> format short;
    >> A = hilb(5)
    A =
        1.0000    0.5000    0.3333    0.2500    0.2000
        0.5000    0.3333    0.2500    0.2000    0.1667
        0.3333    0.2500    0.2000    0.1667    0.1429
        0.2500    0.2000    0.1667    0.1429    0.1250
        0.2000    0.1667    0.1429    0.1250    0.1111
    >> e = eig(A)
    e =
        0.0000
        0.0003
        0.0114
        0.2085
        1.5671
    >> Am = FrancisQR(A,5)
    Am =
        1.5669    0.0124    0.0000    0.0000   -0.0000
        0.0124    0.2086    0.0000   -0.0000    0.0000
             0    0.0000    0.0114   -0.0000    0.0000
             0         0   -0.0000    0.0003    0.0000
             0         0         0    0.0000    0.0000
    >> Am = FrancisQR(A,10)
    Am =
        1.5671    0.0000    0.0000    0.0000    0.0000
        0.0000    0.2085    0.0000   -0.0000   -0.0000
             0    0.0000    0.0114   -0.0000   -0.0000
             0         0   -0.0000    0.0003    0.0000
             0         0         0   -0.0000    0.0000
    >> f = diag(Am)
    f =
        1.5671
        0.2085
        0.0114
        0.0003
        0.0000
    >> f = flipud(f);
    >> format long;
    >> [e, f, e-f]
    ans =
        0.000003287928772   0.000003287928772  -0.000000000000000
        0.000305898040151   0.000305898040151   0.000000000000000
        0.011407491623420   0.011407491623420   0.000000000000000
        0.208534218611013   0.208534218611210
                                               -0.000000000000196
        1.567050691098231   1.567050691098035   0.000000000000197

Unfortunately, the Francis QR-iteration algorithm may fail. One may examine the following matrix:

        [ 0  0  1 ]
    A = [ 1  0  0 ]
        [ 0  1  0 ]

The next question is whether the shifting technique would make it work. This is left as an exercise.

3 Methods for Symmetric Eigenvalue Problems

As discussed in Textbook Section 5.3, there is a variety of numerical eigensolvers for symmetric matrices, that is, more choices compared with those for general matrices. Note also that

  • a general matrix can be reduced via orthogonal similarity transformations to an upper Hessenberg matrix;
  • in the same spirit, a symmetric matrix can be simplified to a symmetric tridiagonal matrix, e.g., by hess(A).

Therefore, in the discussion below, the focus will be on how to find the eigenvalues of a symmetric tridiagonal matrix.

Note that for the reduction to a symmetric tridiagonal matrix, the initial cost is (4/3)n^3 flops if only eigenvalues are needed, and (8/3)n^3 flops if eigenvectors are also desired.

As listed in the textbook, there are five eigensolvers to be considered.

(i) Tridiagonal QR-iteration.
    - Fastest method for finding all eigenvalues and optionally eigenvectors;
    - Needs only O(n^2) operations;
    - Mainly for small matrices (n ≤ 25);
    - Used by Matlab;
    - Implemented in LAPACK as ssyev (dense), sstev (sparse).

(ii) Rayleigh quotient iteration. Skipped.

(iii) Divide-and-conquer.
    - Fastest for finding all eigen-pairs of symmetric tridiagonal matrices of order ≥ 25;
    - Implemented in LAPACK as sstevd;
    - Needs O(n^3) flops in bad cases but only O(n^2) flops in good cases;
    - About O(n^2.5) in certain experiments.

(iv) Bisection and inverse iteration.
    - Mainly used to find eigenvalues in a specified interval.

(v) Jacobi's method.
    - The oldest (dating back to 1846);
    - Slow;
    - O(n^3) operations;
    - But still interesting.

We shall focus on only (i) and (iii).
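Returning to the shifted QR-iteration of Section 2, its pseudo-code can be sketched in NumPy as follows. This is a minimal illustration assuming SciPy is available; qr_shifted and the simple deflation test are illustrative choices, not the production strategy used by Matlab or LAPACK:

```python
import numpy as np
from scipy.linalg import hessenberg

def qr_shifted(A, kmax=100, tol=1e-12):
    """QR-iteration with the simple shift sigma_k = (A_k)_{nn}."""
    Ak = hessenberg(A)                      # A1 = hess(A)
    n = Ak.shape[0]
    I = np.eye(n)
    for _ in range(kmax):
        sigma = Ak[-1, -1]                  # shift near an eigenvalue of Ak
        Q, R = np.linalg.qr(Ak - sigma * I)
        Ak = R @ Q + sigma * I              # still similar to Ak by Lemma 3
        if abs(Ak[-1, -2]) < tol:           # last row has decoupled
            break
    return Ak

# On the symmetric matrix hilb(4), the (n,n) entry converges rapidly
# to an eigenvalue of A.
n = 4
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
Ak = qr_shifted(A)
```

Once the (n, n) entry has converged, one deflates to the leading (n-1) × (n-1) block and repeats; the sketch above stops after the first deflation.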
3.1 Symmetric Tridiagonal QR-Iteration

The Francis QR-iteration algorithm can be applied to a symmetric tridiagonal matrix T_0 to produce a sequence of symmetric tridiagonal matrices (T_k)_{k ∈ N} that converges to a diagonal matrix, if conditions are favorable. For this adapted algorithm:

  • O(n) flops are needed for each iteration;
  • O(n^2) flops are needed for finding all n eigenvalues.

For more details, see Textbook p. 213.

We also need to consider shifting techniques for symmetric tridiagonal matrices. Let T be such a matrix in the sequence generated by the QR-iteration algorithm. Specifically,

        [ a_1  b_1                            ]
        [ b_1  a_2  b_2                       ]
    T = [      ...  ...  ...                  ]              (2)
        [           b_{n-2}  a_{n-1}  b_{n-1} ]
        [                    b_{n-1}  a_n     ]

There are two choices for shifting.

(i) A simple choice: shift = a_n.

(ii) Wilkinson's shift: choose the eigenvalue of the last 2 × 2 block

        [ a_{n-1}  b_{n-1} ]
        [ b_{n-1}  a_n     ]

    that is closer to a_n. This is an easy task: the two eigenvalues are surely real because of the symmetry, so finding them boils down to solving a quadratic equation.

Wilkinson's Theorem.
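For choice (ii), the quadratic-equation computation can be sketched as below; wilkinson_shift is an illustrative name, and the cancellation-free form of the quadratic formula is an assumption about good practice, not a prescription from the notes:

```python
import numpy as np

def wilkinson_shift(a_prev, a_last, b):
    """Eigenvalue of [[a_prev, b], [b, a_last]] that is closer to a_last.

    Both roots of the characteristic quadratic are real by symmetry; the
    form below avoids subtracting nearly equal quantities.
    """
    d = (a_prev - a_last) / 2.0
    if d == 0.0:
        return a_last - abs(b)             # roots equidistant: pick either
    return a_last - np.sign(d) * b * b / (abs(d) + np.hypot(d, b))

# Example: trailing block [[2, 1], [1, 3]] has eigenvalues (5 ± sqrt(5))/2;
# the root closer to a_n = 3 is (5 + sqrt(5))/2 ≈ 3.618.
mu = wilkinson_shift(2.0, 3.0, 1.0)
```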
