Structured Eigenvalue Problems – Structure-Preserving Algorithms, Structured Error Analysis

Draft of a chapter for the second edition of the Handbook of Linear Algebra

Heike Faßbender, TU Braunschweig

1 Introduction

Many eigenvalue problems arising in practice are structured due to (physical) properties induced by the original problem. Structure can also be introduced by discretization and linearization techniques. Preserving this structure can help preserve physically relevant symmetries in the eigenvalues of the matrix and may improve the accuracy and efficiency of an eigenvalue computation. This is well known for symmetric matrices $A = A^T \in \mathbb{R}^{n \times n}$: every eigenvalue is real, and every right eigenvector is also a left eigenvector belonging to the same eigenvalue. Many numerical methods, such as QR, Arnoldi, and Jacobi-Davidson, automatically preserve symmetric matrices (and hence compute only real eigenvalues), so unavoidable round-off errors cannot result in the computation of complex-valued eigenvalues. Algorithms tailored to symmetric matrices (e.g., divide-and-conquer or Lanczos methods) take much less computational effort and sometimes achieve high relative accuracy in the eigenvalues and – having the right representation of $A$ at hand – even in the eigenvectors.

Another example is matrices for which the complex eigenvalues with nonzero real part theoretically appear in a pairing $\lambda, \bar{\lambda}, \lambda^{-1}, \bar{\lambda}^{-1}$. Using a general eigenvalue algorithm such as QR or Arnoldi results here in computed eigenvalues which in general no longer display this eigenvalue pairing. This is due to the fact that each eigenvalue is subject to unstructured rounding errors, so that each eigenvalue is altered in a slightly different way (see left picture in Figure 1).

[Figure 1: Effect of unstructured and structured rounding errors; round = original eigenvalue, pentagon = unstructured perturbation, star = structured perturbation.]
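This eigenvalue pairing can be illustrated numerically. The following sketch (not part of the chapter; it assumes NumPy and SciPy are available) builds a symplectic matrix as the matrix exponential of a Hamiltonian matrix and checks that the reciprocal of every computed eigenvalue is again an eigenvalue:

```python
import numpy as np
from scipy.linalg import expm

n = 2
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# A Hamiltonian matrix has the block form [[A, G], [Q, -A^T]] with G, Q symmetric.
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
G = rng.standard_normal((n, n)); G = G + G.T
Q = rng.standard_normal((n, n)); Q = Q + Q.T
H = np.block([[A, G], [Q, -A.T]])

# The matrix exponential of a Hamiltonian matrix is symplectic: M J M^T = J.
M = expm(H)
assert np.allclose(M @ J @ M.T, J)

# Eigenvalues of a real symplectic matrix pair up as
# lambda, conj(lambda), 1/lambda, 1/conj(lambda);
# in particular 1/lambda is again an eigenvalue.
ev = np.linalg.eigvals(M)
for lam in ev:
    assert np.min(np.abs(ev - 1.0 / lam)) < 1e-6
```

Running a general (unstructured) eigensolver on a perturbed `M` would break this pairing, which is exactly the effect depicted in the left picture of Figure 1.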
When using a structure-preserving algorithm this effect can be avoided, as the eigenvalue pairing is enforced so that all four eigenvalues are subject to the same rounding errors (see right picture in Figure 1).

This chapter focuses on two related classes of structured eigenproblems (symmetric/Hermitian and orthogonal/unitary; Hamiltonian and symplectic), structure-preserving algorithms, and structured error analysis (such as structured condition numbers and backward errors) for these classes. It is based on [FK06].

2 Types of Matrices

A well-known example of structured eigenvalue problems are symmetric eigenvalue problems. They are probably the best understood and most thoroughly examined eigenvalue problems; closely connected to them are orthogonal/unitary problems. A different set of closely connected eigenproblems are Hamiltonian and symplectic eigenproblems, which are also fairly well understood. The relation between Hamiltonian and symplectic eigenproblems is best described by comparing it with the relation between symmetric and orthogonal eigenproblems, or between Hermitian and unitary eigenproblems. In all these cases, the underlying algebraic structures are an algebra and a group acting on this algebra. For the algebra (Hamiltonian, symmetric/Hermitian matrices), the structure is explicit, i.e., it can be read off the matrix by inspection. In contrast, the structure of a matrix contained in a group (symplectic, orthogonal/unitary matrices) is given only implicitly; it is very difficult to make this structure explicit. Common to all considered structured matrix classes is that the matrices in these classes can be represented by just a few parameters. This can be used to develop structure-preserving algorithms which are usually faster and more accurate than standard solvers.

Definitions:

• Let $J = \begin{bmatrix} 0 & I \\ -I & 0 \end{bmatrix} \in \mathbb{R}^{2n \times 2n}$, where $I \in \mathbb{R}^{n \times n}$ is the identity.

• A matrix $A \in \mathbb{R}^{n \times n}$ is called symmetric if and only if $A = A^T$.

• A matrix $G \in \mathbb{C}^{n \times n}$ is called Hermitian if and only if $G = G^*$.

• A matrix $Q \in \mathbb{R}^{n \times n}$ is called orthogonal if and only if $QQ^T = Q^TQ = I$.

• A matrix $U \in \mathbb{C}^{n \times n}$ is called unitary if and only if $UU^* = U^*U = I$.

• A matrix $H \in \mathbb{R}^{2n \times 2n}$ is called Hamiltonian if and only if $HJ = (HJ)^T$.

• A matrix $M \in \mathbb{R}^{2n \times 2n}$ is called symplectic if and only if $MJM^T = J$, or equivalently, $M^TJM = J$.

• A matrix $T \in \mathbb{R}^{n \times n}$ is called tridiagonal if $t_{ij} = 0$ for $i > j+1$ and $i < j-1$, $j = 1, \dots, n$.

• A matrix $T$ is called unreduced tridiagonal if $T$ is a tridiagonal matrix with $t_{i,i-1} \neq 0$, $i = 2, \dots, n$, and $t_{i,i+1} \neq 0$, $i = 1, \dots, n-1$.

• A matrix $A \in \mathbb{R}^{n \times n}$ is called upper Hessenberg if $a_{ij} = 0$ for $i > j+1$, $i, j = 1, \dots, n$.

• A matrix $A$ is called unreduced upper Hessenberg if $A$ is an upper Hessenberg matrix with $a_{i,i-1} \neq 0$, $i = 2, \dots, n$.

• A matrix $B \in \mathbb{R}^{n \times n}$ is called upper triangular if $b_{ij} = 0$ for $i > j$, $i, j = 1, \dots, n$.

• A matrix $B$ is called strictly upper triangular if $B$ is an upper triangular matrix with $b_{jj} = 0$, $j = 1, \dots, n$.

• A matrix $A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \in \mathbb{R}^{2n \times 2n}$, $A_{ij} \in \mathbb{R}^{n \times n}$, is called J-Hessenberg if $A_{11}, A_{21}, A_{22}$ are upper triangular matrices and $A_{12}$ is an upper Hessenberg matrix.

• A matrix $A$ is called unreduced J-Hessenberg if $A$ is a J-Hessenberg matrix, $A_{21}^{-1}$ exists, and $A_{12}$ is an unreduced upper Hessenberg matrix.

• A Hamiltonian matrix $H = \begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix} \in \mathbb{R}^{2n \times 2n}$, $H_{ij} \in \mathbb{R}^{n \times n}$, is called Hamiltonian J-Hessenberg if $H_{11} = -H_{22}$ and $H_{21}$ are diagonal matrices and $H_{12}$ is a symmetric tridiagonal matrix.

• A Hamiltonian matrix $H = \begin{bmatrix} T & R \\ 0 & -T^T \end{bmatrix}$ is called a real Hamiltonian Schur matrix if $R$ is an $n \times n$ symmetric matrix and $T$ is an $n \times n$ quasi upper triangular matrix.
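The algebra/group distinction above can be made concrete in a few lines of NumPy (an illustrative sketch, not from the chapter): Hamiltonian structure is an explicit block pattern, while symplecticity is an implicit constraint that must be tested against $J$:

```python
import numpy as np

n = 3
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# Explicit structure (algebra): H = [[A, G], [Q, -A^T]] with G, Q symmetric
# is Hamiltonian, and this block form can be read off the matrix directly.
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
G = rng.standard_normal((n, n)); G = (G + G.T) / 2
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2
H = np.block([[A, G], [Q, -A.T]])
assert np.allclose(H @ J, (H @ J).T)      # HJ = (HJ)^T

# Implicit structure (group): M J M^T = J couples all blocks of M at once
# and cannot be seen entrywise.  J itself is symplectic (and orthogonal):
assert np.allclose(J @ J @ J.T, J)
assert np.allclose(J @ J.T, np.eye(2 * n))
```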
• A matrix $R = \begin{bmatrix} R_{11} & R_{12} \\ R_{21} & R_{22} \end{bmatrix} \in \mathbb{R}^{2n \times 2n}$, $R_{ij} \in \mathbb{R}^{n \times n}$, is called J-triangular if the submatrices $R_{ij}$ are all upper triangular and $R_{21}$ is strictly upper triangular.

• A symplectic matrix $B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} \in \mathbb{R}^{2n \times 2n}$, $B_{ij} \in \mathbb{R}^{n \times n}$, is called symplectic butterfly if $B_{11}$ and $B_{21}$ are diagonal matrices and $B_{12}$ and $B_{22}$ are tridiagonal matrices.

• The QR factorization of $A \in \mathbb{K}^{n \times n}$ is given by $A = QR$, where $Q \in \mathbb{R}^{n \times n}$ is orthogonal and $R \in \mathbb{R}^{n \times n}$ is upper triangular if $\mathbb{K} = \mathbb{R}$. If $\mathbb{K} = \mathbb{C}$, then $Q \in \mathbb{C}^{n \times n}$ is unitary and $R \in \mathbb{C}^{n \times n}$ is upper triangular.

• The SR factorization of a matrix $A \in \mathbb{R}^{2n \times 2n}$ is given by $A = SR$, where $S \in \mathbb{R}^{2n \times 2n}$ is symplectic and $R \in \mathbb{R}^{2n \times 2n}$ is J-triangular.

• A trivial matrix is both symplectic and J-triangular and has the form $\begin{bmatrix} C^{-1} & F \\ 0 & C \end{bmatrix}$, where $C, F \in \mathbb{R}^{n \times n}$ are diagonal matrices.

• A Cayley transformation of a square matrix $A$ is given by $C = (I - A)^{-1}(I + A)$. The inverse transformation is $A = (C - I)(C + I)^{-1}$.

• The matrix
$$G_k = G(k, c, s) = \begin{bmatrix}
I_{k-1} & & & & & \\
 & c & & & s & \\
 & & I_{n-k} & & & \\
 & & & I_{k-1} & & \\
 & -s & & & c & \\
 & & & & & I_{n-k}
\end{bmatrix},$$
where $c^2 + s^2 = 1$, $c, s \in \mathbb{R}$, is called a symplectic Givens transformation.

• The matrix
$$H_k = H(k, v) = \begin{bmatrix}
I_{k-1} & & & \\
 & P & & \\
 & & I_{k-1} & \\
 & & & P
\end{bmatrix},$$
where $P = I - 2\,\frac{vv^T}{v^Tv}$, $v \in \mathbb{R}^{n-k+1}$, is called a symplectic Householder transformation.

• The matrix
$$L_k = L(k, c, d) = \begin{bmatrix}
I_{k-2} & & & & & & & \\
 & c & & & & d & & \\
 & & c & & & & d & \\
 & & & I_{n-k} & & & & \\
 & & & & I_{k-2} & & & \\
 & & & & & c^{-1} & & \\
 & & & & & & c^{-1} & \\
 & & & & & & & I_{n-k}
\end{bmatrix},$$
where $c, d \in \mathbb{R}$, is called a symplectic Gauss transformation (type I).

• The matrix
$$\widetilde{L}_k = \widetilde{L}(k, c, d) = \begin{bmatrix}
I_{k-1} & & & & & \\
 & c & & & d & \\
 & & I_{n-k} & & & \\
 & & & I_{k-1} & & \\
 & & & & c^{-1} & \\
 & & & & & I_{n-k}
\end{bmatrix},$$
where $c, d \in \mathbb{R}$, is called a symplectic Gauss transformation (type II).

Facts:

Let $G_k = G_k(\gamma_k) = \operatorname{diag}\!\left(I_{k-1}, \begin{bmatrix} -\gamma_k & \sigma_k \\ \sigma_k & \overline{\gamma}_k \end{bmatrix}, I_{n-k-1}\right)$ with $\gamma_k \in \mathbb{C}$, $\sigma_k \in \mathbb{R}^{+}$, and $|\gamma_k|^2 + \sigma_k^2 = 1$, and let $G_n(\gamma_n) = \operatorname{diag}(I_{n-1}, -\gamma_n)$ with $\gamma_n \in \mathbb{C}$, $|\gamma_n| = 1$.
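To make the elementary transformations concrete, the sketch below (not from the chapter; the helper name `symplectic_givens` is ours) builds a symplectic Givens transformation $G(k, c, s)$ and verifies that it is both orthogonal and symplectic:

```python
import numpy as np

def symplectic_givens(n, k, c, s):
    """Symplectic Givens G(k, c, s): identity except in rows/columns k and
    n+k (k is 1-based), where it acts as the rotation [[c, s], [-s, c]]."""
    G = np.eye(2 * n)
    i, j = k - 1, n + k - 1          # 0-based positions of rows k and n+k
    G[i, i] = c;  G[i, j] = s
    G[j, i] = -s; G[j, j] = c
    return G

n, k = 3, 2
theta = 0.7
G = symplectic_givens(n, k, np.cos(theta), np.sin(theta))

J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# G is orthogonal (a plane rotation) ...
assert np.allclose(G @ G.T, np.eye(2 * n))
# ... and symplectic, because it rotates within the (k, n+k) plane,
# on which J restricts to the 2x2 form [[0, 1], [-1, 0]].
assert np.allclose(G @ J @ G.T, J)
```

A symplectic Gauss transformation, by contrast, would satisfy only the second assertion, matching Fact 2 below.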
Let $P = [e_1\ e_3\ \cdots\ e_{2n-1}\ e_2\ e_4\ \cdots\ e_{2n}] \in \mathbb{R}^{2n \times 2n}$, where $e_j$ is the $j$th standard basis vector of $\mathbb{R}^{2n}$.

1. [Wat07, Chapter 1.3] The matrix multiplication $PAP^T$ performs a perfect shuffle of the rows and columns of $A \in \mathbb{R}^{2n \times 2n}$. If one performs a perfect shuffle of the rows and columns of a J-triangular matrix, one gets an upper triangular matrix. The product of J-triangular matrices is J-triangular. The nonsingular J-triangular matrices form a group.

2. [PL81, BMW89] The symplectic Givens and Householder transformations are orthogonal, while the symplectic Gauss transformations are nonorthogonal. It is crucial that the simple structure of these elementary symplectic transformations is exploited when computing matrix products of the form $G_kA$, $AG_k$, $H_kA$, $AH_k$, $L_kA$, $AL_k$, $\widetilde{L}_kA$, and $A\widetilde{L}_k$.
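Fact 1 can be checked numerically. The following sketch (ours, assuming NumPy) builds the perfect-shuffle permutation $P$ and verifies that shuffling the rows and columns of a J-triangular matrix yields an upper triangular matrix:

```python
import numpy as np

n = 3
# Perfect-shuffle permutation P = [e1 e3 ... e_{2n-1} e2 e4 ... e_{2n}],
# written via 0-based column indices.
idx = list(range(0, 2 * n, 2)) + list(range(1, 2 * n, 2))
P = np.eye(2 * n)[:, idx]

# A J-triangular matrix: R11, R12, R22 upper triangular,
# R21 strictly upper triangular.
rng = np.random.default_rng(2)
R11 = np.triu(rng.standard_normal((n, n)))
R12 = np.triu(rng.standard_normal((n, n)))
R22 = np.triu(rng.standard_normal((n, n)))
R21 = np.triu(rng.standard_normal((n, n)), k=1)
R = np.block([[R11, R12], [R21, R22]])

# P A P^T shuffles rows and columns; on a J-triangular matrix the result
# is upper triangular.
shuffled = P @ R @ P.T
assert np.allclose(P @ P.T, np.eye(2 * n))
assert np.allclose(shuffled, np.triu(shuffled))
```

The shuffle interleaves the two block-row index ranges $1,\dots,n$ and $n+1,\dots,2n$, which is exactly why the J-triangular pattern collapses to an ordinary triangular one.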