
Computing Accurate Eigensystems of Scaled Diagonally Dominant Matrices

(Appeared in SIAM J. Numer. Anal., v. 27, n. 3, pp. 762-791, 1990)

Jesse Barlow                              James Demmel
Department of Computer Science            Courant Institute
The Pennsylvania State University         251 Mercer Str.
University Park, PA 16802                 New York, NY 10012

Abstract

When computing eigenvalues of symmetric matrices and singular values of general matrices in finite precision arithmetic we in general only expect to compute them with an error bound proportional to the product of machine precision and the norm of the matrix. In particular, we do not expect to compute tiny eigenvalues and singular values to high relative accuracy. There are some important classes of matrices where we can do much better, including bidiagonal matrices, scaled diagonally dominant matrices, and scaled diagonally dominant definite pencils. These classes include many graded matrices, and all symmetric positive definite matrices which can be consistently ordered (and thus all symmetric positive definite tridiagonal matrices). In particular, the singular values and eigenvalues are determined to high relative precision independent of their magnitudes, and there are algorithms to compute them this accurately. The eigenvectors are also determined more accurately than for general matrices, and may be computed more accurately as well. This work extends results of Kahan and Demmel for bidiagonal and tridiagonal matrices.

Keywords: Graded matrices, singular value decomposition, symmetric eigenproblem, perturbation theory, error analysis

AMS(MOS) subject classifications: 65F15, 15A60

Acknowledgements: The first author was supported by the Air Force Office of Scientific Research under grant no. AFOSR-88-0161 and the Office of Naval Research under grant no. N00014-80-0517. The second author was supported by the National Science Foundation (grants NSF-DCR-8552474 and NSF-ASC-8715728).
Part of this work was performed while the first author was visiting the Courant Institute and the second author was visiting the IBM Bergen Scientific Center.

1. Introduction

When computing the eigenvalues of symmetric matrices and singular values of general matrices in finite precision arithmetic one generally only expects to compute them with an error bound f(n)ε||A||, where f(n) is a modestly growing function of the matrix dimension n, ε is the machine precision, and ||A|| is the 2-norm of the matrix A. This follows from standard theorems which state:

(1.1) A perturbation δA in the matrix A cannot change its eigenvalues (singular values) by more than ||δA|| [12].

(1.2) The standard algorithm for computing eigenvalues (singular values) of A computes the exact eigenvalues (singular values) of A + δA, where ||δA|| ≤ f(n)ε||A||, f(n) is a modestly growing function of n, and ε is the machine precision [12].

These error bounds imply that tiny eigenvalues and singular values (tiny compared to ||A||) cannot generally be computed to high relative accuracy, since the error bound f(n)ε||A|| may be much larger than the desired quantity. In fact, if each matrix entry is uncertain in its least significant digits, the tiny eigenvalues and singular values may not even be determined accurately by the data.

Sometimes, however, the eigenvalues and singular values are determined much more accurately than error bounds like f(n)ε||A|| would indicate. This was shown for singular values of bidiagonal matrices in [9], where it was proven that small relative perturbations in the bidiagonal entries only cause small relative perturbations in the singular values, independent of their magnitudes. It was also shown how to compute all the singular values to high relative accuracy. In this paper we extend these results to eigenvalues of symmetric scaled diagonally dominant matrices and scaled diagonally dominant definite pencils.
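The limitation expressed by (1.1) and (1.2) is easy to observe numerically. The sketch below (a hypothetical 2x2 graded matrix chosen for illustration, not an example from the paper) shows that a single normwise perturbation of size about ε||A|| — exactly what (1.2) permits — changes the tiny eigenvalue by a relative amount far larger than machine precision.

```python
import math

def eig_sym_2x2(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]].

    The small eigenvalue is computed as det / lam_max rather than by the
    textbook difference formula, to avoid catastrophic cancellation."""
    lam_max = ((a + c) + math.sqrt((a - c) ** 2 + 4 * b * b)) / 2
    lam_min = (a * c - b * b) / lam_max
    return lam_min, lam_max

# A graded matrix with ||A|| ~ 1 and a tiny eigenvalue near 1e-10.
a, b, c = 1.0, 1e-5, 2e-10
lam_min, _ = eig_sym_2x2(a, b, c)

# Perturb one entry by 1e-16 ~ eps*||A||, as allowed by (1.1)-(1.2).
lam_min_pert, _ = eig_sym_2x2(a, b, c + 1e-16)

rel_change = abs(lam_min_pert - lam_min) / abs(lam_min)
print(lam_min)     # about 1e-10
print(rel_change)  # about 1e-6: ten digits worse than eps ~ 1e-16
```

A normwise-tiny perturbation thus wipes out most of the relative accuracy of the small eigenvalue, which is precisely the loss the classes of matrices studied in this paper avoid.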
(Henceforth we will abbreviate "scaled diagonally dominant" by s.d.d.) A symmetric s.d.d. matrix is any matrix of the form ∆A∆, where A is symmetric and diagonally dominant in the usual sense, and ∆ is an arbitrary nonsingular diagonal matrix. A pencil H − λM is s.d.d. definite if H and M are symmetric s.d.d. and M is positive definite. Examples of s.d.d. matrices are the "graded" matrices

    A_0 = [  10    10                          ]
          [  10    10^2   10^2                 ]
          [        10^2   10^3   10^3          ]
          [               10^3   10^4   10^4   ]
          [                      10^4   10^5   ]

and

    A_1 = [   1     10                         ]
          [  10   -10^3   10^4                 ]
          [         10^4   10^6   10^4         ]
          [                10^4  -10^3    10   ]
          [                        10      1   ]

Note that A_0 is graded in the usual sense, but not diagonally dominant in the usual sense. A_1 is neither diagonally dominant in the usual sense, nor graded in the usual sense, since the diagonal entries are positive and negative, and not sorted. Thus we see that the usual diagonal dominance implies being s.d.d., but not the converse. In fact, the set of s.d.d. matrices includes all symmetric positive definite matrices which can be consistently ordered, a class which includes all symmetric positive definite tridiagonal matrices. Dense matrices may be s.d.d. as well.

Another example arises from modeling a series of masses m_1, ..., m_n on a line connected by simple, linear springs with spring constants k_0, ..., k_n (the ends of the extreme springs are fixed). The natural frequencies of vibration of this system are the square roots of the eigenvalues of the s.d.d. definite pencil H − λM, where M is the diagonal mass matrix diag(m_1, ..., m_n) and H is the tridiagonal stiffness matrix with diagonal k_0 + k_1, k_1 + k_2, ..., k_{n-1} + k_n and offdiagonal -k_1, ..., -k_{n-1}. Note that the matrix M^{-1/2} H M^{-1/2}, which has the same eigenvalues as H − λM, is symmetric s.d.d.

In particular, we will show that small relative perturbations in the entries of an s.d.d. matrix only cause small relative perturbations in the eigenvalues and singular values, independent of their magnitudes.
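One way to see that a matrix like A_1 above is s.d.d. is to scale it symmetrically with ∆ = diag(|a_ii|^{1/2}) and check ordinary diagonal dominance of ∆^{-1} A_1 ∆^{-1}. The following sketch does exactly that (the choice of ∆ and the dominance check are our illustration, not a procedure given in the paper):

```python
import math

# A_1 from the text: diagonal entries of both signs, unsorted, and not
# diagonally dominant in the usual sense.
A1 = [
    [1.0,  10.0,  0.0,   0.0,   0.0],
    [10.0, -1e3,  1e4,   0.0,   0.0],
    [0.0,  1e4,   1e6,   1e4,   0.0],
    [0.0,  0.0,   1e4,  -1e3,   10.0],
    [0.0,  0.0,   0.0,   10.0,  1.0],
]

def is_diag_dominant(A):
    """Row-wise strict diagonal dominance in the usual sense."""
    n = len(A)
    return all(
        abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
        for i in range(n)
    )

# Scale on both sides by Delta^{-1} = diag(|a_ii|^{-1/2}).
d = [math.sqrt(abs(A1[i][i])) for i in range(5)]
B = [[A1[i][j] / (d[i] * d[j]) for j in range(5)] for i in range(5)]

print(is_diag_dominant(A1))  # False: e.g. row 2 has |-10^3| < 10 + 10^4
print(is_diag_dominant(B))   # True: |diagonal| = 1, offdiagonals ~ 0.316
```

Since B is diagonally dominant and A_1 = ∆B∆, the matrix A_1 is s.d.d. even though it fails the usual dominance test badly.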
This is a much tighter perturbation bound than the classical one provided by (1.1) above. Our proof of this result generalizes and unifies results in [9] for bidiagonal matrices alone and in [14] for symmetric tridiagonal s.d.d. matrices alone.

Given that the matrix entries determine all eigenvalues or singular values to high relative accuracy, one would naturally like to compute them that accurately as well. We present algorithms based on bisection which attain this accuracy; in the case of bidiagonal or symmetric positive definite tridiagonal matrices QR iteration (suitably modified) can be shown to attain high accuracy as well. It is not yet known whether algorithms based on divide and conquer [5, 11, 13] can be made to work in some of these situations too.

One may also ask if the singular vectors and eigenvectors of s.d.d. matrices and pencils are determined any more accurately than for general matrices. To state the standard perturbation bound for eigenvectors of symmetric matrices and singular vectors of general matrices, we need to define the gap: if λ_i is an eigenvalue (singular value) of A then gap(λ_i) ≡ min_{j≠i} |λ_j − λ_i|. In other words, it is the absolute distance between λ_i and the remainder of the spectrum.

(1.3) Let y be a unit eigenvector of A + δA, α = y^T A y the Rayleigh quotient, λ_i the eigenvalue of A closest to α, and z_i its unit eigenvector. Let θ(z_i, y) be the acute angle between y and z_i. Then sin θ(y, z_i) ≤ 4 ||δA|| / gap(λ_i) [17, p. 222].

In other words, the error as measured by the angle is proportional to the reciprocal of the gap; if the gap is small (λ_i is in a cluster of eigenvalues), the corresponding eigenvector is poorly determined. As before, the standard algorithms guarantee ||δA|| ≤ f(n)ε||A||, so eigenvectors of eigenvalues poorly separated with respect to ||A|| (i.e. ||A||/gap(λ_i) is large) will generally not be computed accurately. Analogous results hold for singular vectors of general matrices.
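The bisection algorithms mentioned above rest on counting eigenvalues below a shift and halving an interval. The sketch below shows the generic scheme for a symmetric tridiagonal matrix, counting negative pivots of the LDL^T factorization of T − xI; this is only the basic textbook bisection skeleton, not the paper's refined algorithms or their relative-accuracy error analysis.

```python
import math

def neg_count(a, b, x):
    """Number of eigenvalues of the symmetric tridiagonal matrix with
    diagonal a[0..n-1] and offdiagonal b[0..n-2] that are less than x,
    via the signs of the pivots of (T - x I) = L D L^T."""
    count = 0
    d = 1.0
    for i in range(len(a)):
        off = b[i - 1] ** 2 if i > 0 else 0.0
        d = (a[i] - x) - off / d
        if d == 0.0:
            d = -1e-300  # nudge off an exact zero pivot
        if d < 0.0:
            count += 1
    return count

def eigval_k(a, b, k):
    """k-th smallest eigenvalue (0-indexed) by bisection on a
    Gershgorin interval containing the whole spectrum."""
    n = len(a)
    lo = hi = a[0]
    for i in range(n):
        r = (abs(b[i - 1]) if i > 0 else 0.0) + (abs(b[i]) if i < n - 1 else 0.0)
        lo, hi = min(lo, a[i] - r), max(hi, a[i] + r)
    for _ in range(200):  # halve until the interval is at rounding level
        mid = 0.5 * (lo + hi)
        if neg_count(a, b, mid) <= k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The 3x3 tridiagonal with diagonal 2 and offdiagonal -1 has
# eigenvalues 2 - sqrt(2), 2, 2 + sqrt(2).
print(eigval_k([2.0, 2.0, 2.0], [-1.0, -1.0], 0))
```

Each count is an exact eigenvalue count for a nearby matrix, which is why bisection is a natural vehicle for the high-relative-accuracy results developed later in the paper.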
For eigenvectors of s.d.d. matrices, a stronger perturbation theorem is true. Briefly, we can replace the gap in (1.3) with the relative gap, min_{j≠i} |λ_j − λ_i| / |λ_j λ_i|^{1/2}. Thus, as long as λ_i is relatively well separated from its neighbors, its corresponding eigenvector is determined to high relative accuracy. This is a much stronger result than (1.3), as the following example shows. Suppose the eigenvalues are 1, 2·10^{-10} and 10^{-10}. Then the gap for the smallest eigenvalue is gap(10^{-10}) = 10^{-10}, but the relative gap is .707. Thus (1.3) predicts a loss of 10 decimal digits, whereas the finer analysis predicts nearly full accuracy. We also show that a suitable variation of inverse iteration can be used to compute the eigenvectors to this accuracy. We conjecture that other methods based on divide and conquer can attain this accuracy as well, but this has not been proven. Similar results can be proven for singular vectors of bidiagonal matrices and eigenvectors of s.d.d. definite pencils; the result for singular vectors partially settles an open question in [9].

The rest of this paper is organized as follows.
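The arithmetic behind this example is immediate; a quick check of the gap and relative-gap formulas on the spectrum {1, 2·10^{-10}, 10^{-10}}:

```python
import math

lams = [1.0, 2e-10, 1e-10]

def gap(lams, i):
    """Absolute gap of (1.3): distance to the rest of the spectrum."""
    return min(abs(lams[j] - lams[i]) for j in range(len(lams)) if j != i)

def rel_gap(lams, i):
    """Relative gap: min over j != i of |lam_j - lam_i| / |lam_j lam_i|^{1/2}."""
    return min(abs(lams[j] - lams[i]) / math.sqrt(abs(lams[j] * lams[i]))
               for j in range(len(lams)) if j != i)

i = 2  # the smallest eigenvalue, 1e-10
print(gap(lams, i))      # 1e-10: (1.3) predicts ~10 lost decimal digits
print(rel_gap(lams, i))  # ~0.707: the finer bound predicts near full accuracy
```

The nearest neighbor 2·10^{-10} is tiny in absolute terms but well separated from 10^{-10} on a relative scale, which is exactly what the relative gap measures.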