The Godunov–Inverse Iteration: A Fast and Accurate Solution to the Symmetric Tridiagonal Eigenvalue Problem

Anna M. Matsekh 1

Institute of Computational Technologies, Siberian Branch of the Russian Academy of Sciences, Lavrentiev Ave. 6, Novosibirsk 630090, Russia

Abstract

We present a new hybrid algorithm, which we call the Godunov–Inverse Iteration, based on Godunov's method for computing eigenvectors of symmetric tridiagonal matrices and on Inverse Iteration. We use eigenvectors computed according to Godunov's method as starting vectors in the Inverse Iteration, replacing any non-numeric elements of Godunov's eigenvectors with random uniform numbers. We use the right-hand bounds of the Ritz intervals found by the bisection method as Inverse Iteration shifts, while staying within guaranteed error bounds. In most test cases convergence is reached after only one step of the iteration, producing error estimates that are as good as or superior to those produced by standard Inverse Iteration implementations.

Key words: Symmetric eigenvalue problem, tridiagonal matrices, Inverse Iteration

Email address: [email protected] (Anna M. Matsekh).
1 Present address: Modeling, Algorithms and Informatics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B256, Los Alamos, NM 87544, USA.

Preprint submitted to Elsevier Science, 31 January 2003

1 Introduction

The construction of algorithms that find all eigenvectors of the symmetric tridiagonal eigenvalue problem with guaranteed accuracy in $O(n^2)$ arithmetic operations has become one of the most pressing problems of modern numerical linear algebra. The QR method, one of the most accurate methods for solving eigenvalue problems, requires $6n^3$ arithmetic operations and $O(n^2)$ square root operations to compute all eigenvectors of a tridiagonal matrix [Golub and Loan (1996)]. Existing implementations of Inverse Iteration (e.g. the LAPACK routine xSTEIN [Anderson et al. (1995)] and the EISPACK routine TINVIT [Smith et al. (1976)]) require $O(n^2)$ floating point operations to find eigenvectors corresponding to well-separated eigenvalues, but in order to achieve numerical orthogonality of eigenvector approximations corresponding to clustered eigenvalues, reorthogonalization procedures, such as the Modified Gram–Schmidt process, are applied at each step of Inverse Iteration, increasing the worst-case complexity of the algorithm to $O(n^3)$ operations.

Typically Inverse Iteration for the symmetric tridiagonal problem requires only three to five iteration steps [Wilkinson (1965), Smith et al. (1976), Anderson et al. (1995)] before convergence is achieved, but it tends to be very sensitive to the choice of the shift. If a very accurate eigenvalue approximation is used as the Inverse Iteration shift, convergence may not be achieved, since the shifted matrix in this case is nearly singular. To deal with this problem, small perturbations are usually introduced into the shift parameter to avoid iteration breakdown, but as a rule the choice of such a perturbation is arbitrary, which may affect the accuracy of the resulting eigenvectors.
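For illustration, classical Inverse Iteration with a perturbed shift on a symmetric tridiagonal matrix can be sketched as follows. The Python sketch below is schematic only and is not the code of xSTEIN or TINVIT; the function name, tolerance, iteration limit, and the size of the shift perturbation are choices made for the example.

```python
# A minimal sketch of classical Inverse Iteration on a symmetric tridiagonal
# matrix T = tridiag(b, d, b), with the customary small shift perturbation.
import numpy as np
from scipy.linalg import solve_banded

def inverse_iteration(d, b, shift, v0=None, max_steps=5, tol=1e-12):
    """Approximate the eigenvector of T nearest to `shift` (illustrative)."""
    d = np.asarray(d, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(d)
    rng = np.random.default_rng(0)
    v = rng.uniform(-1.0, 1.0, n) if v0 is None else np.array(v0, dtype=float)
    v /= np.linalg.norm(v)
    # Perturb the shift slightly so that T - shift*I is not exactly singular;
    # the perturbation scale here is an arbitrary choice, as discussed above.
    shift = shift + np.finfo(float).eps * max(1.0, np.abs(d).max() + 2.0 * np.abs(b).max())
    # Banded storage of T - shift*I for scipy.linalg.solve_banded.
    ab = np.zeros((3, n))
    ab[0, 1:] = b            # superdiagonal
    ab[1, :] = d - shift     # main diagonal
    ab[2, :-1] = b           # subdiagonal
    for _ in range(max_steps):
        w = solve_banded((1, 1), ab, v)   # one inverse-power step
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - np.sign(w @ v) * v) < tol:  # converged up to sign
            return w
        v = w
    return v
```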
Recently there have been a number of attempts to construct fast, accurate algorithms that compute eigenvectors corresponding to accurate eigenvalue approximations of symmetric tridiagonal matrices by dropping the redundant equation from the symmetric eigensystem. In 1995 K. V. Fernando proposed an algorithm for computing an accurate eigenvector approximation $x$ from a nearly homogeneous system $(A - \lambda I)x \approx 0$ by finding an optimal index of the redundant equation that can be dropped [Fernando (1997)]. In 1997 Inderjit Dhillon proposed an $O(n^2)$ algorithm for the symmetric tridiagonal eigenproblem [Dhillon (1997)] based on incomplete LDU and UDL (twisted) factorizations, which provided yet another solution to finding the redundant equation in a tridiagonal symmetric eigensystem. Much earlier, in 1983, S. K. Godunov and his collaborators [Godunov et al. (1988), Godunov et al. (1993)] proposed a method based on two-sided Sturm sequences that determines all eigenvectors of symmetric tridiagonal matrices with guaranteed accuracy in $O(n^2)$ floating point operations by eliminating the redundant equation.

The use of two-sided Sturm sequences in Godunov's method avoids the accuracy loss associated with rounding errors in conventional Sturm sequence based methods. The algorithm gives provably accurate solutions to the symmetric tridiagonal eigenvalue problem on a specially designed floating point arithmetic with extended precision and directed rounding [Godunov et al. (1988)]. Unfortunately, in IEEE arithmetic, division by zero and overflow errors in the Godunov eigenvector computations cannot be entirely avoided, and for computationally coincident and closely clustered eigenvalues the method produces nearly collinear eigenvectors, since it takes no measures for reorthogonalization. In empirical studies Godunov's method, a direct method by its nature, consistently delivers residuals that are approximately two orders of magnitude larger than those of the eigenvectors computed by some of the Inverse Iteration implementations. Inverse Iteration, on the other hand, often suffers from the nondeterministic character of its starting vectors, the need to introduce perturbations into the shift, and its high worst-case complexity.

In an attempt to overcome the shortcomings of both Godunov's method and Inverse Iteration, we constructed a new hybrid procedure for computing eigenvectors of unreduced symmetric tridiagonal matrices, which we call the Godunov–Inverse Iteration. The Godunov–Inverse Iteration can be viewed as Godunov's method with iterative improvement. It uses eigenvectors computed from the two-sided Sturm sequences as starting vectors. One step of Inverse Iteration is usually sufficient to obtain the desired accuracy and orthogonality of the eigenvectors. This is in contrast to the LAPACK version of Inverse Iteration, xSTEIN [Anderson et al. (1995)], which uses randomly generated starting vectors and requires three to five steps for convergence, and the EISPACK version of Inverse Iteration, TINVIT [Smith et al. (1976)], which uses a direct solution of the tridiagonal problem as a starting vector and requires up to five steps for convergence. By choosing the right-hand bounds of the smallest machine-representable eigenvalue intervals, found by the bisection algorithm, as shifts in the Godunov–Inverse Iteration, instead of the eigenvalue approximations (the centers of these intervals), we ensure that the iteration matrix is not numerically singular, while the perturbation does not exceed the error bound for the corresponding eigenvalue.
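The following sketch illustrates one Godunov–Inverse Iteration step as just described. It is schematic and is not the implementation used in our experiments: it assumes the illustrative `inverse_iteration` helper from the preceding sketch is in scope, and it takes as input a Godunov starting vector `u_godunov` obtained from the two-sided Sturm recursion of Section 3 (a sketch of that recursion is given at the end of Section 3).

```python
# A minimal sketch of one Godunov-Inverse Iteration step. `beta` is the
# right-hand bound of the bisection (Ritz) interval containing the target
# eigenvalue; `u_godunov` is the Godunov eigenvector approximation, which may
# contain non-numeric entries in IEEE arithmetic.
import numpy as np

def godunov_inverse_iteration_step(d, b, u_godunov, beta):
    # Replace any non-numeric entries of the Godunov eigenvector (caused by
    # division by zero or overflow) with random uniform numbers, as described
    # in the text, and normalize the starting vector.
    u = np.array(u_godunov, dtype=float)
    bad = ~np.isfinite(u)
    if bad.any():
        u[bad] = np.random.default_rng(0).uniform(-1.0, 1.0, bad.sum())
    u /= np.linalg.norm(u)
    # Use the right-hand interval bound `beta` as the shift, rather than the
    # interval center, so the shifted matrix is not numerically singular while
    # the perturbation stays within the guaranteed bound |beta - alpha|.
    # One iteration step is usually sufficient.
    return inverse_iteration(d, b, shift=beta, v0=u, max_steps=1)
```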
This paper is organized as follows. In section 2 we give a formal description of the symmetric eigenvalue problem. In section 3 we give an overview of the original version of Godunov's method. In section 4 we discuss the shortcomings of Godunov's method and present the Godunov–Inverse Iteration. We implemented and tested the Godunov–Inverse Iteration, Godunov's method, Inverse Iteration with random starting vectors (the LAPACK approach) and Inverse Iteration with starting vectors found as direct solutions of the eigenproblem (the EISPACK approach), as well as the Sturm sequence based bisection method and the Householder and Lanczos tridiagonalization procedures for dense and sparse matrices, respectively. In section 5 we compare the quality of the eigenvectors computed with these methods on tridiagonal, dense, and sparse test matrices. Throughout the paper we assume that symmetric eigenvalue problems with matrices of arbitrary structure can be reduced to tridiagonal form with orthogonal transformations, which preserve the spectral properties of the original matrices to machine precision [Golub and Loan (1996)].

2 Formulation of the Problem

Consider the fundamental algebraic eigenvalue problem

$$Ax = \lambda x \qquad (1)$$

for real symmetric matrices $A \in \mathbb{R}^{n \times n}$. There always exists a real orthogonal transformation $W \in \mathbb{R}^{n \times n}$ such that $A$ is diagonalizable [Wilkinson (1965)], that is,

$$W^T A W = \mathrm{diag}(\lambda_i), \qquad (2)$$

where the eigenvalues $\lambda_i$, $i = 1, \ldots, n$, are all real. We solve problem (1) with a real symmetric matrix $A$ of arbitrary structure according to the Rayleigh–Ritz procedure [Godunov et al. (1988), Godunov et al. (1993)], that is, we

(i) compute an orthonormal transformation $Q$ such that the matrix $T = Q^T A Q$ is tridiagonal,
(ii) solve the eigenproblem $T u = \mu u$,
(iii) take $(\mu, Q u)$ as an approximation to the eigenpair $(\lambda, x)$.

3 Godunov's Method

Godunov's method [Godunov et al. (1988), Godunov et al. (1993)] was designed to compute eigenvectors of an unreduced symmetric tridiagonal matrix

$$T = \begin{pmatrix} d_0 & b_0 & & 0 \\ b_0 & d_1 & \ddots & \\ & \ddots & \ddots & b_{n-2} \\ 0 & \cdots & b_{n-2} & d_{n-1} \end{pmatrix} \qquad (3)$$

by a sequence of plane rotations on a specially designed architecture that supports extended precision and directed rounding. Let $(\alpha_i, \beta_i)$ be an eigeninterval that is guaranteed to contain an eigenvalue $\mu_i(T)$ of the matrix $T$. Such an interval can be found by the bisection method with the accuracy [Godunov et al. (1988)]

$$|\beta_i - \alpha_i| \le \frac{3\,(5\gamma + 1)\,\varepsilon_{\mathrm{mach}}}{2\gamma - (5\gamma + 1)\,\varepsilon_{\mathrm{mach}}}\, M(T), \qquad (4)$$

where $\gamma$ is the base used for the floating point exponent and

$$M(T) = \max\left\{\, |d_0| + |b_0|,\ \max_{1 \le i \le n-2}\bigl(|b_{i-1}| + |d_i| + |b_i|\bigr),\ |b_{n-2}| + |d_{n-1}| \,\right\}. \qquad (5)$$

Note that using the Cauchy–Bunyakovsky inequality it can be established that $M(T) \le \sqrt{n}\, \|T\|_2$ [Godunov et al. (1988)]. The Godunov eigenvector approximation $u^i$ corresponding to the eigenvalue approximation $\mu_i(T) \in (\alpha_i, \beta_i)$ is found recursively from the two-sided Sturm sequence $P_k(\mu_i)$, $k = 0, \ldots, n-1$, by letting

$$u^i_0 = 1 \quad \text{and} \quad u^i_k = u^i_{k-1}\, \mathrm{sign}(b_{k-1}) / P_{k-1}(\mu_i), \qquad (6)$$

in just $O(n)$ operations per normalized eigenvector. The two-sided Sturm sequence

$$P_0(\mu_i), \ldots, P_{n-1}(\mu_i) \stackrel{\mathrm{def}}{=} P_0^+(\alpha_i), \ldots, P_l^+(\alpha_i), P_{l+1}^-(\beta_i), \ldots, P_{n-1}^-(\beta_i) \qquad (7)$$

is constructed from the left-sided Sturm sequence $P_k^+(\alpha_i)$, $k = 0, \ldots, n-1$, and the right-sided Sturm sequence $P_k^-(\beta_i)$, $k = n-1, \ldots, 0$.
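The recursion (6) together with the assembly (7) translates directly into code. The sketch below is illustrative only: it assumes that the left- and right-sided Sturm sequence values $P_k^+(\alpha_i)$, $P_k^-(\beta_i)$ and the junction index $l$ have already been computed as in Godunov et al. (1988), and their construction is not shown; the function name and argument layout are choices made for the example.

```python
# A minimal sketch of the Godunov eigenvector recursion (6)-(7).
# P_left[k]  = P_k^+(alpha_i)  -- left-sided Sturm sequence values (assumed given)
# P_right[k] = P_k^-(beta_i)   -- right-sided Sturm sequence values (assumed given)
# l          = junction index of the two-sided sequence (assumed given)
# b          = off-diagonal entries b_0, ..., b_{n-2} of T
import numpy as np

def godunov_vector(P_left, P_right, l, b):
    """Unnormalized Godunov eigenvector approximation u^i (illustrative)."""
    P_left = np.asarray(P_left, dtype=float)
    P_right = np.asarray(P_right, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(P_left)
    # Two-sided sequence (7): left-sided values up to index l, right-sided after.
    P = np.concatenate([P_left[:l + 1], P_right[l + 1:]])
    u = np.empty(n)
    u[0] = 1.0
    for k in range(1, n):
        # Recursion (6): u_k = u_{k-1} * sign(b_{k-1}) / P_{k-1}(mu_i).
        # A zero P value yields inf/nan here in IEEE arithmetic; such entries
        # are replaced by random numbers in the Godunov-Inverse Iteration.
        u[k] = u[k - 1] * np.sign(b[k - 1]) / P[k - 1]
    return u
```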