
Numerical Linear Algebra with Applications, Vol. 4(5), 425–437 (1997)

An Inexact Inverse Iteration for Large Sparse Eigenvalue Problems

Yu-Ling Lai1, Kun-Yi Lin2 and Wen-Wei Lin2

1Department of Mathematics, National Chung-Cheng University, Chia-Yi 621, Taiwan
2Institute of Applied Mathematics, National Tsing-Hua University, Hsinchu 300, Taiwan

In this paper, we propose an inexact inverse iteration method for computing the eigenvalue of smallest modulus and its associated eigenvector for a large sparse matrix. The linear systems of the traditional inverse iteration are solved with an accuracy that depends on the eigenvalue of second smallest modulus and on the iteration number. We prove that this approach preserves the linear convergence of inverse iteration. We also propose two practical formulas for the accuracy bound which are used in the actual implementation.

© 1997 by John Wiley & Sons, Ltd. (No. of Figures: 3. No. of Tables: 2. No. of Refs: 17.) Received September 1996; revised January 1997.

KEY WORDS inexact inverse iteration; linear convergence; large eigenvalue problem

1. Introduction

The numerical computation of eigenvalues of large matrices is a problem of major importance in many engineering and scientific applications. Eigenvalue calculations arise in structural dynamics, quantum chemistry, nuclear engineering and so forth. Consequently, the eigenvalue problem is essential in applied numerical linear algebra. Inverse iteration [10,17] is one of the most widely used algorithms for solving large eigenvalue problems when only a specified eigenpair is required. Its global convergence property is the major advantage of this method. Inverse iteration consists of two major steps: one is to generate a sequence of vectors by solving linear systems with some scheme; the other is to 'normalize' each vector in the sequence generated by the previous step.
For simplicity, we call the process for generating these vectors the 'outer iteration'. There are two ways to solve the linear systems: direct methods and iterative methods. In this paper we are concerned with large sparse matrices; hence, iterative methods are more appropriate than direct methods for solving these linear systems. We call the process of generating the vectors by an iterative method for solving the linear system the 'inner iteration'. No doubt, the greatest computational cost of inverse iteration lies in solving the linear systems in the inner iteration process. Our numerical experience indicates that the residual in the outer iteration seldom varies much even if a small error is made in the inner iteration. It should be stressed that this was also observed by Morgan and Scott in [9]. This important and interesting observation motivates us to foster the concept of 'relaxation', the core idea of this paper. Namely, the concept is to try to reduce the computational cost by controlling the tolerance of each inner-loop process. It is known that inverse iteration with a fixed shift converges linearly. To accelerate the convergence rate, inverse iteration with variant shifts, or Rayleigh quotient iteration (RQI) [10,15,17], has been developed and proven to be mathematically equivalent to Newton's method [11]; it is therefore locally quadratically convergent. It is natural also to incorporate the concept of inexact inversion into inverse iteration with variant shifts or RQI. However, when the approximate eigenvalue is close to the desired eigenvalue, the shifted matrix is very ill-conditioned. Thus, any iterative method applied to the linear systems may converge very slowly because of the large condition number.
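The two-step outer iteration described above can be sketched in a few lines. The following minimal illustration is not the paper's algorithm: it uses a small dense test matrix with a spectrum chosen for the example and a direct solve for the linear system (for the large sparse matrices targeted here, an iterative inner solver would take its place).

```python
import numpy as np

# Small SPD test matrix with a known spectrum (chosen for illustration):
# the smallest-modulus eigenvalue is 0.4.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q @ np.diag([3.0, 2.0, 1.5, 0.4]) @ Q.T

u = np.ones(4) / 2.0                # u_0: any vector with a component along z_n
for _ in range(60):
    v = np.linalg.solve(A, u)       # outer step 1: solve A v = u
                                    # (direct solve here; for large sparse A this
                                    #  is where the inner iteration would go)
    u = v / np.linalg.norm(v)       # outer step 2: normalize
lam = u @ A @ u                     # Rayleigh quotient -> smallest eigenvalue
```

With the gap |λ_n/λ_{n-1}| = 0.4/1.5 the outer iteration converges linearly, so 60 steps drive the eigenvector error to machine precision and `lam` matches 0.4.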
On the other hand, subspace approaches, including the methods of Lanczos, Arnoldi and Davidson, can be viewed as accelerated inverse iterations. The idea of inexact inversion in combination with subspace methods has been studied in [2,3,8,9,13]. In particular, in [13] Sleijpen and Van der Vorst have shown that the Jacobi–Davidson method is an inexact Newton process [4] when approximate solutions of the approximate systems are used. Thus, in this paper, we focus on finding a good stopping tolerance for the inner iteration of inverse iteration. This may help readers design good strategies for the inexact inversion in the Jacobi–Davidson method as well as in other subspace methods.

This paper is organized as follows. In Section 2, we propose an inexact inverse iteration and prove that the relaxation technique preserves the linear convergence property of inverse iteration. In Section 3, two practical formulas for the stopping tolerance are presented, and generalized inexact inverse iteration algorithms for solving generalized eigenvalue problems are discussed. Finally, we present some numerical experiments and conclusions in Section 4.

2. Inexact inverse iteration

Throughout this paper, we use the abbreviation 'INVIT' to denote the so-called 'inverse iteration'. In this section, we aim at finding a good stopping tolerance for the inner iteration of INVIT such that the computational cost can be much reduced while the convergence behaviour of the outer iteration of INVIT is not affected too much. For this, we propose the following algorithm.

Algorithm 2.1. (Inexact INVIT) Given a tolerance TOL, an initial inner tolerance $\varepsilon_0 \le 1$ and a linear functional $\ell$:
For $k = 0, 1, 2, \ldots$
    Solve $A v_{k+1} = u_k$ until $\|A v_{k+1} - u_k\| \le \varepsilon_k$
    $\beta_{k+1} = \ell(v_{k+1})$
    $u_{k+1} = v_{k+1}/\beta_{k+1}$
    $r_{k+1} = A u_{k+1} - u_{k+1}/\beta_{k+1}$
    Compute $\varepsilon_{k+1}$
    If $\|r_{k+1}\| < \mathrm{TOL}$, then stop

It is shown in the next theorem that if $\varepsilon_{k+1}$ is bounded by $1/\prod_{i=1}^{k+1} |\beta_i \lambda_{n-1}|$, then the algorithm will converge to the eigenvalue of smallest modulus and its associated eigenvector.

Theorem 2.1. (Main theorem) Let $A \in \mathbb{R}^{n \times n}$. Suppose that there exists a non-singular matrix $X = [z_1, z_2, \ldots, z_n]$ with $z_i \in \mathbb{C}^n$ and $\|z_i\|_2 = 1$ for all $i = 1, \ldots, n$ such that $AX = XD$, where $D = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$ with $|\lambda_1| \ge \cdots \ge |\lambda_{n-1}| > |\lambda_n| > 0$. Let $u_0 = \sum_{i=1}^n \alpha_i z_i$, $\rho = |\lambda_n/\lambda_{n-1}|$ and $\|X^{-1}\|_2 = M$. If $|\alpha_n| > M/(1-\rho)$, $\beta_k \ne 0$ and
$$ \varepsilon_{k+1} = \frac{1}{\prod_{i=1}^{k+1} |\beta_i \lambda_{n-1}|} $$
for all $k$ in Algorithm 2.1, then
$$ \lim_{k\to\infty} u_k = \frac{z_n}{\ell(z_n)} \qquad \text{and} \qquad \lim_{k\to\infty} \beta_k = \frac{1}{\lambda_n}. $$

Proof. Let $d_{k-1} = A v_k - u_{k-1}$. Since $\|d_{k-1}\|_2 \le \varepsilon_{k-1}$, we write $d_{k-1} = \xi_{k-1} y_{k-1}$, where $\xi_{k-1} \le \varepsilon_{k-1}$ and $\|y_{k-1}\|_2 = 1$. Thus, we get
$$ v_k = A^{-1} u_{k-1} + A^{-1} d_{k-1}. $$
By induction on $k$, we have the formula
$$ v_k = \frac{1}{\prod_{i=1}^{k-1} \beta_i} \Big[ A^{-k} u_0 + A^{-k} d_0 + \beta_1 A^{-k+1} d_1 + \cdots + \Big(\prod_{i=1}^{k-1} \beta_i\Big) A^{-1} d_{k-1} \Big] =: \frac{w_k}{\pi_{k-1}} \quad (2.1) $$
where $w_k = A^{-k} u_0 + \sum_{j=1}^k \pi_{j-1} A^{-k+j-1} d_{j-1}$ and $\pi_j = \prod_{i=1}^j \beta_i$. (Note that $\pi_0 = 1$.) It yields
$$ u_k = \frac{v_k}{\ell(v_k)} = \frac{w_k}{\ell(w_k)} = \frac{A^{-k} u_0 + \sum_{j=1}^k \pi_{j-1} A^{-k+j-1} d_{j-1}}{\ell\big(A^{-k} u_0 + \sum_{j=1}^k \pi_{j-1} A^{-k+j-1} d_{j-1}\big)}. \quad (2.2) $$
Let $y_j = \sum_{i=1}^n \delta_i^{(j)} z_i$. It is readily proved that $|\delta_i^{(j)}| < M$ for all $i, j$. Using the relation $d_j = \xi_j \sum_{i=1}^n \delta_i^{(j)} z_i$, the following two equations are derived:
$$ A^{-k} u_0 = \sum_{i=1}^n \alpha_i \lambda_i^{-k} z_i = \lambda_n^{-k} \sum_{i=1}^n \alpha_i \Big(\frac{\lambda_n}{\lambda_i}\Big)^k z_i, \quad (2.3) $$
$$ \sum_{j=1}^k \pi_{j-1} A^{-k+j-1} d_{j-1} = \sum_{j=1}^k \sum_{i=1}^n \pi_{j-1} \lambda_i^{-k+j-1} \xi_{j-1} \delta_i^{(j-1)} z_i = \lambda_n^{-k} \sum_{j=1}^k \sum_{i=1}^n \pi_{j-1} \Big(\frac{\lambda_n}{\lambda_i}\Big)^{k+1-j} \lambda_n^{j-1} \xi_{j-1} \delta_i^{(j-1)} z_i. \quad (2.4) $$
From (2.2), (2.3) and (2.4) it follows that
$$ u_k = \frac{\sum_{i=1}^n \alpha_i (\lambda_n/\lambda_i)^k z_i + \Sigma_1^{(k)} + \Sigma_2^{(k)}}{\ell\big(\sum_{i=1}^n \alpha_i (\lambda_n/\lambda_i)^k z_i + \Sigma_1^{(k)} + \Sigma_2^{(k)}\big)} \quad (2.5) $$
where
$$ \Sigma_1^{(k)} := \Big(\sum_{j=1}^k \pi_{j-1} \lambda_n^{j-1} \xi_{j-1} \delta_n^{(j-1)}\Big) z_n $$
and
$$ \Sigma_2^{(k)} := \sum_{j=1}^k \sum_{i=1}^{n-1} \pi_{j-1} \Big(\frac{\lambda_n}{\lambda_i}\Big)^{k+1-j} \lambda_n^{j-1} \xi_{j-1} \delta_i^{(j-1)} z_i. $$
Since $\rho < 1$, we get
$$ \sum_{j=1}^k \big|\pi_{j-1} \lambda_n^{j-1} \xi_{j-1} \delta_n^{(j-1)}\big| \le \sum_{j=1}^k \big|\pi_{j-1} \lambda_n^{j-1}\big| M \xi_{j-1} \le \sum_{j=1}^k \big|\pi_{j-1} \lambda_n^{j-1}\big| M \cdot \frac{1}{\prod_{i=1}^{j-1} |\beta_i \lambda_{n-1}|} = \sum_{j=1}^k M \rho^{j-1} = M \cdot \frac{1-\rho^k}{1-\rho} \to \frac{M}{1-\rho} \quad \text{as } k \to \infty. \quad (2.6) $$
It follows that
$$ \Sigma_1^{(k)} \text{ converges to } c z_n \text{ for some constant } c \text{ as } k \to \infty. \quad (2.7) $$
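A numerical sketch of Algorithm 2.1 under the assumptions of Theorem 2.1 may help. Everything concrete below is an assumption of the sketch rather than part of the paper: the small SPD test matrix with known spectrum, the conjugate-gradient inner solver, the largest-magnitude-component rule standing in for the linear functional $\ell$, the monotonicity safeguard on $\varepsilon_k$, and the use of the exact $|\lambda_{n-1}|$ in the tolerance formula (Section 3 replaces it with computable estimates).

```python
import numpy as np

def cg(A, b, tol, maxit=200):
    """Plain conjugate gradient as the inner iteration:
    iterate until ||b - A x||_2 <= tol (A assumed SPD)."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        if np.sqrt(rs) <= tol:
            break
        Ap = A @ p
        denom = p @ Ap
        if denom <= 0.0:          # safeguard; cannot occur for SPD A and p != 0
            break
        alpha = rs / denom
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# SPD test matrix with a known spectrum (an assumption of this sketch):
# smallest-modulus eigenvalue lambda_n = 0.5, and |lambda_{n-1}| = 2.0.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = Q @ np.diag([5.0, 4.0, 3.0, 2.0, 0.5]) @ Q.T
lam_nm1 = 2.0                     # exact here; Section 3 gives practical formulas

def ell(v):
    # A practical stand-in for the linear functional ell: the component of
    # largest magnitude, so each u_k has max-norm 1.
    return v[np.argmax(np.abs(v))]

u = np.ones(5)                    # u_0: needs a nonzero component along z_n
eps, prod_beta = 1.0, 1.0         # eps_0 <= 1
for k in range(20):
    v = cg(A, u, eps)             # inner: solve A v = u until ||A v - u|| <= eps_k
    beta = ell(v)
    u = v / beta
    r = A @ u - u / beta          # outer residual; 1/beta approximates lambda_n
    prod_beta *= abs(beta)
    # Theorem 2.1 tolerance, kept monotone non-increasing for robustness:
    eps = min(eps, 1.0 / (prod_beta * lam_nm1 ** (k + 1)))
    if np.linalg.norm(r) < 1e-8:
        break
```

By Theorem 2.1, $1/\beta_k$ should tend to $\lambda_n = 0.5$ with the outer residual shrinking linearly at rate about $\rho = |\lambda_n/\lambda_{n-1}| = 0.25$, while the early inner solves are allowed to be quite crude.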