
5 Approximation of Eigenvalues and Eigenvectors

5.3 The Power Method

The power method is very good at approximating the extremal eigenvalues of a matrix, that is, the eigenvalues of largest and smallest module, denoted by $\lambda_1$ and $\lambda_n$ respectively, as well as their associated eigenvectors. Solving such a problem is of great interest in several real-life applications (geoseismics, machine and structural vibrations, electric network analysis, quantum mechanics, ...) where the computation of $\lambda_n$ (and its associated eigenvector $x_n$) arises in the determination of the proper frequency (and the corresponding fundamental mode) of a given physical system. We shall come back to this point in Section 5.12.

Having approximations of $\lambda_1$ and $\lambda_n$ can also be useful in the analysis of numerical methods. For instance, if A is symmetric and positive definite, one can compute the optimal value of the acceleration parameter of the Richardson method and estimate its error reducing factor (see Chapter 4), as well as perform the stability analysis of discretization methods for systems of ordinary differential equations (see Chapter 11).

5.3.1 Approximation of the Eigenvalue of Largest Module

Let $A \in \mathbb{C}^{n\times n}$ be a diagonalizable matrix and let $X \in \mathbb{C}^{n\times n}$ be the matrix of its right eigenvectors $x_i$, for $i = 1, \dots, n$. Let us also suppose that the eigenvalues of A are ordered as

    $|\lambda_1| > |\lambda_2| \ge |\lambda_3| \ge \dots \ge |\lambda_n|$,    (5.16)

where $\lambda_1$ has algebraic multiplicity equal to 1. Under these assumptions, $\lambda_1$ is called the dominant eigenvalue of the matrix A.

Given an arbitrary initial vector $q^{(0)} \in \mathbb{C}^n$ of unit Euclidean norm, consider for $k = 1, 2, \dots$ the following iteration, based on the computation of powers of matrices and commonly known as the power method:

    $z^{(k)} = A q^{(k-1)}$,
    $q^{(k)} = z^{(k)} / \|z^{(k)}\|_2$,    (5.17)
    $\nu^{(k)} = (q^{(k)})^H A q^{(k)}$.

Let us analyze the convergence properties of method (5.17). By induction on $k$ one can check that

    $q^{(k)} = \dfrac{A^k q^{(0)}}{\|A^k q^{(0)}\|_2}$,  $k \ge 1$.    (5.18)
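As a concrete illustration, iteration (5.17) can be sketched in a few lines of Python (a minimal sketch: the function name, the test matrix, and the fixed iteration count are our own choices, not taken from the text):

```python
import numpy as np

def power_method(A, q0, maxit=200):
    # Power method (5.17): multiply, normalize, take the Rayleigh quotient.
    q = q0 / np.linalg.norm(q0)        # q^(0) of unit Euclidean norm
    for _ in range(maxit):
        z = A @ q                      # z^(k) = A q^(k-1)
        q = z / np.linalg.norm(z)      # q^(k) = z^(k) / ||z^(k)||_2
        nu = np.vdot(q, A @ q)         # nu^(k) = (q^(k))^H A q^(k)
    return nu, q

# Assumed test matrix with eigenvalues 5 (dominant) and 2
A = np.array([[4.0, 1.0], [2.0, 3.0]])
nu, q = power_method(A, np.array([1.0, 0.0]))
```

Here the Rayleigh quotient `nu` converges to the dominant eigenvalue 5, and `q` aligns with its eigenvector, provided the starting vector has a nonzero component along it.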
This relation explains the role played by the powers of A in the method. Because A is diagonalizable, its eigenvectors $x_i$ form a basis of $\mathbb{C}^n$; it is thus possible to represent $q^{(0)}$ as

    $q^{(0)} = \sum_{i=1}^{n} \alpha_i x_i$,  $\alpha_i \in \mathbb{C}$, $i = 1, \dots, n$.    (5.19)

Moreover, since $A x_i = \lambda_i x_i$, we have

    $A^k q^{(0)} = \alpha_1 \lambda_1^k \left( x_1 + \sum_{i=2}^{n} \frac{\alpha_i}{\alpha_1} \left( \frac{\lambda_i}{\lambda_1} \right)^k x_i \right)$,  $k = 1, 2, \dots$    (5.20)

Since $|\lambda_i/\lambda_1| < 1$ for $i = 2, \dots, n$, as $k$ increases the vector $A^k q^{(0)}$ (and thus also $q^{(k)}$, due to (5.18)) tends to assume an increasingly significant component in the direction of the eigenvector $x_1$, while its components in the other directions $x_j$ decrease. Using (5.18) and (5.20), we get

    $q^{(k)} = \dfrac{\alpha_1 \lambda_1^k (x_1 + y^{(k)})}{\|\alpha_1 \lambda_1^k (x_1 + y^{(k)})\|_2} = \mu_k \dfrac{x_1 + y^{(k)}}{\|x_1 + y^{(k)}\|_2}$,

where $\mu_k$ is the sign of $\alpha_1 \lambda_1^k$ and $y^{(k)}$ denotes a vector that vanishes as $k \to \infty$.

As $k \to \infty$, the vector $q^{(k)}$ thus aligns itself along the direction of the eigenvector $x_1$, and the following error estimate holds at each step $k$.

Theorem 5.6  Let $A \in \mathbb{C}^{n\times n}$ be a diagonalizable matrix whose eigenvalues satisfy (5.16). Assuming that $\alpha_1 \neq 0$, there exists a constant $C > 0$ such that

    $\|\tilde q^{(k)} - x_1\|_2 \le C \left| \dfrac{\lambda_2}{\lambda_1} \right|^k$,  $k \ge 1$,    (5.21)

where

    $\tilde q^{(k)} = \dfrac{q^{(k)} \|A^k q^{(0)}\|_2}{\alpha_1 \lambda_1^k} = x_1 + \sum_{i=2}^{n} \frac{\alpha_i}{\alpha_1} \left( \frac{\lambda_i}{\lambda_1} \right)^k x_i$,  $k = 1, 2, \dots$    (5.22)

Proof. Since A is diagonalizable, without losing generality we can pick the nonsingular matrix X in such a way that its columns have unit Euclidean length, that is, $\|x_i\|_2 = 1$ for $i = 1, \dots, n$. From (5.22) it thus follows that

    $\left\| x_1 + \sum_{i=2}^{n} \frac{\alpha_i}{\alpha_1} \left( \frac{\lambda_i}{\lambda_1} \right)^k x_i - x_1 \right\|_2 = \left\| \sum_{i=2}^{n} \frac{\alpha_i}{\alpha_1} \left( \frac{\lambda_i}{\lambda_1} \right)^k x_i \right\|_2$

    $\le \left( \sum_{i=2}^{n} \left| \frac{\alpha_i}{\alpha_1} \right|^2 \left| \frac{\lambda_i}{\lambda_1} \right|^{2k} \right)^{1/2} \le \left| \frac{\lambda_2}{\lambda_1} \right|^k \left( \sum_{i=2}^{n} \left| \frac{\alpha_i}{\alpha_1} \right|^2 \right)^{1/2}$,

that is, (5.21) with $C = \left( \sum_{i=2}^{n} |\alpha_i/\alpha_1|^2 \right)^{1/2}$. $\square$

Estimate (5.21) expresses the convergence of the sequence $\tilde q^{(k)}$ towards $x_1$.
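The linear decay rate $|\lambda_2/\lambda_1|^k$ predicted by (5.21) can be observed numerically. The following sketch (our own symmetric test matrix; the error is measured up to the sign ambiguity of $q^{(k)}$) tracks the eigenvector error over the iterations:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])    # assumed symmetric test matrix
lam, V = np.linalg.eigh(A)                # eigenvalues in ascending order
x1 = V[:, -1]                             # eigenvector of the dominant eigenvalue
rho = abs(lam[0] / lam[1])                # the ratio |lambda_2 / lambda_1|

q = np.array([1.0, 0.0])
errs = []
for k in range(20):
    z = A @ q
    q = z / np.linalg.norm(z)
    # error up to sign, since q^(k) approximates x_1 only in direction
    errs.append(min(np.linalg.norm(q - x1), np.linalg.norm(q + x1)))
# successive ratios errs[k+1] / errs[k] approach rho = |lambda_2 / lambda_1|
```

The successive error ratios settle near `rho`, in agreement with the bound in Theorem 5.6.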
Since $\tilde q^{(k)}$ converges to $x_1$, the sequence of Rayleigh quotients

    $\dfrac{(\tilde q^{(k)})^H A\, \tilde q^{(k)}}{\|\tilde q^{(k)}\|_2^2} = (q^{(k)})^H A q^{(k)} = \nu^{(k)}$

will converge to $\lambda_1$. As a consequence, $\lim_{k\to\infty} \nu^{(k)} = \lambda_1$, and the convergence is faster when the ratio $|\lambda_2/\lambda_1|$ is smaller.

If the matrix A is real and symmetric it can be proved, always assuming that $\alpha_1 \neq 0$, that (see [GL89], pp. 406-407)

    $|\lambda_1 - \nu^{(k)}| \le |\lambda_1 - \lambda_n|\, \tan^2(\theta_0) \left| \dfrac{\lambda_2}{\lambda_1} \right|^{2k}$,    (5.23)

where $\cos(\theta_0) = |x_1^T q^{(0)}| \neq 0$. Inequality (5.23) shows that the convergence of the sequence $\nu^{(k)}$ to $\lambda_1$ is quadratic with respect to the ratio $|\lambda_2/\lambda_1|$ (we refer to Section 5.3.3 for numerical results).

We conclude the section by providing a stopping criterion for the iteration (5.17). For this purpose, let us introduce the residual at step $k$,

    $r^{(k)} = A q^{(k)} - \nu^{(k)} q^{(k)}$,  $k \ge 1$,

and, for $\varepsilon > 0$, the matrix $\varepsilon E^{(k)} = -r^{(k)} (q^{(k)})^H \in \mathbb{C}^{n\times n}$ with $\|E^{(k)}\|_2 = 1$. Since

    $\varepsilon E^{(k)} q^{(k)} = -r^{(k)}$,  $k \ge 1$,    (5.24)

we obtain $(A + \varepsilon E^{(k)}) q^{(k)} = \nu^{(k)} q^{(k)}$. As a result, at each step of the power method $\nu^{(k)}$ is an eigenvalue of the perturbed matrix $A + \varepsilon E^{(k)}$. From (5.24) and from definition (1.20) it also follows that $\varepsilon = \|r^{(k)}\|_2$ for $k = 1, 2, \dots$. Plugging this identity back into (5.10), and approximating the partial derivative in (5.10) by the incremental ratio $|\lambda_1 - \nu^{(k)}|/\varepsilon$, we get

    $|\lambda_1 - \nu^{(k)}| \lesssim \dfrac{\|r^{(k)}\|_2}{|\cos(\theta_\lambda)|}$,  $k \ge 1$,    (5.25)

where $\theta_\lambda$ is the angle between the right and the left eigenvectors, $x_1$ and $y_1$, associated with $\lambda_1$. Notice that, if A is a Hermitian matrix, then $\cos(\theta_\lambda) = 1$, so that (5.25) yields an estimate which is analogous to (5.13).

In practice, in order to employ the estimate (5.25), it is necessary at each step $k$ to replace $|\cos(\theta_\lambda)|$ with the module of the scalar product between two approximations $q^{(k)}$ and $w^{(k)}$ of $x_1$ and $y_1$, computed by the power method. The following a posteriori estimate is thus obtained:

    $|\lambda_1 - \nu^{(k)}| \lesssim \dfrac{\|r^{(k)}\|_2}{|(w^{(k)})^H q^{(k)}|}$,  $k \ge 1$.    (5.26)

Examples of applications of (5.26) will be provided in Section 5.3.3.
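For a Hermitian matrix, where $\cos(\theta_\lambda) = 1$, the stopping test reduces to checking $\|r^{(k)}\|_2$ against a tolerance. A sketch under that assumption (the function name and test data are ours):

```python
import numpy as np

def power_method_hermitian(A, q0, tol=1e-12, maxit=1000):
    # Power method with the residual-based stopping criterion (5.25)/(5.26).
    # For Hermitian A the left and right dominant eigenvectors coincide,
    # so the a posteriori bound on |lambda_1 - nu^(k)| is just ||r^(k)||_2.
    q = q0 / np.linalg.norm(q0)
    for k in range(1, maxit + 1):
        z = A @ q
        q = z / np.linalg.norm(z)
        nu = np.vdot(q, A @ q)             # Rayleigh quotient nu^(k)
        r = A @ q - nu * q                 # residual r^(k)
        if np.linalg.norm(r) <= tol:       # stop when the bound is small
            return nu, q, k
    return nu, q, maxit

# Assumed symmetric test matrix; dominant eigenvalue (7 + sqrt(5)) / 2
A = np.array([[4.0, 1.0], [1.0, 3.0]])
nu, q, k = power_method_hermitian(A, np.array([1.0, 0.0]))
```

On exit, the residual norm certifies that `nu` is within `tol` of the dominant eigenvalue, without knowing that eigenvalue in advance.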
5.3.2 Inverse Iteration

In this section we look for an approximation of the eigenvalue of a matrix $A \in \mathbb{C}^{n\times n}$ which is closest to a given number $\mu \in \mathbb{C}$, where $\mu \notin \sigma(A)$. For this, the power iteration (5.17) can be applied to the matrix $M_\mu^{-1} = (A - \mu I)^{-1}$, yielding the so-called inverse iteration or inverse power method. The number $\mu$ is called a shift.

The eigenvalues of $M_\mu^{-1}$ are $\xi_i = (\lambda_i - \mu)^{-1}$; let us assume that there exists an integer $m$ such that

    $|\lambda_m - \mu| < |\lambda_i - \mu|$,  $\forall\, i = 1, \dots, n$ and $i \neq m$.    (5.27)

This amounts to requiring that the eigenvalue $\lambda_m$ which is closest to $\mu$ has multiplicity equal to 1. Moreover, (5.27) shows that $\xi_m$ is the eigenvalue of $M_\mu^{-1}$ with largest module; in particular, if $\mu = 0$, $\lambda_m$ turns out to be the eigenvalue of A with smallest module.

Given an arbitrary initial vector $q^{(0)} \in \mathbb{C}^n$ of unit Euclidean norm, for $k = 1, 2, \dots$ the following sequence is constructed:

    $(A - \mu I)\, z^{(k)} = q^{(k-1)}$,
    $q^{(k)} = z^{(k)} / \|z^{(k)}\|_2$,    (5.28)
    $\sigma^{(k)} = (q^{(k)})^H A q^{(k)}$.

Notice that the eigenvectors of $M_\mu$ are the same as those of A, since $M_\mu = X(\Lambda - \mu I_n)X^{-1}$, where $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$. For this reason, the Rayleigh quotient in (5.28) is computed directly on the matrix A (and not on $M_\mu^{-1}$). The main difference with respect to (5.17) is that at each step $k$ a linear system with coefficient matrix $M_\mu = A - \mu I$ must be solved. For numerical convenience, the LU factorization of $M_\mu$ is computed once for all at $k = 1$, so that at each step only two triangular systems are to be solved, at a cost of the order of $n^2$ flops.

Although more computationally expensive than the power method (5.17), the inverse iteration has the advantage that it can converge to any desired eigenvalue of A (namely, the one closest to the shift $\mu$). Inverse iteration is thus ideally suited for refining an initial estimate $\mu$ of an eigenvalue of A, which can be obtained, for instance, by applying the localization techniques introduced in Section 5.1.
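Iteration (5.28) can be sketched as follows (our own test data; for brevity we call a dense solver at each step instead of reusing the LU factors of $A - \mu I$ as the text prescribes):

```python
import numpy as np

def inverse_iteration(A, mu, q0, maxit=50):
    # Inverse power method (5.28): solve (A - mu*I) z^(k) = q^(k-1),
    # normalize, and evaluate the Rayleigh quotient on A itself.
    M = A - mu * np.eye(A.shape[0])
    q = q0 / np.linalg.norm(q0)
    for _ in range(maxit):
        z = np.linalg.solve(M, q)      # in practice: factorize M once, reuse
        q = z / np.linalg.norm(z)
        sigma = np.vdot(q, A @ q)      # sigma^(k) = (q^(k))^H A q^(k)
    return sigma, q

A = np.array([[4.0, 1.0], [2.0, 3.0]])      # eigenvalues 5 and 2 (assumed example)
sigma, q = inverse_iteration(A, 1.8, np.array([1.0, 0.0]))   # shift near 2
```

With the shift $\mu = 1.8$, the eigenvalue closest to $\mu$ is 2, so `sigma` converges to 2 rather than to the dominant eigenvalue 5, illustrating how the shift selects the target eigenvalue.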
Inverse iteration can also be effectively employed to compute the eigenvector associated with a given (approximate) eigenvalue, as described in Section 5.7.1.

In view of the convergence analysis of the iteration (5.28), we assume that A is diagonalizable, so that $q^{(0)}$ can be represented in the form (5.19). Proceeding in the same way as in the power method, we let

    $\tilde q^{(k)} = x_m + \sum_{i=1,\, i\neq m}^{n} \frac{\alpha_i}{\alpha_m} \left( \frac{\xi_i}{\xi_m} \right)^k x_i$,

where $x_i$ are the eigenvectors of $M_\mu^{-1}$ (and thus also of A), while the $\alpha_i$ are as in (5.19). As a consequence, recalling the definition of $\xi_i$ and using (5.27), we get

    $\lim_{k\to\infty} \tilde q^{(k)} = x_m$,  $\lim_{k\to\infty} \sigma^{(k)} = \lambda_m$.
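To illustrate the eigenvector computation mentioned above: when the shift is taken very close to an already known approximate eigenvalue, $|\xi_m|$ dominates all the other $|\xi_i|$ so strongly that a couple of steps of (5.28) already yield an accurate eigenvector. A sketch with assumed data:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])    # symmetric; eigenvalues (7 ± sqrt(5)) / 2
mu = 4.618                                # rough approximation of the dominant eigenvalue
M = A - mu * np.eye(2)
q = np.array([1.0, 0.0])
for _ in range(2):                        # two inverse-iteration steps suffice here
    z = np.linalg.solve(M, q)
    q = z / np.linalg.norm(z)

lam, V = np.linalg.eigh(A)
x1 = V[:, -1]                             # reference eigenvector
err = min(np.linalg.norm(q - x1), np.linalg.norm(q + x1))   # error up to sign
```

Because $|\lambda_1 - \mu|$ is tiny compared with $|\lambda_2 - \mu|$, each step shrinks the unwanted component by a factor of roughly $10^{-5}$, so `err` is negligible after two steps.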