AMSC 600 / CMSC 760 Advanced Linear Numerical Analysis, Fall 2007. Arnoldi Methods. Dianne P. O'Leary, © 2006, 2007.


Algorithms that use the Arnoldi Basis

Reference: Chapter 6 of Saad.

• The Arnoldi Basis
• How to use the Arnoldi Basis
• FOM (usually called the Arnoldi iteration)
• GMRES
• Givens Rotations
• Relation between FOM and GMRES iterates
• Practicalities
• Convergence Results for GMRES

The Arnoldi Basis

Recall:

• The Arnoldi basis v_1, ..., v_n is obtained column by column from the relation AV = VH, where H is upper Hessenberg and V^T V = I.
• In particular, h_{ij} = (A v_j, v_i).
• The vectors v_j can be formed by a (modified) Gram-Schmidt process.
• For three algorithms for computing the recurrence, see Sec. 6.3.1 and 6.3.2.

Facts about the Arnoldi Basis

Let's convince ourselves of the following facts:

• The first m columns of V form an orthogonal basis for K_m(A, v_1).
• Let V_m denote the first m columns of V, and H_m denote the leading m × m block of H. Then

      A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T.

• The algorithm breaks down if h_{m+1,m} = 0. In this case, we cannot get a new vector v_{m+1}.
• This breakdown means that K_{m+1}(A, v_1) = K_m(A, v_1). In other words, K_m(A, v_1) is an invariant subspace of A of dimension m.

How to use the Arnoldi Basis

We want to solve Ax = b. We choose x_0 = 0 and v_1 = b/β, where β = ||b||. Let's choose x_m ∈ K_m(A, v_1).

Two choices:

• Projection: make the residual orthogonal to K_m(A, v_1). (Section 6.4.1) This is the Full Orthogonalization Method (FOM).
• Minimization: minimize the norm of the residual over all choices of x in K_m(A, v_1). (Section 6.5) This is the Generalized Minimum Residual Method (GMRES).

FOM (usually called the Arnoldi iteration)

The Arnoldi relation:

      A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T.

If x_m ∈ K_m(A, v_1), then we can express it as x_m = V_m y_m for some vector y_m. The residual is

      r_m = b - A x_m
          = β v_1 - A V_m y_m
          = β v_1 - (V_m H_m + h_{m+1,m} v_{m+1} e_m^T) y_m.

To enforce orthogonality, we want

      0 = V_m^T r_m
        = β V_m^T v_1 - (V_m^T V_m H_m + h_{m+1,m} V_m^T v_{m+1} e_m^T) y_m
        = β e_1 - H_m y_m.

Therefore, given the Arnoldi relation at step m, we solve the linear system H_m y_m = β e_1 and set x_m = V_m y_m. Then

      r_m = b - A x_m
          = b - A V_m y_m
          = β V_m e_1 - (V_m H_m + h_{m+1,m} v_{m+1} e_m^T) y_m
          = V_m (β e_1 - H_m y_m) - h_{m+1,m} v_{m+1} e_m^T y_m
          = - h_{m+1,m} v_{m+1} e_m^T y_m
          = - h_{m+1,m} (y_m)_m v_{m+1}.

Therefore, the norm of the residual is just |h_{m+1,m} (y_m)_m|, which can be computed without forming x_m.

GMRES

The Arnoldi relation:

      A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T.

The iterate: x_m = V_m y_m for some vector y_m. Define H̄_m to be the leading (m+1) × m block of H.

The residual:

      r_m = β v_1 - (V_m H_m + h_{m+1,m} v_{m+1} e_m^T) y_m
          = β v_1 - V_{m+1} H̄_m y_m
          = V_{m+1} (β e_1 - H̄_m y_m).

The norm of the residual:

      ||r_m|| = ||V_{m+1} (β e_1 - H̄_m y_m)|| = ||β e_1 - H̄_m y_m||.

We minimize this quantity with respect to y_m to get the GMRES iterate.
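To make the two derivations concrete, here is a small NumPy sketch (my own illustration, not code from the notes or from Saad; the function names arnoldi and fom_and_gmres_iterates are made up for this example). It builds V_{m+1} and H̄_m by modified Gram-Schmidt, then forms the FOM iterate by solving H_m y = β e_1 and the GMRES iterate by solving the small least-squares problem min ||β e_1 - H̄_m y||.

```python
import numpy as np

def arnoldi(A, b, m):
    """Modified Gram-Schmidt Arnoldi.  Returns V (n x (k+1)), the (k+1) x k
    Hessenberg matrix Hbar with A V[:, :k] = V @ Hbar, and beta = ||b||,
    where k = m unless the iteration breaks down earlier."""
    n = len(b)
    beta = np.linalg.norm(b)
    V = np.zeros((n, m + 1))
    Hbar = np.zeros((m + 1, m))
    V[:, 0] = b / beta
    k = m
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                  # orthogonalize against v_1, ..., v_{j+1}
            Hbar[i, j] = V[:, i] @ w
            w -= Hbar[i, j] * V[:, i]
        Hbar[j + 1, j] = np.linalg.norm(w)
        if Hbar[j + 1, j] == 0.0:               # breakdown: K_{j+1}(A, b) is invariant
            k = j + 1
            break
        V[:, j + 1] = w / Hbar[j + 1, j]
    return V[:, :k + 1], Hbar[:k + 1, :k], beta

def fom_and_gmres_iterates(A, b, m):
    """FOM and GMRES iterates computed from the same Arnoldi relation."""
    V, Hbar, beta = arnoldi(A, b, m)
    k = Hbar.shape[1]
    rhs = np.zeros(k + 1)
    rhs[0] = beta                               # beta * e_1
    # FOM: solve the k x k system H_k y = beta e_1 (undefined if H_k is singular)
    y_fom = np.linalg.solve(Hbar[:k, :], rhs[:k])
    x_fom = V[:, :k] @ y_fom
    fom_resnorm = abs(Hbar[k, k - 1] * y_fom[-1])   # |h_{k+1,k} (y_k)_k|, no x needed
    # GMRES: minimize ||beta e_1 - Hbar_k y|| (dense least squares, for clarity)
    y_gmres, *_ = np.linalg.lstsq(Hbar, rhs, rcond=None)
    x_gmres = V[:, :k] @ y_gmres
    gmres_resnorm = np.linalg.norm(rhs - Hbar @ y_gmres)
    return x_fom, fom_resnorm, x_gmres, gmres_resnorm
```

On a small dense test matrix, x_gmres agrees with np.linalg.solve(A, b) once m is large enough, and fom_resnorm matches ||b - A x_fom|| up to rounding error, as the derivation above predicts.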
Givens Rotations

The linear system involving H_m (an m × m upper Hessenberg matrix) is reduced to triangular form using m - 1 Givens rotations, one per subdiagonal entry. The least squares problem involving H̄_m needs 1 additional Givens rotation, to eliminate h_{m+1,m}.

Relation between FOM and GMRES iterates

FOM: H_m y_m^F = β e_1, or

      H_m^T H_m y_m^F = β H_m^T e_1.

GMRES: minimize ||β e_1 - H̄_m y_m^G|| by solving the normal equations

      H̄_m^T H̄_m y_m^G = β H̄_m^T e_1.

Note that

      H̄_m = [       H_m        ]
            [ h_{m+1,m} e_m^T  ],

so H_m^T H_m and H̄_m^T H̄_m differ by the rank-one matrix h_{m+1,m}^2 e_m e_m^T. Therefore (Sec 6.5.7 and Sec 6.5.8),

• If we compute the FOM iterate, we can easily get the GMRES iterate, and vice versa.
• It can be shown that the smallest residual computed by FOM in iterations 1, ..., m is within a factor of √m of the smallest for GMRES.
• There is a recursion for computing the GMRES iterates given the FOM iterates:

      x_m^G = s_m^2 x_{m-1}^G + c_m^2 x_m^F,

  where s_m^2 + c_m^2 = 1 and c_m is the cosine used in the last step of the reduction of H̄_m to upper triangular form.

Practicalities

Practicality 1: GMRES and FOM can stagnate.

It is possible (but unlikely) that GMRES and FOM make no progress at all until the last iteration.

Example: Let

      A = [ e^T  1 ]
          [  I   0 ],     x_true = e_n,     b = e_1,

where e is the vector with every entry equal to 1. Then K_m(A, b) = span{e_1, ..., e_m}, so the solution vector is orthogonal to K_m(A, b) for m < n.

This is called complete stagnation. Partial stagnation can also occur. The usual remedy is preconditioning (discussed later).

Practicality 2: m must be kept relatively small.

As m gets big,

• The work per iteration becomes overwhelming, since the upper Hessenberg part of H_m is generally dense. In fact, the work for the orthogonalization grows as O(m^2 n).
• The orthogonality relations are increasingly contaminated by round-off error, and this can cause the computation to produce bad approximations to the true GMRES/FOM iterates.

Two remedies:

• restarting (Sec 6.4.1 and Sec 6.5.5)
• truncating the orthogonalization (Sec 6.4.2 and Sec 6.5.6)

Restarting

In the restarting algorithm, we run the Arnoldi iteration for a fixed number of steps, perhaps m = 100. If the resulting solution is not good enough, we repeat, setting v_1 to be the normalized residual. This limits the work per iteration, but removes the finite-termination property. In fact, due to stagnation, it is possible (but unlikely) that the iteration never makes any progress at all! (A sketch of this restarted iteration appears at the end of the Practicalities section.)

Truncating the orthogonalization

In this algorithm, we modify the Arnoldi iteration so that we only orthogonalize against the latest k vectors, rather than orthogonalizing against all of the old vectors. The approximate matrix H̄_m then has only k - 1 nonzeros above the main diagonal in each column, and the work for the orthogonalization is reduced from O(m^2 n) to O(kmn).

We are still working with the same Krylov subspace, but

• The basis vectors v_1, ..., v_m are not exactly orthogonal.
• We are approximating the matrix H̄_m, so we don't minimize the residual (exactly) or make the residual (exactly) orthogonal to the Krylov subspace.

All is not lost, though. In Section 6.5.6, Saad works out the relation between the GMRES residual norm and the one produced from the truncated method (quasi-GMRES).

Note:

• Quasi-GMRES is not often used.
• Restarted GMRES is the standard algorithm.

Practicality 3: Breakdown of the Arnoldi iteration is a good thing.

If the Arnoldi iteration breaks down, with h_{m+1,m} = 0, so that there is no new vector v_{m+1}, then

• H̄_m is just H_m with a zero row appended, so both GMRES and FOM compute the same iterate.
• More importantly, Arnoldi breaks down if and only if x_m^G = x_m^F = x_true. This follows from the fact that the norm of the FOM residual is |h_{m+1,m} (y_m)_m|, together with the previous point.

Practicality 4: These algorithms also work in complex arithmetic.

If A or b is complex rather than real, the algorithms still work, as long as we remember to

• use conjugate transpose instead of transpose;
• use complex Givens rotations (Section 6.5.9):

      [  c̄   s̄ ] [ x ]   [ √(|x|^2 + |y|^2) ]
      [ -s   c ] [ y ] = [         0         ],

  where

      s = y / √(|x|^2 + |y|^2),     c = x / √(|x|^2 + |y|^2).
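Restarting is the standard remedy in practice, so here is a minimal sketch of GMRES(m) along the lines described above (again my own illustration, assuming NumPy and a dense matrix A; a production code would update a QR factorization of H̄ with one Givens rotation per Arnoldi step instead of calling a dense least-squares solver every cycle).

```python
import numpy as np

def gmres_restarted(A, b, m=30, tol=1e-10, max_restarts=100):
    """GMRES(m): run m Arnoldi steps per cycle, then restart from the residual."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta <= tol:
            break
        V = np.zeros((n, m + 1))
        Hbar = np.zeros((m + 1, m))
        V[:, 0] = r / beta
        k = m
        for j in range(m):                      # Arnoldi by modified Gram-Schmidt
            w = A @ V[:, j]
            for i in range(j + 1):
                Hbar[i, j] = V[:, i] @ w
                w -= Hbar[i, j] * V[:, i]
            Hbar[j + 1, j] = np.linalg.norm(w)
            if Hbar[j + 1, j] == 0.0:           # lucky breakdown: exact solve below
                k = j + 1
                break
            V[:, j + 1] = w / Hbar[j + 1, j]
        rhs = np.zeros(k + 1)
        rhs[0] = beta
        y, *_ = np.linalg.lstsq(Hbar[:k + 1, :k], rhs, rcond=None)
        x = x + V[:, :k] @ y                    # add this cycle's correction
    return x
```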
Convergence Results for GMRES (Sec 6.11.4)

Two definitions:

• We say that a matrix A is positive definite if x^T A x > 0 for all x ≠ 0.
  – If A is symmetric, this means that all of its eigenvalues are positive.
  – For a nonsymmetric matrix, this means that the symmetric part of A, which is (1/2)(A + A^T), is positive definite.
• The algorithm that restarts GMRES every m iterations is called GMRES(m).

GMRES(m) converges for positive definite matrices.

Proof: The iteration can't stagnate, because it makes progress at the first iteration. In particular, if we seek x = α v_1 to minimize

      ||r||^2 = ||b - α A v_1||^2
              = b^T b - 2α b^T A v_1 + α^2 v_1^T A^T A v_1,

where b = β v_1 and β > 0, then

      α = (b^T A v_1) / (v_1^T A^T A v_1) > 0

(the numerator is positive because b^T A v_1 = β v_1^T A v_1 > 0), so the residual norm is reduced. By working a little harder, we can show that it is reduced enough to guarantee convergence. []

GMRES(m) is an optimal polynomial method.

• The iterates take the form x_m = x_0 + q_{m-1}(A) r_0, where q_{m-1} is a polynomial of degree at most m - 1.
• Saying that x is in K_m(A, r_0) is equivalent to saying x = q_{m-1}(A) r_0 for some polynomial of degree at most m - 1.
• The resulting residual is

      r_m = r_0 - q_{m-1}(A) A r_0
          = (I - q_{m-1}(A) A) r_0
          = p_m(A) r_0,

  where p_m is a polynomial of degree m satisfying p_m(0) = 1.
• Since GMRES(m) minimizes the norm of the residual, it must pick the optimal polynomial p_m.
• Let A = XΛX^{-1}, where Λ is diagonal and contains the eigenvalues of A. (In other words, we are assuming that A has a complete set of linearly independent eigenvectors, the columns of X.) Then

      ||r_m|| = min_{p_m(0)=1} ||p_m(A) r_0||
              = min_{p_m(0)=1} ||X p_m(Λ) X^{-1} r_0||
              ≤ ||X|| ||X^{-1}|| min_{p_m(0)=1} ||p_m(Λ)|| ||r_0||
              ≤ ||X|| ||X^{-1}|| min_{p_m(0)=1} max_{j=1,...,n} |p_m(λ_j)| ||r_0||.

Therefore, any polynomial that is small on the eigenvalues can give us a bound on the GMRES(m) residual norm, since GMRES(m) picks the optimal polynomial.
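As a quick numerical illustration of the non-stagnation argument (my own check, not part of the notes): for a matrix whose symmetric part is positive definite, the one-dimensional minimizer α = b^T A v_1 / (v_1^T A^T A v_1) is positive, so even the first step of GMRES(m) reduces the residual.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = n * np.eye(n) + (M - M.T)     # symmetric part is n*I, so A is positive definite
b = rng.standard_normal(n)

beta = np.linalg.norm(b)
v1 = b / beta
Av1 = A @ v1
alpha = (b @ Av1) / (Av1 @ Av1)   # minimizer of ||b - alpha * A * v1||
r1 = b - alpha * Av1

print(alpha > 0)                  # True: b^T A v1 = beta * v1^T A v1 > 0
print(np.linalg.norm(r1) < beta)  # True: the first step already makes progress
```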