Chapter 4 Eigenvalue Problems


In this chapter we now look at a third class of important problems in Numerical Linear Algebra, which consists in finding the eigenvalues and eigenvectors of a given $m \times m$ matrix $A$, if and when they exist. As discussed in Chapter 1, numerical methods for finding eigenvalues and eigenvectors differ significantly from what one may do analytically (i.e. construction and solution of the characteristic polynomial). Instead, eigenvalue algorithms are always based on iterative methods. In what follows, we first illustrate how very simple iterative methods can actually work to find specific eigenvalues and eigenvectors of a matrix $A$. For simplicity, these methods assume the matrix $A$ is real and symmetric, so the eigenvalues are real and the eigenvectors are orthogonal. Later, we relax these conditions to construct eigenvalue-revealing algorithms that can find all the eigenvalues, real or complex, of any matrix $A$. Before we proceed, however, let's see a few examples of applied mathematical problems where we want to find the eigenvalues of a matrix.

1. Eigenvalue problems in applied mathematics

The following are very basic examples that come up in simple ODE and PDE problems, which you may encounter in AMS 212A and AMS 214 for instance. Other, more complex examples come up all the time in fluid dynamics, control theory, etc.

1.1. A simple Dynamical Systems problem

Consider the set of $m$ nonlinear autonomous ODEs for $m$ variables written as

$$\dot{x}_i = f_i(\mathbf{x}) \quad \text{for } i = 1 \dots m \qquad (4.1)$$

where $\mathbf{x} = (x_1, x_2, \dots, x_m)^T$, and the functions $f_i$ are any nonlinear functions of the components of $\mathbf{x}$. Suppose a fixed point $\mathbf{x}_\star$ of this system is known, for which $f_i(\mathbf{x}_\star) = 0$ for all $i$. Then, to study the stability of this fixed point, we consider a small displacement $\boldsymbol{\epsilon}$ away from it such that

$$f_i(\mathbf{x}_\star + \boldsymbol{\epsilon}) = f_i(\mathbf{x}_\star) + \sum_{j=1}^m \left.\frac{\partial f_i}{\partial x_j}\right|_{\mathbf{x}_\star} \epsilon_j = \sum_{j=1}^m \left.\frac{\partial f_i}{\partial x_j}\right|_{\mathbf{x}_\star} \epsilon_j \qquad (4.2)$$

The ODE system becomes, near the fixed point,

$$\dot{\epsilon}_i = \sum_{j=1}^m \left.\frac{\partial f_i}{\partial x_j}\right|_{\mathbf{x}_\star} \epsilon_j \quad \text{for } i = 1 \dots m \qquad (4.3)$$

or in other words

$$\dot{\boldsymbol{\epsilon}} = J \boldsymbol{\epsilon} \qquad (4.4)$$

where $J$ is the Jacobian matrix of the original system evaluated at the fixed point. This is now a simple linear system, and we can look for solutions of the kind $\epsilon_i \propto e^{\lambda t}$, which implies solving for the value(s) of $\lambda$ for which

$$J \boldsymbol{\epsilon} = \lambda \boldsymbol{\epsilon} \qquad (4.5)$$

If any of the eigenvalues $\lambda$ has a positive real part, then the fixed point is unstable.

1.2. A simple PDE problem

Suppose you want to solve the diffusion equation problem

$$\frac{\partial f}{\partial t} = \frac{\partial}{\partial x}\left( D(x) \frac{\partial f}{\partial x} \right) \qquad (4.6)$$

This problem is slightly more complicated than usual because the diffusion coefficient is a function of $x$. The first step would consist in looking for separable solutions of the kind

$$f(x,t) = A(x) B(t) \qquad (4.7)$$

where it is easy to show that

$$\frac{dB}{dt} = -\lambda B \qquad (4.8)$$

$$\frac{d}{dx}\left( D(x) \frac{dA}{dx} \right) = -\lambda A \qquad (4.9)$$

where, on physical grounds, we can argue that $\lambda \ge 0$. If the domain is periodic, say of period $2\pi$, we can expand the solution $A(x)$ and the diffusion coefficient $D(x)$ in Fourier modes as

$$A(x) = \sum_m a_m e^{imx} \quad \text{and} \quad D(x) = \sum_m d_m e^{imx} \qquad (4.10)$$

where the $\{d_m\}$ are known, but the $\{a_m\}$ are not. The equation for $A$ becomes

$$\frac{d}{dx}\left[ \sum_n d_n e^{inx} \sum_m i m\, a_m e^{imx} \right] = -\lambda \sum_k a_k e^{ikx} \qquad (4.11)$$

and then

$$-\sum_{m,n} m(m+n)\, d_n a_m e^{i(m+n)x} = -\lambda \sum_k a_k e^{ikx} \qquad (4.12)$$

Projecting onto the mode $e^{ikx}$ we then get

$$\sum_m m k\, d_{k-m}\, a_m \equiv \sum_m B_{km} a_m = \lambda a_k \qquad (4.13)$$

or, in other words, we have another matrix eigenvalue problem $B\mathbf{v} = \lambda\mathbf{v}$, where the coefficients of the matrix $B$ were given above, and the elements of $\mathbf{v}$ are the Fourier coefficients $\{a_n\}$.
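To make this concrete, here is a minimal numerical sketch of the procedure in Python/NumPy. The truncation level $M$ and the particular diffusion coefficient $D(x) = 1 + 0.5\cos x$ are illustrative choices of mine, not from the text:

```python
import numpy as np

# Truncate the Fourier expansion to modes |m| <= M (an illustrative choice).
M = 16
modes = np.arange(-M, M + 1)
x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)

# A hypothetical smooth, positive, 2*pi-periodic diffusion coefficient.
D = 1.0 + 0.5 * np.cos(x)

# Fourier coefficients d_j of D(x), as in Eq. (4.10):
# d_j = (1/N) sum_n D(x_n) exp(-i j x_n).
def d(j):
    return np.mean(D * np.exp(-1j * j * x))

# Assemble B_{km} = m k d_{k-m} from Eq. (4.13) over the truncated mode set.
B = np.array([[m * k * d(k - m) for m in modes] for k in modes])

# Since D(x) is real, d_{-j} = conj(d_j) and B is Hermitian, so its
# eigenvalues are real (and nonnegative, up to truncation error).
lam = np.linalg.eigvalsh(B)
print(lam[:5])  # smallest eigenvalues; lam = 0 corresponds to A(x) = const
```

Using `np.linalg.eigh` instead would also return the eigenvectors, whose components are the Fourier coefficients $\{a_m\}$ of the corresponding eigenmodes $A(x)$.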
The solutions to that problem yield both the desired $\lambda$'s and the eigenmodes $A(x)$, which can then be used to construct the solution to the PDE. Many other examples of eigenvalue problems exist. You are unlikely to go through your PhD without having to solve at least one!

1.3. Localizing Eigenvalues: Gershgorin's Theorem

For some purposes it suffices to know crude information about the eigenvalues, instead of determining their values exactly. For example, we might merely wish to know rough estimates of their locations, such as bounding circles or disks. The simplest such "bound" can be obtained as

$$\rho(A) \le \|A\|. \qquad (4.14)$$

This can be easily shown if we take $\lambda$ to be the eigenvalue with $|\lambda| = \rho(A)$, and if we let $\mathbf{x}$ be an associated eigenvector with $\|\mathbf{x}\| = 1$ (recall we can always normalize eigenvectors!). Then

$$\rho(A) = |\lambda| = \|\lambda\mathbf{x}\| = \|A\mathbf{x}\| \le \|A\| \cdot \|\mathbf{x}\| = \|A\|. \qquad (4.15)$$

A more accurate way of locating eigenvalues is given by Gershgorin's Theorem, which is stated as follows:

Theorem (Gershgorin's Theorem): Let $A = \{a_{ij}\}$ be an $n \times n$ matrix and let $\lambda$ be an eigenvalue of $A$. Then $\lambda$ belongs to one of the disks $Z_k$ given by

$$Z_k = \{ z \in \mathbb{R} \text{ or } \mathbb{C} : |z - a_{kk}| \le r_k \}, \qquad (4.16)$$

where

$$r_k = \sum_{j=1,\, j\ne k}^n |a_{kj}|, \quad k = 1, \dots, n. \qquad (4.17)$$

Moreover, if $m$ of the disks form a connected set $S$, disjoint from the remaining $n - m$ disks, then $S$ contains exactly $m$ of the eigenvalues of $A$, counted according to their algebraic multiplicity.

Proof: Let $A\mathbf{x} = \lambda\mathbf{x}$, and let $k$ be the subscript of a component of $\mathbf{x}$ such that $|x_k| = \max_i |x_i| = \|\mathbf{x}\|_\infty$. The $k$-th component of the eigenvalue equation then reads

$$\lambda x_k = \sum_{j=1}^n a_{kj} x_j, \qquad (4.18)$$

so that

$$(\lambda - a_{kk})\, x_k = \sum_{j=1,\, j\ne k}^n a_{kj} x_j. \qquad (4.19)$$

Therefore

$$|\lambda - a_{kk}| \cdot |x_k| \le \sum_{j=1,\, j\ne k}^n |a_{kj}| \cdot |x_j| \le \sum_{j=1,\, j\ne k}^n |a_{kj}| \cdot \|\mathbf{x}\|_\infty = r_k |x_k|. \qquad (4.20)$$

Dividing through by $|x_k| > 0$ gives $|\lambda - a_{kk}| \le r_k$, i.e. $\lambda \in Z_k$. $\square$

Example: Consider the matrix

$$A = \begin{pmatrix} 4 & 1 & 0 \\ 1 & 0 & -1 \\ 1 & 1 & -4 \end{pmatrix}. \qquad (4.21)$$

Then the eigenvalues must be contained in the disks

$$Z_1 : |\lambda - 4| \le 1 + 0 = 1, \qquad (4.22)$$

$$Z_2 : |\lambda| \le 1 + 1 = 2, \qquad (4.23)$$

$$Z_3 : |\lambda + 4| \le 1 + 1 = 2. \qquad (4.24)$$

Note that $Z_1$ is disjoint from $Z_2 \cup Z_3$; therefore there exists a single eigenvalue in $Z_1$. Indeed, if we compute the true eigenvalues, we get

$$\lambda(A) = \{-3.76010,\, -0.442931,\, 4.20303\}. \qquad (4.25)$$

$\square$

2. Invariant Transformations

As before, we seek a simpler form whose eigenvalues and eigenvectors are determined in easier ways. To do this we need to identify what types of transformations leave eigenvalues (or eigenvectors) unchanged or easily recoverable, and for what types of matrices the eigenvalues (or eigenvectors) are easily determined. (A numerical sanity check of these transformations appears after this list.)

• Shift: A shift subtracts a constant scalar $\sigma$ from each diagonal entry of a matrix, effectively shifting the origin of the spectrum:

$$A\mathbf{x} = \lambda\mathbf{x} \implies (A - \sigma I)\mathbf{x} = (\lambda - \sigma)\mathbf{x}. \qquad (4.26)$$

Thus the eigenvalues of the matrix $A - \sigma I$ are translated, or shifted, from those of $A$ by $\sigma$, but the eigenvectors are unchanged.

• Inversion: If $A$ is nonsingular and $A\mathbf{x} = \lambda\mathbf{x}$ with $\mathbf{x} \ne 0$, then $\lambda \ne 0$ and

$$A^{-1}\mathbf{x} = \frac{1}{\lambda}\,\mathbf{x}. \qquad (4.27)$$

Thus the eigenvalues of $A^{-1}$ are the reciprocals of the eigenvalues of $A$, and the eigenvectors are unchanged.

• Powers: Raising a matrix to a power raises its eigenvalues to the same power, but keeps the eigenvectors unchanged:

$$A\mathbf{x} = \lambda\mathbf{x} \implies A^2\mathbf{x} = \lambda^2\mathbf{x} \implies \cdots \implies A^k\mathbf{x} = \lambda^k\mathbf{x}. \qquad (4.28)$$

• Polynomials: More generally, if

$$p(t) = c_0 + c_1 t + c_2 t^2 + \cdots + c_k t^k \qquad (4.29)$$

is a polynomial of degree $k$, then we define

$$p(A) = c_0 I + c_1 A + c_2 A^2 + \cdots + c_k A^k. \qquad (4.30)$$

Now if $A\mathbf{x} = \lambda\mathbf{x}$, then $p(A)\mathbf{x} = p(\lambda)\mathbf{x}$.

• Similarity: We have already seen this in Eq. 1.39 – Eq. 1.43: if $B = T^{-1}AT$ for some nonsingular matrix $T$, then $B$ has the same eigenvalues as $A$, and the eigenvectors are easily recovered, since $B\mathbf{y} = \lambda\mathbf{y}$ implies $A(T\mathbf{y}) = \lambda(T\mathbf{y})$.
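As a quick numerical sanity check (a sketch of mine, not part of the text), the following NumPy snippet computes the Gershgorin disks of the example matrix from Eq. (4.21) and verifies the shift, inversion, and power transformations on its eigenvalues:

```python
import numpy as np

# Matrix from the Gershgorin example, Eq. (4.21).
A = np.array([[4.0, 1.0,  0.0],
              [1.0, 0.0, -1.0],
              [1.0, 1.0, -4.0]])

# Gershgorin disks: centers a_kk, radii r_k = sum_{j != k} |a_kj|.
centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)
print(list(zip(centers, radii)))   # [(4.0, 1.0), (0.0, 2.0), (-4.0, 2.0)]

lam = np.sort(np.linalg.eigvals(A).real)
print(lam)                         # approx [-3.76010, -0.442931, 4.20303]

# Shift, inversion, and powers act on the eigenvalues as in Eqs. (4.26)-(4.28).
sigma = 2.0
assert np.allclose(np.sort(np.linalg.eigvals(A - sigma * np.eye(3)).real),
                   lam - sigma)
assert np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A)).real),
                   np.sort(1.0 / lam))
assert np.allclose(np.sort(np.linalg.eigvals(A @ A).real),
                   np.sort(lam ** 2))
```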
3. Iterative ideas

See Chapter 27 from the textbook. In this section, as discussed above, all matrices are assumed to be real and symmetric.

3.1. The Power Iteration

We can very easily construct a simple algorithm to reveal the eigenvector corresponding to the largest eigenvalue of a matrix. To do so, we simply apply the matrix $A$ over, and over, and over again to any initial seed vector $\mathbf{x}$. By the properties of the eigenvalues and eigenvectors of real, symmetric matrices, we know that the eigenvectors $\{\mathbf{v}_i\}$, for $i = 1 \dots m$, form an orthogonal basis in which the vector $\mathbf{x}$ can be written as

$$\mathbf{x} = \sum_{i=1}^m \alpha_i \mathbf{v}_i \qquad (4.31)$$

Then

$$A\mathbf{x} = \sum_{i=1}^m \lambda_i \alpha_i \mathbf{v}_i \implies A^n\mathbf{x} = \sum_{i=1}^m \lambda_i^n \alpha_i \mathbf{v}_i \qquad (4.32)$$

If we call the eigenvalue with the largest norm $\lambda_1$, then

$$A^n\mathbf{x} = \lambda_1^n \sum_{i=1}^m \left(\frac{\lambda_i}{\lambda_1}\right)^n \alpha_i \mathbf{v}_i \qquad (4.33)$$

where, by construction, $|\lambda_i/\lambda_1| < 1$ for $i > 1$. As $n \to \infty$, all but the first term in that sum tend to zero, which implies that

$$\lim_{n\to\infty} A^n\mathbf{x} = \lim_{n\to\infty} \lambda_1^n \alpha_1 \mathbf{v}_1 \qquad (4.34)$$

which is aligned with the direction of the first eigenvector $\mathbf{v}_1$. In general, we see that the iteration yields a sequence $\{\mathbf{x}^{(n)}\}$, with $\mathbf{x}^{(n)} = A^n\mathbf{x}$, that converges, after normalization, to the eigenvector $\mathbf{v}_1$:

$$\mathbf{x}_{n+1} \equiv \frac{\mathbf{x}^{(n+1)}}{\|\mathbf{x}^{(n+1)}\|} \approx \frac{\lambda_1^{n+1} \alpha_1 \mathbf{v}_1}{\|\lambda_1^{n+1} \alpha_1 \mathbf{v}_1\|} = \pm\mathbf{v}_1. \qquad (4.35)$$

To approximate the corresponding eigenvalue $\lambda_1$, we compute

$$\lambda^{(n+1)} = \mathbf{x}_{n+1}^T A\, \mathbf{x}_{n+1} \approx (\pm\mathbf{v}_1)^T A (\pm\mathbf{v}_1) = (\pm\mathbf{v}_1)^T \lambda_1 (\pm\mathbf{v}_1) = \lambda_1 \|\mathbf{v}_1\|^2 = \lambda_1.$$
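A minimal NumPy sketch of this algorithm follows; the function name, iteration count, and random test matrix are my own illustrative choices, not from the text:

```python
import numpy as np

def power_iteration(A, n_iter=200, seed=0):
    """Dominant eigenpair of a real symmetric A by power iteration."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])   # random seed vector x
    for _ in range(n_iter):
        x = A @ x                         # apply A over and over, Eq. (4.32)
        x /= np.linalg.norm(x)            # normalize to avoid over/underflow
    lam = x @ A @ x                       # eigenvalue estimate, as in Eq. above
    return lam, x

# Illustrative test on a random symmetric matrix.
rng = np.random.default_rng(1)
S = rng.standard_normal((5, 5))
A = (S + S.T) / 2                         # symmetrize: real eigenvalues,
                                          # orthogonal eigenvectors
lam, v = power_iteration(A)
w = np.linalg.eigvalsh(A)
print(lam, w[np.argmax(np.abs(w))])       # should agree to high accuracy
```

Note that the convergence rate is governed by the ratio $|\lambda_2/\lambda_1|$ in Eq. (4.33): the closer the two largest eigenvalues are in magnitude, the more iterations are needed.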