Nonlinear Eigenvalue Problems


Heinrich Voss
Hamburg University of Technology
Handbook of Linear Algebra, Chapter 115

115.1 Basic Properties
115.2 Analytic matrix functions
115.3 Variational Characterization of Eigenvalues
115.4 General Rayleigh Functionals
115.5 Methods for dense eigenvalue problems
115.6 Iterative projection methods
115.7 Methods using invariant pairs
115.8 The infinite Arnoldi method
References

This chapter considers the nonlinear eigenvalue problem: find a parameter λ such that the linear system

    T(λ)x = 0                                                (115.1)

has a nontrivial solution x, where T(·): D → C^{n×n} is a family of matrices depending on a complex parameter λ ∈ D. It generalizes the linear eigenvalue problem Ax = λx, A ∈ C^{n×n}, where T(λ) = λI − A, and the generalized linear eigenvalue problem, where T(λ) = λB − A with A, B ∈ C^{n×n}.

Nonlinear eigenvalue problems T(λ)x = 0 arise in a variety of applications in science and engineering, such as the dynamic analysis of structures, vibrations of fluid–solid structures, the electronic behavior of quantum dots, and delay eigenvalue problems, to name just a few. Due to its wide range of applications, the quadratic eigenvalue problem

    T(λ)x = λ²Mx + λCx + Kx = 0

is of particular interest, but polynomial, rational, and more general eigenvalue problems appear as well. A standard approach for investigating or numerically solving polynomial eigenvalue problems is linearization, where the original problem is transformed into a generalized linear eigenvalue problem with the same spectrum. Details on linearization and structure preservation are discussed in Chapter 102, Matrix Polynomials.
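Linearization of a quadratic problem can be sketched concretely. The following minimal numpy/scipy sketch uses one common companion form; the matrices M, C, K are small illustrative examples, not taken from the text.

```python
import numpy as np
from scipy.linalg import eig

# Quadratic eigenvalue problem T(lam) x = (lam^2 M + lam C + K) x = 0,
# rewritten as the generalized linear pencil A z = lam B z with z = [x; lam x]:
#     A = [[0, I], [-K, -C]],   B = [[I, 0], [0, M]]
# (one common companion linearization; M, C, K below are illustrative only).
M = np.eye(2)
C = np.array([[3.0, 1.0], [1.0, 2.0]])
K = np.array([[5.0, 2.0], [2.0, 4.0]])
n = M.shape[0]

A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])

lams, Z = eig(A, B)  # the 2n pencil eigenvalues are the eigenvalues of T
for lam in lams:
    T = lam**2 * M + lam * C + K
    # T(lam) must be singular at each computed eigenvalue:
    assert np.linalg.svd(T, compute_uv=False)[-1] < 1e-8
```

The pencil has the same spectrum as the quadratic family, which is exactly the property linearization exploits; for polynomial problems of degree k the same construction yields a pencil of dimension kn.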
This chapter is concerned with the general nonlinear eigenvalue problem, which in general cannot be linearized. Unlike linear and polynomial eigenvalue problems, it may have infinitely many eigenvalues. In practice, however, one is usually interested only in a few eigenvalues close to a target value or a line in the complex plane. If T is linear, then T(λ) = T(0) + λT′(0) has the form of a generalized eigenvalue problem, and in the general case linearization gives the approximation T(λ) = T(0) + λT′(0) + O(λ²), which is again a generalized linear eigenvalue problem. Hence it is not surprising that the (elementwise) derivative T′(λ) of T(λ) plays an important role in the analysis of nonlinear eigenvalue problems. We tacitly assume throughout this chapter that whenever a derivative T′(λ̂) appears, T is analytic in a neighborhood of λ̂, or, in the real case T(·): D → R^{n×n} with D ⊂ R, that T is differentiable in a neighborhood of λ̂. ‖·‖ always denotes the Euclidean vector norm and the spectral matrix norm, respectively, and we use the notation [x; y] := [xᵀ, yᵀ]ᵀ for column vectors.

115.1 Basic Properties

This section presents basic properties of the nonlinear eigenvalue problem (115.1).

Definitions:

As for a linear eigenvalue problem, λ̂ ∈ D is called an eigenvalue of T(·) if T(λ̂)x = 0 has a nontrivial solution x̂ ≠ 0. Then x̂ is called a corresponding eigenvector or right eigenvector, and (λ̂, x̂) is called an eigenpair of T(·). Any nontrivial solution ŷ ≠ 0 of the adjoint equation T(λ̂)*y = 0 is called a left eigenvector of T(·), and the vector–scalar–vector triplet (ŷ, λ̂, x̂) is called an eigentriplet of T(·).

The eigenvalue problem (115.1) is regular if det T(λ) ≢ 0; otherwise it is called singular. The spectrum σ(T(·)) of T(·) is the set of all eigenvalues of T(·).

An eigenvalue λ̂ of T(·) has algebraic multiplicity k if

    (d^ℓ/dλ^ℓ) det T(λ) |_{λ=λ̂} = 0  for ℓ = 0, …, k − 1,  and  (d^k/dλ^k) det T(λ) |_{λ=λ̂} ≠ 0.

An eigenvalue λ̂ is simple if its algebraic multiplicity is one. The geometric multiplicity of an eigenvalue λ̂ is the dimension of the kernel ker(T(λ̂)) of T(λ̂). An eigenvalue λ̂ is called semi-simple if its algebraic and geometric multiplicities coincide.

T(·): J → R^{n×n} is real symmetric if T(λ) = T(λ)ᵀ for every λ ∈ J ⊂ R.
T(·): D → C^{n×n} is complex symmetric if T(λ) = T(λ)ᵀ for every λ ∈ D.
T(·): D → C^{n×n} is Hermitian if D is symmetric with respect to the real line and T(λ̄) = T(λ)* for every λ ∈ D.

Facts:

1. For A ∈ C^{n×n} and T(λ) = λI − A, the terms eigenvalue, (left and right) eigenvector, eigenpair, eigentriplet, spectrum, algebraic and geometric multiplicity, and semi-simple have their standard meaning.
2. For linear eigenvalue problems,
   – eigenvectors corresponding to distinct eigenvalues are linearly independent, which is not the case for nonlinear eigenvalue problems (cf. Example 1);
   – left and right eigenvectors corresponding to distinct eigenvalues are orthogonal, which does not hold for nonlinear eigenproblems (cf. Example 2);
   – the algebraic multiplicities of the eigenvalues sum up to the dimension of the problem, whereas a nonlinear problem may have infinitely many eigenvalues (cf. Example 2) and an eigenvalue may have any algebraic multiplicity (cf. Example 3).
3. [Sch08] If λ̂ is an algebraically simple eigenvalue of T(·), then λ̂ is geometrically simple.
4. [Neu85, Sch08] Let (ŷ, λ̂, x̂) be an eigentriplet of T(·). Then λ̂ is algebraically simple if and only if λ̂ is geometrically simple and ŷ*T′(λ̂)x̂ ≠ 0.
5. [Sch08] Let D ⊂ C and E ⊂ C^d be open sets. Let T: D × E → C^{n×n} be continuously differentiable, and let λ̂ be a simple eigenvalue of T(·, 0) with right and left eigenvectors x̂ and ŷ of unit norm.
Then the first-order perturbation expansion at λ̂ reads

    λ(ε) − λ̂ = (1 / (ŷ*T′(λ̂, 0)x̂)) ∑_{j=1}^{d} ε_j ŷ* (∂T/∂ε_j)(λ̂, 0) x̂ + o(‖ε‖).

The normwise condition number of λ̂ is given by

    κ(λ̂) = lim_{‖ε‖→0} sup |λ(ε) − λ̂| / ‖ε‖ = (1 / |ŷ*T′(λ̂, 0)x̂|) ( ∑_{j=1}^{d} |ŷ* (∂T/∂ε_j)(λ̂, 0) x̂|² )^{1/2}.

6. [Sch08] Let (ŷ, λ̂, x̂) be an eigentriplet of T(·) with simple eigenvalue λ̂. Then, for sufficiently small |λ − λ̂|,

    T(λ)^{−1} = (1 / (λ − λ̂)) · (x̂ŷ*) / (ŷ*T′(λ̂)x̂) + O(1).

7. [Neu85] Let λ̂ be a simple eigenvalue of T(·), and let x̂ be a right eigenvector normalized such that e*x̂ = 1 for some vector e. Then the matrix B := T(λ̂) + T′(λ̂)x̂e* is nonsingular.
8. If T(·) is real symmetric and λ is a real eigenvalue, then left and right eigenvectors corresponding to λ coincide.
9. If T(·) is complex symmetric and x is a right eigenvector, then x̄ is a left eigenvector corresponding to the same eigenvalue.
10. If T(·) is Hermitian, then its eigenvalues are real (and left and right eigenvectors corresponding to λ coincide) or they come in pairs, i.e., if (y, λ, x) is an eigentriplet of T(·), then so is (x, λ̄, y).

Examples:

1. For the quadratic eigenvalue problem T(λ)x = 0 with

    T(λ) := [0 1; −2 3] + λ [7 −5; 10 −8] + λ² [1 0; 0 1],    (115.2)

the distinct eigenvalues λ = 1 and λ = 2 share the eigenvector [1; 2].

2. Let T(λ)x := [e^{iλ²} 1; 1 1] x = 0. Then T(λ)x = 0 has a countable set of eigenvalues √(2kπ), k ∈ N ∪ {0}. λ̂ = 0 is an algebraically double and geometrically simple eigenvalue with left and right eigenvectors x̂ = ŷ = [1; −1], and ŷ*T′(0)x̂ = 0. Every λ_k = √(2kπ), k ≠ 0, is algebraically and geometrically simple with the same eigenvectors x̂, ŷ as before, and ŷ*T′(λ_k)x̂ = 2√(2kπ) i ≠ 0.

3. T(λ) = (λ^k), k ∈ N, has the eigenvalue λ̂ = 0 with algebraic multiplicity k.

115.2 Analytic matrix functions

In this section we consider the eigenvalue problem (115.1), where T(·): D → C^{n×n} is a regular matrix function which is analytic in a neighborhood of an eigenvalue λ̂.

Definitions:

A sequence of vectors x_0, x_1, …, x_{r−1} is called a Jordan chain (of length r) corresponding to λ̂ if x_0 ≠ 0 and

    ∑_{k=0}^{ℓ} (1/k!) (d^k T/dλ^k)(λ̂) x_{ℓ−k} = 0   for ℓ = 0, …, r − 1.

x_0 is an eigenvector, and x_1, …, x_{r−1} are generalized eigenvectors. Let x_0 be an eigenvector corresponding to an eigenvalue λ̂. The maximal length of a Jordan chain that starts with x_0 is called the multiplicity of x_0. An eigenvalue λ̂ is said to be normal if it is a discrete point of σ(T(·)) and the multiplicity of each corresponding eigenvector is finite.

An analytic function x: D → C^n is called a root function of T(·) at λ̂ ∈ D if T(λ̂)x(λ̂) = 0 and x(λ̂) ≠ 0. The multiplicity of λ̂ as a zero of T(λ)x(λ) is called the multiplicity of x(·). The rank of an eigenvector x_0 is the maximum of the multiplicities of all root functions x(·) such that x(λ̂) = x_0. A root function x(·) is called a maximal root function if the multiplicity of x(·) equals the rank of x_0 := x(λ̂).

Let x_0^(1) ∈ ker T(λ̂) be an eigenvector with maximal rank, and let x^(1)(λ) = ∑_{j=0}^{∞} x_j^(1) (λ − λ̂)^j be a maximal root function such that x^(1)(λ̂) = x_0^(1).
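As a concrete check of Example 1 from Section 115.1, the following short numpy sketch verifies that λ = 1 and λ = 2 are both eigenvalues of the quadratic family (115.2) with the shared eigenvector [1; 2]; for a linear problem this is impossible, since eigenvectors of distinct eigenvalues are linearly independent.

```python
import numpy as np

# Numerical check of Example 1: the quadratic matrix family (115.2).
A0 = np.array([[0.0, 1.0], [-2.0, 3.0]])
A1 = np.array([[7.0, -5.0], [10.0, -8.0]])
A2 = np.eye(2)

def T(lam):
    return A0 + lam * A1 + lam**2 * A2

x = np.array([1.0, 2.0])
for lam in (1.0, 2.0):
    # both distinct eigenvalues annihilate the same vector [1; 2]
    assert np.allclose(T(lam) @ x, 0.0)
```

Here T(1) = [8 −4; 8 −4] and T(2) = [18 −9; 18 −9], both of which map [1; 2] to zero, confirming the shared eigenvector.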