Superlinear Convergence of Randomized Block Lanczos Algorithm

Total Pages: 16

File Type: PDF, Size: 1020 KB

Superlinear Convergence of Randomized Block Lanczos Algorithm

Qiaochu Yuan, Ming Gu, Bo Li
Department of Mathematics, UC Berkeley, Berkeley, CA, USA
[email protected], [email protected], bo [email protected]

arXiv:1808.06287v1 [math.NA] 20 Aug 2018

Abstract—The low rank approximation of matrices is a crucial component in many data mining applications today. A competitive algorithm for this class of problems is the randomized block Lanczos algorithm - an amalgamation of the traditional block Lanczos algorithm with a randomized starting matrix. While empirically this algorithm performs quite well, there have been scant new theoretical results on its convergence behavior and approximation accuracy, and past results have been restricted to certain parameter settings. In this paper, we present a unified singular value convergence analysis for this algorithm, for all valid choices of the block size parameter. We present novel results on the rate of singular value convergence and show that under certain spectrum regimes, the convergence is superlinear. Additionally, we provide results from numerical experiments that validate our analysis.

Index Terms—low-rank approximation, randomized block Lanczos, block size, singular values.

I. INTRODUCTION

The low rank approximation of matrices is a crucial component in many data mining applications today. In addition to functioning as a stand-alone technique for dimensionality reduction [1], denoising [2], signal processing [3], data compression [4], and more, it has also been incorporated into more complex algorithms as a computational subroutine [5], [6]. As part of large scale modern data processing, low rank approximations help to reveal important structural information in the raw data and to transform the data into forms that are more efficient for computation, transmission, and storage.

The singular value decomposition (SVD) is a matrix factorization of both theoretical and practical importance, and it has a number of useful properties related to matrix nearness and rank. In particular, it is used to identify nearby matrices of lower rank, and, leaving aside the question of computational complexity, it is known that the rank-k truncated SVD is the "gold standard" for approximating a matrix by another matrix of rank at most k [7].

While procedures for computing the exact rank-k truncated SVD have existed since the 1960s [8], the computational cost of these algorithms is prohibitive at the scale of many of today's datasets. The recent applications of low rank matrix approximation techniques to big-data problems differ in both the computational efficiency requirement and the accuracy requirement of the algorithms. Firstly, we are increasingly leaving behind the era of moderately sized matrices and entering an age of web-scale datasets and big-data applications. The matrices arising from such are often extraordinarily large, exceeding the order of 10^6 in one or both of the dimensions [9]–[11], and place much higher computational efficiency demands on the algorithms. Secondly, while the truncated SVD may be the final desired object for previous scientific computing questions, for big-data applications it is usually an intermediate representation for the overall classification or regression task. Empirically, the final accuracy of the task only weakly depends on the accuracy of the matrix approximation [12]. Thus, while previous variants of truncated SVD algorithms focused on computing up to full double precision, newer iterations of these algorithms aimed at big-data applications can comfortably get by with only 2-3 digits of accuracy.

These considerations have led to the development of randomized variants of traditional SVD algorithms suited to large, sparse matrices, in particular randomized subspace iteration (RSI) and randomized block Lanczos (RBL) [13]–[16]. By applying either a randomized sketching or projecting operation on the original matrix, these algorithms balance reducing computational complexity with producing an acceptably accurate approximation. While empirically they have been shown to be effective and have been widely adopted by popular software packages, e.g. [17], there has been scant new theoretical work on the convergence guarantees of the latter algorithm, the better performing but more complicated randomized block Lanczos algorithm.

In this paper, we present novel theoretical convergence results concerning the rate of singular value convergence for the RBL algorithm, along with numerical experiments supporting these results. Our analysis presents a unified singular value convergence theory for variants of the Block Lanczos algorithm, for all valid parameter choices of block size b. To our knowledge, all previous results in the literature are applicable only for the choice of b ≥ k, the target rank. We present a generalized theorem, applicable to all block sizes b, which coincides asymptotically with previous results for the case b ≥ k, while providing equally strong rates of convergence for the case b < k.

In Section II, we present the randomized block Lanczos algorithm and discuss some previous convergence results for this algorithm. In Section III, we dive into our main theoretical result and its derivation, followed by corollaries for special cases. In Section IV, we investigate the behavior of this algorithm for different parameter settings and empirically verify the results of the previous section. Finally, we give concluding remarks in Section V.

II. BACKGROUND

A. Preliminaries

Throughout this paper, our analysis assumes exact arithmetic.

We denote matrices by bold-faced uppercase letters, e.g. M; entries of matrices by the plain-faced lowercase letter that the entry belongs to, e.g. m11; and block submatrices by the bold-faced or script-faced uppercase letter that the submatrix belongs to, subscripted by position or dimensions, e.g. M11 or Ma×b. Double numerical subscripts denote the position of the element or the submatrix, i.e. M11 and m11 are the topmost leftmost subblock or entry of M respectively. Dimension subscripts denote the dimensions of a submatrix, when such information is relevant, i.e. Ma×b denotes a subblock of M that has dimensions a × b.

Constants are denoted by script-faced uppercase or lowercase letters, e.g. C or α, when they are asymptotically insignificant, i.e. constant with respect to the convergence parameter.

The SVD of a matrix A is defined as the factorization

A = UΣVᵀ    (1)

where U = [u1 ··· un] and V = [v1 ··· vn] are orthogonal matrices whose columns are the sets of left and right singular vectors respectively, and Σ is a diagonal matrix whose entries Σii = σi are the singular values, ordered descendingly σ1 ≥ ··· ≥ σn ≥ 0.

The rank-k truncated SVD of a matrix is defined as

svdk(A) = UkΣkVkᵀ    (2)

where Uk = [u1 ··· uk] and Vk = [v1 ··· vk] contain the first k left and right singular vectors respectively, and Σk = diag(σ1, ··· , σk).

The ith singular value of an arbitrary matrix M is denoted by σi(M), or simply σi when the matrix in question is clear from context.

The pth degree Chebyshev polynomial is defined by the recurrence

T0(x) ≡ 1    (3)
T1(x) ≡ x    (4)
Tp(x) ≡ 2xTp−1(x) − Tp−2(x)    (5)

Alternatively, it may be expressed as

Tp(x) = (1/2) [ (x + √(x² − 1))^p + (x + √(x² − 1))^(−p) ]    (6)

for |x| > 1, and estimated as

Tp(1 + ε) ≈ (1/2) (1 + ε + √(2ε))^p    (7)

B. The Algorithm

The randomized block Lanczos algorithm is a straightforward combination of the classical block Lanczos algorithm [18] with the added element of a randomized starting matrix V = AΩ.

The pseudocode for this algorithm is outlined in Algorithm 1. Of the parameters of the algorithm, k (target rank) is problem dependent, while b (block size) and q (number of iterations) are chosen by the user to control the quality and computational cost of the approximation. The algorithm requires the choices of b, q to satisfy qb ≥ k, to ensure that the Krylov subspace is at least k-dimensional.

Algorithm 1 Randomized block Lanczos algorithm pseudocode
Input: A ∈ R^{m×n}; k, target rank; b, block size; q, number of Lanczos iterations; Ω ∈ R^{n×b}, random Gaussian matrix
Output: Bk ∈ R^{m×n}, a rank-k approximation to A
1: Form the block column Krylov subspace matrix K = [AΩ, (AAᵀ)AΩ, ··· , (AAᵀ)^q AΩ].
2: Compute an orthonormal basis Q for the column span of K, using e.g. QR: Q ← qr(K).
3: Project A onto the Krylov subspace by computing B = QQᵀA.
4: Compute the k-truncated SVD Bk = svdk(B) = svdk(QQᵀA) = Q · svdk(QᵀA).
5: Return Bk.

We present the algorithm pseudocode in this form in order to highlight the mathematical ideas that are at the core of this algorithm. It is well known that a naive implementation of the Lanczos algorithm is plagued by loss of orthogonality of the Lanczos vectors due to roundoff errors [19]. A practical implementation of Algorithm 1 should involve, at the very least, a reorganization of the computation to use the three-term recurrence and bidiagonalization [20], and reorthogonalizations of the Lanczos vectors at each step using one of the numerous schemes that have been proposed [20]–[22]. (An illustrative sketch of the naive steps follows at the end of this section.)

C. Previous Work

Historically, the classical Lanczos algorithm was developed as an eigenvalue algorithm for symmetric matrices. Its convergence analysis focused on theorems concerning the approximation quality of the approximant's eigenvalues as a function of k, the target rank. The analysis relied heavily on the k-dimensional Krylov subspace and the choice of the associated k-degree Chebyshev polynomial. Classical results in this line of inquiry include those by Kaniel [23], Paige [24], Underwood [25], and Saad [26].

More recently, while there has been much work on the analysis of randomized algorithms, such efforts have been focused mostly on RBL's simpler cousins, such as randomized power iteration or randomized subspace iteration [12], [15].
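To make the steps of Algorithm 1 concrete, here is a minimal NumPy sketch that follows the naive formulation above directly: explicit block Krylov matrix, one QR factorization, and a truncated SVD of QᵀA. It is illustrative only and not the authors' implementation; a practical code would use the three-term recurrence with reorthogonalization instead of forming K explicitly. The function name, seeding, and test matrix are assumptions added for the example.

```python
import numpy as np

def randomized_block_lanczos(A, k, b, q, rng=None):
    """Naive sketch of Algorithm 1: rank-k approximation of A.

    Forms K = [A@Omega, (A@A.T)@A@Omega, ..., (A@A.T)^q @ A@Omega]
    explicitly, so it inherits the orthogonality-loss issues discussed
    in Section II-B and is for illustration only.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    m, n = A.shape
    assert q * b >= k, "need qb >= k so the Krylov subspace has dimension >= k"

    Omega = rng.standard_normal((n, b))   # random Gaussian starting matrix
    block = A @ Omega                     # A @ Omega, shape (m, b)
    blocks = [block]
    for _ in range(q):                    # (A A^T)^i A Omega, i = 1..q
        block = A @ (A.T @ block)
        blocks.append(block)
    K = np.hstack(blocks)                 # block Krylov matrix, (m, (q+1)b)

    Q, _ = np.linalg.qr(K)                # orthonormal basis of span(K)
    C = Q.T @ A                           # B = Q @ C = Q Q^T A
    Uc, s, Vt = np.linalg.svd(C, full_matrices=False)
    # k-truncated SVD of B, computed via the small matrix Q^T A
    return ((Q @ Uc[:, :k]) * s[:k]) @ Vt[:k, :]

# Tiny usage check on a matrix with geometrically decaying spectrum:
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 100)) * (0.8 ** np.arange(100))
Bk = randomized_block_lanczos(A, k=10, b=5, q=4)
print(np.linalg.norm(A - Bk, 2))               # RBL rank-10 error
print(np.linalg.svd(A, compute_uv=False)[10])  # optimal rank-10 error, sigma_11
```

On decaying spectra like this test matrix, the computed error approaches the optimal value σ_{k+1} already for small q, which is the rapid convergence behavior the paper analyzes.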
Recommended publications
  • Deflation by Restriction for the Inverse-Free Preconditioned Krylov Subspace Method
    NUMERICAL ALGEBRA, CONTROL AND OPTIMIZATION, Volume 6, Number 1, March 2016, pp. 55–71. doi:10.3934/naco.2016.6.55

    DEFLATION BY RESTRICTION FOR THE INVERSE-FREE PRECONDITIONED KRYLOV SUBSPACE METHOD

    Qiao Liang, Department of Mathematics, University of Kentucky, Lexington, KY 40506-0027, USA
    Qiang Ye*, Department of Mathematics, University of Kentucky, Lexington, KY 40506-0027, USA
    (Communicated by Xiaoqing Jin)

    Abstract. A deflation by restriction scheme is developed for the inverse-free preconditioned Krylov subspace method for computing a few extreme eigenvalues of the definite symmetric generalized eigenvalue problem Ax = λBx. The convergence theory for the inverse-free preconditioned Krylov subspace method is generalized to include this deflation scheme and numerical examples are presented to demonstrate the convergence properties of the algorithm with the deflation scheme.

    1. Introduction. The definite symmetric generalized eigenvalue problem for (A, B) is to find λ ∈ R and x ∈ Rⁿ with x ≠ 0 such that

        Ax = λBx    (1)

    where A, B are n × n symmetric matrices and B is positive definite. The eigenvalue problem (1), also referred to as a pencil eigenvalue problem (A, B), arises in many scientific and engineering applications, such as structural dynamics, quantum mechanics, and machine learning. The matrices involved in these applications are usually large and sparse and only a few of the eigenvalues are desired. Iterative methods such as the Lanczos algorithm and the Arnoldi algorithm are some of the most efficient numerical methods developed in the past few decades for computing a few eigenvalues of a large scale eigenvalue problem, see [1, 11, 19].
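    For orientation, problem (1) can also be handed directly to a dense LAPACK-based solver; the snippet below (an illustrative setup, not the inverse-free preconditioned method itself) builds a definite pencil and computes a few extreme eigenpairs with SciPy:

    ```python
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(0)
    n = 100
    # Symmetric A and symmetric positive definite B, as the definite pencil requires.
    M = rng.standard_normal((n, n))
    A = (M + M.T) / 2
    R = rng.standard_normal((n, n))
    B = R @ R.T + n * np.eye(n)       # diagonal shift guarantees positive definiteness

    # Smallest three eigenpairs of A x = lambda B x (vecs are B-orthonormal).
    vals, vecs = eigh(A, B, subset_by_index=[0, 2])
    print(vals)
    # Residual check: ||A x - lambda B x|| should be near machine precision.
    print(np.linalg.norm(A @ vecs - (B @ vecs) * vals))
    ```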
  • A Generalized Eigenvalue Algorithm for Tridiagonal Matrix Pencils Based on a Nonautonomous Discrete Integrable System
    A generalized eigenvalue algorithm for tridiagonal matrix pencils based on a nonautonomous discrete integrable system

    Kazuki Maeda*, Satoshi Tsujimoto
    Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan

    Abstract: A generalized eigenvalue algorithm for tridiagonal matrix pencils is presented. The algorithm appears as the time evolution equation of a nonautonomous discrete integrable system associated with a polynomial sequence which has some orthogonality on the support set of the zeros of the characteristic polynomial for a tridiagonal matrix pencil. The convergence of the algorithm is discussed by using the solution to the initial value problem for the corresponding discrete integrable system.

    Keywords: generalized eigenvalue problem, nonautonomous discrete integrable system, RII chain, dqds algorithm, orthogonal polynomials
    2010 MSC: 37K10, 37K40, 42C05, 65F15

    1. Introduction. Applications of discrete integrable systems to numerical algorithms are important and fascinating topics. Since the end of the twentieth century, a number of relationships between classical numerical algorithms and integrable systems have been studied (see the review papers [1–3]). On this basis, new algorithms based on discrete integrable systems have been developed: (i) singular value algorithms for bidiagonal matrices based on the discrete Lotka–Volterra equation [4, 5], (ii) Padé approximation algorithms based on the discrete relativistic Toda lattice [6] and the discrete Schur flow [7], (iii) eigenvalue algorithms for band matrices based on the discrete hungry Lotka–Volterra equation [8] and the nonautonomous discrete hungry Toda lattice [9], and (iv) algorithms for computing D-optimal designs based on the nonautonomous discrete Toda (nd-Toda) lattice [10] and the discrete modified KdV equation [11].
  • Analysis of a Fast Hankel Eigenvalue Algorithm
    Analysis of a Fast Hankel Eigenvalue Algorithm

    Franklin T. Luk (Department of Computer Science, Rensselaer Polytechnic Institute, Troy, New York, USA) and Sanzheng Qiao (Department of Computing and Software, McMaster University, Hamilton, Ontario, Canada)

    ABSTRACT. This paper analyzes the important steps of an O(n² log n) algorithm for finding the eigenvalues of a complex Hankel matrix. The three key steps are a Lanczos-type tridiagonalization algorithm, a fast FFT-based Hankel matrix-vector product procedure, and a QR eigenvalue method based on complex-orthogonal transformations. In this paper we present an error analysis of the three steps as well as results from numerical experiments.

    Keywords: Hankel matrix, eigenvalue decomposition, Lanczos tridiagonalization, Hankel matrix-vector multiplication, complex-orthogonal transformations, error analysis

    1. INTRODUCTION. The eigenvalue decomposition of a structured matrix has important applications in signal processing. In this paper we consider a complex Hankel matrix H ∈ C^{n×n}, constant along its anti-diagonals:

        H = [ h1    h2    ···  hn−1   hn
              h2    h3    ···  hn     hn+1
              ⋮                        ⋮
              hn−1  hn    ···  h2n−3  h2n−2
              hn    hn+1  ···  h2n−2  h2n−1 ]

    The authors proposed a fast algorithm for finding the eigenvalues of H. The key step is a fast Lanczos tridiagonalization algorithm which employs a fast Hankel matrix-vector multiplication based on the Fast Fourier Transform (FFT). Then the algorithm performs a QR-like procedure using the complex-orthogonal transformations in the diagonalization to find the eigenvalues. In this paper we present an error analysis and discuss
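    The fast Hankel matrix-vector product underlying the tridiagonalization step can be sketched with FFT-based convolution; the following toy check (my construction, not the authors' code) compares it against the dense product:

    ```python
    import numpy as np
    from scipy.linalg import hankel
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(0)
    n = 256
    h = rng.standard_normal(2 * n - 1) + 1j * rng.standard_normal(2 * n - 1)

    # Dense n x n Hankel matrix: H[i, j] = h[i + j] (constant anti-diagonals).
    H = hankel(h[:n], h[n - 1:])
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

    # Fast product: H @ x is the middle slice of the convolution of h with
    # the reversed x, computable in O(n log n) via the FFT.
    y_fast = fftconvolve(h, x[::-1])[n - 1:2 * n - 1]

    print(np.linalg.norm(H @ x - y_fast))   # ~1e-13: the two products agree
    ```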
  • Introduction to Mathematical Programming
    Introduction to Mathematical Programming
    Ming Zhong
    Lecture 24, October 29, 2018

    Matrix Decomposition. We have discussed several decomposition techniques (for A ∈ R^{n×n}):
    - A = LU when A is non-singular (Gaussian elimination).
    - A = LLᵀ when A is symmetric positive definite (Cholesky).
    - A = QR for any real square matrix (unique when A is non-singular).
    - A = PDP⁻¹ when A has n linearly independent eigenvectors.
    - A = PJP⁻¹ for any real square matrix.
    We are now ready to discuss the Singular Value Decomposition (SVD),

        A = UΣV*

    for any A ∈ C^{m×n}, with U ∈ C^{m×m}, V ∈ C^{n×n}, and Σ ∈ R^{m×n}.

    Some Brief History. Going back in time:
    - It was originally developed by differential geometers, for the equivalence of bilinear forms by independent orthogonal transformations.
    - Eugenio Beltrami in 1873, and independently Camille Jordan in 1874, for bilinear forms.
    - James Joseph Sylvester in 1889 did SVD for real square matrices, independently; singular values = canonical multipliers of the matrix.
    - Autonne in 1915 did SVD via polar decomposition.
    - Carl Eckart and Gale Young in 1936 gave the proof of SVD of rectangular and complex matrices, as a generalization of the principal axis transformation of Hermitian matrices.
    - Erhard Schmidt in 1907 defined SVD for integral operators.
    - Émile Picard in 1910 was the first to call the numbers singular values.
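    A few lines of NumPy confirm the factorization A = UΣV* and the properties listed above (an illustrative check added here, not part of the original lecture notes):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 4))

    U, s, Vt = np.linalg.svd(A)          # full SVD: U is 6x6, Vt is 4x4
    Sigma = np.zeros_like(A)             # rectangular 6x4 Sigma
    np.fill_diagonal(Sigma, s)

    print(np.allclose(A, U @ Sigma @ Vt))        # A = U Sigma V^T
    print(np.allclose(U.T @ U, np.eye(6)))       # U orthogonal
    print(np.allclose(Vt @ Vt.T, np.eye(4)))     # V orthogonal
    print(np.all(np.diff(s) <= 0))               # singular values descending
    ```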
  • Inexact and Nonlinear Extensions of the FEAST Eigenvalue Algorithm
    University of Massachusetts Amherst, ScholarWorks@UMass Amherst
    Doctoral Dissertations, October 2018

    Inexact and Nonlinear Extensions of the FEAST Eigenvalue Algorithm
    Brendan E. Gavin, University of Massachusetts Amherst

    Part of the Numerical Analysis and Computation Commons, and the Numerical Analysis and Scientific Computing Commons.

    Recommended Citation: Gavin, Brendan E., "Inexact and Nonlinear Extensions of the FEAST Eigenvalue Algorithm" (2018). Doctoral Dissertations. 1341. https://doi.org/10.7275/12360224, https://scholarworks.umass.edu/dissertations_2/1341

    A Dissertation Presented by BRENDAN GAVIN, submitted to the Graduate School of the University of Massachusetts Amherst in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY, September 2018, Electrical and Computer Engineering. © Copyright by Brendan Gavin 2018. All Rights Reserved. Approved as to style and content by: Eric Polizzi, Chair; Zlatan Aksamija, Member.
  • Direct Solvers for Symmetric Eigenvalue Problems
    John von Neumann Institute for Computing

    Direct Solvers for Symmetric Eigenvalue Problems
    Bruno Lang

    Published in Modern Methods and Algorithms of Quantum Chemistry, Proceedings, Second Edition, J. Grotendorst (Ed.), John von Neumann Institute for Computing, Jülich, NIC Series, Vol. 3, ISBN 3-00-005834-6, pp. 231-259, 2000. © 2000 by John von Neumann Institute for Computing. http://www.fz-juelich.de/nic-series/

    Bruno Lang, Aachen University of Technology, Computing Center, Seffenter Weg 23, 52074 Aachen, Germany. E-mail: [email protected]

    This article reviews classical and recent direct methods for computing eigenvalues and eigenvectors of symmetric full or banded matrices. The ideas underlying the methods are presented, and the properties of the algorithms with respect to accuracy and performance are discussed. Finally, pointers to relevant software are given. This article reviews classical, as well as recent state-of-the-art, direct solvers for standard and generalized symmetric eigenvalue problems. In Section 1 we explain what direct solvers for symmetric eigenvalue problems are. Section 2 describes what we may reasonably expect from an eigenvalue solver in terms of accuracy and how algorithms should be structured in order to minimize the computing time, and introduces two basic tools on which most eigensolvers are based, namely similarity transformations and deflation.
  • Structured Eigenvalue Problems – Structure-Preserving Algorithms, Structured Error Analysis Draft of Chapter for the Second Edition of Handbook of Linear Algebra
    Structured Eigenvalue Problems – Structure-Preserving Algorithms, Structured Error Analysis
    Draft of chapter for the second edition of Handbook of Linear Algebra
    Heike Faßbender, TU Braunschweig

    1 Introduction. Many eigenvalue problems arising in practice are structured due to (physical) properties induced by the original problem. Structure can also be introduced by discretization and linearization techniques. Preserving this structure can help preserve physically relevant symmetries in the eigenvalues of the matrix and may improve the accuracy and efficiency of an eigenvalue computation. This is well known for symmetric matrices A = Aᵀ ∈ R^{n×n}: every eigenvalue is real and every right eigenvector is also a left eigenvector belonging to the same eigenvalue. Many numerical methods, such as QR, Arnoldi and Jacobi-Davidson, automatically preserve symmetric matrices (and hence compute only real eigenvalues), so unavoidable round-off errors cannot result in the computation of complex-valued eigenvalues. Algorithms tailored to symmetric matrices (e.g., divide and conquer or Lanczos methods) take much less computational effort and sometimes achieve high relative accuracy in the eigenvalues and, having the right representation of A at hand, even in the eigenvectors. Another example is matrices for which the complex eigenvalues with nonzero real part theoretically appear in a pairing λ, λ̄, λ⁻¹, λ̄⁻¹. Using a general eigenvalue algorithm such as QR or Arnoldi results here in computed eigenvalues which in general do not display this eigenvalue pairing any longer. This is due to the fact that each eigenvalue is subject to unstructured rounding errors, so that each eigenvalue is altered in a slightly different way (see left picture in Figure 1).

    [Figure 1: Effect of unstructured and structured rounding errors; round = original eigenvalue, pentagon = unstructured perturbation, star = structured perturbation.]
  • 5. The Symmetric Eigenproblem and Singular Value Decomposition
    The Symmetric Eigenproblem and Singular Value Decomposition

    5.1. Introduction. We discuss perturbation theory (in section 5.2), algorithms (in sections 5.3 and 5.4), and applications (in section 5.5 and elsewhere) of the symmetric eigenvalue problem. We also discuss its close relative, the SVD. Since the eigendecomposition of the symmetric matrix H = [0 A; Aᵀ 0] and the SVD of A are very simply related (see Theorem 3.3), most of the perturbation theorems and algorithms for the symmetric eigenproblem extend to the SVD. As discussed at the beginning of Chapter 4, one can roughly divide the algorithms for the symmetric eigenproblem (and SVD) into two groups: direct methods and iterative methods. This chapter considers only direct methods, which are intended to compute all (or a selected subset) of the eigenvalues and (optionally) eigenvectors, costing O(n³) operations for dense matrices. Iterative methods are discussed in Chapter 7. Since there has been a great deal of recent progress in algorithms and applications of symmetric eigenproblems, we will highlight three examples:
    - A high-speed algorithm for the symmetric eigenproblem based on divide-and-conquer is discussed in section 5.3.3. This is the fastest available algorithm for finding all eigenvalues and all eigenvectors of a large dense or banded symmetric matrix (or the SVD of a general matrix). It is significantly faster than the previous "workhorse" algorithm, QR iteration.^17
    - High-accuracy algorithms based on the dqds and Jacobi algorithms are discussed in sections 5.2.1, 5.4.2, and 5.4.3. These algorithms can find tiny eigenvalues (or singular values) more accurately than alternative
    ^17 There is yet more recent work [201, 203] on an algorithm based on inverse iteration.
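    The relation invoked above (Theorem 3.3 of the source) says the eigenvalues of the symmetric embedding H = [0 A; Aᵀ 0] are plus and minus the singular values of A. A short NumPy check of this fact (an illustration added here, not from the chapter):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    m, n = A.shape

    # Symmetric embedding: eigenvalues of H come in pairs +/- sigma_i(A).
    H = np.block([[np.zeros((m, m)), A],
                  [A.T, np.zeros((n, n))]])

    eigs = np.sort(np.linalg.eigvalsh(H))
    svals = np.linalg.svd(A, compute_uv=False)

    print(np.allclose(eigs, np.sort(np.concatenate([svals, -svals]))))  # True
    ```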
  • AM 205: Lecture 22
    AM 205: lecture 22

    - Final project proposal due by 6pm on Thu Nov 17. Email Chris or the TFs to set up a meeting. Those who have completed this will see four proposal points on Canvas.
    - Today: eigenvalue algorithms, QR algorithm

    Sensitivity of Eigenvalue Problems

    Weyl's Theorem: Let λ1 ≤ λ2 ≤ ··· ≤ λn and λ̃1 ≤ λ̃2 ≤ ··· ≤ λ̃n be the eigenvalues of hermitian matrices A and A + δA, respectively. Then

        max_{i=1,...,n} |λi − λ̃i| ≤ ‖δA‖2.

    Hence in the hermitian case, each perturbed eigenvalue must be in the disk^1 of its corresponding unperturbed eigenvalue! (^1 In fact, eigenvalues of a hermitian matrix are real, so the disk here is actually an interval in R.)

    The Bauer–Fike Theorem relates to perturbations of the whole spectrum. We can also consider perturbations of individual eigenvalues. Suppose, for simplicity, that A ∈ C^{n×n} is symmetric, and consider the perturbed eigenvalue problem

        (A + E)(v + ∆v) = (λ + ∆λ)(v + ∆v).

    Expanding this equation, dropping second order terms, and using Av = λv gives

        A∆v + Ev ≈ ∆λ v + λ∆v.

    Premultiply A∆v + Ev ≈ ∆λ v + λ∆v by v* to obtain

        v*A∆v + v*Ev ≈ ∆λ v*v + λ v*∆v.

    Noting that v*A∆v = (v*A∆v)* = ∆v*Av = λ∆v*v = λ v*∆v leads to

        v*Ev ≈ ∆λ v*v, or ∆λ = v*Ev / v*v.

    Finally, we obtain

        |∆λ| ≈ |v*Ev| / ‖v‖2² ≤ ‖v‖2 ‖Ev‖2 / ‖v‖2² = ‖E‖2,

    so that |∆λ| ≲ ‖E‖2. We observe that
    - the perturbation bound does not depend on cond(V) when we consider only an individual eigenvalue;
    - this individual eigenvalue perturbation bound
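    Both bounds above are easy to verify numerically. The sketch below (added for illustration; the matrix and perturbation are assumptions) checks Weyl's theorem and the first-order estimate ∆λ ≈ v*Ev for a unit eigenvector v:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    M = rng.standard_normal((n, n))
    A = (M + M.T) / 2                   # symmetric (hermitian) matrix

    N = rng.standard_normal((n, n))
    E = 1e-6 * (N + N.T) / 2            # small symmetric perturbation

    lam, V = np.linalg.eigh(A)          # eigenvalues ascending, V orthonormal
    lam_pert = np.linalg.eigvalsh(A + E)

    # Weyl: every |lambda_i - lambda~_i| is bounded by ||E||_2.
    print(np.max(np.abs(lam - lam_pert)) <= np.linalg.norm(E, 2))   # True

    # First-order estimate for one (simple) eigenvalue: dlambda ~ v^T E v.
    i = 0
    dl_exact = lam_pert[i] - lam[i]
    dl_first = V[:, i] @ E @ V[:, i]
    print(dl_exact, dl_first)           # agree up to O(||E||^2)
    ```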
  • Krylov Type Methods for Large Scale Eigenvalue Computations
    University of Zagreb, Department of Mathematics
    Doctor of Philosophy Dissertation

    Krylov Type Methods for Large Scale Eigenvalue Computations
    by Zvonimir Bujanović
    Supervisor: prof. Zlatko Drmač
    Zagreb, 2011.

    Contents
    Introduction
    1 The Eigenvalue Problem
      1.1 Notation
      1.2 Eigenvalues, eigenvectors and eigenspaces
      1.3 Applications of the eigenvalue problem
        1.3.1 Electromagnetic fields in cavities
        1.3.2 The PageRank algorithm
      1.4 An overview of the perturbation theory
        1.4.1 Perturbations of eigenvalues
        1.4.2 Perturbations of invariant spaces
      1.5 Approximations from a subspace
        1.5.1 Evaluating quality of a subspace
        1.5.2 Choosing approximations for eigenpairs
    2 Arnoldi-type Algorithms
      2.1 The power method
      2.2 Simultaneous iteration
        2.2.1 The QR algorithm
      2.3 Krylov subspaces
        2.3.1 Ritz approximations from Krylov subspaces
      2.4 The basic Arnoldi algorithm
        2.4.1 The Hermitian case: the Lanczos algorithm
      2.5 Restarting the Arnoldi algorithm
        2.5.1 The implicit restart
        2.5.2 The Krylov-Schur algorithm
        2.5.3 Convergence of the restarted Arnoldi method
    3 Using Krylov–Schur algorithm with arbitrary shifts
      3.1 A more general restarting scheme
      3.2 Restarting and the pole placement problem
      3.3 Matrix balancing and the computation of the Ritz values
    4 Ritz values of normal matrices
      4.1 Minimax theorem and the Cauchy interlacing property
  • PFEAST: A High Performance Sparse Eigenvalue Solver Using Distributed-Memory Linear Solvers
    PFEAST: A High Performance Sparse Eigenvalue Solver Using Distributed-Memory Linear Solvers

    James Kestyn*, Vasileios Kalantzis†, Eric Polizzi*, Yousef Saad†
    *Electrical and Computer Engineering Department, University of Massachusetts, Amherst, MA, U.S.A.
    †Computer Science and Engineering Department, University of Minnesota, Minneapolis, MN, U.S.A.

    Abstract—The FEAST algorithm and eigensolver for interior eigenvalue problems naturally possesses three distinct levels of parallelism. The solver is then suited to exploit modern computer architectures containing many interconnected processors. This paper highlights a recent development within the software package that allows the dominant computational task, solving a set of complex linear systems, to be performed with a distributed memory solver. The software, written with a reverse-communication-interface, can now be interfaced with any generic MPI linear-system solver using a customized data distribution for the eigenvector solutions. This work utilizes two common "black-box" distributed memory linear-systems solvers (Cluster-MKL-Pardiso and MUMPS), as well as our own application-specific domain-decomposition MPI solver, for a collection of 3-dimensional finite-element systems.

    FEAST is an algorithm for computing interior eigenpairs that makes use of a rational filter obtained from an approximation of the spectral projector. FEAST can be applied for solving both standard and generalized forms of Hermitian or non-Hermitian problems, and belongs to the family of contour integration eigensolvers [32], [33], [3], [14], [15], [4]. Once a given search interval is selected, FEAST's main computational task consists of a numerical quadrature computation that involves solving independent linear systems along a complex contour. The algorithm can exploit natural parallelism at three different levels: (i) search intervals can be treated separately (no overlap), (ii) linear systems can be solved independently across
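    As a gloss on the contour-integration idea described above, here is a minimal serial sketch of a FEAST-style rational filter: trapezoidal quadrature on a circular contour, one dense linear solve per node, then Rayleigh-Ritz. It is a toy illustration under assumed parameters (dense solves, a single filter application, a random symmetric test matrix), not the PFEAST software or its MPI machinery:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    M = rng.standard_normal((n, n))
    A = (M + M.T) / 2                    # symmetric test matrix

    # Search disk of center c and radius r; here we target eigenvalues near 0.
    c, r, n_quad, m0 = 0.0, 1.0, 16, 25  # m0 = subspace size >= #eigs inside

    Y = rng.standard_normal((n, m0))     # random right-hand sides
    Q = np.zeros((n, m0))
    for j in range(n_quad):              # trapezoidal rule on the circle
        theta = 2 * np.pi * (j + 0.5) / n_quad
        z = c + r * np.exp(1j * theta)
        # Each quadrature node costs one shifted linear solve -- exactly the
        # independent systems that PFEAST distributes across MPI ranks.
        Q += np.real(r * np.exp(1j * theta)
                     * np.linalg.solve(z * np.eye(n) - A, Y)) / n_quad

    # Rayleigh-Ritz on the filtered subspace Q ~ P @ Y.
    Qo, _ = np.linalg.qr(Q)
    evals = np.linalg.eigvalsh(Qo.T @ A @ Qo)
    print(np.sort(evals[np.abs(evals - c) < r]))   # approximate eigs in the disk
    exact = np.linalg.eigvalsh(A)
    print(np.sort(exact[np.abs(exact - c) < r]))   # reference values
    ```

    One filter application gives only a rough subspace; the actual FEAST algorithm iterates this filter-plus-Rayleigh-Ritz step to convergence.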
  • Eigenvalue Computation in the 20Th Century Gene H
    Journal of Computational and Applied Mathematics 123 (2000) 35–65. www.elsevier.nl/locate/cam

    Eigenvalue computation in the 20th century
    Gene H. Golub (SCCM, Stanford University, Stanford, USA) and Henk A. van der Vorst (Department of Mathematics, Utrecht University, P.O. Box 80.010, 3508 TA, Utrecht, The Netherlands)
    Received 5 March 2000

    Abstract. This paper sketches the main research developments in the area of computational methods for eigenvalue problems during the 20th century. The earliest of such methods dates back to work of Jacobi in the middle of the 19th century. Since computing eigenvalues and vectors is essentially more complicated than solving linear systems, it is not surprising that highly significant developments in this area started with the introduction of electronic computers around 1950. In the early decades of this century, however, important theoretical developments had been made from which computational techniques could grow. Research in this area of numerical linear algebra is very active, since there is a heavy demand for solving complicated problems associated with stability and perturbation analysis for practical applications. For standard problems, powerful tools are available, but there still remain many open problems. It is the intention of this contribution to sketch the main developments of this century, especially as they relate to one another, and to give an impression of the state of the art at the turn of our century. © 2000 Elsevier Science B.V. All rights reserved.

    MSC: 65F15; 65N25
    Keywords: QR; QZ; Lanczos' method; Arnoldi's method; Jacobi–Davidson; Power iteration; RKS; Jacobi's method; SVD; Perturbation theory