Krylov Type Methods for Large Scale Eigenvalue Computations
Recommended publications
-
Krylov Subspaces
Lab 1 Krylov Subspaces. Lab Objective: Discuss simple Krylov subspace methods for finding eigenvalues and show some interesting applications. One of the biggest difficulties in computational linear algebra is the amount of memory needed to store a large matrix and the amount of time needed to read its entries. Methods using Krylov subspaces avoid this difficulty by studying how a matrix acts on vectors, making it unnecessary in many cases to create the matrix itself. More specifically, we can construct a Krylov subspace just by knowing how a linear transformation acts on vectors, and with these subspaces we can closely approximate eigenvalues of the transformation and solutions to associated linear systems. The Arnoldi iteration is an algorithm for finding an orthonormal basis of a Krylov subspace. Its outputs can also be used to approximate the eigenvalues of the original matrix.

The Arnoldi Iteration. The order-n Krylov subspace of A generated by x is K_n(A, x) = span{x, Ax, A^2 x, ..., A^{n-1} x}. If the vectors {x, Ax, A^2 x, ..., A^{n-1} x} are linearly independent, then they form a basis for K_n(A, x). However, this basis is usually far from orthogonal, and hence computations using this basis will likely be ill-conditioned. One way to find an orthonormal basis for K_n(A, x) would be to use the modified Gram-Schmidt algorithm from Lab TODO on the set {x, Ax, A^2 x, ..., A^{n-1} x}. More efficiently, the Arnoldi iteration integrates the creation of {x, Ax, A^2 x, ..., A^{n-1} x} with the modified Gram-Schmidt algorithm, returning an orthonormal basis for K_n(A, x).
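As a rough illustration of the Arnoldi iteration described above, here is a minimal NumPy sketch (my own, not the lab's reference solution); the function name `arnoldi` and the breakdown tolerance are illustrative choices.

```python
import numpy as np

def arnoldi(A, x, n):
    """Minimal Arnoldi sketch: builds an orthonormal basis Q of K_n(A, x)
    and the (n+1) x n upper Hessenberg matrix H with A @ Q[:, :n] = Q @ H."""
    m = A.shape[0]
    Q = np.zeros((m, n + 1), dtype=complex)
    H = np.zeros((n + 1, n), dtype=complex)
    Q[:, 0] = x / np.linalg.norm(x)
    for k in range(n):
        q = A @ Q[:, k]                      # next Krylov direction
        for i in range(k + 1):               # modified Gram-Schmidt
            H[i, k] = np.vdot(Q[:, i], q)
            q = q - H[i, k] * Q[:, i]
        H[k + 1, k] = np.linalg.norm(q)
        if H[k + 1, k] < 1e-12:              # breakdown: invariant subspace found
            return Q[:, :k + 1], H[:k + 1, :k]
        Q[:, k + 1] = q / H[k + 1, k]
    return Q, H
```

The eigenvalues of the leading n-by-n block of H (the Ritz values), e.g. np.linalg.eigvals(H[:n, :n]), then approximate eigenvalues of A, which is how the excerpt says the Arnoldi outputs are used.
-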
Accelerating the LOBPCG Method on GPUs Using a Blocked Sparse Matrix Vector Product
Accelerating the LOBPCG method on GPUs using a blocked Sparse Matrix Vector Product. Hartwig Anzt, Stanimire Tomov, and Jack Dongarra, Innovative Computing Lab, University of Tennessee, Knoxville, USA. Email: [email protected], [email protected], [email protected]

Abstract—This paper presents a heterogeneous CPU-GPU algorithm design and optimized implementation for an entire sparse iterative eigensolver – the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) – starting from low-level GPU data structures and kernels to the higher-level algorithmic choices and overall heterogeneous design. Most notably, the eigensolver leverages the high performance of a new GPU kernel developed for the simultaneous multiplication of a sparse matrix and a set of vectors (SpMM). This is a building block that serves as a backbone for not only block-Krylov, but also for other methods relying on blocking for acceleration in general. The heterogeneous LOBPCG developed here reveals the potential of this type of eigensolver by highly optimizing all of its components, and can be viewed as a benchmark for other SpMM-dependent applications.

[…] the computing power of today's supercomputers, often accelerated by coprocessors like graphics processing units (GPUs), becomes challenging. While there exist numerous efforts to adapt iterative linear solvers like Krylov subspace methods to coprocessor technology, sparse eigensolvers have so far remained outside the main focus. A possible explanation is that many of those combine sparse and dense linear algebra routines, which makes porting them to accelerators more difficult. Aside from the power method, algorithms based on the Krylov subspace idea are among the most commonly used general eigensolvers [1]. When targeting symmetric positive definite eigenvalue problems, the recently developed Locally Optimal
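The SpMM kernel highlighted in the abstract applies a sparse matrix to a whole block of vectors at once, which is exactly the operation a block eigensolver like LOBPCG performs at every iteration. As a rough CPU-side illustration (not the paper's GPU implementation; the test matrix and block size are made-up examples), the sketch below runs SciPy's lobpcg on a sparse symmetric positive definite matrix.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

# Small example: sparse SPD matrix (1D Laplacian), block of 4 vectors.
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# Random starting block; every LOBPCG iteration applies A to the whole block
# at once -- the SpMM operation that the paper's GPU kernel accelerates.
rng = np.random.default_rng(0)
X = rng.standard_normal((n, 4))

# Four smallest eigenpairs (no preconditioner in this sketch)
vals, vecs = lobpcg(A, X, largest=False, maxiter=500)
print(vals)
print(2.0 - 2.0 * np.cos(np.arange(1, 5) * np.pi / (n + 1)))   # exact values for comparison
```

Each iteration multiplies A by the entire block X, so a faster SpMM translates directly into a faster solver, which is the point of the paper's GPU kernel.
-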
Deflation by Restriction for the Inverse-Free Preconditioned Krylov Subspace Method
NUMERICAL ALGEBRA, CONTROL AND OPTIMIZATION, Volume 6, Number 1, March 2016, pp. 55-71. doi:10.3934/naco.2016.6.55. DEFLATION BY RESTRICTION FOR THE INVERSE-FREE PRECONDITIONED KRYLOV SUBSPACE METHOD. Qiao Liang, Department of Mathematics, University of Kentucky, Lexington, KY 40506-0027, USA. Qiang Ye*, Department of Mathematics, University of Kentucky, Lexington, KY 40506-0027, USA. (Communicated by Xiaoqing Jin)

Abstract. A deflation by restriction scheme is developed for the inverse-free preconditioned Krylov subspace method for computing a few extreme eigenvalues of the definite symmetric generalized eigenvalue problem Ax = λBx. The convergence theory for the inverse-free preconditioned Krylov subspace method is generalized to include this deflation scheme and numerical examples are presented to demonstrate the convergence properties of the algorithm with the deflation scheme.

1. Introduction. The definite symmetric generalized eigenvalue problem for (A, B) is to find λ ∈ R and x ∈ R^n with x ≠ 0 such that Ax = λBx (1), where A, B are n × n symmetric matrices and B is positive definite. The eigenvalue problem (1), also referred to as a pencil eigenvalue problem (A, B), arises in many scientific and engineering applications, such as structural dynamics, quantum mechanics, and machine learning. The matrices involved in these applications are usually large and sparse and only a few of the eigenvalues are desired. Iterative methods such as the Lanczos algorithm and the Arnoldi algorithm are some of the most efficient numerical methods developed in the past few decades for computing a few eigenvalues of a large scale eigenvalue problem, see [1, 11, 19].
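For context, the sketch below solves a small definite symmetric generalized eigenvalue problem Ax = λBx for a few extreme eigenvalues using SciPy's eigsh with the matrix B passed as M. This is a standard shift-invert Lanczos baseline, not the paper's inverse-free preconditioned Krylov subspace method, and the pencil (A, B) is a made-up example.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 500
# Made-up definite pencil (A, B): A symmetric, B symmetric positive definite
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
B = sp.diags([0.5, 2.0, 0.5], [-1, 0, 1], shape=(n, n), format="csc")

# A few extreme (smallest) eigenvalues of A x = lambda B x, via shift-invert
# about 0; this factorizes A - 0*B internally, unlike the inverse-free method.
vals, vecs = eigsh(A, k=4, M=B, sigma=0, which="LM")
print(np.sort(vals))
```

The contrast with the paper: eigsh needs to solve linear systems with a shifted matrix at each step, whereas the inverse-free preconditioned method is designed to avoid inverting either matrix of the pencil.
-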
Limited Memory Block Krylov Subspace Optimization for Computing Dominant Singular Value Decompositions
Limited Memory Block Krylov Subspace Optimization for Computing Dominant Singular Value Decompositions. Xin Liu, Zaiwen Wen, Yin Zhang. March 22, 2012.

Abstract. In many data-intensive applications, the use of principal component analysis (PCA) and other related techniques is ubiquitous for dimension reduction, data mining or other transformational purposes. Such transformations often require efficiently, reliably and accurately computing dominant singular value decompositions (SVDs) of large unstructured matrices. In this paper, we propose and study a subspace optimization technique to significantly accelerate the classic simultaneous iteration method. We analyze the convergence of the proposed algorithm, and numerically compare it with several state-of-the-art SVD solvers under the MATLAB environment. Extensive computational results show that on a wide range of large unstructured matrices, the proposed algorithm can often provide improved efficiency or robustness over existing algorithms. Keywords: subspace optimization, dominant singular value decomposition, Krylov subspace, eigenvalue decomposition.

1 Introduction. Singular value decomposition (SVD) is a fundamental and enormously useful tool in matrix computations, such as determining the pseudo-inverse, the range or null space, or the rank of a matrix, solving regular or total least squares data fitting problems, or computing low-rank approximations to a matrix, just to mention a few. The need for computing SVDs also frequently arises from diverse fields in statistics, signal processing, data mining or compression, and from various dimension-reduction models of large-scale dynamic systems. Usually, instead of acquiring all the singular values and vectors of a matrix, it suffices to compute a set of dominant (i.e., the largest) singular values and their corresponding singular vectors in order to obtain the most valuable and relevant information about the underlying dataset or system.
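The "classic simultaneous iteration method" that the paper accelerates can be sketched in a few lines. The version below is a plain block-iteration baseline of my own (Python/NumPy rather than the authors' MATLAB code), so the function name and iteration count are illustrative.

```python
import numpy as np

def dominant_svd(A, k, iters=50, seed=0):
    """Block (simultaneous) iteration sketch for the k dominant singular
    triplets of A: alternate A and A^T applications with re-orthogonalization,
    then a Rayleigh-Ritz step on the small projected matrix."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    V = np.linalg.qr(rng.standard_normal((n, k)))[0]   # orthonormal start block
    for _ in range(iters):
        U = np.linalg.qr(A @ V)[0]          # left subspace update
        V = np.linalg.qr(A.T @ U)[0]        # right subspace update
    B = U.T @ A @ V                          # small k x k projected matrix
    Ub, s, Vbt = np.linalg.svd(B)
    return U @ Ub, s, V @ Vbt.T

A = np.random.default_rng(1).standard_normal((500, 200))
U, s, V = dominant_svd(A, 5)
print(s, np.linalg.svd(A, compute_uv=False)[:5])   # compare with dense SVD
```

The paper's subspace optimization technique is aimed at accelerating exactly this kind of block iteration.
-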
Power Method and Krylov Subspaces
Power method and Krylov subspaces

Introduction. When calculating an eigenvalue of a matrix A using the power method, one starts with an initial random vector b and then computes iteratively the sequence Ab, A^2 b, ..., A^{n-1} b, normalising and storing the result in b on each iteration (a short code sketch of this iteration is given after this excerpt). The sequence converges to the eigenvector of the largest eigenvalue of A. The set of vectors

    K_n = {b, Ab, A^2 b, ..., A^{n-1} b},   (1)

where n < rank(A), is called the order-n Krylov matrix, and the subspace spanned by these vectors is called the order-n Krylov subspace. The vectors are not orthogonal but can be made so, e.g. by Gram-Schmidt orthogonalisation. For the same reason that A^{n-1} b approximates the dominant eigenvector, one can expect that the other orthogonalised vectors approximate the eigenvectors of the n largest eigenvalues. Krylov subspaces are the basis of several successful iterative methods in numerical linear algebra, in particular: Arnoldi and Lanczos methods for finding one (or a few) eigenvalues of a matrix; and the GMRES (Generalised Minimum RESidual) method for solving systems of linear equations. These methods are particularly suitable for large sparse matrices as they avoid matrix-matrix operations but rather multiply vectors by matrices and work with the resulting vectors and matrices in Krylov subspaces of modest sizes.

Arnoldi iteration. Arnoldi iteration is an algorithm where the order-n orthogonalised Krylov matrix Q_n of a matrix A is built using the stabilised Gram-Schmidt process:
• start with a set Q = {q_1} of one random normalised vector q_1
• repeat for k = 2 to n:
  – make a new vector q_k = A q_{k-1}
  – orthogonalise q_k to all vectors q_i ∈ Q, storing q_i^† q_k → h_{i,k-1}
  – normalise q_k, storing ||q_k|| → h_{k,k-1}
  – add q_k to the set Q
By construction the matrix H_n made of the elements h_{jk} is an upper Hessenberg matrix,

    H_n = [ h_{1,1}  h_{1,2}  h_{1,3}  ...  h_{1,n}
            h_{2,1}  h_{2,2}  h_{2,3}  ...  h_{2,n}
            0        h_{3,2}  h_{3,3}  ...  h_{3,n}
            ...      ...      ...      ...  ...
            0        ...      0   h_{n,n-1}  h_{n,n} ],   (2)
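Returning to the power iteration described in the introduction of this excerpt, here is a minimal NumPy sketch (an illustrative example of my own, not part of the original notes):

```python
import numpy as np

def power_method(A, iters=200, seed=0):
    """Power iteration sketch: estimates the dominant eigenvalue of A and
    its eigenvector (assumes a single dominant eigenvalue exists)."""
    rng = np.random.default_rng(seed)
    b = rng.standard_normal(A.shape[0])
    b /= np.linalg.norm(b)
    for _ in range(iters):
        b = A @ b
        b /= np.linalg.norm(b)          # normalise and store back in b
    lam = b @ (A @ b)                   # Rayleigh quotient estimate
    return lam, b

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_method(A)
print(lam)   # ~3.618, the largest eigenvalue of A
```

The Arnoldi iteration above keeps all of these intermediate directions, orthogonalises them, and collects the coefficients h_{i,k} into the upper Hessenberg matrix H_n of equation (2).
-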
Section 5.3: Other Krylov Subspace Methods
Section 5.3: Other Krylov Subspace Methods. Jim Lambers, March 15, 2021.
• So far, we have only seen Krylov subspace methods for symmetric positive definite matrices.
• We now generalize these methods to arbitrary square invertible matrices.

Minimum Residual Methods
• In the derivation of the Conjugate Gradient Method, we made use of the Krylov subspace K(r_1, A, k) = span{r_1, A r_1, A^2 r_1, ..., A^{k-1} r_1} to obtain a solution of Ax = b, where A was a symmetric positive definite matrix and r_1 = b − A x^(0) was the residual associated with the initial guess x^(0).
• Specifically, the Conjugate Gradient Method generated a sequence of approximate solutions x^(k), k = 1, 2, ..., where each iterate had the form x^(k) = x^(0) + Q_k y_k = x^(0) + Σ_{j=1}^{k} q_j y_{jk}, k = 1, 2, ...,   (1) and the columns q_1, q_2, ..., q_k of Q_k formed an orthonormal basis of the Krylov subspace K(b, A, k).
• In this section, we examine whether a similar approach can be used for solving Ax = b even when the matrix A is not symmetric positive definite.
• That is, we seek a solution x^(k) of the form of equation (1), where again the columns of Q_k form an orthonormal basis of the Krylov subspace K(b, A, k).
• To generate this basis, we cannot simply generate a sequence of orthogonal polynomials using a three-term recurrence relation, as before, because the bilinear form ⟨f, g⟩ = r_1^T f(A) g(A) r_1 does not satisfy the requirements for a valid inner product when A is not symmetric positive definite. (A small minimum-residual sketch follows this list.)
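The minimum-residual idea that this section builds toward can be sketched as: build an orthonormal Krylov basis with the Arnoldi process (which works for nonsymmetric A), then choose the coefficients to minimise the residual norm. The following Python sketch is a bare-bones GMRES-style illustration of my own, with x^(0) = 0 and no restarting or breakdown handling; it is not the text's own code.

```python
import numpy as np

def gmres_sketch(A, b, k):
    """Bare-bones minimum-residual (GMRES-style) sketch with x0 = 0:
    build an orthonormal basis of K(b, A, k) by Arnoldi, then minimise
    ||b - A x|| over that subspace via a small least-squares problem."""
    n = b.shape[0]
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(k):                       # Arnoldi with modified Gram-Schmidt
        w = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]        # (no breakdown check in this sketch)
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)   # min_y ||beta*e1 - H y||
    return Q[:, :k] @ y

rng = np.random.default_rng(0)
A = np.eye(50) + 0.1 * rng.standard_normal((50, 50))   # nonsymmetric test matrix
b = rng.standard_normal(50)
x = gmres_sketch(A, b, 30)
print(np.linalg.norm(b - A @ x))             # residual norm after 30 steps
```
-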
A Generalized Eigenvalue Algorithm for Tridiagonal Matrix Pencils Based on a Nonautonomous Discrete Integrable System
A generalized eigenvalue algorithm for tridiagonal matrix pencils based on a nonautonomous discrete integrable system. Kazuki Maeda, Satoshi Tsujimoto. Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan.

Abstract. A generalized eigenvalue algorithm for tridiagonal matrix pencils is presented. The algorithm appears as the time evolution equation of a nonautonomous discrete integrable system associated with a polynomial sequence which has some orthogonality on the support set of the zeros of the characteristic polynomial for a tridiagonal matrix pencil. The convergence of the algorithm is discussed by using the solution to the initial value problem for the corresponding discrete integrable system. Keywords: generalized eigenvalue problem, nonautonomous discrete integrable system, RII chain, dqds algorithm, orthogonal polynomials. 2010 MSC: 37K10, 37K40, 42C05, 65F15.

1. Introduction. Applications of discrete integrable systems to numerical algorithms are important and fascinating topics. Since the end of the twentieth century, a number of relationships between classical numerical algorithms and integrable systems have been studied (see the review papers [1-3]). On this basis, new algorithms based on discrete integrable systems have been developed: (i) singular value algorithms for bidiagonal matrices based on the discrete Lotka-Volterra equation [4, 5], (ii) Padé approximation algorithms based on the discrete relativistic Toda lattice [6] and the discrete Schur flow [7], (iii) eigenvalue algorithms for band matrices based on the discrete hungry Lotka-Volterra equation [8] and the nonautonomous discrete hungry Toda lattice [9], and (iv) algorithms for computing D-optimal designs based on the nonautonomous discrete Toda (nd-Toda) lattice [10] and the discrete modified KdV equation [11].
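For reference, the problem the paper addresses is the generalized eigenvalue problem A v = λ B v for a tridiagonal pencil (A, B). The sketch below solves a small made-up instance with the dense QZ algorithm from SciPy, as a baseline such specialized algorithms are compared against; it is not the integrable-systems algorithm of the paper.

```python
import numpy as np
from scipy.linalg import eigvals

def tridiag(d, l, u):
    """Dense tridiagonal matrix from its diagonal, sub- and superdiagonal."""
    return np.diag(d) + np.diag(l, -1) + np.diag(u, 1)

# Made-up tridiagonal pencil (A, B) of size n
n = 6
rng = np.random.default_rng(0)
A = tridiag(rng.standard_normal(n), rng.standard_normal(n - 1), rng.standard_normal(n - 1))
B = tridiag(rng.standard_normal(n) + 3.0, rng.standard_normal(n - 1), rng.standard_normal(n - 1))

# Generalized eigenvalues lambda with A v = lambda B v (dense QZ algorithm)
lam = eigvals(A, B)
print(np.sort_complex(lam))
```
-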
Analysis of a Fast Hankel Eigenvalue Algorithm
Analysis of a Fast Hankel Eigenvalue Algorithm. Franklin T. Luk (Department of Computer Science, Rensselaer Polytechnic Institute, Troy, New York, USA) and Sanzheng Qiao (Department of Computing and Software, McMaster University, Hamilton, Ontario, Canada).

ABSTRACT. This paper analyzes the important steps of an O(n^2 log n) algorithm for finding the eigenvalues of a complex Hankel matrix. The three key steps are a Lanczos-type tridiagonalization algorithm, a fast FFT-based Hankel matrix-vector product procedure, and a QR eigenvalue method based on complex-orthogonal transformations. In this paper we present an error analysis of the three steps as well as results from numerical experiments. Keywords: Hankel matrix, eigenvalue decomposition, Lanczos tridiagonalization, Hankel matrix-vector multiplication, complex-orthogonal transformations, error analysis.

INTRODUCTION. The eigenvalue decomposition of a structured matrix has important applications in signal processing. In this paper we consider a complex Hankel matrix H ∈ C^{n×n}, constant along its antidiagonals,

    H = [ h_1      h_2      ...  h_{n-1}   h_n
          h_2      h_3      ...  h_n       h_{n+1}
          ...
          h_{n-1}  h_n      ...  h_{2n-3}  h_{2n-2}
          h_n      h_{n+1}  ...  h_{2n-2}  h_{2n-1} ]

The authors proposed a fast algorithm for finding the eigenvalues of H. The key step is a fast Lanczos tridiagonalization algorithm which employs a fast Hankel matrix-vector multiplication based on the Fast Fourier Transform (FFT). Then the algorithm performs a QR-like procedure using complex-orthogonal transformations in the diagonalization to find the eigenvalues. In this paper we present an error analysis and discuss ...
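The fast Hankel matrix-vector product mentioned in the abstract can be illustrated with a short NumPy sketch (my own illustration of the standard FFT/circulant-embedding trick, not the authors' code): a Hankel matvec is a Toeplitz matvec applied to the reversed vector, and the Toeplitz matrix embeds in a circulant, so the product becomes a circular convolution computable in O(n log n) with the FFT.

```python
import numpy as np
from scipy.linalg import hankel

def hankel_matvec_fft(h, x):
    """FFT-based product H @ x for the n x n Hankel matrix with entries
    H[i, j] = h[i + j], where h has length 2n - 1.  The equivalent Toeplitz
    matrix is embedded in a circulant of size 2n - 1, so the product is a
    circular convolution evaluated with the FFT in O(n log n)."""
    n = x.shape[0]
    assert h.shape[0] == 2 * n - 1
    c = np.concatenate([h[n - 1:], h[:n - 1]])            # circulant's first column
    y = np.concatenate([x[::-1], np.zeros(n - 1, dtype=x.dtype)])
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(y))[:n]

# Check against a dense Hankel matrix built with scipy.linalg.hankel
rng = np.random.default_rng(0)
n = 6
h = rng.standard_normal(2 * n - 1) + 1j * rng.standard_normal(2 * n - 1)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
H = hankel(h[:n], h[n - 1:])            # H[i, j] = h[i + j]
print(np.allclose(H @ x, hankel_matvec_fft(h, x)))
```
-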
Introduction to Mathematical Programming
Introduction to Mathematical Programming. Ming Zhong (JHU), Lecture 24, October 29, 2018.

Singular Value Decomposition (SVD)

Matrix Decomposition. We have discussed several decomposition techniques (for A ∈ R^{n×n}):
A = LU when A is non-singular (Gaussian elimination).
A = LL^T when A is symmetric positive definite (Cholesky).
A = QR for any real square matrix (unique when A is non-singular).
A = PDP^{-1} if A has n linearly independent eigenvectors.
A = PJP^{-1} for any real square matrix (Jordan form).
We are now ready to discuss the Singular Value Decomposition (SVD), A = UΣV*, for any A ∈ C^{m×n}, with U ∈ C^{m×m}, V ∈ C^{n×n}, and Σ ∈ R^{m×n}.

Some Brief History. Going back in time: the SVD was originally developed by differential geometers studying the equivalence of bilinear forms under independent orthogonal transformations. Eugenio Beltrami in 1873 and, independently, Camille Jordan in 1874, derived it for bilinear forms. James Joseph Sylvester in 1889 did SVD for real square matrices, independently; singular values = canonical multipliers of the matrix. Autonne in 1915 did SVD via polar decomposition. Carl Eckart and Gale Young in 1936 did the proof of SVD of rectangular and complex matrices, as a generalization of the principal axis transformation of Hermitian matrices. Erhard Schmidt in 1907 defined SVD for integral operators. Émile Picard in 1910 was the first to call the numbers singular values.
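A minimal NumPy check of the factorization A = UΣV* for a rectangular matrix (an illustrative example, not from the lecture slides):

```python
import numpy as np

# Compute the SVD A = U @ Sigma @ Vh of a rectangular matrix and verify it.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

U, s, Vh = np.linalg.svd(A, full_matrices=True)   # U: 5x5, s: length 3, Vh: 3x3
Sigma = np.zeros((5, 3))
Sigma[:3, :3] = np.diag(s)

print(np.allclose(A, U @ Sigma @ Vh))   # True: A = U Sigma V*
print(s)                                # singular values, in descending order
```
-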
Inexact and Nonlinear Extensions of the FEAST Eigenvalue Algorithm
University of Massachusetts Amherst, ScholarWorks@UMass Amherst, Doctoral Dissertations, October 2018.

Inexact and Nonlinear Extensions of the FEAST Eigenvalue Algorithm. Brendan E. Gavin, University of Massachusetts Amherst.

Recommended Citation: Gavin, Brendan E., "Inexact and Nonlinear Extensions of the FEAST Eigenvalue Algorithm" (2018). Doctoral Dissertations. 1341. https://doi.org/10.7275/12360224, https://scholarworks.umass.edu/dissertations_2/1341

INEXACT AND NONLINEAR EXTENSIONS OF THE FEAST EIGENVALUE ALGORITHM. A Dissertation Presented by BRENDAN GAVIN. Submitted to the Graduate School of the University of Massachusetts Amherst in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY, September 2018, Electrical and Computer Engineering. © Copyright by Brendan Gavin 2018. All Rights Reserved. Approved as to style and content by: Eric Polizzi, Chair; Zlatan Aksamija, Member
-
A Brief Introduction to Krylov Space Methods for Solving Linear Systems
A Brief Introduction to Krylov Space Methods for Solving Linear Systems. Martin H. Gutknecht, ETH Zurich, Seminar for Applied Mathematics, [email protected].

With respect to the “influence on the development and practice of science and engineering in the 20th century”, Krylov space methods are considered as one of the ten most important classes of numerical methods [1]. Large sparse linear systems of equations or large sparse matrix eigenvalue problems appear in most applications of scientific computing. Sparsity means that most elements of the matrix involved are zero. In particular, discretization of PDEs with the finite element method (FEM) or with the finite difference method (FDM) leads to such problems. In case the original problem is nonlinear, linearization by Newton’s method or a Newton-type method leads again to a linear problem. We will treat here systems of equations only, but many of the numerical methods for large eigenvalue problems are based on similar ideas as the related solvers for equations. Sparse linear systems of equations can be solved by either so-called sparse direct solvers, which are clever variations of Gauss elimination, or by iterative methods. In the last thirty years, sparse direct solvers have been tuned to perfection: on the one hand by finding strategies for permuting equations and unknowns to guarantee a stable LU decomposition and small fill-in in the triangular factors, and on the other hand by organizing the computation so that optimal use is made of the hardware, which nowadays often consists of parallel computers whose architecture favors block operations with data that are locally stored or cached.
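To make the FDM remark above concrete, here is a small illustrative sketch (my own example, not from Gutknecht's text): a 1D Poisson equation discretized by central finite differences yields a sparse tridiagonal system, which a Krylov method (here SciPy's Conjugate Gradient) solves using only matrix-vector products.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# 1D Poisson problem -u'' = f on (0, 1) with u(0) = u(1) = 0,
# discretized by central finite differences on n interior grid points.
n = 1000
h = 1.0 / (n + 1)
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2

x_grid = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x_grid)   # right-hand side; exact solution is sin(pi x)

u, info = cg(A, f)                      # Krylov (CG) solve; info == 0 signals convergence
print(info, np.linalg.norm(f - A @ u))  # final residual norm
```

The matrix is never factored; CG only needs products A @ v, which is what makes Krylov methods attractive for the large sparse systems described above.
-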
Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods1
Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods (1). Richard Barrett (2), Michael Berry (3), Tony F. Chan (4), James Demmel (5), June M. Donato (6), Jack Dongarra (3,2), Victor Eijkhout (7), Roldan Pozo (8), Charles Romine (9), and Henk Van der Vorst (10). This document is the electronic version of the 2nd edition of the Templates book, which is available for purchase from the Society for Industrial and Applied Mathematics (http://www.siam.org/books).

(1) This work was supported in part by DARPA and ARO under contract number DAAL03-91-C-0047, the National Science Foundation Science and Technology Center Cooperative Agreement No. CCR-8809615, the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy, under Contract DE-AC05-84OR21400, and the Stichting Nationale Computer Faciliteit (NCF) by Grant CRG 92.03. (2) Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37830-6173. (3) Department of Computer Science, University of Tennessee, Knoxville, TN 37996. (4) Applied Mathematics Department, University of California, Los Angeles, CA 90024-1555. (5) Computer Science Division and Mathematics Department, University of California, Berkeley, CA 94720. (6) Science Applications International Corporation, Oak Ridge, TN 37831. (7) Texas Advanced Computing Center, The University of Texas at Austin, Austin, TX 78758. (8) National Institute of Standards and Technology, Gaithersburg, MD. (9) Office of Science and Technology Policy, Executive Office of the President. (10) Department of Mathematics, Utrecht University, Utrecht, the Netherlands.

How to Use This Book. We have divided this book into five main chapters. Chapter 1 gives the motivation for this book and the use of templates. Chapter 2 describes stationary and nonstationary iterative methods.