Eigenvalue Algorithms for Symmetric Hierarchical Matrices

Total Pages: 16

File Type: PDF, Size: 1020 KB

Eigenvalue Algorithms for Symmetric Hierarchical Matrices

Thomas Mach
Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg

Dissertation submitted to the Department of Mathematics at Chemnitz University of Technology in accordance with the requirements for the degree Dr. rer. nat.

[Cover figure: four steps of spectrum slicing by bisection on the axis -6 ... 6. Starting from a0 = -6.5, b0 = 5.5, the midpoints μ = -0.5, -3.5, -2, -2.75 give the eigenvalue counts ν(-0.5) = 4, ν(-3.5) = 2, ν(-2) = 4, ν(-2.75) = 3 and the shrinking brackets a1 = -6.5, b1 = -0.5; a2 = -3.5, b2 = -0.5; a3 = -3.5, b3 = -2. A code sketch of this bisection follows the contents listing below.]

[Cover figure: H-tree over the index set {1, ..., 8}, refined into {1, 2, 3, 4} and {5, 6, 7, 8} and down to the singletons {1}, ..., {8}, with the sets S(r) and S*(r) for r = {5, 6, 7, 8}; below it, basis functions φ1, ..., φ8 with τ = supp(φ1, φ2), σ = supp(φ7, φ8), and the quantities dist(τ, σ) and diam(τ).]

Presented by: Dipl.-Math. techn. Thomas Mach
Advisor: Prof. Dr. Peter Benner
Reviewer: Prof. Dr. Steffen Börm
March 13, 2012

ACKNOWLEDGMENTS

Personal Thanks. I thank Peter Benner for giving me the time and the freedom to follow several ideas and for helping me with new ideas. I would also like to thank my colleagues in Peter Benner's groups in Chemnitz and Magdeburg; all of them helped me a lot during the last years. Four of them deserve to be highlighted, since they were most important for me. I would like to thank Jens Saak for explaining countless things to me and for providing the LaTeX framework that I used for this document. I stressed our compute-cluster specialist, Martin Köhler, a lot with questions and requests, and I thank him for his help. Further, I am greatly indebted to Ulrike Baur, who explained many details concerning hierarchical matrices to me. I want to thank Martin Stoll for pushing me forward during the final year.

I also want to thank Jessica Gördes and Steffen Börm (CAU Kiel) for discussing eigenvalue algorithms for Hℓ-matrices, Marc Van Barel (KU Leuven) for pointing out the relations to semiseparable matrices, and Lars Grasedyck (RWTH Aachen) for a discussion on QR decompositions and the QR-like algorithms for H-matrices. I am indebted to Tom Barker for carefully proofreading this thesis.

Finally, this work would not have been possible without adequate recreation through (extensive) sports activities. Therefore, I thank my friends Benjamin Schulz, Jens Lang, and Tobias Breiten for cycling and jogging with me. Last but not least I thank my sister for proofreading and my parents, who made all of this possible in the first place.

Financial Support. This research was supported by a grant from the Free State of Saxony (Sächsisches Landesstipendium) for two years.

CONTENTS

List of Figures
List of Tables
List of Algorithms
List of Acronyms
List of Symbols
Publications

1 Introduction
  1.1 Notation
  1.2 Structure of this Thesis

2 Basics
  2.1 Linear Algebra and Eigenvalues
    2.1.1 The Eigenvalue Problem
    2.1.2 Dense Matrix Algorithms
  2.2 Integral Operators and Integral Equations
    2.2.1 Definitions
    2.2.2 Example - BEM
  2.3 Introduction to Hierarchical Arithmetic
    2.3.1 Main Idea
    2.3.2 Definitions
    2.3.3 Hierarchical Arithmetic
    2.3.4 Simple Hierarchical Matrices (Hℓ-Matrices)
  2.4 Examples
    2.4.1 FEM Example
    2.4.2 BEM Example
    2.4.3 Randomly Generated Examples
    2.4.4 Application Based Examples
    2.4.5 One-Dimensional Integral Equation
  2.5 Related Matrix Formats
    2.5.1 H²-Matrices
    2.5.2 Diagonal plus Semiseparable Matrices
    2.5.3 Hierarchically Semiseparable Matrices
  2.6 Review of Existing Eigenvalue Algorithms
    2.6.1 Projection Method
    2.6.2 Divide-and-Conquer for Hℓ(1)-Matrices
    2.6.3 Transforming Hierarchical into Semiseparable Matrices
  2.7 Compute Cluster Otto

3 QR Decomposition of Hierarchical Matrices
  3.1 Introduction
  3.2 Review of Known QR Decompositions for H-Matrices
    3.2.1 Lintner's H-QR Decomposition
    3.2.2 Bebendorf's H-QR Decomposition
  3.3 A New Method for Computing the H-QR Decomposition
    3.3.1 Leaf Block-Column
    3.3.2 Non-Leaf Block-Column
    3.3.3 Complexity
    3.3.4 Orthogonality
    3.3.5 Comparison to QR Decompositions for Sparse Matrices
  3.4 Numerical Results
    3.4.1 Lintner's H-QR Decomposition
    3.4.2 Bebendorf's H-QR Decomposition
    3.4.3 The New H-QR Decomposition
  3.5 Conclusions

4 QR-like Algorithms for Hierarchical Matrices
  4.1 Introduction
    4.1.1 LR Cholesky Algorithm
    4.1.2 QR Algorithm
    4.1.3 Complexity
  4.2 LR Cholesky Algorithm for Hierarchical Matrices
    4.2.1 Algorithm
    4.2.2 Shift Strategy
    4.2.3 Deflation
    4.2.4 Numerical Results
  4.3 LR Cholesky Algorithm for Diagonal plus Semiseparable Matrices
    4.3.1 Theorem
    4.3.2 Application to Tridiagonal and Band Matrices
    4.3.3 Application to Matrices with Rank Structure
    4.3.4 Application to H-Matrices
    4.3.5 Application to Hℓ-Matrices
    4.3.6 Application to H²-Matrices
  4.4 Numerical Examples
  4.5 The Unsymmetric Case
  4.6 Conclusions

5 Slicing the Spectrum of Hierarchical Matrices
  5.1 Introduction
  5.2 Slicing the Spectrum by LDL^T Factorization
    5.2.1 The Function ν(M − μI)
    5.2.2 LDL^T Factorization of Hℓ-Matrices
    5.2.3 Start-Interval [a, b]
    5.2.4 Complexity
  5.3 Numerical Results
  5.4 Possible Extensions
    5.4.1 LDL^T Slicing Algorithm for HSS Matrices
    5.4.2 LDL^T Slicing Algorithm for H-Matrices
    5.4.3 Parallelization
    5.4.4 Eigenvectors
  5.5 Conclusions

6 Computing Eigenvalues by Vector Iterations
  6.1 Power Iteration
    6.1.1 Power Iteration for Hierarchical Matrices
    6.1.2 Inverse Iteration
  6.2 Preconditioned Inverse Iteration for Hierarchical Matrices
    6.2.1 Preconditioned Inverse Iteration
    6.2.2 The Approximate Inverse of an H-Matrix
    6.2.3 The Approximate Cholesky Decomposition of an H-Matrix
    6.2.4 PINVIT for H-Matrices
    6.2.5 The Interior of the Spectrum
    6.2.6 Numerical Results
    6.2.7 Conclusions

7 Comparison of the Algorithms and Numerical Results
  7.1 Theoretical Comparison
  7.2 Numerical Comparison

8 Conclusions

Theses
Bibliography
Index

LIST OF FIGURES

2.1 Dense, sparse and data-sparse matrices
2.2 Singular values of Δ_{2,h}^{-1}
2.3 Singular values of Δ_{2,h}^{-1}
2.4 H-tree T_I
2.5 Example of basis functions
2.6 H-matrix based on T_I from Figure 2.4
2.7 H-matrix structure diagram
2.8 Structure of an H3-matrix
2.9 Eigenvectors of FEM32
2.10 Eigenvectors of FEM3D8
2.11 Set diagram of different matrix formats
2.12 Eigenvalues before and after projection
2.13 Block structure condition
2.14 Photos of Linux-cluster Otto
3.1 Explanation of Lines 5 and 6 of Algorithm 3.5
3.2 Explanation of Lines 16 and 17 of Algorithm 3.5
3.3 Computation time for the FEM example series
3.4 Accuracy and orthogonality for the FEM example series
3.5 Computation time for the BEM example series
3.6 Accuracy and orthogonality for the BEM example series
3.7 Computation time for the …
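The slicing strategy of Chapter 5, illustrated by the bisection on the cover figure, rests on Sylvester's law of inertia: factorizing M − μI = LDL^T and counting the negative eigenvalues of the block-diagonal D yields ν(μ), the number of eigenvalues of M below μ. A minimal dense sketch in Python/SciPy, standing in for the thesis's structured Hℓ-matrix LDL^T factorization (function and variable names are ours):

```python
import numpy as np
from scipy.linalg import ldl

def nu(M, mu):
    """nu(mu): number of eigenvalues of the symmetric matrix M below mu.
    By Sylvester's law of inertia this equals the number of negative
    eigenvalues of the block-diagonal D in M - mu*I = L D L^T."""
    _, D, _ = ldl(M - mu * np.eye(M.shape[0]))
    return int(np.sum(np.linalg.eigvalsh(D) < 0))  # D may hold 2x2 blocks

def jth_eigenvalue(M, j, a, b, tol=1e-10):
    """Bisect [a, b] (an interval containing the whole spectrum) until
    the j-th smallest eigenvalue is bracketed to width tol, exactly as
    in the four steps drawn on the cover figure."""
    while b - a > tol:
        mu = (a + b) / 2
        if nu(M, mu) >= j:   # at least j eigenvalues lie below mu,
            b = mu           # so lambda_j is in the left half
        else:
            a = mu
    return (a + b) / 2
```

In the thesis, the dense factorization above is replaced by the structured LDL^T of Section 5.2.2, which lowers the cost of each ν-evaluation and lets every eigenvalue be computed independently, which is the basis for the parallelization discussed in Section 5.4.3.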
Recommended publications
  • Fast Solution of Sparse Linear Systems with Adaptive Choice of Preconditioners Zakariae Jorti
    Fast solution of sparse linear systems with adaptive choice of preconditioners
    Zakariae Jorti

    To cite this version: Zakariae Jorti. Fast solution of sparse linear systems with adaptive choice of preconditioners. General Mathematics [math.GM]. Sorbonne Université, 2019. English. NNT: 2019SORUS149. tel-02425679v2.

    HAL Id: tel-02425679, https://tel.archives-ouvertes.fr/tel-02425679v2, submitted on 15 Feb 2021. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

    Sorbonne Université, École doctorale de Sciences Mathématiques de Paris Centre, Laboratoire Jacques-Louis Lions. Doctoral thesis in applied mathematics ("Résolution rapide de systèmes linéaires creux avec choix adaptatif de préconditionneurs") by Zakariae Jorti, directed by Laura Grigori and co-supervised by Ani Anciaux-Sedrakian and Soleiman Yousef, publicly presented and defended on 03/10/2019 before a jury composed of:

    Damien Tromeur-Dervout, Professor, Université de Lyon, Reviewer
    Olaf Schenk, Professor, Università della Svizzera italiana, Reviewer
    Frédéric Hecht, Professor, Sorbonne Université, President of the jury
    Nahid Emad, Professor, Université Paris-Saclay, Examiner
    Xavier Vasseur, Research Engineer, ISAE-SUPAERO, Examiner
    Laura Grigori, Research Director, Inria Paris, Thesis advisor
    Ani Anciaux-Sedrakian, Research Engineer, IFPEN, Thesis co-advisor
  • On-The-Fly Computation of Frontal Orbitals in Density Matrix Expansions
    On-the-fly computation of frontal orbitals in density matrix expansions
    Anastasia Kruchinina,* Elias Rudberg,* and Emanuel H. Rubensson*
    Division of Scientific Computing, Department of Information Technology, Uppsala University, Sweden
    E-mail: [email protected]; [email protected]; [email protected]
    (arXiv:1709.04900v1 [physics.comp-ph], 14 Sep 2017)

    Abstract: Linear scaling density matrix methods typically do not provide individual eigenvectors and eigenvalues of the Fock/Kohn-Sham matrix, so additional work has to be performed if they are needed. Spectral transformation techniques facilitate computation of frontal (homo and lumo) molecular orbitals. In the purify-shift-and-square method the convergence of iterative eigenvalue solvers is improved by combining recursive density matrix expansion with the folded spectrum method [J. Chem. Phys. 128, 176101 (2008)]. However, the location of the shift in the folded spectrum method and the iteration of the recursive expansion selected for eigenpair computation may have a significant influence on the iterative eigenvalue solver performance and eigenvector accuracy. In this work, we make use of recent homo and lumo eigenvalue estimates [SIAM J. Sci. Comput. 36, B147 (2014)] for selecting shift and iteration such that homo and lumo orbitals can be computed in a small fraction of the total recursive expansion time and with sufficient accuracy. We illustrate our method by performing self-consistent field calculations for large scale systems.

    1 Introduction. Computing interior eigenvalues of large matrices is one of the most difficult problems in the area of numerical linear algebra and it appears in many applications.
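The folded spectrum idea the abstract builds on is easy to state: the eigenvector of (A − σI)² with the smallest eigenvalue is the eigenvector of A whose eigenvalue lies nearest the shift σ. A minimal sketch with SciPy's ARPACK wrapper (not the paper's purify-shift-and-square implementation; function and variable names are ours):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def folded_spectrum_eigpair(A, sigma):
    """Interior eigenpair of the symmetric matrix A nearest the shift
    sigma, via the folded spectrum method: find the smallest eigenvalue
    of the (positive semidefinite) operator (A - sigma I)^2."""
    n = A.shape[0]
    def mv(x):
        y = A @ x - sigma * x
        return A @ y - sigma * y              # applies (A - sigma I)^2
    w, V = eigsh(LinearOperator((n, n), matvec=mv), k=1, which='SA')
    v = V[:, 0]
    lam = v @ (A @ v)                         # Rayleigh quotient recovers lambda
    return lam, v

# Example: eigenpair of a random symmetric matrix nearest sigma = 0.3.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 100)); A = (X + X.T) / 2
lam, v = folded_spectrum_eigpair(A, 0.3)
```

The paper's contribution is choosing σ and the recursive-expansion iteration so that this inner solve converges quickly; the sketch only shows the spectral transformation itself.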
  • Efficient “Black-Box” Multigrid Solvers for Convection-Dominated Problems
    EFFICIENT "BLACK-BOX" MULTIGRID SOLVERS FOR CONVECTION-DOMINATED PROBLEMS

    A thesis submitted to the University of Manchester for the degree of Doctor of Philosophy in the Faculty of Engineering and Physical Science, 2011. Glyn Owen Rees, School of Computer Science.

    Contents: Declaration; Copyright; Acknowledgement
    1 Introduction
    2 The Convection-Diffusion Problem
      2.1 The continuous problem
      2.2 The Galerkin approximation method
        2.2.1 The finite element method
      2.3 The Petrov-Galerkin (SUPG) approximation
      2.4 Properties of the discrete operator
      2.5 The Navier-Stokes problem
    3 Methods for Solving Linear Algebraic Systems
      3.1 Basic iterative methods
        3.1.1 The Jacobi method
        3.1.2 The Gauss-Seidel method
        3.1.3 Convergence of splitting iterations
        3.1.4 Incomplete LU (ILU) factorisation
      3.2 Krylov methods
        3.2.1 The GMRES method
        3.2.2 Preconditioned GMRES method
      3.3 Multigrid method
        3.3.1 Geometric multigrid method
        3.3.2 Algebraic multigrid method
        3.3.3 Parallel AMG
        3.3.4 Literature summary of multigrid
        3.3.5 Multigrid preconditioning of Krylov solvers
      3.4 A new tILU smoother
    4 Two-Dimensional Case Studies
      4.1 The diffusion problem
      4.2 Geometric multigrid preconditioning
        4.2.1 Constant uni-directional wind
        4.2.2 Double glazing problem - recirculating wind
        4.2.3 Combined uni-directional and recirculating wind …
  • Chapter 7 Iterative Methods for Large Sparse Linear Systems
    Chapter 7: Iterative methods for large sparse linear systems

    In this chapter we revisit the problem of solving linear systems of equations, but now in the context of large sparse systems. The price to pay for the direct methods based on matrix factorization is that the factors of a sparse matrix may not be sparse, so that for large sparse systems the memory cost makes direct methods too expensive, in memory and in execution time. Instead we introduce iterative methods, for which matrix sparsity is exploited to develop fast algorithms with a low memory footprint.

    7.1 Sparse matrix algebra

    Large sparse matrices. We say that the matrix A ∈ R^{n×n} is large if n is large, and that A is sparse if most of the elements are zero. If a matrix is not sparse, we say that the matrix is dense. Whereas for a dense matrix the number of nonzero elements is O(n²), for a sparse matrix it is only O(n), which has obvious implications for the memory footprint and the efficiency of algorithms that exploit the sparsity of a matrix. A diagonal matrix is a sparse matrix A = (a_ij) for which a_ij = 0 for all i ≠ j, and a diagonal matrix can be generalized to a banded matrix, for which there exists a number p, the bandwidth, such that a_ij = 0 for all i < j − p or i > j + p. For example, a tridiagonal matrix A is a banded matrix with p = 1:

    A = \begin{pmatrix}
          x & x & 0 & 0 & 0 & 0 \\
          x & x & x & 0 & 0 & 0 \\
          0 & x & x & x & 0 & 0 \\
          0 & 0 & x & x & x & 0 \\
          0 & 0 & 0 & x & x & x \\
          0 & 0 & 0 & 0 & x & x
        \end{pmatrix},   (7.1)

    where x represents a nonzero element.

    Compressed row storage. The compressed row storage (CRS) format is a data structure for efficient representation of a sparse matrix by three arrays, containing the nonzero values, the respective column indices, and the extents of the rows.
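A minimal sketch of the three CRS arrays just described, together with the O(nnz) matrix-vector product that motivates the format (function names are ours):

```python
import numpy as np

def dense_to_crs(A):
    """Build the three CRS arrays: nonzero values, their column
    indices, and the row extents (row pointer)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in A:
        for j, a in enumerate(row):
            if a != 0:
                values.append(a)
                col_idx.append(j)
        row_ptr.append(len(values))      # end of this row's nonzeros
    return np.array(values), np.array(col_idx), np.array(row_ptr)

def crs_matvec(values, col_idx, row_ptr, x):
    """y = A x touching only the stored nonzeros: O(nnz) work."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# Check on a 6x6 tridiagonal matrix shaped like (7.1).
A = np.diag(np.ones(5), -1) + np.diag(2 * np.ones(6)) + np.diag(np.ones(5), 1)
vals, cols, ptr = dense_to_crs(A)
x = np.arange(6.0)
assert np.allclose(crs_matvec(vals, cols, ptr, x), A @ x)
```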
  • Chebyshev and Fourier Spectral Methods 2000
    Chebyshev and Fourier Spectral Methods, Second Edition
    John P. Boyd
    University of Michigan, Ann Arbor, Michigan 48109-2143
    email: [email protected]
    http://www-personal.engin.umich.edu/jpboyd/
    2000, DOVER Publications, Inc., 31 East 2nd Street, Mineola, New York 11501

    Dedication: To Marilyn, Ian, and Emma. "A computation is a temptation that should be resisted as long as possible." — J. P. Boyd, paraphrasing T. S. Eliot

    Contents: Preface; Acknowledgments; Errata and Extended-Bibliography
    1 Introduction
      1.1 Series expansions
      1.2 First Example
      1.3 Comparison with finite element methods
      1.4 Comparisons with Finite Differences
      1.5 Parallel Computers
      1.6 Choice of basis functions
      1.7 Boundary conditions
      1.8 Non-Interpolating and Pseudospectral
      1.9 Nonlinearity
      1.10 Time-dependent problems
      1.11 FAQ: Frequently Asked Questions
      1.12 The Chrysalis
    2 Chebyshev & Fourier Series
      2.1 Introduction
      2.2 Fourier series
      2.3 Orders of Convergence
      2.4 Convergence Order
      2.5 Assumption of Equal Errors …
  • Deflation by Restriction for the Inverse-Free Preconditioned Krylov Subspace Method
    Numerical Algebra, Control and Optimization, Volume 6, Number 1, March 2016, pp. 55-71. doi:10.3934/naco.2016.6.55

    DEFLATION BY RESTRICTION FOR THE INVERSE-FREE PRECONDITIONED KRYLOV SUBSPACE METHOD

    Qiao Liang, Department of Mathematics, University of Kentucky, Lexington, KY 40506-0027, USA
    Qiang Ye,* Department of Mathematics, University of Kentucky, Lexington, KY 40506-0027, USA
    (Communicated by Xiaoqing Jin)

    Abstract. A deflation by restriction scheme is developed for the inverse-free preconditioned Krylov subspace method for computing a few extreme eigenvalues of the definite symmetric generalized eigenvalue problem Ax = λBx. The convergence theory for the inverse-free preconditioned Krylov subspace method is generalized to include this deflation scheme, and numerical examples are presented to demonstrate the convergence properties of the algorithm with the deflation scheme.

    1. Introduction. The definite symmetric generalized eigenvalue problem for (A, B) is to find λ ∈ R and x ∈ R^n with x ≠ 0 such that

        Ax = λBx,   (1)

    where A, B are n × n symmetric matrices and B is positive definite. The eigenvalue problem (1), also referred to as a pencil eigenvalue problem (A, B), arises in many scientific and engineering applications, such as structural dynamics, quantum mechanics, and machine learning. The matrices involved in these applications are usually large and sparse, and only a few of the eigenvalues are desired. Iterative methods such as the Lanczos algorithm and the Arnoldi algorithm are some of the most efficient numerical methods developed in the past few decades for computing a few eigenvalues of a large scale eigenvalue problem, see [1, 11, 19].
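For readers who want to experiment with problem (1) before reaching for the inverse-free preconditioned method, SciPy's ARPACK wrapper solves the definite generalized problem directly. A sketch on a hypothetical tiny stiffness/mass pencil (the paper's algorithm and its deflation scheme are not shown here):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n = 1000
# A hypothetical definite pencil: 1D stiffness matrix A (symmetric)
# and scaled mass matrix B (symmetric positive definite), both sparse.
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
B = diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n), format='csr') / 6.0

# A few smallest eigenpairs of A x = lambda B x: eigsh accepts the SPD
# matrix B through its M argument and returns B-orthonormal eigenvectors.
vals, vecs = eigsh(A, k=4, M=B, which='SA')
assert np.allclose(vecs.T @ (B @ vecs), np.eye(4))  # B-orthonormality
print(vals)
```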
  • Relaxed Modulus-Based Matrix Splitting Methods for the Linear Complementarity Problem †
    Symmetry (MDPI) Article: Relaxed Modulus-Based Matrix Splitting Methods for the Linear Complementarity Problem †

    Shiliang Wu (1,*), Cuixia Li (1) and Praveen Agarwal (2,3)
    (1) School of Mathematics, Yunnan Normal University, Kunming 650500, China; [email protected]
    (2) Department of Mathematics, Anand International College of Engineering, Jaipur 303012, India; [email protected]
    (3) Nonlinear Dynamics Research Center (NDRC), Ajman University, Ajman, United Arab Emirates
    * Correspondence: [email protected]
    † This research was supported by the National Natural Science Foundation of China (No. 11961082).

    Citation: Wu, S.; Li, C.; Agarwal, P. Relaxed Modulus-Based Matrix Splitting Methods for the Linear Complementarity Problem. Symmetry 2021, 13, 503. https://doi.org/10.3390/sym13030503. Academic Editor: Jan Awrejcewicz. MSC: 90C33; 65F10; 65F50; 65G40.

    Abstract: In this paper, we obtain a new equivalent fixed-point form of the linear complementarity problem by introducing a relaxed matrix and establish a class of relaxed modulus-based matrix splitting iteration methods for solving the linear complementarity problem. Some sufficient conditions for guaranteeing the convergence of relaxed modulus-based matrix splitting iteration methods are presented. Numerical examples are offered to show the efficacy of the proposed methods.

    Keywords: linear complementarity problem; matrix splitting; iteration method; convergence

    1. Introduction. In this paper, we focus on the iterative solution of the linear complementarity problem, abbreviated as 'LCP(q, A)', whose form is

        w = Az + q ≥ 0,  z ≥ 0,  and  z^T w = 0,   (1)

    where A ∈ R^{n×n} and q ∈ R^n are given, and z ∈ R^n is unknown; for two s × t matrices G = (g_ij) and H = (h_ij), the order G ≥ (>) H means g_ij ≥ (>) h_ij for any i and j.
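To make (1) concrete, here is a classical projected Gauss-Seidel iteration for LCP(q, A), a simpler relative of the modulus-based splittings studied in the paper, not the authors' relaxed method. It converges, for example, when A is symmetric positive definite (names are ours):

```python
import numpy as np

def lcp_projected_gauss_seidel(A, q, iters=500, tol=1e-10):
    """Solve LCP(q, A): find z >= 0 with w = A z + q >= 0 and z^T w = 0.
    Sweep over the components, solving each one-dimensional
    complementarity condition exactly and projecting onto z_i >= 0."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        z_old = z.copy()
        for i in range(n):
            r = q[i] + A[i] @ z - A[i, i] * z[i]   # residual without z_i
            z[i] = max(0.0, -r / A[i, i])
        if np.linalg.norm(z - z_old, np.inf) < tol:
            break
    return z, A @ z + q    # at convergence, z and w = Az + q satisfy (1)
```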
  • Numerical Solution of Saddle Point Problems
    Acta Numerica (2005), pp. 1-137. © Cambridge University Press, 2005. DOI: 10.1017/S0962492904000212. Printed in the United Kingdom.

    Numerical solution of saddle point problems

    Michele Benzi,* Department of Mathematics and Computer Science, Emory University, Atlanta, Georgia 30322, USA. E-mail: [email protected]
    Gene H. Golub,† Scientific Computing and Computational Mathematics Program, Stanford University, Stanford, California 94305-9025, USA. E-mail: [email protected]
    Jörg Liesen,‡ Institut für Mathematik, Technische Universität Berlin, D-10623 Berlin, Germany. E-mail: [email protected]

    We dedicate this paper to Gil Strang on the occasion of his 70th birthday.

    Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for this type of system. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.

    * Supported in part by the National Science Foundation grant DMS-0207599.
    † Supported in part by the Department of Energy of the United States Government.
    ‡ Supported in part by the Emmy Noether Programm of the Deutsche Forschungsgemeinschaft.

    Contents: 1 Introduction; 2 Applications leading to saddle point problems; 3 Properties of saddle point matrices; 4 Overview of solution algorithms; 5 Schur complement reduction; 6 Null space methods; 7 Coupled direct solvers; 8 Stationary iterations; 9 Krylov subspace methods; 10 Preconditioners; 11 Multilevel methods; 12 Available software; 13 Concluding remarks; References
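For orientation, the generic "saddle point form" such surveys study is the 2-by-2 block system below; this is a common sketch of the structure (the symmetric variant with a single off-diagonal block B and a zero or negative semidefinite (2,2) block), not a reproduction of the survey's exact notation:

```latex
\begin{pmatrix} A & B^{T} \\ B & -C \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\qquad
A \in \mathbb{R}^{n \times n},\quad
B \in \mathbb{R}^{m \times n},\quad
C \in \mathbb{R}^{m \times m},\quad m \le n.
```

The indefiniteness mentioned in the abstract comes from this block structure: even for symmetric positive definite A and C = 0, the coefficient matrix has both positive and negative eigenvalues, which is why specialized preconditioners and Krylov methods dominate the solution landscape.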
  • A Generalized Eigenvalue Algorithm for Tridiagonal Matrix Pencils Based on a Nonautonomous Discrete Integrable System
    A generalized eigenvalue algorithm for tridiagonal matrix pencils based on a nonautonomous discrete integrable system

    Kazuki Maeda,* Satoshi Tsujimoto
    Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan

    Abstract: A generalized eigenvalue algorithm for tridiagonal matrix pencils is presented. The algorithm appears as the time evolution equation of a nonautonomous discrete integrable system associated with a polynomial sequence which has some orthogonality on the support set of the zeros of the characteristic polynomial for a tridiagonal matrix pencil. The convergence of the algorithm is discussed by using the solution to the initial value problem for the corresponding discrete integrable system.

    Keywords: generalized eigenvalue problem, nonautonomous discrete integrable system, RII chain, dqds algorithm, orthogonal polynomials
    2010 MSC: 37K10, 37K40, 42C05, 65F15

    1. Introduction. Applications of discrete integrable systems to numerical algorithms are important and fascinating topics. Since the end of the twentieth century, a number of relationships between classical numerical algorithms and integrable systems have been studied (see the review papers [1-3]). On this basis, new algorithms based on discrete integrable systems have been developed: (i) singular value algorithms for bidiagonal matrices based on the discrete Lotka-Volterra equation [4, 5], (ii) Padé approximation algorithms based on the discrete relativistic Toda lattice [6] and the discrete Schur flow [7], (iii) eigenvalue algorithms for band matrices based on the discrete hungry Lotka-Volterra equation [8] and the nonautonomous discrete hungry Toda lattice [9], and (iv) algorithms for computing D-optimal designs based on the nonautonomous discrete Toda (nd-Toda) lattice [10] and the discrete modified KdV equation [11].
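The paper's algorithm is a structured, dqds-like iteration; as a reference point for experiments, the generalized eigenvalues of a small tridiagonal pencil can be computed with SciPy's dense QZ solver. A hypothetical example pencil (names and data are ours, and this dense solver is not the paper's method):

```python
import numpy as np
from scipy.linalg import eigvals

def tridiag(lower, diag, upper):
    """Assemble a dense tridiagonal matrix from its three bands."""
    return np.diag(lower, -1) + np.diag(diag) + np.diag(upper, 1)

rng = np.random.default_rng(1)
n = 8
A = tridiag(rng.standard_normal(n - 1), rng.standard_normal(n),
            rng.standard_normal(n - 1))
B = tridiag(rng.standard_normal(n - 1), rng.standard_normal(n) + 4.0,
            rng.standard_normal(n - 1))

# Generalized eigenvalues lambda with det(A - lambda B) = 0, via dense QZ.
print(np.sort_complex(eigvals(A, B)))
```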
  • Analysis of a Fast Hankel Eigenvalue Algorithm
    Analysis of a Fast Hankel Eigenvalue Algorithm
    Franklin T. Luk (a) and Sanzheng Qiao (b)
    (a) Department of Computer Science, Rensselaer Polytechnic Institute, Troy, New York, USA
    (b) Department of Computing and Software, McMaster University, Hamilton, Ontario, Canada

    ABSTRACT: This paper analyzes the important steps of an O(n² log n) algorithm for finding the eigenvalues of a complex Hankel matrix. The three key steps are a Lanczos-type tridiagonalization algorithm, a fast FFT-based Hankel matrix-vector product procedure, and a QR eigenvalue method based on complex-orthogonal transformations. In this paper we present an error analysis of the three steps as well as results from numerical experiments.

    Keywords: Hankel matrix, eigenvalue decomposition, Lanczos tridiagonalization, Hankel matrix-vector multiplication, complex-orthogonal transformations, error analysis

    1. INTRODUCTION. The eigenvalue decomposition of a structured matrix has important applications in signal processing. In this paper we consider a complex Hankel matrix H ∈ C^{n×n}, constant along each antidiagonal:

        H = (h_{i+j-1})_{i,j=1}^{n} =
            \begin{pmatrix}
              h_1 & h_2 & \cdots & h_n \\
              h_2 & h_3 & \cdots & h_{n+1} \\
              \vdots & \vdots & & \vdots \\
              h_n & h_{n+1} & \cdots & h_{2n-1}
            \end{pmatrix}.

    The authors proposed a fast algorithm for finding the eigenvalues of H. The key step is a fast Lanczos tridiagonalization algorithm, which employs a fast Hankel matrix-vector multiplication based on the fast Fourier transform (FFT). Then the algorithm performs a QR-like procedure using the complex-orthogonal transformations in the diagonalization to find the eigenvalues. In this paper we present an error analysis and discuss …
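The FFT-based Hankel matrix-vector product at the heart of the fast tridiagonalization can be sketched in a few lines: a Hankel matrix times x equals a Toeplitz matrix times the reversed x, and the Toeplitz product is evaluated by circulant embedding. A minimal sketch (the 0-based convention H[i, j] = h[i + j] and the function name are ours):

```python
import numpy as np

def hankel_matvec(h, x):
    """O(n log n) product H @ x for the n-by-n Hankel matrix with
    entries H[i, j] = h[i + j] (0-based, so len(h) == 2n - 1).
    H @ x = T @ x[::-1] for a Toeplitz matrix T, and the Toeplitz
    product is computed by embedding T in a circulant and using FFTs."""
    n = len(x)
    assert len(h) == 2 * n - 1
    c = np.concatenate([h[n - 1:], h[:n - 1]])   # circulant's first column
    xr = np.concatenate([x[::-1], np.zeros(n - 1)])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xr))
    return y[:n]

# Check against the dense product for a random complex Hankel matrix.
rng = np.random.default_rng(0)
n = 6
h = rng.standard_normal(2 * n - 1) + 1j * rng.standard_normal(2 * n - 1)
x = rng.standard_normal(n)
H = np.array([[h[i + j] for j in range(n)] for i in range(n)])
assert np.allclose(hankel_matvec(h, x), H @ x)
```

Inside a Lanczos iteration this replaces the O(n²) dense product, which is where the overall O(n² log n) eigenvalue cost comes from.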
  • Introduction to Mathematical Programming
    Introduction to Mathematical Programming
    Ming Zhong
    Lecture 24, October 29, 2018

    Singular Value Decomposition (SVD): Matrix Decomposition

    We have discussed several decomposition techniques (for A ∈ R^{n×n}):
    - A = LU when A is non-singular (Gaussian elimination).
    - A = LL^T when A is symmetric positive definite (Cholesky).
    - A = QR for any real square matrix (unique when A is non-singular).
    - A = PDP^{-1} if A has n linearly independent eigenvectors.
    - A = PJP^{-1} for any real square matrix (Jordan form).

    We are now ready to discuss the Singular Value Decomposition (SVD): A = UΣV*, for any A ∈ C^{m×n}, with U ∈ C^{m×m}, V ∈ C^{n×n}, and Σ ∈ R^{m×n}.

    Some Brief History. Going back in time: the SVD was originally developed by differential geometers, to establish the equivalence of bilinear forms under independent orthogonal transformations. Eugenio Beltrami in 1873 and, independently, Camille Jordan in 1874 derived it for bilinear forms. James Joseph Sylvester in 1889 independently obtained the SVD for real square matrices, calling the singular values the canonical multipliers of the matrix. Autonne in 1915 obtained the SVD via the polar decomposition. Carl Eckart and Gale Young in 1936 proved the SVD for rectangular and complex matrices, as a generalization of the principal axis transformation of Hermitian matrices. Erhard Schmidt in 1907 defined an SVD for integral operators. Émile Picard in 1910 was the first to call the numbers singular values.
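A quick numerical illustration of the decomposition and the unitarity of its factors, using NumPy (the example data is ours):

```python
import numpy as np

# SVD of an arbitrary (here random complex) rectangular matrix:
# A = U @ Sigma @ V*, with U, V unitary and Sigma real and nonnegative.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

U, s, Vh = np.linalg.svd(A)                 # s holds the singular values
Sigma = np.zeros((5, 3))
np.fill_diagonal(Sigma, s)

assert np.allclose(U @ Sigma @ Vh, A)               # reconstruction
assert np.allclose(U.conj().T @ U, np.eye(5))       # U is unitary
assert np.allclose(Vh @ Vh.conj().T, np.eye(3))     # V is unitary
```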
  • Inexact and Nonlinear Extensions of the FEAST Eigenvalue Algorithm
    University of Massachusetts Amherst, ScholarWorks@UMass Amherst, Doctoral Dissertations, October 2018

    Inexact and Nonlinear Extensions of the FEAST Eigenvalue Algorithm
    Brendan E. Gavin, University of Massachusetts Amherst

    Follow this and additional works at: https://scholarworks.umass.edu/dissertations_2
    Part of the Numerical Analysis and Computation Commons, and the Numerical Analysis and Scientific Computing Commons.

    Recommended Citation: Gavin, Brendan E., "Inexact and Nonlinear Extensions of the FEAST Eigenvalue Algorithm" (2018). Doctoral Dissertations. 1341. https://doi.org/10.7275/12360224, https://scholarworks.umass.edu/dissertations_2/1341

    This Open Access Dissertation is brought to you for free and open access by the Dissertations and Theses at ScholarWorks@UMass Amherst. It has been accepted for inclusion in Doctoral Dissertations by an authorized administrator of ScholarWorks@UMass Amherst. For more information, please contact [email protected].

    A Dissertation Presented by BRENDAN GAVIN, submitted to the Graduate School of the University of Massachusetts Amherst in partial fulfillment of the requirements for the degree of Doctor of Philosophy, September 2018. Electrical and Computer Engineering. © Copyright by Brendan Gavin 2018. All Rights Reserved. Approved as to style and content by: Eric Polizzi, Chair; Zlatan Aksamija, Member …