Circulant Decomposition of a Matrix and the Eigenvalues of Toeplitz Type Matrices


Hariprasad M., Murugesan Venkatapathi
Department of Computational & Data Sciences, Indian Institute of Science, Bangalore - 560012
Email addresses: [email protected] (Hariprasad M.), [email protected] (Murugesan Venkatapathi)

arXiv:2105.14805v1 [math.NA] 31 May 2021

Abstract

We begin by showing that an $n \times n$ matrix can be decomposed into a sum of $n$ circulant matrices with appropriate relaxations. We use Fast-Fourier-Transform (FFT) operations to perform a sparse similarity transformation representing only the dominant circulant components, to evaluate all eigenvalues of dense Toeplitz, block-Toeplitz and other periodic or quasi-periodic matrices, to a reasonable approximation in $O(n^2)$ arithmetic operations. This sparse similarity transformation can be exploited for other evaluations as well.

Keywords: Eigenvalues, periodic entries, symbol.
AMS subject classifications: 65F15, 15A18, 15B05, 15A04

1. Introduction

Multiple approaches to efficiently and accurately evaluate eigenvalues of sparse and effectively-sparse matrices are known [1, 2, 3, 4, 5]. Efficient methods are equally desirable for dense matrices that occur frequently in the numerical solution of eigenvalue problems; Toeplitz, block-Toeplitz, and other matrices with periodicity in the diagonals are such a class. $O(n^2)$ algorithms to evaluate the characteristic polynomial and its derivative were proposed a few decades ago for Toeplitz [6], block-Toeplitz [7, 8] and Hankel matrices [9]. These algorithms were amalgamated with Newton methods applied to evaluate only the eigenvalues of interest. On the other hand, the asymptotic behavior of spectra of Toeplitz and block-Toeplitz operators in the limit of large dimensions has been well studied [10], [11], and they can also be approximated by an appropriate circulant matrix.

An approximation of a finite-dimensional Toeplitz matrix using a circulant matrix for speed-up of operations was suggested decades ago [12], [13]. Similarly, circulant preconditioners were constructed for Toeplitz matrices by minimizing the Frobenius norm of the residue [14]. Spectrum-preserving properties of this preconditioner were shown [15], and block-circulant versions were also proposed and analyzed [16]. Note that this preconditioner is the first term in the circulant series representation of a matrix, discussed in this work. Since these preconditioners are a projection of the matrix onto low-complexity matrix classes, they speed up iterative methods in regularization and optimization. But these preconditioners may not be sufficiently accurate as a similarity transformation of the given matrix.

First, using the symbol of a Toeplitz matrix, we highlight cases where this single-term circulant approximation is effective for Toeplitz matrices. Later, we broaden the scope of this work to matrices with any periodicity along the diagonals. We first recall in remark 1 that any matrix can be decomposed into $n$ cycles that generate its $2n-1$ diagonals. These cycles are given by an $n$-term power series of the cyclic permutation matrix with appropriate relaxations. We later show in lemma 1 that when this decomposition is instead applied on an appropriate similarity transformation of the matrix, it is equivalent to a decomposition of the given matrix into $n$ circulant matrices with appropriate relaxations. By including only the dominant cycles of the similar matrix, we include the dominant circulant components of the given matrix. The sparse similar matrix can be operated on by a non-symmetric Lanczos algorithm for (block) tridiagonalization, and one can evaluate all eigenvalues of such dense matrices in $O(n^2)$ arithmetic operations.
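As a concrete illustration of the single-term circulant component mentioned above (the Frobenius-optimal circulant preconditioner of [14]): its first column is obtained by averaging the cyclic diagonals of $A$, and its eigenvalues by one FFT of that column. The following is a minimal NumPy sketch of this idea; the function name and test matrix are ours, not code from the paper.

```python
import numpy as np

def single_term_circulant_eigs(A):
    """Eigenvalues of the nearest circulant to A in the Frobenius norm:
    average each cyclic diagonal of A (O(n^2) for a dense matrix), then
    take the DFT of the resulting first column (O(n log n))."""
    n = A.shape[0]
    c = np.empty(n, dtype=complex)
    c[0] = np.trace(A) / n
    for d in range(1, n):
        # cyclic diagonal d: entries with (row - col) mod n == d
        c[d] = (np.diagonal(A, -d).sum() + np.diagonal(A, n - d).sum()) / n
    return np.fft.fft(c)  # eigenvalues of a circulant = DFT of its first column

# Example: a dense Toeplitz matrix with random diagonal values a_{-(n-1)..n-1}
n = 200
a = np.random.randn(2 * n - 1)                     # a[k + n - 1] stores a_k
A = np.array([[a[j - i + n - 1] for j in range(n)] for i in range(n)])
estimates = single_term_circulant_eigs(A)
```

Up to the ordering of the estimates, these values coincide with the diagonal entries of $W A W^\dagger$ used as the single-term approximation in section 2.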
Let $I_n$ be the identity matrix of dimension $n$, and let $C$ be the permutation matrix corresponding to the adjacency matrix of a full cycle,

$$C = \begin{bmatrix} 0 & 1 \\ I_{n-1} & 0 \end{bmatrix}_{n \times n}.$$

Remark 1. Any matrix $A$ can be decomposed into $n$ cycles given by a power series in $C$, such that $A = \sum_{j=0}^{n-1} \Lambda_j C^j$.

In the above, we refer to the non-zero entries of the Hadamard product $A \circ C^j$, supported on $C^j$, as the $j$th cycle of the matrix $A$. Let the permutation matrix given by a flipped identity matrix be

$$J = \begin{bmatrix} 0 & \cdots & 0 & 0 & 1 \\ 0 & \cdots & 0 & 1 & 0 \\ 0 & \cdots & 1 & 0 & 0 \\ \vdots & & & & \vdots \\ 1 & \cdots & 0 & 0 & 0 \end{bmatrix}_{n \times n}.$$

Remark 2. Eigenvalues of a circulant matrix $R$ are given by the discrete Fourier transform of the first row $R(1,j)$. The corresponding eigenvectors are given by the columns of a matrix $W$ with $W(p,q) = \frac{1}{\sqrt{n}} e^{i \frac{2\pi}{n} pq}$, where $p, q = 1, 2, \cdots, n$. The matrix $W$ has the property $W^2 = JC$, so any circulant matrix has an eigendecomposition $R = W \Lambda W^\dagger$. $R$ is also given by $R = W^\dagger \tilde{\Lambda} W$, where $\tilde{\Lambda} = J C \Lambda C^T J^T$.

Lemma 1. Given diagonal matrices $D_k$ with $D_k(p,p) = e^{i \frac{2\pi k p}{n}}$ and any circulant matrix $R$, the matrices $R D_k$ and $\tilde{\Lambda} (C^T)^k$ are similar. Here $\Lambda$ is a diagonal matrix with the non-zero entries given by the eigenvalues of $R$, and $C$ is the cyclic permutation matrix.

Proof. Using $W$ for a similarity transformation,

$$W R D_k W^\dagger = W W^\dagger \tilde{\Lambda} W D_k W^\dagger = \tilde{\Lambda} W D_k W^\dagger = \tilde{\Lambda} (C^T)^k.$$

The equality $W D_k W^\dagger = (C^T)^k$ can be deduced by evaluating its $(p,l)$ entry, given by

$$W D_k W^\dagger (p,l) = \frac{1}{n} \sum_q e^{i \frac{2\pi p q}{n}} e^{i \frac{2\pi k q}{n}} e^{-i \frac{2\pi q l}{n}} = \frac{1}{n} \sum_q e^{i \frac{2\pi (p+k-l) q}{n}} = 1 \text{ when } l = p + k, \text{ and } 0 \text{ otherwise}.$$

Hence, the eigenvalues of $R D_k$ are the eigenvalues of the matrix $\tilde{\Lambda} (C^T)^k$.

The matrix $\tilde{\Lambda} (C^T)^k$ is sparse, and its eigenvalues are easy to compute. Motivated by this result, we construct the decomposition of a matrix into these components.

Theorem 1. Any $n \times n$ square matrix $A$ can be represented as a sum of $n$ circulant matrices with relaxations taking values from the $n$th roots of unity. It is of the form

$$A = \sum_{k=1}^{n} R_k D_k,$$

where $R_k$ is a circulant matrix and $D_k$ is a diagonal matrix with $D_k(p,p) = e^{i \frac{2\pi k p}{n}}$.

Proof. Consider the matrix $B = W A W^\dagger$. Using remark 1 to decompose it into cycles, and recalling the proof of lemma 1, we have $B = \sum_{k=1}^{n} \tilde{\Lambda}_k (C^T)^k = \sum_{k=1}^{n} \tilde{\Lambda}_k W D_k W^\dagger$, where the $\tilde{\Lambda}_k$ are diagonal. The corresponding matrix $A$ is then $A = \sum_{k=1}^{n} W^\dagger \tilde{\Lambda}_k W D_k = \sum_{k=1}^{n} R_k D_k$.

Corollary 1. If a matrix $A$ has $k$ particular dominant frequencies on each of its cycles, then the matrix $W A W^\dagger$ has only $k$ dominant cycles.

Proof. Consider the decomposition of $A$ into cycles,

$$A = \sum_{q=0}^{n-1} \Lambda_q C^q,$$

where the entries of each $\Lambda_q$ are given by $k$ dominant frequencies. Using the proposed similarity transformation,

$$B = W A W^\dagger = W \left( \sum_{q=0}^{n-1} \Lambda_q C^q \right) W^\dagger = \sum_{q=0}^{n-1} R_q D_q.$$

We know each circulant matrix $R_q$ has eigenvalues given by the diagonal matrix $\Lambda_q$. As mentioned in remark 2, the entries $R_q(1,j)$ are given by the inverse FFT of $\Lambda_q$, and hence all rows have $k$ particular dominant entries, with the transformed similar matrix $B$ having only $k$ dominant cycles. Also note that $\sum_{q=0}^{n-1} R_q(1,j)\, C^j D_q$ is the FFT of the sequence $R_q(1,j)$ for $q = 0, 1, 2, \cdots, n-1$. By Parseval's theorem, a cycle supported on $C^j$ in $B$ has the same energy as the sequence of corresponding entries $R_q(1,j)$ for $q = 0, 1, 2, \cdots, n-1$.
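Theorem 1 can be checked numerically. The sketch below follows the proof: it forms $B = W A W^\dagger$, reads off the cycles of $B$ (the diagonals $\tilde{\Lambda}_k$ supported on $(C^T)^k$), and reassembles $A$ as $\sum_k R_k D_k$. This is our own construction for illustration, not code from the paper.

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

p = np.arange(1, n + 1)                                    # indices 1..n as in remark 2
W = np.exp(2j * np.pi * np.outer(p, p) / n) / np.sqrt(n)   # W(p,q) = e^{i 2pi pq/n}/sqrt(n)
B = W @ A @ W.conj().T

recon = np.zeros_like(A)
for k in range(1, n + 1):
    D_k = np.diag(np.exp(2j * np.pi * k * p / n))          # relaxation D_k
    # k-th cycle of B, supported on (C^T)^k: entries B(p, p+k mod n)
    lam = np.diag(B[np.arange(n), (np.arange(n) + k) % n])
    R_k = W.conj().T @ lam @ W                             # circulant factor R_k
    recon += R_k @ D_k

print(np.allclose(A, recon))                               # True: A = sum_k R_k D_k
```

Keeping only the few cycles of $B$ with the largest $\|\tilde{\Lambda}_k\|$ yields the sparse similar matrix used for the $O(n^2)$ eigenvalue evaluation described in the introduction.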
2. Single-term circulant approximation of a Toeplitz matrix and its relation with the symbol

A Toeplitz matrix is of the form

$$A = \begin{bmatrix} a_0 & a_1 & a_2 & \cdots & a_{n-1} \\ a_{-1} & a_0 & a_1 & \ddots & \vdots \\ a_{-2} & a_{-1} & a_0 & \ddots & a_2 \\ \vdots & \ddots & \ddots & \ddots & a_1 \\ a_{-(n-1)} & a_{-(n-2)} & \cdots & a_{-1} & a_0 \end{bmatrix}.$$

The function $a(e^{i\theta}) = \sum_{k=-(n-1)}^{n-1} a_k e^{ik\theta}$ for $\theta \in [0, 2\pi)$ is called the symbol of the matrix $A$ [17]. In this section we consider different types of symbols, the corresponding eigenvalues of the single-term circulant approximation, and the actual spectrum of the Toeplitz matrices. The single-term approximation is given by only the diagonal entries of $W A W^\dagger$. The symbol may be exploited for computing efficiency, when applicable, as it can be evaluated in $O(n)$ arithmetic operations. On the other hand, the single-term circulant approximation is more accurate and robust in general.

2.1. Case 1: Symbol is a product of a polynomial and a trigonometric polynomial

The spectrum of $A$ is said to be canonically distributed if the limiting spectrum approaches the range of the symbol. For symbols of the form $a(e^{i\theta}) = p(\theta) q(e^{i\theta})$ with polynomials $p$ and $q$, we consider the Toeplitz matrices whose entries are from its truncated Fourier series expansion. Several conditional theorems on the symbol such that the spectrum is canonically distributed have been presented [17]. Figure 1 shows the spectrum, the single-term circulant approximation, and the symbol for $a(\theta) = (1 + \theta) e^{i\theta}$.

[Figure 1: Range of the symbol, and diagonal entries of $W A W^\dagger$, for a Toeplitz matrix of dimension 200 with the symbol $a(\theta) = (1 + \theta) e^{i\theta}$. The plot compares the actual eigenvalues, the range of $a$, and $\mathrm{diag}(W A W^\dagger)$ in the complex plane.]

2.2. Case 2: Symbol of the banded Toeplitz matrix is a trigonometric polynomial

For a banded Toeplitz matrix $A$, with first row $a_0, a_1, a_2, \cdots, a_l$ and first column entries $a_0, a_{-1}, \cdots, a_{-m}$, the symbol is given by $a(e^{i\theta}) = \sum_{k=-m}^{l} a_k e^{ik\theta}$.
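For such banded symbols, the $O(n)$ evaluation mentioned above amounts to summing a short trigonometric polynomial on a grid of angles. A minimal sketch under assumed illustrative coefficients (the band values and variable names below are ours):

```python
import numpy as np

# Banded Toeplitz matrix: the entry on diagonal j - i = k is a_k.
# The coefficients below are arbitrary illustrative values.
n = 200
coeffs = {-2: 0.5, -1: 1.0, 0: 2.0, 1: 1.0}            # a_k keyed by offset k
A = sum(c * np.eye(n, k=k) for k, c in coeffs.items())

# Symbol a(e^{i theta}) = sum_{k=-m}^{l} a_k e^{i k theta}, sampled on a
# uniform grid: O(n) work per band, versus a dense eigensolve for reference.
theta = 2 * np.pi * np.arange(n) / n
symbol_values = sum(c * np.exp(1j * k * theta) for k, c in coeffs.items())

exact_eigs = np.linalg.eigvals(A)                      # reference spectrum
```

In canonically distributed cases the eigenvalues cluster along these symbol samples; as noted above, the diagonal entries of $W A W^\dagger$ remain the more accurate and robust estimate in general.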