Block-Tridiagonal Matrices


Block-tridiagonal matrices: where do these arise?

• as a result of a particular mesh-point ordering;
• as a part of a factorization procedure, for example when we compute the eigenvalues of a matrix.

Consider a two-dimensional domain partitioned in strips Ω_1, Ω_2, Ω_3, .... Assume that points on the lines of intersection are coupled only to their nearest neighbors in the underlying mesh (and we do not have periodic boundary conditions). Hence, there is no coupling between subdomains except through the "glue" on the interfaces.

When the subdomains are ordered lexicographically from left to right, a domain Ω_i becomes coupled only to its predecessor Ω_{i-1} and successor Ω_{i+1}, and the corresponding matrix takes the form of a block-tridiagonal matrix A = tridiag(A_{i,i-1}, A_{i,i}, A_{i,i+1}):

    A = [ A_{11}  A_{12}                       0      ]
        [ A_{21}  A_{22}  A_{23}                      ]
        [         ...     ...      ...                ]
        [ 0               A_{n,n-1}  A_{n,n}          ]

For definiteness we let the boundary meshline Ω̄_i ∩ Ω̄_{i+1} belong to Ω_i. In order to preserve the sparsity pattern we shall factor A without use of permutations. Naturally, the lines of intersection do not have to be straight.

How do we factorize a (block-)tridiagonal matrix A?

Let A be block-tridiagonal, and expressed as A = D_A - L_A - U_A, where L_A and U_A are strictly lower and upper triangular. It is convenient to seek L, D, U such that A = L D^{-1} U, where D is block diagonal, L = D - L_A and U = D - U_A. Direct computation gives

    L D^{-1} U = (D - L_A) D^{-1} (D - U_A) = D - L_A - U_A + L_A D^{-1} U_A,

i.e., D_A = D + L_A D^{-1} U_A. Important: L_A and U_A are strictly lower and upper triangular.

A = L D^{-1} U for pointwise tridiagonal matrices: comparing the diagonal entries in D_A = D + L_A D^{-1} U_A,

    diag(a_{11}, a_{22}, ..., a_{nn}) = diag(d_1, d_2, ..., d_n) + L_A D^{-1} U_A,

where L_A carries the subdiagonal entries -a_{21}, ..., -a_{n,n-1} and U_A the superdiagonal entries -a_{12}, ..., -a_{n-1,n}, yields the factorization algorithm

    d_1 = a_{1,1},
    d_i = a_{i,i} - a_{i,i-1} d_{i-1}^{-1} a_{i-1,i},   i = 2, ..., n.

Solution of systems then reduces to solving with the factors of L D^{-1} U.

For a block-tridiagonal A = D_A - L_A - U_A, one can envisage three major versions of the factorization algorithm:

(i) A = (D - L_A) D^{-1} (D - U_A), where D_1 = A_{11} and D_i = A_{ii} - A_{i,i-1} D_{i-1}^{-1} A_{i-1,i}, i = 2, ..., n;

(ii) the same factorization, but with the inverses X_i = D_i^{-1} formed explicitly, X_1 = A_{11}^{-1}, X_i = (A_{ii} - A_{i,i-1} X_{i-1} A_{i-1,i})^{-1}, so that the substitution steps require only matrix-vector multiplications;

(iii) A = (I - L̃_A) D (I - Ũ_A) (inverse-free substitutions), where L̃_A = L_A D^{-1} and Ũ_A = D^{-1} U_A. Here A^{-1} = (I - Ũ_A)^{-1} D^{-1} (I - L̃_A)^{-1} and, since Ũ_A is strictly upper triangular and hence nilpotent,

    (I - Ũ_A)^{-1} = (I + Ũ_A^{2^s}) ⋯ (I + Ũ_A^2)(I + Ũ_A),   with 2^{s+1} ≥ n,

and similarly for (I - L̃_A)^{-1}.

Existence of the factorization for block-tridiagonal matrices: we assume that the matrices are real. It can be shown that the pivot block D_r is always nonsingular, at every stage r, for two important classes of matrices, namely for

• matrices which are positive definite, i.e., x^T A x > 0 for all x ∈ R^n, x ≠ 0 (if A has order n);

• blockwise generalized diagonally dominant matrices (also called block H-matrices), i.e., for which the diagonal blocks A_{ii} are nonsingular and

      ||A_{ii}^{-1} A_{i,i-1}|| + ||A_{ii}^{-1} A_{i,i+1}|| ≤ 1,   i = 1, 2, ..., n

  (here A_{1,0} = 0 and A_{n,n+1} = 0).

The factorization passes through stages r = 1, 2, ..., n-1. For these two important classes of matrices it holds that the successive top blocks, i.e., the pivot matrices which arise after every factorization stage, are nonsingular.
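The slides give no code, but the pointwise algorithm above is easy to sketch. The following minimal NumPy snippet (array and function names are ours, not the slides') computes the pivots d_i and verifies A = L D^{-1} U on a small diagonally dominant example:

```python
import numpy as np

def ldu_tridiag(sub, diag, sup):
    """Pivots d_i for the A = L D^{-1} U factorization of a tridiagonal matrix
    with subdiagonal `sub`, diagonal `diag`, superdiagonal `sup`.
    Sketch only: assumes all pivots are nonzero (e.g., A diagonally dominant)."""
    n = len(diag)
    d = np.empty(n)
    d[0] = diag[0]
    for i in range(1, n):
        # d_i = a_{i,i} - a_{i,i-1} d_{i-1}^{-1} a_{i-1,i}
        d[i] = diag[i] - sub[i - 1] * sup[i - 1] / d[i - 1]
    return d

# toy check on a diagonally dominant example
n = 6
sub = -np.ones(n - 1); sup = -np.ones(n - 1); diag = 3.0 * np.ones(n)
d = ldu_tridiag(sub, diag, sup)
A = np.diag(diag) + np.diag(sub, -1) + np.diag(sup, 1)
L = np.diag(d) + np.diag(sub, -1)    # L = D - L_A
U = np.diag(d) + np.diag(sup, 1)     # U = D - U_A
assert np.allclose(L @ np.diag(1.0 / d) @ U, A)
```

Note that the loop is inherently sequential: each pivot d_i depends on d_{i-1}, which is exactly the recursion issue discussed later in these slides.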
At every stage the current matrix A^{(r)} is partitioned in 2 × 2 blocks,

    A = A^{(1)} = [ A_{11}  A_{12}   ...       0     ]     [ A^{(1)}_{11}  A^{(1)}_{12} ]
                  [ A_{21}  A_{22}  A_{23}    ...    ]  =  [ A^{(1)}_{21}  A^{(1)}_{22} ]
                  [         ...     ...       ...    ]
                  [ 0       ...    A_{n,n-1}  A_{nn} ]

At the r-th stage we compute D_r = A^{(r)}_{11} and factor A^{(r)}:

    A^{(r)} = [ I                                0 ]  [ A^{(r)}_{11}  A^{(r)}_{12} ]
              [ A^{(r)}_{21} (A^{(r)}_{11})^{-1}  I ]  [ 0             A^{(r+1)}    ]

where A^{(r+1)} = A^{(r)}_{22} - A^{(r)}_{21} (A^{(r)}_{11})^{-1} A^{(r)}_{12} is the so-called Schur complement.

Existence of the factorization for block-tridiagonal matrices: the factorization of a block matrix is equivalent to the block Gaussian elimination of it. Note then that the only block in A^{(r)}_{22} which will be affected by the elimination (of the block A^{(r)}_{21}) is the top block of the block-tridiagonal decomposition of A^{(r)}_{22}, i.e., A^{(r+1)}_{11}, the new pivot matrix. We show that for the above matrix classes the Schur complement A^{(r+1)} = A^{(r)}_{22} - A^{(r)}_{21} (A^{(r)}_{11})^{-1} A^{(r)}_{12} belongs to the same class as A^{(r)}, i.e., in particular, that the pivot blocks are nonsingular.

Lemma 1. Let A = [ A_{11}  A_{12} ; A_{21}  A_{22} ] be positive definite. Then A_{ii}, i = 1, 2, and the Schur complement S = A_{22} - A_{21} A_{11}^{-1} A_{12} are also positive definite.

Proof. There holds x^T A x = x_1^T A_{11} x_1 for all x = (x_1, 0)^T. Hence x_1^T A_{11} x_1 > 0 for all x_1 ≠ 0, i.e., A_{11} is positive definite. Similarly, it can be shown that A_{22} is positive definite. Since A is nonsingular,

    x^T A x = (A x)^T A^{-1} (A x) = y^T A^{-1} y   for y = A x,

so y^T A^{-1} y > 0 for all y ≠ 0, i.e., the inverse of A is also positive definite. Use now the explicit form of the inverse computed by use of the factorization of A,

    A^{-1} = [ I  -A_{11}^{-1} A_{12} ]  [ A_{11}^{-1}  0      ]  [ I                    0 ]     [ *  *      ]
             [ 0   I                  ]  [ 0            S^{-1} ]  [ -A_{21} A_{11}^{-1}  I ]  =  [ *  S^{-1} ]

where * indicates entries not important for the present discussion. Hence, since A^{-1} is positive definite, so is its diagonal block S^{-1}. Hence the inverse of S^{-1}, and therefore also S, is positive definite.

Corollary 1. When A^{(r)} is positive definite, A^{(r+1)} and, in particular, A^{(r+1)}_{11} are positive definite.

Proof. A^{(r+1)} is a Schur complement of A^{(r)}, so by Lemma 1, A^{(r+1)} is positive definite when A^{(r)} is. In particular, its top diagonal block is positive definite.

Lemma 2. Let A = [ A_{11}  A_{12} ; A_{21}  A_{22} ] be blockwise generalized diagonally dominant, where A is block tridiagonal. Then the Schur complement S = A_{22} - A_{21} A_{11}^{-1} A_{12} is also generalized diagonally dominant.

Proof (hint). Since the only block in S which has been changed from A_{22} is its top block, which is replaced by the new pivot A^{(r+1)}_{11}, it suffices to show that this block is nonsingular and that the first block column of S is generalized diagonally dominant.

Linear recursions. Consider the solution of the linear system Ax = b, where A has already been factorized as A = LU or A = L D^{-1} U. The matrices L = {l_{i,j}} and U = {u_{i,j}} are lower- and upper-triangular, respectively; for simplicity we take them with unit diagonal (otherwise each step below ends with a division by the diagonal entry). To compute x, we must perform two steps:

forward substitution, Lz = b, i.e.,

    z_1 = b_1,   z_i = b_i - Σ_{k=1}^{i-1} l_{i,k} z_k,   i = 2, 3, ..., n;

backward substitution, Ux = z, i.e.,

    x_n = z_n,   x_i = z_i - Σ_{k=i+1}^{n} u_{i,k} x_k,   i = n-1, n-2, ..., 1.

While the implementation of the forward and backward substitution on a serial computer is trivial, implementing them on a vector or parallel computer system is problematic: these relations are particular examples of a linear recursion, which is an inherently sequential process.
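As a concrete serial rendering of the two substitution steps (assuming, as in the formulas above, unit-diagonal factors; function names are ours), a minimal sketch:

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L z = b for unit lower-triangular L: z_i = b_i - sum_{k<i} l_ik z_k."""
    n = len(b)
    z = np.empty(n)
    for i in range(n):                 # sequential: z_i needs z_1, ..., z_{i-1}
        z[i] = b[i] - L[i, :i] @ z[:i]
    return z

def backward_substitution(U, z):
    """Solve U x = z for unit upper-triangular U: x_i = z_i - sum_{k>i} u_ik x_k."""
    n = len(z)
    x = np.empty(n)
    for i in range(n - 1, -1, -1):     # runs backwards, again sequential
        x[i] = z[i] - U[i, i + 1:] @ x[i + 1:]
    return x

# toy check
L = np.array([[1.0, 0.0, 0.0], [2.0, 1.0, 0.0], [3.0, 4.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
z = forward_substitution(L, b)
assert np.allclose(L @ z, b)
```

The loop-carried dependence visible in both loops is exactly the "inherently sequential" recursion the slides refer to.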
A general m-level recurrence relation reads

    x_i = a_{1,i} x_{i-1} + a_{2,i} x_{i-2} + ... + a_{m,i} x_{i-m} + b_i,

and the performance of its straightforward vector or parallel implementation is degraded due to the existing backward data dependencies.

Can we somehow speed up the solution of systems with bi- or tridiagonal matrices?

Multifrontal solution methods

[Figure: (a) the two-way frontal method, with the unknowns renumbered 1, 3, 5, 7, 9, ..., 8, 6, 4, 2 from the two ends towards the middle node n_0; (b) the structure of the correspondingly reordered matrix A.]

Any tridiagonal or block-tridiagonal matrix can be attacked in parallel from both ends, after a proper renumbering of the unknowns. It can be seen that we can work independently on the odd-numbered and the even-numbered points until we have eliminated all entries except the final corner one. Hence, the factorization and the forward substitution can occur in parallel for the two fronts (the even and the odd).
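The slides give no implementation, but a minimal sketch of this two-way elimination for a scalar tridiagonal system might look as follows (names and the meeting index m are ours; no pivoting, so we assume, e.g., diagonal dominance). The two sweeps are independent and could be executed concurrently; they meet at the middle unknown, the final "corner" entry:

```python
import numpy as np

def solve_two_way(sub, diag, sup, rhs, m=None):
    """Solve a tridiagonal system by eliminating from both ends at once.
    sub, sup: length n-1; diag, rhs: length n. Sketch only: no pivoting."""
    d = diag.astype(float).copy(); b = rhs.astype(float).copy()
    n = len(d)
    m = n // 2 if m is None else m
    for i in range(1, m + 1):             # downward front (rows 1..m)
        l = sub[i - 1] / d[i - 1]
        d[i] -= l * sup[i - 1]
        b[i] -= l * b[i - 1]
    for i in range(n - 2, m - 1, -1):     # upward front (rows n-2..m), independent
        u = sup[i] / d[i + 1]
        d[i] -= u * sub[i]
        b[i] -= u * b[i + 1]
    x = np.empty(n)
    x[m] = b[m] / d[m]                    # the single remaining "corner" unknown
    for i in range(m - 1, -1, -1):        # back-substitute outward from the middle
        x[i] = (b[i] - sup[i] * x[i + 1]) / d[i]
    for i in range(m + 1, n):
        x[i] = (b[i] - sub[i - 1] * x[i - 1]) / d[i]
    return x

# toy check
n = 7
sub = -np.ones(n - 1); sup = -np.ones(n - 1); diag = 3.0 * np.ones(n)
A = np.diag(diag) + np.diag(sub, -1) + np.diag(sup, 1)
b = np.arange(1.0, n + 1)
assert np.allclose(A @ solve_two_way(sub, diag, sup, b), b)
```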
Recommended publications
  • Eigenvalues and Eigenvectors of Tridiagonal Matrices∗
Electronic Journal of Linear Algebra, ISSN 1081-3810, a publication of the International Linear Algebra Society, Volume 15, pp. 115-133, April 2006. http://math.technion.ac.il/iic/ela EIGENVALUES AND EIGENVECTORS OF TRIDIAGONAL MATRICES∗ SAID KOUACHI† Abstract. This paper is a continuation of previous work by the present author, where explicit formulas for the eigenvalues associated with several tridiagonal matrices were given. In this paper the associated eigenvectors are calculated explicitly. As a consequence, a result obtained by Wen-Chyuan Yueh and independently by S. Kouachi, concerning the eigenvalues and in particular the corresponding eigenvectors of tridiagonal matrices, is generalized. Expressions for the eigenvectors are obtained that differ completely from those obtained by Yueh. The techniques used herein are based on the theory of recurrent sequences. The entries situated on each of the secondary diagonals are not necessarily equal, as was the case considered by Yueh. Key words. Eigenvectors, Tridiagonal matrices. AMS subject classifications. 15A18. 1. Introduction. The subject of this paper is the diagonalization of tridiagonal matrices. We generalize a result obtained in [5] concerning the eigenvalues and the corresponding eigenvectors of several tridiagonal matrices. We consider tridiagonal matrices of the form

    A_n = [ -α+b   c_1    0      ...      0    ]
          [ a_1    b      c_2    0  ...   0    ]
          [ 0      a_2    b      ...           ]
          [               ...    ...   c_{n-1} ]
          [ 0      ...    0   a_{n-1}   -β+b   ],        (1)

where {a_j}_{j=1}^{n-1} and {c_j}_{j=1}^{n-1} are two finite subsequences of the sequences {a_j}_{j=1}^{∞} and {c_j}_{j=1}^{∞} of the field of complex numbers C, respectively, and α, β and b are complex numbers. We suppose that

    a_j c_j = d_1^2 if j is odd,   a_j c_j = d_2^2 if j is even,   j = 1, 2, ...,        (2)

where d_1 and d_2 are complex numbers.
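Not from Kouachi's paper: a small NumPy sketch that builds a matrix of the form (1) subject to condition (2) (the concrete values of n, b, α, β, d_1, d_2 below are our own choices) and computes its spectrum numerically, which is one way to sanity-check explicit eigenvalue formulas of this type:

```python
import numpy as np

# Matrix of type (1): constant diagonal b, perturbed corners -alpha+b and -beta+b,
# off-diagonals chosen so that a_j c_j = d1^2 (j odd) and d2^2 (j even).
n, b, alpha, beta, d1, d2 = 6, 2.0, 0.5, -0.5, 1.0, 3.0
a = np.array([d1**2 if j % 2 == 1 else d2**2 for j in range(1, n)])  # a_j (taking c_j = 1)
c = np.ones(n - 1)
A = b * np.eye(n) + np.diag(a, -1) + np.diag(c, 1)
A[0, 0] = -alpha + b
A[-1, -1] = -beta + b
print(np.sort(np.linalg.eigvals(A)))
```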
  • Parametrizations of K-Nonnegative Matrices
    Parametrizations of k-Nonnegative Matrices Anna Brosowsky, Neeraja Kulkarni, Alex Mason, Joe Suk, Ewin Tang∗ October 2, 2017 Abstract Totally nonnegative (positive) matrices are matrices whose minors are all nonnegative (positive). We generalize the notion of total nonnegativity, as follows. A k-nonnegative (resp. k-positive) matrix has all minors of size k or less nonnegative (resp. positive). We give a generating set for the semigroup of k-nonnegative matrices, as well as relations for certain special cases, i.e. the k = n − 1 and k = n − 2 unitriangular cases. In the above two cases, we find that the set of k-nonnegative matrices can be partitioned into cells, analogous to the Bruhat cells of totally nonnegative matrices, based on their factorizations into generators. We will show that these cells, like the Bruhat cells, are homeomorphic to open balls, and we prove some results about the topological structure of the closure of these cells, and in fact, in the latter case, the cells form a Bruhat-like CW complex. We also give a family of minimal k-positivity tests which form sub-cluster algebras of the total positivity test cluster algebra. We describe ways to jump between these tests, and give an alternate description of some tests as double wiring diagrams. 1 Introduction A totally nonnegative (respectively totally positive) matrix is a matrix whose minors are all nonnegative (respectively positive). Total positivity and nonnegativity are well-studied phenomena and arise in areas such as planar networks, combinatorics, dynamics, statistics and probability. The study of total positivity and total nonnegativity admit many varied applications, some of which are explored in “Totally Nonnegative Matrices” by Fallat and Johnson [5].
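Not from the paper: a brute-force NumPy sketch of the definition itself, checking k-nonnegativity by enumerating all minors of size at most k (exponential cost, so only usable for small examples; names are ours):

```python
import numpy as np
from itertools import combinations

def is_k_nonnegative(A, k):
    """True if every minor of A of size <= k is nonnegative (definition check)."""
    n, m = A.shape
    for s in range(1, k + 1):
        for rows in combinations(range(n), s):
            for cols in combinations(range(m), s):
                if np.linalg.det(A[np.ix_(rows, cols)]) < -1e-12:
                    return False
    return True

A = np.array([[1.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 1.0]])
print(is_k_nonnegative(A, 2))   # True: all 1x1 and 2x2 minors are nonnegative
```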
  • (Hessenberg) Eigenvalue-Eigenmatrix Relations∗
(HESSENBERG) EIGENVALUE-EIGENMATRIX RELATIONS∗ JENS-PETER M. ZEMKE† Abstract. Explicit relations between eigenvalues, eigenmatrix entries and matrix elements are derived. First, a general, theoretical result based on the Taylor expansion of the adjugate of zI − A on the one hand and explicit knowledge of the Jordan decomposition on the other hand is proven. This result forms the basis for several, more practical and enlightening results tailored to non-derogatory, diagonalizable and normal matrices, respectively. Finally, inherent properties of (upper) Hessenberg, resp. tridiagonal matrix structure are utilized to construct computable relations between eigenvalues, eigenvector components, eigenvalues of principal submatrices and products of lower diagonal elements. Key words. Algebraic eigenvalue problem, eigenvalue-eigenmatrix relations, Jordan normal form, adjugate, principal submatrices, Hessenberg matrices, eigenvector components AMS subject classifications. 15A18 (primary), 15A24, 15A15, 15A57 1. Introduction. Eigenvalues and eigenvectors are defined using the relations

    Av = vλ   and   V^{-1} A V = J.        (1.1)

We speak of a partial eigenvalue problem, when for a given matrix A ∈ C^{n×n} we seek a scalar λ ∈ C and a corresponding nonzero vector v ∈ C^n. The scalar λ is called the eigenvalue and the corresponding vector v is called the eigenvector. We speak of the full or algebraic eigenvalue problem, when for a given matrix A ∈ C^{n×n} we seek its Jordan normal form J ∈ C^{n×n} and a corresponding (not necessarily unique) eigenmatrix V ∈ C^{n×n}. Apart from these constitutional relations, for some classes of structured matrices several more intriguing relations between components of eigenvectors, matrix entries and eigenvalues are known. For example, consider the so-called Jacobi matrices.
  • Explicit Inverse of a Tridiagonal (P, R)–Toeplitz Matrix
Explicit inverse of a tridiagonal (p, r)-Toeplitz matrix A.M. Encinas, M.J. Jiménez Departament de Matemàtiques, Universitat Politècnica de Catalunya Abstract Tridiagonal matrices appear in many contexts in pure and applied mathematics, so the study of the inverse of these matrices becomes of specific interest. In recent years the invertibility of nonsingular tridiagonal matrices has been quite investigated in different fields, not only from the theoretical point of view (either in the framework of linear algebra or in the ambit of numerical analysis), but also due to applications, for instance in the study of sound propagation problems or certain quantum oscillators. However, explicit inverses are known only in a few cases, in particular when the tridiagonal matrix has constant diagonals or the coefficients of these diagonals are subjected to some restrictions, like the tridiagonal p-Toeplitz matrices [7], whose three diagonals are formed by p-periodic sequences. The recent formulae for the inversion of tridiagonal p-Toeplitz matrices are based, more or less directly, on the solution of second order linear difference equations, although most of them use a cumbersome formulation that in fact does not take into account the periodicity of the coefficients. This contribution presents the explicit inverse of a tridiagonal (p, r)-Toeplitz matrix, whose diagonal coefficients are in a more general class of sequences than periodic ones, that we have called quasi-periodic sequences. A tridiagonal matrix A = (a_{ij}) of order n + 2 is called (p, r)-Toeplitz if there exists m ∈ N such that n + 2 = mp and

    a_{i+p,j+p} = r a_{ij},   i, j = 0, ..., (m - 1)p.

Equivalently, A is a (p, r)-Toeplitz matrix iff

    a_{i+kp,j+kp} = r^k a_{ij},   i, j = 0, ..., p,   k = 0, ..., m - 1.

We have developed a technique that reduces any linear second order difference equation with periodic or quasi-periodic coefficients to a difference equation of the same kind but with constant coefficients [3].
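Not from the paper: a minimal NumPy sketch of the defining relation, building a tridiagonal (p, r)-Toeplitz matrix from base sequences of period p (all names and sample values below are ours) and checking the property a_{i+p,j+p} = r a_{ij}:

```python
import numpy as np

def is_pr_toeplitz(A, p, r):
    """Check the defining relation a_{i+p, j+p} = r * a_{ij} (definition sketch)."""
    n = A.shape[0]
    return np.allclose(A[p:, p:], r * A[:n - p, :n - p])

# build a tridiagonal (p, r)-Toeplitz matrix from base sequences of period p
p, r, m = 2, 3.0, 4
n = m * p                                    # matrix order
d0, s0, u0 = [2.0, 5.0], [1.0, -1.0], [0.5, 4.0]
d = [r**(i // p) * d0[i % p] for i in range(n)]        # diagonal
s = [r**(i // p) * s0[i % p] for i in range(n - 1)]    # subdiagonal
u = [r**(i // p) * u0[i % p] for i in range(n - 1)]    # superdiagonal
A = np.diag(d) + np.diag(s, -1) + np.diag(u, 1)
print(is_pr_toeplitz(A, p, r))               # True
```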
  • Determinant Formulas and Cofactors
Determinant formulas and cofactors Now that we know the properties of the determinant, it's time to learn some (rather messy) formulas for computing it. Formula for the determinant We know that the determinant has the following three properties: 1. det I = 1 2. Exchanging rows reverses the sign of the determinant. 3. The determinant is linear in each row separately. Last class we listed seven consequences of these properties. We can use these ten properties to find a formula for the determinant of a 2 by 2 matrix:

    | a b |   | a 0 |   | 0 b |
    | c d | = | c d | + | c d |

            = | a 0 | + | a 0 | + | 0 b | + | 0 b |
              | c 0 |   | 0 d |   | c 0 |   | 0 d |

            = 0 + ad + (-cb) + 0 = ad - bc.

By applying property 3 to separate the individual entries of each row we could get a formula for any other square matrix. However, for a 3 by 3 matrix we'll have to add the determinants of twenty seven different matrices! Many of those determinants are zero. The non-zero pieces are:

    | a11 a12 a13 |   | a11  0   0  |   | a11  0   0  |   |  0  a12  0  |
    | a21 a22 a23 | = |  0  a22  0  | + |  0   0  a23 | + | a21  0   0  |
    | a31 a32 a33 |   |  0   0  a33 |   |  0  a32  0  |   |  0   0  a33 |

                      |  0  a12  0  |   |  0   0  a13 |   |  0   0  a13 |
                    + |  0   0  a23 | + | a21  0   0  | + |  0  a22  0  |
                      | a31  0   0  |   |  0  a32  0  |   | a31  0   0  |

    = a11 a22 a33 - a11 a23 a32 - a12 a21 a33 + a12 a23 a31 + a13 a21 a32 - a13 a22 a31.

Each of the non-zero pieces has one entry from each row in each column, as in a permutation matrix.
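Not part of the original notes: a small Python sketch of the resulting "big formula" (the Leibniz sum over permutations, one entry from each row and column), compared with ad − bc on a 2 by 2 example; the names are ours:

```python
import numpy as np
from itertools import permutations

def det_by_permutations(A):
    """Sum over permutations p of sign(p) * prod_i A[i, p(i)] (Leibniz formula)."""
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # sign of the permutation via counting inversions
        inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        total += (-1) ** inv * np.prod([A[i, perm[i]] for i in range(n)])
    return total

A = np.array([[3.0, 8.0], [4.0, 6.0]])
print(det_by_permutations(A), A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0])  # both -14.0
```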
  • The Unsymmetric Eigenvalue Problem
Jim Lambers CME 335 Spring Quarter 2010-11 Lecture 4 Supplemental Notes The Unsymmetric Eigenvalue Problem Properties and Decompositions Let A be an n × n matrix. A nonzero vector x is called an eigenvector of A if there exists a scalar λ such that

    Ax = λx.

The scalar λ is called an eigenvalue of A, and we say that x is an eigenvector of A corresponding to λ. We see that an eigenvector of A is a vector for which matrix-vector multiplication with A is equivalent to scalar multiplication by λ. We say that a nonzero vector y is a left eigenvector of A if there exists a scalar λ such that

    λ y^H = y^H A.

The superscript H refers to the Hermitian transpose, which includes transposition and complex conjugation. That is, for any matrix A, A^H = Ā^T. An eigenvector of A, as defined above, is sometimes called a right eigenvector of A, to distinguish it from a left eigenvector. It can be seen that if y is a left eigenvector of A with eigenvalue λ, then y is also a right eigenvector of A^H, with eigenvalue λ̄. Because x is nonzero, it follows that if x is an eigenvector of A, then the matrix A − λI is singular, where λ is the corresponding eigenvalue. Therefore, λ satisfies the equation

    det(A − λI) = 0.

The expression det(A − λI) is a polynomial of degree n in λ, and therefore is called the characteristic polynomial of A (eigenvalues are sometimes called characteristic values). It follows from the fact that the eigenvalues of A are the roots of the characteristic polynomial that A has n eigenvalues, which can repeat, and can also be complex, even if A is real.
  • Sparse Linear Systems Section 4.2 – Banded Matrices
Parallel Numerical Algorithms Chapter 4 – Sparse Linear Systems Section 4.2 – Banded Matrices Michael T. Heath and Edgar Solomonik Department of Computer Science University of Illinois at Urbana-Champaign CS 554 / CSE 512 Outline 1 Band Systems 2 Tridiagonal Systems 3 Cyclic Reduction Banded Linear Systems The bandwidth (or semibandwidth) of an n × n matrix A is the smallest value w such that a_{ij} = 0 for all |i − j| > w. The matrix is banded if w ≪ n. If w ≳ p (with p the number of processors), then minor modifications of parallel algorithms for dense LU or Cholesky factorization are reasonably efficient for solving the banded linear system Ax = b. If w ≲ p, then standard parallel algorithms for LU or Cholesky factorization utilize few processors and are very inefficient. Narrow Banded Linear Systems More efficient parallel algorithms for narrow banded linear systems are based on a divide-and-conquer approach in which the band is partitioned into multiple pieces that are processed simultaneously. Reordering the matrix by nested dissection is one example of this approach. Because of fill, such methods generally require more total work than the best serial algorithm for a system with dense band. We will illustrate this for tridiagonal linear systems, for which w = 1, and will assume pivoting is not needed for stability (e.g., the matrix is diagonally dominant or symmetric positive definite).
  • Eigenvalues of a Special Tridiagonal Matrix
Eigenvalues of a Special Tridiagonal Matrix Alexander De Serre Rothney∗ (∗Bishop's University, Sherbrooke, Quebec, Canada) October 10, 2013 Abstract In this paper we consider a special tridiagonal test matrix. We prove that its eigenvalues are the even integers 2, ..., 2n and show its relationship with the famous Kac-Sylvester tridiagonal matrix. 1 Introduction We begin with a quick overview of the theory of symmetric tridiagonal matrices, that is, we detail a few basic facts about tridiagonal matrices. In particular, we describe the symmetrization process of a tridiagonal matrix as well as the orthogonal polynomials that arise from the characteristic polynomials of said matrices. Definition 1.1. A tridiagonal matrix, T_n, is of the form:

    T_n = [ a_1  b_1   0    ...    0       ]
          [ c_1  a_2  b_2   ...    0       ]
          [  0   ...  ...   ...    0       ]
          [ ...       ...  a_{n-1} b_{n-1} ]
          [  0   ...   0   c_{n-1} a_n     ],        (1.1)

where entries below the subdiagonal and above the superdiagonal are zero. If b_i ≠ 0 for i = 1, ..., n−1 and c_i ≠ 0 for i = 1, ..., n−1, T_n is called a Jacobi matrix. In this paper we will use a more compact notation and only describe the subdiagonal, diagonal, and superdiagonal (where appropriate). For example, T_n can be rewritten as:

    T_n = (    b_1  ...  b_{n-1}
            a_1  a_2  ...  a_{n-1}  a_n
               c_1  ...  c_{n-1}       ).        (1.2)

Note that the study of symmetric tridiagonal matrices is sufficient for our purpose, as any Jacobi matrix with b_i c_i > 0 for all i can be symmetrized through a similarity transformation:

    A_n = D_n^{-1} T_n D_n = (    √(b_1 c_1)  ...  √(b_{n-1} c_{n-1})
                               a_1  a_2  ...  a_{n-1}  a_n
                                  √(b_1 c_1)  ...  √(b_{n-1} c_{n-1})  ),        (1.3)

where D_n = diag(γ_1, ..., γ_n) and γ_i = √( (b_i b_{i+1} ⋯ b_{n-1}) / (c_i c_{i+1} ⋯ c_{n-1}) ).
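Not from the paper: a small NumPy sketch that performs the symmetrization (1.3) numerically and checks that the result is symmetric with off-diagonal entries √(b_i c_i); the random test data are ours:

```python
import numpy as np

# Symmetrize a Jacobi matrix T (with b_i c_i > 0) via A = D^{-1} T D,
# D = diag(gamma_1, ..., gamma_n), gamma_i = sqrt(prod_{k>=i} b_k / c_k).
n = 5
rng = np.random.default_rng(0)
a = rng.standard_normal(n)                  # diagonal
b = rng.uniform(0.5, 2.0, n - 1)            # superdiagonal
c = rng.uniform(0.5, 2.0, n - 1)            # subdiagonal (so b_i c_i > 0)
T = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
gamma = np.array([np.sqrt(np.prod(b[i:] / c[i:])) for i in range(n - 1)] + [1.0])
D = np.diag(gamma)
A = np.linalg.inv(D) @ T @ D
assert np.allclose(A, A.T)                          # A is symmetric
assert np.allclose(np.diag(A, 1), np.sqrt(b * c))   # off-diagonals are sqrt(b_i c_i)
```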
  • Institute of Computer Science Efficient Tridiagonal Preconditioner for The
Institute of Computer Science, Academy of Sciences of the Czech Republic Efficient tridiagonal preconditioner for the matrix-free truncated Newton method Ladislav Lukšan, Jan Vlček Technical report No. 1177, January 2013 Pod Vodárenskou věží 2, 182 07 Prague 8, phone: +420 2 688 42 44, fax: +420 2 858 57 89, e-mail: [email protected] Abstract: In this paper, we study an efficient tridiagonal preconditioner, based on numerical differentiation, applied to the matrix-free truncated Newton method for unconstrained optimization. It is proved that this preconditioner is positive definite for many practical problems. The efficiency of the resulting matrix-free truncated Newton method is demonstrated by results of extensive numerical experiments. Keywords: ¹This work was supported by the Institute of Computer Science of the AS CR (RVO:67985807) 1 Introduction We consider the unconstrained minimization problem

    x* = arg min_{x ∈ R^n} F(x),

where the function F : D(F) ⊂ R^n → R is twice continuously differentiable and n is large. We use the notation g(x) = ∇F(x), G(x) = ∇²F(x) and the assumption that ‖G(x)‖ ≤ Ḡ for all x ∈ D(F). Numerical methods for unconstrained minimization are usually iterative and their iteration step has the form

    x_{k+1} = x_k + α_k d_k,   k ∈ N,

where d_k is a direction vector and α_k is a step-length. In this paper, we will deal with the Newton method, which uses the quadratic model

    F(x_k + d) ≈ Q(x_k + d) = F(x_k) + g^T(x_k) d + (1/2) d^T G(x_k) d        (1)

for direction determination in such a way that

    d_k = arg min_{d ∈ M_k} Q(x_k + d).        (2)

There are two basic possibilities for direction determination: the line-search method, where M_k = R^n, and the trust-region method, where M_k = {d ∈ R^n : ‖d‖ ≤ Δ_k} (here Δ_k > 0 is the trust-region radius).
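Not from the report: a bare-bones sketch of the iteration x_{k+1} = x_k + α_k d_k in the line-search case, with d_k obtained from the quadratic model (1)-(2) and a simple Armijo backtracking rule for α_k, on a toy problem (all names and constants are ours; the report's actual method is matrix-free, truncated, and preconditioned, none of which is shown here):

```python
import numpy as np

def newton_linesearch(F, grad, hess, x, iters=20):
    """x_{k+1} = x_k + alpha_k d_k with d_k = -G(x_k)^{-1} g(x_k), the minimizer
    of the quadratic model when G(x_k) is positive definite; Armijo backtracking."""
    for _ in range(iters):
        g, G = grad(x), hess(x)
        d = np.linalg.solve(G, -g)          # Newton direction
        alpha = 1.0
        while F(x + alpha * d) > F(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5                    # backtrack until sufficient decrease
        x = x + alpha * d
    return x

# toy strongly convex problem: F(x) = sum(x^4)/4 + sum(x^2)/2, minimum at 0
F = lambda x: 0.25 * np.sum(x**4) + 0.5 * np.sum(x**2)
grad = lambda x: x**3 + x
hess = lambda x: np.diag(3.0 * x**2 + 1.0)
print(newton_linesearch(F, grad, hess, np.array([2.0, -3.0])))  # -> approx [0, 0]
```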
  • A Study of Determinants and Inverses for Periodic Tridiagonal Toeplitz Matrices with Perturbed Corners Involving Mersenne Numbers
mathematics Article A Study of Determinants and Inverses for Periodic Tridiagonal Toeplitz Matrices with Perturbed Corners Involving Mersenne Numbers Yunlan Wei 1,2, Yanpeng Zheng 1,3,∗, Zhaolin Jiang 1,∗ and Sugoog Shon 2 1 School of Mathematics and Statistics, Linyi University, Linyi 276000, China; [email protected] 2 College of Information Technology, The University of Suwon, Hwaseong-si 445-743, Korea; [email protected] 3 School of Automation and Electrical Engineering, Linyi University, Linyi 276000, China * Correspondence: [email protected] (Y.Z.); [email protected] or [email protected] (Z.J.) Received: 5 July 2019; Accepted: 20 September 2019; Published: 24 September 2019 Abstract: In this paper, we study periodic tridiagonal Toeplitz matrices with perturbed corners. By using some matrix transformations, the Schur complement and matrix decompositions techniques, as well as the Sherman-Morrison-Woodbury formula, we derive explicit determinants and inverses of these matrices. One feature of these formulas is the connection with the famous Mersenne numbers. We also propose two algorithms to illustrate our formulas. Keywords: determinant; inverse; Mersenne number; periodic tridiagonal Toeplitz matrix; Sherman-Morrison-Woodbury formula 1. Introduction Mersenne numbers are ubiquitous in combinatorics, group theory, chaos, geometry, physics, etc. [1]. They are generated by the following recurrence [2]:

    M_{n+1} = 3 M_n − 2 M_{n−1},   where M_0 = 0, M_1 = 1, n ≥ 1;        (1)

    M_{−(n+1)} = (3/2) M_{−n} − (1/2) M_{−(n−1)},   where M_0 = 0, M_{−1} = −1/2, n ≥ 1.        (2)

The Binet formula says that the nth Mersenne number is M_n = 2^n − 1 [3]. One application we would like to mention is that Nussbaumer [4] applied a number theoretical transform closely related to Mersenne numbers to deal with problems of digital filtering and convolution of discrete signals.
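Not from the paper: a few lines of plain Python (our variable names) that check recurrence (1) against the Binet formula M_n = 2^n − 1:

```python
# Check the recurrence M_{n+1} = 3 M_n - 2 M_{n-1} against M_n = 2^n - 1.
M = [0, 1]                       # M_0 = 0, M_1 = 1
for n in range(1, 12):
    M.append(3 * M[n] - 2 * M[n - 1])
assert all(M[n] == 2**n - 1 for n in range(len(M)))   # Binet formula holds
print(M)   # 0, 1, 3, 7, 15, 31, ...
```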
  • Associated Polynomials, Spectral Matrices and the Bispectral Problem
Methods and Applications of Analysis © 1999 International Press 6 (2) 1999, pp. 209-224 ISSN 1073-2772 ASSOCIATED POLYNOMIALS, SPECTRAL MATRICES AND THE BISPECTRAL PROBLEM F. Alberto Grünbaum and Luc Haine Dedicated to Professor Richard Askey on the occasion of his 65th birthday. ABSTRACT. The associated Hermite, Laguerre, Jacobi, and Bessel polynomials appear naturally when Bochner's problem [5] of characterizing orthogonal polynomials satisfying a second-order differential equation is extended to doubly infinite tridiagonal matrices. We obtain an explicit expression for the spectral matrix measure corresponding to the associated doubly infinite tridiagonal matrix in the Jacobi case. We show that, in an appropriate basis of "bispectral" functions, the spectral matrix can be put into a nice diagonal form, restoring the simplicity of the familiar orthogonality relations satisfied by the Jacobi polynomials. 1. Introduction This paper deals with topics that are very classical. We did arrive at them, however, by a fairly nonclassical route: the study of the bispectral problem. We feel that this gives useful insight into subjects that have been considered earlier for different reasons. At any rate, we hope that Dick Askey, to whom this paper is dedicated, will like this interweaving of old and new material. His own mastery at doing this has proved tremendously useful to mathematics and to all of us. We start with a statement of the bispectral problem which is appropriate for our purposes: describe all families of functions f_n(z), n ∈ Z, z ∈ C, that satisfy

    (L f)_n(z) = z f_n(z)   and   B f_n(z) = λ_n f_n(z)

for all z and n.
  • Explicit Spectrum of a Circulant-Tridiagonal Matrix with Applications
EXPLICIT SPECTRUM OF A CIRCULANT-TRIDIAGONAL MATRIX WITH APPLICATIONS EMRAH KILIÇ AND AYNUR YALÇINER Abstract. We consider a circulant-tridiagonal matrix and compute its determinant by using the generating function method. Then we explicitly determine its spectrum. Finally we present applications of our results for trigonometric factorizations of the generalized Lucas sequences. 1. Introduction Tridiagonal matrices have been used in many different fields, especially in applicative fields such as numerical analysis (e.g., orthogonal polynomials), engineering, telecommunication system analysis, system identification, signal processing (e.g., speech decoding, deconvolution), special functions, partial differential equations and naturally linear algebra (see [1, 3, 4, 12, 16, 17]). Some authors consider a general tridiagonal matrix of finite order and then compute its LU factorizations, determinant and inverse (see [2, 5, 8, 13]). A tridiagonal Toeplitz matrix of order n has the form:

    A_n = [ a  b               ]
          [ c  a  b            ]
          [    c  a  ...       ]
          [       ...  ...  b  ]
          [            c    a  ],

where a, b and c are nonzero complex numbers. A tridiagonal 2-Toeplitz matrix has the form:

    T_n = [ a_1  b_1   0    0    0   ... ]
          [ c_1  a_2  b_2   0    0   ... ]
          [  0   c_2  a_1  b_1   0   ... ]
          [  0    0   c_1  a_2  b_2  ... ]
          [  0    0    0   c_2  a_1  ... ]
          [ ...             ...      ... ],

where the a's, b's and c's are nonzero complex numbers. Let a_1, a_2, b_1 and b_2 be real numbers. The period-two second order linear recurrence system is defined to be the sequence f_0 = 1, f_1 = a_1, and

    f_{2n} = a_2 f_{2n-1} + b_1 f_{2n-2}   and   f_{2n+1} = a_1 f_{2n} + b_2 f_{2n-1}

2000 Mathematics Subject Classification.
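Not from the paper: a few lines of Python (our names and sample values) generating the period-two recurrence just defined:

```python
# Period-two second order recurrence: f_0 = 1, f_1 = a1,
# f_{2n} = a2 f_{2n-1} + b1 f_{2n-2},  f_{2n+1} = a1 f_{2n} + b2 f_{2n-1}.
a1, a2, b1, b2 = 1.0, 2.0, 3.0, 4.0
f = [1.0, a1]
for k in range(2, 11):
    if k % 2 == 0:
        f.append(a2 * f[k - 1] + b1 * f[k - 2])   # even index
    else:
        f.append(a1 * f[k - 1] + b2 * f[k - 2])   # odd index
print(f)
```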