MAT 610: Numerical Linear Algebra


MAT 610: Numerical Linear Algebra
James V. Lambers
January 16, 2017

Contents

1 Matrix Multiplication Problems
  1.1 Introduction
    1.1.1 Systems of Linear Equations
    1.1.2 The Eigenvalue Problem
  1.2 Basic Concepts, Algorithms, and Notation
    1.2.1 Vector Spaces
    1.2.2 Subspaces and Bases
    1.2.3 Matrix Representation of Linear Transformations
    1.2.4 Matrix Multiplication
    1.2.5 Vector Operations for Matrices
    1.2.6 The Transpose of a Matrix
    1.2.7 Other Fundamental Matrix Computations
    1.2.8 Understanding Matrix-Matrix Multiplication
  1.3 Other Essential Ideas from Linear Algebra
    1.3.1 Special Subspaces
    1.3.2 The Identity Matrix
    1.3.3 The Inverse of a Matrix
    1.3.4 Triangular and Diagonal Matrices
    1.3.5 The Sherman-Morrison-Woodbury Formula
    1.3.6 The Determinant of a Matrix
    1.3.7 Differentiation of Matrices

2 Matrix Analysis
  2.1 Vector Norms
  2.2 Matrix Norms
  2.3 Eigenvalues and Eigenvectors
  2.4 Perturbations and the Inverse
  2.5 Roundoff Errors and Computer Arithmetic
    2.5.1 Floating-Point Numbers
    2.5.2 Properties of Floating-Point Systems
    2.5.3 Rounding
    2.5.4 Floating-Point Arithmetic
  2.6 Basics of Error Analysis
  2.7 Orthogonality
  2.8 The Singular Value Decomposition
  2.9 Projections
  2.10 Projections and the SVD
  2.11 Sensitivity of Linear Systems

3 General Linear Systems
  3.1 Gaussian Elimination and Back Substitution
  3.2 The LU Factorization
    3.2.1 Unit Lower Triangular Matrices
    3.2.2 Solution of Ax = b
    3.2.3 Implementation Details
    3.2.4 Existence and Uniqueness
    3.2.5 Practical Computation of Determinants
  3.3 Roundoff Analysis of Gaussian Elimination
    3.3.1 Error in the LU Factorization
    3.3.2 Bounding the Perturbation in A
    3.3.3 Bounding the Error in the Solution
  3.4 Pivoting
    3.4.1 Partial Pivoting
    3.4.2 Complete Pivoting
    3.4.3 The LU Decomposition with Pivoting
    3.4.4 Practical Computation of Determinants, Revisited
  3.5 Improving and Estimating Accuracy
    3.5.1 Iterative Refinement
    3.5.2 Estimating the Condition Number
    3.5.3 Scaling and Equilibration

4 Special Linear Systems
  4.1 The LDM^T and LDL^T Factorizations
  4.2 Positive Definite Systems
  4.3 Banded Systems

5 Iterative Methods for Linear Systems
  5.1 Stationary Iterative Methods
    5.1.1 Convergence
    5.1.2 The Jacobi Method
    5.1.3 The Gauss-Seidel Method
    5.1.4 Successive Overrelaxation
    5.1.5 Positive Definite Systems
    5.1.6 Stopping Criteria
  5.2 Solving Differential Equations
  5.3 The Conjugate Gradient Method
  5.4 Minimum Residual Methods
  5.5 The Biconjugate Gradient Method

6 Orthogonalization and Least Squares
  6.1 The Full-Rank Linear Least Squares Problem
    6.1.1 Minimizing the Residual
    6.1.2 The Condition Number of A^T A
    6.1.3 Comparison of Approaches
    6.1.4 Solving the Normal Equations
    6.1.5 The Pseudo-Inverse
    6.1.6 Perturbation Theory
  6.2 The QR Factorization
    6.2.1 Givens Rotations
    6.2.2 Householder Reflections
    6.2.3 Givens Rotations vs. Householder Reflections
  6.3 Block Algorithms
  6.4 Gram-Schmidt Orthogonalization
    6.4.1 Classical Gram-Schmidt
    6.4.2 Modified Gram-Schmidt
  6.5 The Rank-Deficient Least Squares Problem
    6.5.1 QR with Column Pivoting
    6.5.2 Complete Orthogonal Decomposition
  6.6 Applications of the SVD
    6.6.1 Minimum-Norm Least Squares Solution
    6.6.2 Closest Orthogonal Matrix
    6.6.3 Low-Rank Approximations

7 The Unsymmetric Eigenvalue Problem
  7.1 Properties and Decompositions
  7.2 Perturbation Theory
  7.3 Power Iterations
  7.4 The Hessenberg and Real Schur Forms
  7.5 The Practical QR Algorithm

8 The Symmetric Eigenvalue Problem
  8.1 Properties and Decompositions
  8.2 Perturbation Theory
  8.3 Power Iterations
  8.4 Reduction to Tridiagonal Form
  8.5 The Practical QR Algorithm
  8.6 Jacobi Methods
    8.6.1 The Jacobi Idea
    8.6.2 The 2-by-2 Symmetric Schur Decomposition
    8.6.3 The Classical Jacobi Algorithm
    8.6.4 The Cyclic-by-Row Algorithm
    8.6.5 Error Analysis
    8.6.6 Parallel Jacobi
  8.7 The SVD Algorithm
    8.7.1 Jacobi SVD Procedures

9 Matrices, Moments and Quadrature
  9.1 Approximation of Quadratic Forms
    9.1.1 Gaussian Quadrature
    9.1.2 Roots of Orthogonal Polynomials
    9.1.3 Connection to Hankel Matrices
    9.1.4 Derivation of the Lanczos Algorithm
  9.2 Generalizations
    9.2.1 Prescribing Nodes
    9.2.2 The Case of u ≠ v
    9.2.3 Unsymmetric Matrices
    9.2.4 Modification of Weight Functions
  9.3 Applications of Moments
    9.3.1 Approximating the Scattering Amplitude
    9.3.2 Estimation of the Regularization Parameter
    9.3.3 Estimation of the Trace of the Inverse and Determinant

Chapter 1  Matrix Multiplication Problems

1.1 Introduction

This course is about numerical linear algebra: the study of the approximate solution of fundamental problems from linear algebra by numerical methods that can be implemented on a computer. We begin with a brief overview of the problems treated in this course, and then establish some fundamental concepts that will serve as the foundation of our discussion of various numerical methods for solving these problems.

1.1.1 Systems of Linear Equations

One of the most fundamental problems in computational mathematics is to solve a system of m linear equations

  a11x1 + a12x2 + ··· + a1nxn = b1
  a21x1 + a22x2 + ··· + a2nxn = b2
    ...
  am1x1 + am2x2 + ··· + amnxn = bm

for the n unknowns x1, x2, ..., xn. Many data fitting problems, such as polynomial interpolation or least-squares approximation, involve the solution of such a system. Discretization of partial differential equations often yields systems of linear equations that must be solved. Systems of nonlinear equations are typically solved using iterative methods that solve a system of linear equations during each iteration. In this course, we will study the solution of this type of problem in detail.

1.1.2 The Eigenvalue Problem

As we will see, a system of linear equations can be described in terms of a single vector equation, in which the product of a known matrix of the coefficients aij and an unknown vector containing the unknowns xj is equated to a known vector containing the right-hand side values bi. Unlike multiplication of numbers, multiplication of a matrix and a vector is an operation that is difficult to understand intuitively. However, given a matrix A, there are certain vectors for which multiplication by A is equivalent to "ordinary" multiplication. This leads to the eigenvalue problem, which consists of finding a nonzero vector x and a number λ such that

  Ax = λx.

The number λ is called an eigenvalue of A, and the vector x is called an eigenvector corresponding to λ. This is another problem whose solution we will discuss in this course.

1.2 Basic Concepts, Algorithms, and Notation

Writing out a system of equations can be quite tedious. Therefore, we instead represent a system of linear equations using a matrix, which is an array of elements, or entries. We say that a matrix A is m × n if it has m rows and n columns, and we denote the element in row i and column j by aij. We also denote the matrix A by [aij]. With this notation, a general system of m equations with n unknowns can be represented using a matrix A that contains the coefficients of the equations, a vector x that contains the unknowns, and a vector b that contains the right-hand side values.
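As a concrete illustration (this example is not part of the original notes), both model problems can be posed and solved numerically in a few lines. The sketch below assumes NumPy and uses an arbitrary 3 × 3 matrix and right-hand side:

    import numpy as np

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    b = np.array([1.0, 2.0, 3.0])

    # System of linear equations: find x such that A x = b.
    x = np.linalg.solve(A, b)
    print("residual norm:", np.linalg.norm(A @ x - b))

    # Eigenvalue problem: find nonzero x and lambda such that A x = lambda x.
    eigenvalues, eigenvectors = np.linalg.eig(A)
    for lam, v in zip(eigenvalues, eigenvectors.T):
        print("||Av - lambda v|| =", np.linalg.norm(A @ v - lam * v))

The printed residuals should sit at roundoff level, which is itself a topic the notes take up in Chapter 2.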
Recommended publications
  • Eigenvalues and Eigenvectors of Tridiagonal Matrices∗
    Electronic Journal of Linear Algebra, ISSN 1081-3810, a publication of the International Linear Algebra Society, Volume 15, pp. 115-133, April 2006. http://math.technion.ac.il/iic/ela

    EIGENVALUES AND EIGENVECTORS OF TRIDIAGONAL MATRICES
    SAID KOUACHI

    Abstract. This paper is a continuation of previous work by the present author, where explicit formulas for the eigenvalues associated with several tridiagonal matrices were given. In this paper the associated eigenvectors are calculated explicitly. As a consequence, a result obtained by Wen-Chyuan Yueh and independently by S. Kouachi, concerning the eigenvalues and in particular the corresponding eigenvectors of tridiagonal matrices, is generalized. Expressions for the eigenvectors are obtained that differ completely from those obtained by Yueh. The techniques used herein are based on the theory of recurrent sequences. The entries situated on each of the secondary diagonals are not necessarily equal, as was the case considered by Yueh.

    Key words. Eigenvectors, Tridiagonal matrices.
    AMS subject classifications. 15A18.

    1. Introduction. The subject of this paper is the diagonalization of tridiagonal matrices. We generalize a result obtained in [5] concerning the eigenvalues and the corresponding eigenvectors of several tridiagonal matrices. We consider tridiagonal matrices of the form

           [ -α + b    c1      0      0     ...      0      ]
           [   a1      b       c2     0     ...      0      ]
           [   0       a2      b      c3    ...      0      ]
      An = [   .        .      .      .       .      .      ]      (1)
           [   .               .      .       .   c_{n-1}   ]
           [   0      ...     ...     0   a_{n-1}  -β + b   ]

    where {aj} and {cj}, j = 1, ..., n-1, are two finite subsequences of the sequences {aj} and {cj}, j = 1, 2, ..., of the field of complex numbers C, respectively, and α, β and b are complex numbers. We suppose that

      aj cj = d1^2 if j is odd,   aj cj = d2^2 if j is even,   j = 1, 2, ...,      (2)

    where d1 and d2 are complex numbers.
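    Results of this kind are easy to sanity-check numerically: build a small matrix of the form (1) and compare its computed spectrum with the claimed closed-form expression. A minimal sketch, assuming NumPy (the entries below are arbitrary test values, chosen so that aj·cj alternates between two constants as in (2)):

        import numpy as np

        def kouachi_matrix(a, c, b, alpha, beta):
            """Assemble the tridiagonal matrix in (1): subdiagonal a, superdiagonal c,
            constant diagonal b, with corner entries -alpha + b and -beta + b."""
            n = len(a) + 1
            A = np.diag(np.full(n, b)) + np.diag(a, -1) + np.diag(c, 1)
            A[0, 0] -= alpha
            A[-1, -1] -= beta
            return A

        a = np.array([2.0, 0.5, 2.0, 0.5])
        c = np.array([1.0, 2.0, 1.0, 2.0])   # a_j c_j = 2, 1, 2, 1 (= d1^2, d2^2, ...)
        A = kouachi_matrix(a, c, b=3.0, alpha=0.4, beta=-0.7)

        eigenvalues, eigenvectors = np.linalg.eig(A)
        print(np.sort(eigenvalues))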
  • Parametrizations of K-Nonnegative Matrices
    Parametrizations of k-Nonnegative Matrices Anna Brosowsky, Neeraja Kulkarni, Alex Mason, Joe Suk, Ewin Tang∗ October 2, 2017 Abstract Totally nonnegative (positive) matrices are matrices whose minors are all nonnegative (positive). We generalize the notion of total nonnegativity, as follows. A k-nonnegative (resp. k-positive) matrix has all minors of size k or less nonnegative (resp. positive). We give a generating set for the semigroup of k-nonnegative matrices, as well as relations for certain special cases, i.e. the k = n − 1 and k = n − 2 unitriangular cases. In the above two cases, we find that the set of k-nonnegative matrices can be partitioned into cells, analogous to the Bruhat cells of totally nonnegative matrices, based on their factorizations into generators. We will show that these cells, like the Bruhat cells, are homeomorphic to open balls, and we prove some results about the topological structure of the closure of these cells, and in fact, in the latter case, the cells form a Bruhat-like CW complex. We also give a family of minimal k-positivity tests which form sub-cluster algebras of the total positivity test cluster algebra. We describe ways to jump between these tests, and give an alternate description of some tests as double wiring diagrams. 1 Introduction A totally nonnegative (respectively totally positive) matrix is a matrix whose minors are all nonnegative (respectively positive). Total positivity and nonnegativity are well-studied phenomena and arise in areas such as planar networks, combinatorics, dynamics, statistics and probability. The study of total positivity and total nonnegativity admit many varied applications, some of which are explored in “Totally Nonnegative Matrices” by Fallat and Johnson [5].
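    The definition of k-nonnegativity can be checked directly on small examples by enumerating all minors of order at most k. This brute-force check is exponential and purely illustrative (the paper's contribution is the structural theory, not this computation); the sketch assumes NumPy, and the test matrix is an arbitrary example:

        from itertools import combinations
        import numpy as np

        def is_k_nonnegative(A, k, tol=1e-12):
            """Return True if every minor of A of size at most k is nonnegative."""
            n = A.shape[0]
            for size in range(1, k + 1):
                for rows in combinations(range(n), size):
                    for cols in combinations(range(n), size):
                        if np.linalg.det(A[np.ix_(rows, cols)]) < -tol:
                            return False
            return True

        A = np.array([[1.0, 1.0, 0.0],
                      [1.0, 2.0, 1.0],
                      [0.0, 1.0, 2.0]])
        print(is_k_nonnegative(A, 2))   # 1x1 and 2x2 minors only
        print(is_k_nonnegative(A, 3))   # k = n: totally nonnegative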
  • QR Decomposition: History and Its Applications
    Mathematics & Statistics, Auburn University, Alabama, USA. Dec 17, 2010.
    QR decomposition: History and its Applications
    Tin-Yau Tam
    email: [email protected]  Website: www.auburn.edu/~tamtiny

    1. QR decomposition

    Recall the QR decomposition of A ∈ GLn(C):

      A = QR,

    where Q ∈ GLn(C) is unitary and R ∈ GLn(C) is upper triangular with positive diagonal entries. Such a decomposition is unique. Set

      a(A) := diag(r11, ..., rnn),

    where A is written in column form A = (a1 | ··· | an).

    Geometric interpretation of a(A): rii is the distance (w.r.t. the 2-norm) between ai and span{a1, ..., a_{i-1}}, i = 2, ..., n.

    Example:

      [ 12  -51    4 ]   [  6/7  -69/175  -58/175 ] [ 14   21  -14 ]
      [  6  167  -68 ] = [  3/7  158/175    6/175 ] [  0  175  -70 ]
      [ -4   24  -41 ]   [ -2/7    6/35   -33/35  ] [  0    0   35 ]

    • QR decomposition is the matrix version of the Gram-Schmidt orthonormalization process.

    • QR decomposition can be extended to rectangular matrices, i.e., if A ∈ C^{m×n} with m ≥ n (tall matrix) and full rank, then A = QR, where Q ∈ C^{m×n} has orthonormal columns and R ∈ C^{n×n} is upper triangular with positive "diagonal" entries.
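    The 3 × 3 example above is easy to reproduce. A short sketch, assuming NumPy (note that numpy.linalg.qr does not enforce a positive diagonal in R, so the computed factors may differ from the ones above by sign changes):

        import numpy as np

        A = np.array([[12.0, -51.0,   4.0],
                      [ 6.0, 167.0, -68.0],
                      [-4.0,  24.0, -41.0]])

        Q, R = np.linalg.qr(A)          # Q has orthonormal columns, R is upper triangular
        print(np.allclose(Q @ R, A))    # True
        print(np.round(R, 3))

        # Geometric interpretation: |r_ii| equals the 2-norm distance from column a_i
        # to span{a_1, ..., a_{i-1}}.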
  • (Hessenberg) Eigenvalue-Eigenmatrix Relations∗
    (HESSENBERG) EIGENVALUE-EIGENMATRIX RELATIONS
    JENS-PETER M. ZEMKE

    Abstract. Explicit relations between eigenvalues, eigenmatrix entries and matrix elements are derived. First, a general, theoretical result based on the Taylor expansion of the adjugate of zI − A on the one hand and explicit knowledge of the Jordan decomposition on the other hand is proven. This result forms the basis for several, more practical and enlightening results tailored to non-derogatory, diagonalizable and normal matrices, respectively. Finally, inherent properties of (upper) Hessenberg, resp. tridiagonal matrix structure are utilized to construct computable relations between eigenvalues, eigenvector components, eigenvalues of principal submatrices and products of lower diagonal elements.

    Key words. Algebraic eigenvalue problem, eigenvalue-eigenmatrix relations, Jordan normal form, adjugate, principal submatrices, Hessenberg matrices, eigenvector components
    AMS subject classifications. 15A18 (primary), 15A24, 15A15, 15A57

    1. Introduction. Eigenvalues and eigenvectors are defined using the relations

      Av = vλ   and   V^{-1} A V = J.      (1.1)

    We speak of a partial eigenvalue problem when, for a given matrix A ∈ C^{n×n}, we seek a scalar λ ∈ C and a corresponding nonzero vector v ∈ C^n. The scalar λ is called the eigenvalue and the corresponding vector v is called the eigenvector. We speak of the full or algebraic eigenvalue problem when, for a given matrix A ∈ C^{n×n}, we seek its Jordan normal form J ∈ C^{n×n} and a corresponding (not necessarily unique) eigenmatrix V ∈ C^{n×n}. Apart from these constitutional relations, for some classes of structured matrices several more intriguing relations between components of eigenvectors, matrix entries and eigenvalues are known. For example, consider the so-called Jacobi matrices.
  • Explicit Inverse of a Tridiagonal (P, R)–Toeplitz Matrix
    Explicit inverse of a tridiagonal (p, r)-Toeplitz matrix
    A.M. Encinas, M.J. Jiménez
    Departament de Matemàtiques, Universitat Politècnica de Catalunya

    Abstract. Tridiagonal matrices appear in many contexts in pure and applied mathematics, so the study of the inverse of these matrices becomes of specific interest. In recent years the invertibility of nonsingular tridiagonal matrices has been quite investigated in different fields, not only from the theoretical point of view (either in the framework of linear algebra or in the ambit of numerical analysis), but also due to applications, for instance in the study of sound propagation problems or certain quantum oscillators. However, explicit inverses are known only in a few cases, in particular when the tridiagonal matrix has constant diagonals or the coefficients of these diagonals are subject to some restrictions, like the tridiagonal p-Toeplitz matrices [7], whose three diagonals are formed by p-periodic sequences. The recent formulae for the inversion of tridiagonal p-Toeplitz matrices are based, more or less directly, on the solution of second order linear difference equations, although most of them use a cumbersome formulation that in fact does not take into account the periodicity of the coefficients. This contribution presents the explicit inverse of a tridiagonal (p, r)-Toeplitz matrix, whose diagonal coefficients lie in a more general class of sequences than periodic ones, which we have called quasi-periodic sequences.

    A tridiagonal matrix A = (aij) of order n + 2 is called (p, r)-Toeplitz if there exists m ∈ N0 such that n + 2 = mp and

      a_{i+p, j+p} = r a_{ij},   i, j = 0, ..., (m − 1)p.

    Equivalently, A is a (p, r)-Toeplitz matrix iff

      a_{i+kp, j+kp} = r^k a_{ij},   i, j = 0, ..., p,   k = 0, ..., m − 1.

    We have developed a technique that reduces any linear second order difference equation with periodic or quasi-periodic coefficients to a difference equation of the same kind but with constant coefficients [3].
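    To make the definition concrete, the sketch below (assuming NumPy; sizes and entries are arbitrary) builds a tridiagonal matrix whose three diagonals repeat a length-p "period", scaled by r from one period to the next, then verifies the defining relation a_{i+p,j+p} = r·a_{ij} and inverts the matrix numerically:

        import numpy as np

        def tridiag_pr_toeplitz(diag0, low0, up0, r, m):
            """Tridiagonal (p, r)-Toeplitz matrix of order m*p: the first period of each
            diagonal is given, and each subsequent period is the previous one times r."""
            p = len(diag0)
            d  = np.concatenate([np.asarray(diag0) * r**k for k in range(m)])
            lo = np.concatenate([np.asarray(low0) * r**k for k in range(m)])[:m * p - 1]
            up = np.concatenate([np.asarray(up0)  * r**k for k in range(m)])[:m * p - 1]
            return np.diag(d) + np.diag(lo, -1) + np.diag(up, 1)

        p, r, m = 2, 2.0, 3
        A = tridiag_pr_toeplitz([3.0, 4.0], [1.0, -1.0], [0.5, 2.0], r, m)

        print(np.allclose(A[p:, p:], r * A[:-p, :-p]))     # defining relation holds
        A_inv = np.linalg.inv(A)
        print(np.allclose(A @ A_inv, np.eye(A.shape[0])))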
  • ETNA: The Block Hessenberg Process for Matrix Equations
    Electronic Transactions on Numerical Analysis, Volume 46, pp. 460–473, 2017. Copyright 2017, Kent State University and Johann Radon Institute (RICAM). ISSN 1068–9613.

    THE BLOCK HESSENBERG PROCESS FOR MATRIX EQUATIONS
    M. ADDAM, M. HEYOUNI, AND H. SADOK

    Abstract. In the present paper, we first introduce a block variant of the Hessenberg process and discuss its properties. Then, we show how to apply the block Hessenberg process in order to solve linear systems with multiple right-hand sides. More precisely, we define the block CMRH method for solving linear systems that share the same coefficient matrix. We also show how to apply this process for solving discrete Sylvester matrix equations. Finally, numerical comparisons are provided in order to compare the proposed new algorithms with other existing methods.

    Key words. Block Krylov subspace methods, Hessenberg process, Arnoldi process, CMRH, GMRES, low-rank matrix equations.
    AMS subject classifications. 65F10, 65F30

    1. Introduction. In this work, we are first interested in solving s systems of linear equations with the same coefficient matrix and different right-hand sides of the form

      A x^(i) = y^(i),   1 ≤ i ≤ s,      (1.1)

    where A is a large and sparse n × n real matrix, y^(i) is a real column vector of length n, and s ≪ n. Such linear systems arise in numerous applications in computational science and engineering such as wave propagation phenomena, quantum chromodynamics, and dynamics of structures [5, 9, 36, 39]. When n is small, it is well known that the solution of (1.1) can be computed by a direct method such as LU or Cholesky factorization.
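    The setting in (1.1), one coefficient matrix with several right-hand sides, is worth contrasting with solving each system independently. The block CMRH method itself is not reproduced here; the sketch below (assuming SciPy) only shows the basic economy of factoring A once and reusing the factorization for every right-hand side:

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        rng = np.random.default_rng(0)
        n, s = 200, 4
        A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
        Y = rng.standard_normal((n, s))                   # right-hand sides y^(1), ..., y^(s)

        lu, piv = lu_factor(A)             # factor A once (O(n^3))
        X = lu_solve((lu, piv), Y)         # one O(n^2) solve per right-hand side
        print(np.linalg.norm(A @ X - Y))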
  • Determinant Formulas and Cofactors
    Determinant formulas and cofactors

    Now that we know the properties of the determinant, it's time to learn some (rather messy) formulas for computing it.

    Formula for the determinant

    We know that the determinant has the following three properties:

    1. det I = 1
    2. Exchanging rows reverses the sign of the determinant.
    3. The determinant is linear in each row separately.

    Last class we listed seven consequences of these properties. We can use these ten properties to find a formula for the determinant of a 2 by 2 matrix:

      | a  b |   | a  0 |   | 0  b |
      | c  d | = | c  d | + | c  d |

                 | a  0 |   | a  0 |   | 0  b |   | 0  b |
               = | c  0 | + | 0  d | + | c  0 | + | 0  d |

               = 0 + ad + (−cb) + 0 = ad − bc.

    By applying property 3 to separate the individual entries of each row we could get a formula for any other square matrix. However, for a 3 by 3 matrix we'll have to add the determinants of twenty seven different matrices! Many of those determinants are zero. The non-zero pieces are:

      | a11 a12 a13 |   | a11  0   0  |   | a11  0   0  |   |  0  a12  0  |
      | a21 a22 a23 | = |  0  a22  0  | + |  0   0  a23 | + | a21  0   0  |
      | a31 a32 a33 |   |  0   0  a33 |   |  0  a32  0  |   |  0   0  a33 |

                        |  0  a12  0  |   |  0   0  a13 |   |  0   0  a13 |
                      + |  0   0  a23 | + | a21  0   0  | + |  0  a22  0  |
                        | a31  0   0  |   |  0  a32  0  |   | a31  0   0  |

      = a11a22a33 − a11a23a32 − a12a21a33 + a12a23a31 + a13a21a32 − a13a22a31.

    Each of the non-zero pieces has one entry from each row in each column, as in a permutation matrix.
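    The observation in the last sentence is exactly the "big formula": det A is the sum over all permutations σ of sign(σ)·a_{1σ(1)}···a_{nσ(n)}. A small sketch, assuming NumPy (the sign is computed by counting inversions; with n! terms this is for illustration only):

        from itertools import permutations
        import numpy as np

        def det_big_formula(A):
            """Determinant as a signed sum over all permutations (n! terms)."""
            n = A.shape[0]
            total = 0.0
            for perm in permutations(range(n)):
                inversions = sum(perm[i] > perm[j]
                                 for i in range(n) for j in range(i + 1, n))
                term = (-1.0) ** inversions
                for i in range(n):
                    term *= A[i, perm[i]]
                total += term
            return total

        A = np.array([[2.0, -1.0, 0.0],
                      [3.0,  4.0, 1.0],
                      [1.0,  0.0, 5.0]])
        print(det_big_formula(A), np.linalg.det(A))   # both approximately 54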
  • The Unsymmetric Eigenvalue Problem
    Jim Lambers, CME 335, Spring Quarter 2010-11, Lecture 4 Supplemental Notes

    The Unsymmetric Eigenvalue Problem

    Properties and Decompositions

    Let A be an n × n matrix. A nonzero vector x is called an eigenvector of A if there exists a scalar λ such that

      Ax = λx.

    The scalar λ is called an eigenvalue of A, and we say that x is an eigenvector of A corresponding to λ. We see that an eigenvector of A is a vector for which matrix-vector multiplication with A is equivalent to scalar multiplication by λ.

    We say that a nonzero vector y is a left eigenvector of A if there exists a scalar λ such that

      λ y^H = y^H A.

    The superscript H refers to the Hermitian transpose, which includes transposition and complex conjugation. That is, for any matrix A, A^H is the transpose of the complex conjugate of A. An eigenvector of A, as defined above, is sometimes called a right eigenvector of A, to distinguish it from a left eigenvector. It can be seen that if y is a left eigenvector of A with eigenvalue λ, then y is also a right eigenvector of A^H, with eigenvalue equal to the complex conjugate of λ.

    Because x is nonzero, it follows that if x is an eigenvector of A, then the matrix A − λI is singular, where λ is the corresponding eigenvalue. Therefore, λ satisfies the equation

      det(A − λI) = 0.

    The expression det(A − λI) is a polynomial of degree n in λ, and therefore is called the characteristic polynomial of A (eigenvalues are sometimes called characteristic values). It follows from the fact that the eigenvalues of A are the roots of the characteristic polynomial that A has n eigenvalues, which can repeat, and can also be complex, even if A is real.
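    The right and left eigenvector definitions can be checked numerically. A sketch assuming SciPy, whose eig routine can return both sets of eigenvectors (the matrix is an arbitrary nonsymmetric example):

        import numpy as np
        from scipy.linalg import eig

        A = np.array([[2.0, 1.0, 0.0],
                      [0.5, 3.0, 1.0],
                      [0.0, 0.0, 1.0]])

        # w: eigenvalues; VL, VR: left and right eigenvectors stored as columns
        w, VL, VR = eig(A, left=True, right=True)

        for lam, y, x in zip(w, VL.T, VR.T):
            right_residual = np.linalg.norm(A @ x - lam * x)               # A x = lambda x
            left_residual = np.linalg.norm(y.conj() @ A - lam * y.conj())  # y^H A = lambda y^H
            print(right_residual, left_residual)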
  • Computing the Jordan Structure of an Eigenvalue∗
    SIAM J. MATRIX ANAL. APPL., Vol. 38, No. 3, pp. 949–966. Copyright 2017 Society for Industrial and Applied Mathematics.

    COMPUTING THE JORDAN STRUCTURE OF AN EIGENVALUE
    NICOLA MASTRONARDI AND PAUL VAN DOOREN

    Abstract. In this paper we revisit the problem of finding an orthogonal similarity transformation that puts an n × n matrix A in a block upper-triangular form that reveals its Jordan structure at a particular eigenvalue λ0. The obtained form in fact reveals the dimensions of the null spaces of (A − λ0 I)^i at that eigenvalue via the sizes of the leading diagonal blocks, and from this the Jordan structure at λ0 is then easily recovered. The method starts from a Hessenberg form that already reveals several properties of the Jordan structure of A. It then updates the Hessenberg form in an efficient way to transform it to a block-triangular form in O(mn^2) floating point operations, where m is the total multiplicity of the eigenvalue. The method only uses orthogonal transformations and is backward stable. We illustrate the method with a number of numerical examples.

    Key words. Jordan structure, staircase form, Hessenberg form
    AMS subject classifications. 65F15, 65F25
    DOI. 10.1137/16M1083098

    1. Introduction. Finding the eigenvalues and their corresponding Jordan structure of a matrix A is one of the most studied problems in numerical linear algebra. This structure plays an important role in the solution of explicit differential equations, which can be modeled as

      λ x(t) = A x(t),   x(0) = x0,   A ∈ R^{n×n},      (1.1)

    where λ stands for the differential operator.
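    The statement about the null spaces of (A − λ0 I)^i can be illustrated on a matrix whose Jordan structure is known by construction. A sketch assuming NumPy and SciPy (the rank computations below rely on a tolerance, which is precisely the numerically delicate step the paper handles with orthogonal transformations):

        import numpy as np
        from scipy.linalg import block_diag

        def jordan_block(lam, k):
            return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

        # Eigenvalue 2 with Jordan blocks of sizes 3 and 1, plus a simple eigenvalue 5.
        A = block_diag(jordan_block(2.0, 3), jordan_block(2.0, 1), jordan_block(5.0, 1))

        lam0, n = 2.0, A.shape[0]
        M = A - lam0 * np.eye(n)
        null_dims = [n - np.linalg.matrix_rank(np.linalg.matrix_power(M, i))
                     for i in range(1, 5)]
        print(null_dims)   # [2, 3, 4, 4]: two blocks at lambda0, the larger of size 3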
  • 11.5 Reduction of a General Matrix to Hessenberg Form
    Chapter 11. Eigensystems

    The complex eigenvalue problem Cx = λx, with C = A + iB and x = u + iv, is equivalent to the 2n × 2n real problem

      [ A  -B ] [ u ]     [ u ]
      [ B   A ] [ v ] = λ [ v ]      (11.4.2)

    Note that the 2n × 2n matrix in (11.4.2) is symmetric: A^T = A and B^T = -B if C is Hermitian.

    Corresponding to a given eigenvalue λ, the vector

      [ -v ]
      [  u ]                         (11.4.3)

    is also an eigenvector, as you can verify by writing out the two matrix equations implied by (11.4.2). Thus if λ1, λ2, ..., λn are the eigenvalues of C, then the 2n eigenvalues of the augmented problem (11.4.2) are λ1, λ1, λ2, λ2, ..., λn, λn; each, in other words, is repeated twice. The eigenvectors are pairs of the form u + iv and i(u + iv); that is, they are the same up to an inessential phase. Thus we solve the augmented problem (11.4.2), and choose one eigenvalue and eigenvector from each pair. These give the eigenvalues and eigenvectors of the original matrix C.

    Sample page from Numerical Recipes in C: The Art of Scientific Computing (ISBN 0-521-43108-5), Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. http://www.nr.com
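    A sketch of this equivalence, assuming NumPy (the Hermitian test matrix is random; each eigenvalue of C should appear twice in the spectrum of the augmented real matrix):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 4
        C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        C = (C + C.conj().T) / 2                    # make C Hermitian
        A, B = C.real, C.imag

        M = np.block([[A, -B],                      # the 2n x 2n matrix of (11.4.2)
                      [B,  A]])

        w_complex = np.linalg.eigvalsh(C)           # n real eigenvalues of C
        w_real = np.linalg.eigvalsh(M)              # 2n eigenvalues, each doubled
        print(np.allclose(np.repeat(w_complex, 2), np.sort(w_real)))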
  • Sparse Linear Systems Section 4.2 – Banded Matrices
    Parallel Numerical Algorithms
    Chapter 4 – Sparse Linear Systems
    Section 4.2 – Banded Matrices
    Michael T. Heath and Edgar Solomonik
    Department of Computer Science, University of Illinois at Urbana-Champaign
    CS 554 / CSE 512

    Outline: 1. Band Systems  2. Tridiagonal Systems  3. Cyclic Reduction

    Banded Linear Systems

    The bandwidth (or semibandwidth) of an n × n matrix A is the smallest value w such that aij = 0 for all |i − j| > w. A matrix is banded if w ≪ n. If w ≳ p, where p is the number of processors, then minor modifications of parallel algorithms for dense LU or Cholesky factorization are reasonably efficient for solving the banded linear system Ax = b. If w ≲ p, then standard parallel algorithms for LU or Cholesky factorization utilize few processors and are very inefficient.

    Narrow Banded Linear Systems

    More efficient parallel algorithms for narrow banded linear systems are based on a divide-and-conquer approach in which the band is partitioned into multiple pieces that are processed simultaneously. Reordering the matrix by nested dissection is one example of this approach. Because of fill, such methods generally require more total work than the best serial algorithm for a system with a dense band. We will illustrate for tridiagonal linear systems, for which w = 1, and will assume pivoting is not needed for stability (e.g., the matrix is diagonally dominant or symmetric positive definite).
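    For the serial baseline these parallel methods are measured against, a tridiagonal system (w = 1) can be solved in O(n) operations by Gaussian elimination specialized to the three diagonals, often called the Thomas algorithm; as assumed above, no pivoting is performed, so the matrix should be, e.g., diagonally dominant. A sketch assuming NumPy:

        import numpy as np

        def thomas(lower, diag, upper, b):
            """Solve a tridiagonal system without pivoting in O(n).
            lower: subdiagonal (n-1), diag: diagonal (n), upper: superdiagonal (n-1)."""
            n = len(diag)
            c = np.array(upper, dtype=float)
            d = np.array(diag, dtype=float)
            r = np.array(b, dtype=float)
            for i in range(1, n):                   # forward elimination
                m = lower[i - 1] / d[i - 1]
                d[i] -= m * c[i - 1]
                r[i] -= m * r[i - 1]
            x = np.empty(n)
            x[-1] = r[-1] / d[-1]
            for i in range(n - 2, -1, -1):          # back substitution
                x[i] = (r[i] - c[i] * x[i + 1]) / d[i]
            return x

        n = 6
        lower = -np.ones(n - 1); upper = -np.ones(n - 1); diag = 4.0 * np.ones(n)
        b = np.arange(1.0, n + 1)
        x = thomas(lower, diag, upper, b)
        A = np.diag(diag) + np.diag(lower, -1) + np.diag(upper, 1)
        print(np.linalg.norm(A @ x - b))            # residual near roundoff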
  • Eigenvalues of a Special Tridiagonal Matrix
    Eigenvalues of a Special Tridiagonal Matrix
    Alexander De Serre Rothney (Bishop's University, Sherbrooke, Quebec, Canada)
    October 10, 2013

    Abstract. In this paper we consider a special tridiagonal test matrix. We prove that its eigenvalues are the even integers 2, ..., 2n and show its relationship with the famous Kac-Sylvester tridiagonal matrix.

    1 Introduction

    We begin with a quick overview of the theory of symmetric tridiagonal matrices, that is, we detail a few basic facts about tridiagonal matrices. In particular, we describe the symmetrization process of a tridiagonal matrix as well as the orthogonal polynomials that arise from the characteristic polynomials of said matrices.

    Definition 1.1. A tridiagonal matrix, Tn, is of the form

           [ a1   b1    0   ...      0      ]
           [ c1   a2   b2            .      ]
      Tn = [  0   c2   a3    .       .      ]      (1.1)
           [  .          .     .   b_{n-1}  ]
           [  0   ...    0  c_{n-1}   an    ]

    where entries below the subdiagonal and above the superdiagonal are zero. If bi ≠ 0 for i = 1, ..., n−1 and ci ≠ 0 for i = 1, ..., n−1, Tn is called a Jacobi matrix. In this paper we will use a more compact notation and only describe the subdiagonal, diagonal, and superdiagonal (where appropriate). For example, Tn can be rewritten as

           (      b1   ...   b_{n-1}        )
      Tn = ( a1   a2   ...   a_{n-1}   an   )      (1.2)
           (      c1   ...   c_{n-1}        )

    Note that the study of symmetric tridiagonal matrices is sufficient for our purpose, as any Jacobi matrix with bi ci > 0 for all i can be symmetrized through a similarity transformation:

                              (      √(b1c1)  ...  √(b_{n-1}c_{n-1})       )
      An = Dn^{-1} Tn Dn  =   ( a1   a2       ...   a_{n-1}           an   )      (1.3)
                              (      √(b1c1)  ...  √(b_{n-1}c_{n-1})       )

    where Dn = diag(γ1, ..., γn) and γi = √(ci c_{i+1} ··· c_{n-1} / ...).
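    The symmetrization in (1.3) is easy to verify numerically: a Jacobi matrix with bi ci > 0 has the same eigenvalues as the symmetric tridiagonal matrix whose off-diagonal entries are √(bi ci), and the symmetric form makes it evident that those eigenvalues are real. A sketch assuming NumPy, with arbitrary test entries:

        import numpy as np

        a = np.array([1.0, 2.0, 3.0, 4.0])       # diagonal
        b = np.array([2.0, 0.5, 3.0])            # superdiagonal
        c = np.array([0.5, 2.0, 1.5])            # subdiagonal, with b_i * c_i > 0

        T = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)        # Jacobi matrix
        off = np.sqrt(b * c)
        A = np.diag(a) + np.diag(off, 1) + np.diag(off, -1)    # symmetrized version

        print(np.sort(np.linalg.eigvals(T).real))   # same spectrum...
        print(np.sort(np.linalg.eigvalsh(A)))       # ...from the symmetric matrix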