LU-Factorization and Positive Definite Matrices

Tom Lyche, University of Oslo, Norway

Topics Today

- Block multiplication of matrices
- Basics on triangular matrices
- LU-factorization of matrices
- Positive definite matrices: examples and criteria for positive definiteness
- LU-factorization of positive definite matrices

Partitioned matrices

A rectangular matrix $A$ can be partitioned into submatrices by drawing horizontal lines between selected rows and vertical lines between selected columns. For example,
$$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$$
can be partitioned as

(i) a 2-by-2 block partition $A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$,
(ii) a partition into columns $A = \begin{bmatrix} a_{\cdot 1}, a_{\cdot 2}, a_{\cdot 3} \end{bmatrix}$,
(iii) a partition into rows $A = \begin{bmatrix} a_{1\cdot}^T \\ a_{2\cdot}^T \\ a_{3\cdot}^T \end{bmatrix}$,
(iv) a partition into two column blocks $A = \begin{bmatrix} A_{11}, A_{12} \end{bmatrix}$.

The submatrices in a partition are often referred to as blocks, and a partitioned matrix is sometimes called a block matrix.

Column partition

Suppose $A \in \mathbb{R}^{m,p}$ and $B \in \mathbb{R}^{p,n}$. If $B = [b_{\cdot 1}, \ldots, b_{\cdot n}]$ is partitioned into columns, then the partition of the product $AB$ into columns is
$$AB = [Ab_{\cdot 1}, Ab_{\cdot 2}, \ldots, Ab_{\cdot n}].$$
In particular, if $I$ is the identity matrix of order $p$, then
$$A = AI = A[e_1, e_2, \ldots, e_p] = [Ae_1, Ae_2, \ldots, Ae_p],$$
and we see that column $j$ of $A$ can be written $Ae_j$ for $j = 1, \ldots, p$.

Row partition

If $A$ is partitioned into rows, then
$$AB = \begin{bmatrix} a_{1\cdot}^T \\ a_{2\cdot}^T \\ \vdots \\ a_{m\cdot}^T \end{bmatrix} B = \begin{bmatrix} a_{1\cdot}^T B \\ a_{2\cdot}^T B \\ \vdots \\ a_{m\cdot}^T B \end{bmatrix},$$
and taking $A = I_p$ it follows that row $i$ of $B$ can be written $e_i^T B$.

It is often useful to write the matrix-vector product $Ax$ as a linear combination of the columns of $A$:
$$Ax = x_1 a_{\cdot 1} + x_2 a_{\cdot 2} + \cdots + x_p a_{\cdot p}.$$
One way to see that this is correct is to partition $A$ into columns and $x$ into rows.

Rules for 2-by-2 blocks

If $B = [B_1, B_2]$, where $B_1 \in \mathbb{R}^{p,r}$ and $B_2 \in \mathbb{R}^{p,n-r}$, then
$$A[B_1, B_2] = [AB_1, AB_2].$$
If $A = \begin{bmatrix} A_1 \\ A_2 \end{bmatrix}$, where $A_1 \in \mathbb{R}^{k,p}$ and $A_2 \in \mathbb{R}^{m-k,p}$, then
$$\begin{bmatrix} A_1 \\ A_2 \end{bmatrix} B = \begin{bmatrix} A_1 B \\ A_2 B \end{bmatrix}.$$
If $A = [A_1, A_2]$ and $B = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix}$, where $A_1 \in \mathbb{R}^{m,s}$, $A_2 \in \mathbb{R}^{m,p-s}$, $B_1 \in \mathbb{R}^{s,n}$ and $B_2 \in \mathbb{R}^{p-s,n}$, then
$$[A_1, A_2] \begin{bmatrix} B_1 \\ B_2 \end{bmatrix} = A_1 B_1 + A_2 B_2.$$

The general rule for 2-by-2 blocks

If $A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$ and $B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix}$, then
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{bmatrix},$$
provided the vertical partition line in $A$ matches the horizontal line in $B$, i.e. the number of columns in $A_{11}$ and $A_{21}$ equals the number of rows in $B_{11}$ and $B_{12}$.

The general case

If
$$A = \begin{bmatrix} A_{11} & \cdots & A_{1s} \\ \vdots & & \vdots \\ A_{p1} & \cdots & A_{ps} \end{bmatrix}, \qquad B = \begin{bmatrix} B_{11} & \cdots & B_{1q} \\ \vdots & & \vdots \\ B_{s1} & \cdots & B_{sq} \end{bmatrix},$$
and if all the matrix products in
$$C_{ij} = \sum_{k=1}^{s} A_{ik} B_{kj}, \qquad i = 1, \ldots, p, \quad j = 1, \ldots, q,$$
are well defined, then
$$AB = \begin{bmatrix} C_{11} & \cdots & C_{1q} \\ \vdots & & \vdots \\ C_{p1} & \cdots & C_{pq} \end{bmatrix}.$$
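These block rules translate directly into code. The following is a minimal sketch of the general rule $C_{ij} = \sum_{k=1}^{s} A_{ik} B_{kj}$, written in Python with NumPy (the slides themselves contain no code); the function name `block_multiply` and the particular partition sizes are illustrative choices, not from the slides.

```python
import numpy as np

def block_multiply(A, B, row_parts, mid_parts, col_parts):
    """Compute AB block by block: C_ij = sum_k A_ik B_kj.

    row_parts: block row sizes of A (summing to m)
    mid_parts: block column sizes of A = block row sizes of B (summing to p)
    col_parts: block column sizes of B (summing to n)
    """
    r = np.cumsum([0] + row_parts)  # block row boundaries in A
    m = np.cumsum([0] + mid_parts)  # shared inner boundaries
    c = np.cumsum([0] + col_parts)  # block column boundaries in B
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in range(len(row_parts)):
        for j in range(len(col_parts)):
            for k in range(len(mid_parts)):  # C_ij accumulates A_ik @ B_kj
                C[r[i]:r[i+1], c[j]:c[j+1]] += (
                    A[r[i]:r[i+1], m[k]:m[k+1]] @ B[m[k]:m[k+1], c[j]:c[j+1]]
                )
    return C

A = np.arange(1.0, 10.0).reshape(3, 3)   # the 3-by-3 example matrix above
B = np.arange(9.0, 0.0, -1.0).reshape(3, 3)
C = block_multiply(A, B, row_parts=[2, 1], mid_parts=[1, 2], col_parts=[2, 1])
assert np.allclose(C, A @ B)  # blockwise product agrees with ordinary product
```

Any choice of partition sizes gives the same product, as long as the inner partition of $A$'s columns matches the partition of $B$'s rows.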
Block-Triangular Matrices

Lemma 1. Suppose
$$A = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix},$$
where $A$, $A_{11}$ and $A_{22}$ are square matrices. Then $A$ is nonsingular if and only if both $A_{11}$ and $A_{22}$ are nonsingular. In that case
$$A^{-1} = \begin{bmatrix} A_{11}^{-1} & -A_{11}^{-1} A_{12} A_{22}^{-1} \\ 0 & A_{22}^{-1} \end{bmatrix}. \tag{1}$$

Proof (⇐). If $A_{11}$ and $A_{22}$ are nonsingular, then
$$\begin{bmatrix} A_{11}^{-1} & -A_{11}^{-1} A_{12} A_{22}^{-1} \\ 0 & A_{22}^{-1} \end{bmatrix} \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix} = I,$$
and $A$ is nonsingular with the indicated inverse.

Proof (⇒). Conversely, let $B$ be the inverse of the nonsingular matrix $A$. We partition $B$ conformally with $A$ and have
$$BA = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix} = I.$$
Using block multiplication we find
$$B_{11} A_{11} = I, \qquad B_{21} A_{11} = 0, \qquad B_{21} A_{12} + B_{22} A_{22} = I.$$
The first equation implies that $A_{11}$ is invertible; this in turn implies that $B_{21} = 0$ in the second equation, and then the third equation simplifies to $B_{22} A_{22} = I$. We conclude that $A_{22}$ is also invertible.

The inverse

Consider now a triangular matrix.

Lemma 2. An upper (lower) triangular matrix $A = [a_{ij}] \in \mathbb{R}^{n,n}$ is nonsingular if and only if the diagonal entries $a_{ii}$, $i = 1, \ldots, n$, are nonzero. In that case the inverse is upper (lower) triangular with diagonal entries $a_{ii}^{-1}$, $i = 1, \ldots, n$.

Proof. We use induction on $n$. The result holds for $n = 1$: the 1-by-1 matrix $A = (a_{11})$ is invertible if and only if $a_{11} \neq 0$, and in that case $A^{-1} = (a_{11}^{-1})$. Suppose the result holds for $n = k$ and let $A \in \mathbb{R}^{k+1,k+1}$ be upper triangular. We partition $A$ in the form
$$A = \begin{bmatrix} A_k & a_k \\ 0 & a_{k+1,k+1} \end{bmatrix}$$
and note that $A_k \in \mathbb{R}^{k,k}$ is upper triangular. By Lemma 1, $A$ is nonsingular if and only if $A_k$ and $(a_{k+1,k+1})$ are nonsingular, and in that case
$$A^{-1} = \begin{bmatrix} A_k^{-1} & -A_k^{-1} a_k a_{k+1,k+1}^{-1} \\ 0 & a_{k+1,k+1}^{-1} \end{bmatrix}.$$
By the induction hypothesis $A_k$ is nonsingular if and only if the diagonal entries $a_{11}, \ldots, a_{kk}$ of $A_k$ are nonzero, and in that case $A_k^{-1}$ is upper triangular with diagonal entries $a_{ii}^{-1}$, $i = 1, \ldots, k$. The result for $A$ follows.

Unit Triangular Matrices

Lemma 3. The product $C = AB = (c_{ij})$ of two upper (lower) triangular matrices $A = (a_{ij})$ and $B = (b_{ij})$ is upper (lower) triangular with diagonal entries $c_{ii} = a_{ii} b_{ii}$ for all $i$.

Proof. Exercise.

A matrix is unit triangular if it is triangular with 1's on the diagonal.

Lemma 4. For a unit upper (lower) triangular matrix $A \in \mathbb{R}^{n,n}$:
1. $A$ is invertible and the inverse is unit upper (lower) triangular.
2. The product of two unit upper (lower) triangular matrices is unit upper (lower) triangular.

Proof. Part 1 follows from Lemma 2, while Lemma 3 implies part 2.
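As a small illustration of Lemma 4, part 1, the following sketch inverts a unit lower triangular matrix by forward substitution, one column at a time (Python with NumPy again; the 3-by-3 matrix is an illustrative choice, not from the slides). Because the diagonal entries are 1, no divisions occur, and the computed inverse is itself unit lower triangular.

```python
import numpy as np

def unit_lower_inverse(L):
    """Invert a unit lower triangular matrix by forward substitution."""
    n = L.shape[0]
    X = np.eye(n)                            # columns start as e_1, ..., e_n
    for j in range(n):                       # solve L x = e_j for each column
        for i in range(1, n):
            X[i, j] -= L[i, :i] @ X[:i, j]   # diagonal of L is 1: no division
    return X

L = np.array([[ 1.0, 0.0, 0.0],
              [ 0.5, 1.0, 0.0],
              [-2.0, 3.0, 1.0]])
Linv = unit_lower_inverse(L)
assert np.allclose(L @ Linv, np.eye(3))      # it really is the inverse
assert np.allclose(np.diag(Linv), 1.0)       # and again unit lower triangular
```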
LU-factorization

We say that $A = LU$ is an LU-factorization of $A \in \mathbb{R}^{n,n}$ if $L \in \mathbb{R}^{n,n}$ is lower triangular and $U \in \mathbb{R}^{n,n}$ is upper triangular. In addition we will assume that $L$ is unit triangular.

Example 1. The equation
$$A = \begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -1/2 & 1 \end{bmatrix} \begin{bmatrix} 2 & -1 \\ 0 & 3/2 \end{bmatrix}$$
gives an LU-factorization of the 2-by-2 matrix $A$.

Every nonsingular matrix has a PLU-factorization, but not necessarily an LU-factorization.

Example 2. An LU-factorization of $A = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}$ must satisfy the equation
$$\begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ l_1 & 1 \end{bmatrix} \begin{bmatrix} u_1 & u_3 \\ 0 & u_2 \end{bmatrix} = \begin{bmatrix} u_1 & u_3 \\ l_1 u_1 & l_1 u_3 + u_2 \end{bmatrix}$$
for the unknowns $l_1$ in $L$ and $u_1, u_2, u_3$ in $U$. Comparing $(1,1)$-elements we see that $u_1 = 0$, which makes it impossible to satisfy the condition $1 = l_1 u_1$ for the $(2,1)$-element. We conclude that $A$ has no LU-factorization.

Uniqueness

Theorem 5. The LU-factorization of a nonsingular matrix is unique whenever it exists.

Proof. Suppose $A = L_1 U_1 = L_2 U_2$ are two LU-factorizations of the nonsingular matrix $A$. The equation $L_1 U_1 = L_2 U_2$ can be written in the form $L_2^{-1} L_1 = U_2 U_1^{-1}$, where by Lemmas 2–4 $L_2^{-1} L_1$ is unit lower triangular and $U_2 U_1^{-1}$ is upper triangular. But then both matrices must be diagonal with ones on the diagonal. We conclude that $L_2^{-1} L_1 = I = U_1 U_2^{-1}$, which means that $L_1 = L_2$ and $U_1 = U_2$.

Leading Principal Submatrices

Suppose $A \in \mathbb{C}^{n,n}$. The upper left $k \times k$ corners
$$A_k = \begin{bmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & & \vdots \\ a_{k1} & \cdots & a_{kk} \end{bmatrix}, \qquad k = 1, \ldots, n,$$
of $A$ are called the leading principal submatrices of $A$.

A Lemma

The following lemma will be used for existence.

Lemma 6. Suppose $A = LU$ is the LU-factorization of $A \in \mathbb{R}^{n,n}$. For $k = 1, \ldots, n$ let $A_k$, $L_k$, $U_k$ be the leading principal submatrices of $A$, $L$, $U$, respectively. Then $A_k = L_k U_k$ is the LU-factorization of $A_k$ for $k = 1, \ldots, n$.

Proof. We partition $A = LU$ as follows:
$$A = \begin{bmatrix} A_k & B_k \\ C_k & D_k \end{bmatrix} = \begin{bmatrix} L_k & 0 \\ M_k & N_k \end{bmatrix} \begin{bmatrix} U_k & V_k \\ 0 & W_k \end{bmatrix} = LU, \tag{2}$$
where $D_k, N_k, W_k \in \mathbb{R}^{n-k,n-k}$. Using block multiplication we find the equations
$$A_k = L_k U_k \tag{3}$$
$$B_k = L_k V_k \tag{4}$$
$$C_k = M_k U_k \tag{5}$$
$$D_k = M_k V_k + N_k W_k \tag{6}$$
Since $L_k$ is unit lower triangular and $U_k$ is upper triangular, we see that (3) gives the LU-factorization of $A_k$.
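The elimination that underlies Examples 1 and 2 can be sketched in a few lines. Below is a hedged sketch of LU-factorization without pivoting in Python with NumPy; the name `lu_nopivot` is an illustrative choice, and the zero-pivot test detects nonexistence only for nonsingular inputs. It reproduces the factorization of Example 1 and reports failure on the matrix of Example 2.

```python
import numpy as np

def lu_nopivot(A):
    """Return L (unit lower triangular) and U (upper triangular) with A = LU."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float)
    for k in range(n - 1):
        if U[k, k] == 0.0:
            raise ValueError("zero pivot: no LU-factorization without pivoting")
        L[k+1:, k] = U[k+1:, k] / U[k, k]              # multipliers stored in L
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])  # eliminate below pivot
    return L, U

# Example 1: A = [[2, -1], [-1, 2]] factors with L = [[1, 0], [-1/2, 1]].
L, U = lu_nopivot(np.array([[2.0, -1.0], [-1.0, 2.0]]))
assert np.allclose(L, [[1.0, 0.0], [-0.5, 1.0]])
assert np.allclose(U, [[2.0, -1.0], [0.0, 1.5]])

# Example 2: A = [[0, 1], [1, 1]] has no LU-factorization (u_1 = 0 is a zero pivot).
try:
    lu_nopivot(np.array([[0.0, 1.0], [1.0, 1.0]]))
except ValueError as err:
    print(err)
```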