Eigenvalues, Diagonalization, and the Jordan Canonical Form: 4.1 Eigenvalues, Eigenvectors, and the Characteristic Polynomial



Linear Algebra (part 4): Eigenvalues, Diagonalization, and the Jordan Form (by Evan Dummit, 2020, v. 2.00)

Contents

4 Eigenvalues, Diagonalization, and the Jordan Canonical Form
  4.1 Eigenvalues, Eigenvectors, and the Characteristic Polynomial
    4.1.1 Eigenvalues and Eigenvectors
    4.1.2 Eigenvalues and Eigenvectors of Matrices
    4.1.3 Eigenspaces
  4.2 Diagonalization
  4.3 Generalized Eigenvectors and the Jordan Canonical Form
    4.3.1 Generalized Eigenvectors
    4.3.2 The Jordan Canonical Form
  4.4 Applications of Diagonalization and the Jordan Canonical Form
    4.4.1 Spectral Mapping and the Cayley-Hamilton Theorem
    4.4.2 Transition Matrices and Incidence Matrices
    4.4.3 Systems of Linear Differential Equations
    4.4.4 Matrix Exponentials and the Jordan Form
    4.4.5 The Spectral Theorem for Hermitian Operators

4 Eigenvalues, Diagonalization, and the Jordan Canonical Form

In this chapter, we will discuss eigenvalues and eigenvectors: these are characteristic values (and vectors) associated to a linear operator T : V → V that will allow us to study T in a particularly convenient way. Our ultimate goal is to describe methods for finding a basis for V such that the associated matrix for T has an especially simple form. We will first describe diagonalization, the procedure for (trying to) find a basis such that the associated matrix for T is a diagonal matrix, and characterize the linear operators that are diagonalizable. Unfortunately, not all linear operators are diagonalizable, so we will then discuss a method for computing the Jordan canonical form of a matrix, which is the representation that is as close to a diagonal matrix as possible. We close with a few applications of the Jordan canonical form, including a proof of the Cayley-Hamilton theorem that any matrix satisfies its characteristic polynomial.

4.1 Eigenvalues, Eigenvectors, and the Characteristic Polynomial

• Suppose that we have a linear transformation T : V → V from a (finite-dimensional) vector space V to itself. We would like to determine whether there exists a basis β of V such that the associated matrix [T]_β^β is a diagonal matrix.
  ◦ Ultimately, our reason for asking this question is that we would like to describe T in as simple a way as possible, and it is unlikely we could hope for anything simpler than a diagonal matrix.
  ◦ So suppose that β = {v_1, ..., v_n} and the diagonal entries of [T]_β^β are {λ_1, ..., λ_n}.
  ◦ Then, by assumption, we have T(v_i) = λ_i v_i for each 1 ≤ i ≤ n: the linear transformation T behaves like scalar multiplication by λ_i on the vector v_i.
  ◦ Conversely, if we were able to find a basis β of V such that T(v_i) = λ_i v_i for some scalars λ_i, with 1 ≤ i ≤ n, then the associated matrix [T]_β^β would be a diagonal matrix.
  ◦ This suggests we should study vectors v such that T(v) = λv for some scalar λ.

4.1.1 Eigenvalues and Eigenvectors

• Definition: If T : V → V is a linear transformation, a nonzero vector v with T(v) = λv is called an eigenvector of T, and the corresponding scalar λ is called an eigenvalue of T.
  ◦ Important note: We do not consider the zero vector 0 an eigenvector. (The reason for this convention is to ensure that if v is an eigenvector, then its corresponding eigenvalue λ is unique.)
  ◦ Note also that (implicitly) λ must be an element of the scalar field of V, since otherwise λv does not make sense.
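• As a concrete illustration (this sketch is not part of the original notes), the following Python/numpy code applies the definition to the map T(x, y) = ⟨2x + 3y, x + 4y⟩ that reappears in the examples below, represented by its matrix A. It checks the defining relation T(v) = λv for each computed eigenvector and verifies that, relative to a basis of eigenvectors, the matrix of T is diagonal, which is exactly the situation described above.

```python
import numpy as np

# T(x, y) = <2x + 3y, x + 4y> is left-multiplication by this matrix.
A = np.array([[2.0, 3.0],
              [1.0, 4.0]])

# np.linalg.eig returns the eigenvalues and a matrix P whose columns are
# corresponding eigenvectors (the order of the eigenvalues is not guaranteed).
eigenvalues, P = np.linalg.eig(A)
print(eigenvalues)                      # approximately [1., 5.]

# Each column v of P satisfies the defining relation T(v) = lambda * v.
for lam, v in zip(eigenvalues, P.T):
    assert np.allclose(A @ v, lam * v)

# If beta is the basis of eigenvectors (the columns of P), then the matrix of
# T relative to beta is P^{-1} A P, which is diagonal with the eigenvalues on
# the diagonal.
T_beta = np.linalg.inv(P) @ A @ P
print(np.round(T_beta, 10))             # diag(1, 5), up to rounding
```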
  ◦ When V is a vector space of functions, we often use the word eigenfunction in place of eigenvector.

• Here are a few examples of linear transformations and eigenvectors:
  ◦ Example: If T : R^2 → R^2 is the map with T(x, y) = ⟨2x + 3y, x + 4y⟩, then the vector v = ⟨3, −1⟩ is an eigenvector of T with eigenvalue 1, since T(v) = ⟨3, −1⟩ = v.
  ◦ Example: If T : C^2 → C^2 is the map with T(x, y) = ⟨2x + 3y, x + 4y⟩, the vector w = ⟨1, 1⟩ is an eigenvector of T with eigenvalue 5, since T(w) = ⟨5, 5⟩ = 5w.
  ◦ Example: If T : M_{2×2}(R) → M_{2×2}(R) is the transpose map, then the matrix [1 1; 1 3] is an eigenvector of T with eigenvalue 1.
  ◦ Example: If T : M_{2×2}(R) → M_{2×2}(R) is the transpose map, then the matrix [0 −2; 2 0] is an eigenvector of T with eigenvalue −1.
  ◦ Example: If T : P(R) → P(R) is the map with T(f(x)) = x f′(x), then for any integer n ≥ 0, the polynomial x^n is an eigenfunction of T with eigenvalue n, since T(x^n) = x · n x^(n−1) = n x^n.
  ◦ Example: If V is the space of infinitely-differentiable functions and D : V → V is the differentiation operator, the function f(x) = e^(rx) is an eigenfunction with eigenvalue r, for any real number r, since D(e^(rx)) = r e^(rx).
  ◦ Example: If T : V → V is any linear transformation and v is a nonzero vector in ker(T), then v is an eigenvector of T with eigenvalue 0. In fact, the eigenvectors with eigenvalue 0 are precisely the nonzero vectors in ker(T).

• Finding eigenvectors is a generalization of computing the kernel of a linear transformation, but, in fact, we can reduce the problem of finding eigenvectors to that of computing the kernel of a related linear transformation:

• Proposition (Eigenvalue Criterion): If T : V → V is a linear transformation, the nonzero vector v is an eigenvector of T with eigenvalue λ if and only if v is in ker(λI − T) = ker(T − λI), where I is the identity transformation on V.
  ◦ This criterion reduces the computation of eigenvectors to that of computing the kernel of a collection of linear transformations.
  ◦ Proof: Assume v ≠ 0. Then v is an eigenvector of T with eigenvalue λ ⟺ T(v) = λv ⟺ (λI)v − T(v) = 0 ⟺ (λI − T)(v) = 0 ⟺ v is in the kernel of λI − T. The equivalence ker(λI − T) = ker(T − λI) is also immediate.

• We will remark that some linear operators may have no eigenvectors at all.

• Example: If I : R[x] → R[x] is the integration operator I(p) = ∫_0^x p(t) dt, show that I has no eigenvectors.
  ◦ Suppose that I(p) = λp, so that ∫_0^x p(t) dt = λ p(x).
  ◦ Then, differentiating both sides with respect to x and applying the fundamental theorem of calculus yields p(x) = λ p′(x).
  ◦ If p had positive degree n, then λ p′(x) would have degree at most n − 1, so it could not equal p(x).
  ◦ Thus, p must be a constant polynomial. But the only constant polynomial with I(p) = λp is the zero polynomial, which is by definition not an eigenvector. Thus, I has no eigenvectors.

• In other cases, the existence of eigenvectors may depend on the scalar field being used.

• Example: Show that T : F^2 → F^2 defined by T(x, y) = ⟨y, −x⟩ has no eigenvectors when F = R, but does have eigenvectors when F = C.
  ◦ If T(x, y) = λ⟨x, y⟩, we get y = λx and −x = λy, so that (λ^2 + 1)y = 0.
  ◦ If y were zero then x = −λy would also be zero, which is impossible since an eigenvector must be nonzero. Thus y ≠ 0 and so λ^2 + 1 = 0.
  ◦ When F = R there is no such scalar λ, so there are no eigenvectors in this case.
  ◦ However, when F = C, we get λ = ±i, and then the eigenvectors are ⟨x, ix⟩ with eigenvalue i and ⟨x, −ix⟩ with eigenvalue −i.
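• The field-dependence in the last example is easy to see numerically. The sketch below (not part of the original notes) represents T(x, y) = ⟨y, −x⟩ by its matrix; numpy works over C, so it reports the complex eigenvalues ±i even though the matrix has no real eigenvectors. It also checks the eigenvalue/eigenvector pairing stated above.

```python
import numpy as np

# T(x, y) = <y, -x> is left-multiplication by this real matrix.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Over R the equation lambda^2 + 1 = 0 has no solution, so A has no real
# eigenvectors; over C the eigenvalues are +i and -i.
eigenvalues, P = np.linalg.eig(A)
print(eigenvalues)                      # approximately [0.+1.j, 0.-1.j]

# Check the pairing: <1, i> has eigenvalue i, and <1, -i> has eigenvalue -i.
v_plus = np.array([1.0, 1.0j])
v_minus = np.array([1.0, -1.0j])
assert np.allclose(A @ v_plus, 1j * v_plus)
assert np.allclose(A @ v_minus, -1j * v_minus)
```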
• Computing eigenvectors of general linear transformations on infinite-dimensional spaces can be quite difficult.
  ◦ For example, if V is the space of infinitely-differentiable functions, then computing the eigenvectors of the map T : V → V with T(f) = f″ + xf′ requires solving the differential equation f″ + xf′ = λf for an arbitrary λ.
  ◦ It is quite hard to solve that particular differential equation for a general λ (at least, without resorting to using an infinite series expansion to describe the solutions), and the solutions for most values of λ are non-elementary functions.

• In the finite-dimensional case, however, we can recast everything using matrices.

• Proposition (Eigenvalues and Matrices): Suppose V is a finite-dimensional vector space with ordered basis β and that T : V → V is linear. Then v is an eigenvector of T with eigenvalue λ if and only if [v]_β is an eigenvector of left-multiplication by [T]_β^β with eigenvalue λ.
  ◦ Proof: Note that v ≠ 0 if and only if [v]_β ≠ 0, so now assume v ≠ 0.
  ◦ Then v is an eigenvector of T with eigenvalue λ ⟺ T(v) = λv ⟺ [T(v)]_β = [λv]_β ⟺ [T]_β^β [v]_β = λ[v]_β ⟺ [v]_β is an eigenvector of left-multiplication by [T]_β^β with eigenvalue λ.

4.1.2 Eigenvalues and Eigenvectors of Matrices

• We will now study eigenvalues and eigenvectors of matrices. For convenience, we restate the definition for this setting:

• Definition: For A an n × n matrix, a nonzero vector x with Ax = λx is called an eigenvector of A, and the corresponding scalar λ is called an eigenvalue of A.
  ◦ Note: Technically, such a vector x is a right eigenvector of A: this stands in contrast to a vector y with yA = λy, which is called a left eigenvector of A.
  ◦ Example: If A = [2 3; 1 4], the vector x = [3; −1] is an eigenvector of A with eigenvalue 1, because Ax = [2 3; 1 4][3; −1] = [3; −1] = x.
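• As a quick check of both the example and the note about left eigenvectors, here is a short numpy sketch (not part of the original notes); the left eigenvector y = (1, −1) is an illustrative value found by hand rather than anything stated in the text.

```python
import numpy as np

# The matrix from the example above.
A = np.array([[2.0, 3.0],
              [1.0, 4.0]])

# x = (3, -1) is a (right) eigenvector of A with eigenvalue 1: Ax = x.
x = np.array([3.0, -1.0])
assert np.allclose(A @ x, x)

# A left eigenvector y satisfies yA = lambda*y, or equivalently y is a right
# eigenvector of the transpose A^T.  Here y = (1, -1) works for lambda = 1
# (an illustrative choice, not taken from the notes).
y = np.array([1.0, -1.0])
assert np.allclose(y @ A, y)
assert np.allclose(A.T @ y, y)
```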