Math 54. Selected Solutions for Week 8


Section 6.1 (Page 282)

22. Let $\vec u = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}$. Explain why $\vec u \cdot \vec u \ge 0$. When is $\vec u \cdot \vec u = 0$?

We have $\vec u \cdot \vec u = u_1^2 + u_2^2 + u_3^2$, which is $\ge 0$ because it is a sum of squares (all of which are $\ge 0$). It is zero if and only if $\vec u = \vec 0$. Indeed, if $\vec u = \vec 0$ then $\vec u \cdot \vec u = 0$, as can be seen directly from the formula. Conversely, if $\vec u \cdot \vec u = 0$ then all the terms $u_i^2$ must be zero, so each $u_i$ must be zero. This implies $\vec u = \vec 0$.

26. Let $\vec u = \begin{bmatrix} 5 \\ -6 \\ 7 \end{bmatrix}$, and let $W$ be the set of all $\vec x$ in $\mathbb{R}^3$ such that $\vec u \cdot \vec x = 0$. What theorem in Chapter 4 can be used to show that $W$ is a subspace of $\mathbb{R}^3$? Describe $W$ in geometric language.

The condition $\vec u \cdot \vec x = 0$ is equivalent to $\vec x \in \operatorname{Nul} \vec u^{\,T}$, and this is a subspace of $\mathbb{R}^3$ by Theorem 2 on page 187. Geometrically, it is the plane perpendicular to $\vec u$ and passing through the origin.

30. Let $W$ be a subspace of $\mathbb{R}^n$, and let $W^\perp$ be the set of all vectors orthogonal to $W$. Show that $W^\perp$ is a subspace of $\mathbb{R}^n$ using the following steps.

(a) Take $\vec z \in W^\perp$, and let $\vec u$ represent any element of $W$. Then $\vec z \cdot \vec u = 0$. Take any scalar $c$ and show that $c\vec z$ is orthogonal to $\vec u$. (Since $\vec u$ was an arbitrary element of $W$, this will show that $c\vec z$ is in $W^\perp$.)

(b) Take $\vec z_1$ and $\vec z_2$ in $W^\perp$, and let $\vec u$ be any element of $W$. Show that $\vec z_1 + \vec z_2$ is orthogonal to $\vec u$. What can you conclude about $\vec z_1 + \vec z_2$? Why?

(c) Finish the proof that $W^\perp$ is a subspace of $\mathbb{R}^n$.

a. We have $(c\vec z) \cdot \vec u = c(\vec z \cdot \vec u) = c \cdot 0 = 0$, so $c\vec z$ is orthogonal to $\vec u$. This is true for all $\vec u \in W$, so $c\vec z$ lies in $W^\perp$.

b. Let $\vec z_1$, $\vec z_2$, and $\vec u$ be as in the problem. Since $\vec z_1$ and $\vec z_2$ lie in $W^\perp$, we have $\vec z_1 \cdot \vec u = \vec z_2 \cdot \vec u = 0$. Therefore
$$(\vec z_1 + \vec z_2) \cdot \vec u = \vec z_1 \cdot \vec u + \vec z_2 \cdot \vec u = 0,$$
so $\vec z_1 + \vec z_2$ is orthogonal to $\vec u$. Since this is true for all $\vec u \in W$, it follows that $\vec z_1 + \vec z_2$ is in $W^\perp$.

c. We also have $\vec 0 \cdot \vec u = 0$ for all $\vec u \in W$ (by properties of the inner product). Since this is true for all $\vec u \in W$, it follows that $\vec 0 \in W^\perp$. Also, parts (a) and (b) show that $W^\perp$ is closed under scalar multiplication and addition, respectively. Therefore $W^\perp$ is a subspace of $\mathbb{R}^n$.

Section 6.2 (Page 290)

16. Let $\vec y = \begin{bmatrix} -3 \\ 9 \end{bmatrix}$ and $\vec u = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$. Compute the distance from $\vec y$ to the line through $\vec u$ and the origin.

First of all, project $\vec y$ to the line $L$ through $\vec u$ and the origin:
$$\operatorname{proj}_L \vec y = \frac{\vec y \cdot \vec u}{\vec u \cdot \vec u}\,\vec u = \frac{15}{5}\begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 3 \\ 6 \end{bmatrix}.$$
The distance from $\vec y$ to $L$ is the distance from $\vec y$ to its projection onto $L$, which is
$$\|\vec y - \operatorname{proj}_L \vec y\| = \|(-3, 9) - (3, 6)\| = \|(-6, 3)\| = \sqrt{36 + 9} = \sqrt{45} = 3\sqrt{5}.$$
See also Example 4.

22. Determine whether the set
$$\left\{ \begin{bmatrix} 1/\sqrt{18} \\ 4/\sqrt{18} \\ 1/\sqrt{18} \end{bmatrix},\ \begin{bmatrix} 1/\sqrt{2} \\ 0 \\ -1/\sqrt{2} \end{bmatrix},\ \begin{bmatrix} -2/3 \\ 1/3 \\ -2/3 \end{bmatrix} \right\}$$
is orthonormal. If the set is only orthogonal, normalize the vectors to produce an orthonormal set.

This set is orthonormal.

27. Let $U$ be a square matrix with orthonormal columns. Explain why $U$ is invertible. (Mention the theorems you use.)

Since the columns are orthonormal, they are all nonzero. Since they are nonzero and orthogonal, they are linearly independent (by Theorem 4 on page 284). Therefore $U$ is invertible (by Theorem 8 on page 114, (e) $\Leftrightarrow$ (a)).
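The claims in Problems 22 and 27 are easy to confirm numerically. The following is a minimal sketch, not part of the original solutions, assuming NumPy is available: it stacks the three vectors from Problem 22 as the columns of a matrix $U$ and checks that $U^T U = I$ and that $U$ is invertible with $U^{-1} = U^T$.

```python
import numpy as np

# Columns of U are the three vectors from Problem 22 (Section 6.2).
U = np.column_stack([
    np.array([1.0, 4.0, 1.0]) / np.sqrt(18),
    np.array([1.0, 0.0, -1.0]) / np.sqrt(2),
    np.array([-2.0, 1.0, -2.0]) / 3,
])

I3 = np.eye(3)
print(np.allclose(U.T @ U, I3))            # orthonormal columns: U^T U = I
print(np.allclose(np.linalg.inv(U), U.T))  # U is invertible, with inverse U^T
print(np.allclose(U @ U.T, I3))            # the rows are orthonormal too (see Problem 28 below)
```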
28. Let $U$ be an $n \times n$ orthogonal matrix. Show that the rows of $U$ form an orthonormal basis of $\mathbb{R}^n$.

Since $U$ is an orthogonal matrix, we have $U^{-1} = U^T$. Then $U^T$ is also orthogonal, since $(U^T)^{-1} = (U^{-1})^T = (U^T)^T$. By Theorem 6, the rows of $U$ are therefore orthonormal, since they are the columns of the orthogonal matrix $U^T$. These vectors are linearly independent (by Theorem 4) and there are $n$ of them, so they form a basis for $\mathbb{R}^n$.

Section 6.3 (Page 298)

24. Let $W$ be a subspace of $\mathbb{R}^n$ with an orthogonal basis $\{\vec w_1, \dots, \vec w_p\}$, and let $\{\vec v_1, \dots, \vec v_q\}$ be an orthogonal basis for $W^\perp$.

(a) Explain why $\{\vec w_1, \dots, \vec w_p, \vec v_1, \dots, \vec v_q\}$ is an orthogonal set.

(b) Explain why the set in part (a) spans $\mathbb{R}^n$.

(c) Show that $\dim W + \dim W^\perp = n$.

a. The set $\{\vec w_1, \dots, \vec w_p, \vec v_1, \dots, \vec v_q\}$ is an orthogonal set because any two distinct $\vec w_i$ are orthogonal, any two distinct $\vec v_j$ are orthogonal, and any $\vec w_i$ is orthogonal to any $\vec v_j$ since $\vec v_j$ is in $W^\perp$.

b. The set spans $\mathbb{R}^n$ because any $\vec y \in \mathbb{R}^n$ can be written as $\hat y + \vec z$ with $\hat y \in W$ and $\vec z \in W^\perp$, and these in turn can be written as linear combinations of $\vec w_1, \dots, \vec w_p$ and $\vec v_1, \dots, \vec v_q$, respectively.

c. The set in part (a) is linearly independent because it is an orthogonal set of nonzero vectors (the vectors are nonzero because they are elements of bases). Therefore the set is a basis for $\mathbb{R}^n$. This shows that $\dim W + \dim W^\perp = p + q = \dim \mathbb{R}^n = n$.

Section 6.4 (Page 304)

3. The set
$$\left\{ \begin{bmatrix} 2 \\ -5 \\ 1 \end{bmatrix},\ \begin{bmatrix} 4 \\ -1 \\ 2 \end{bmatrix} \right\}$$
is a basis for a subspace $W$. Use the Gram-Schmidt process to produce an orthogonal basis for $W$.

Let $\vec x_1$ and $\vec x_2$ be the given basis elements. The Gram-Schmidt process is
$$\vec v_1 = \vec x_1 = \begin{bmatrix} 2 \\ -5 \\ 1 \end{bmatrix}, \qquad \vec v_2 = \vec x_2 - \frac{\vec x_2 \cdot \vec v_1}{\vec v_1 \cdot \vec v_1}\,\vec v_1 = \begin{bmatrix} 4 \\ -1 \\ 2 \end{bmatrix} - \frac{15}{30}\begin{bmatrix} 2 \\ -5 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 3/2 \\ 3/2 \end{bmatrix}.$$
These two elements form an orthogonal basis for $W$.

13. Let
$$A = \begin{bmatrix} 5 & 9 \\ 1 & 7 \\ -3 & -5 \\ 1 & 5 \end{bmatrix}, \qquad Q = \begin{bmatrix} 5/6 & -1/6 \\ 1/6 & 5/6 \\ -3/6 & 1/6 \\ 1/6 & 3/6 \end{bmatrix}.$$
The columns of $Q$ were obtained by applying the Gram-Schmidt process to the columns of $A$. Find an upper triangular matrix $R$ such that $A = QR$. Check your work.

One could determine the entries of $R$ by carrying out the Gram-Schmidt process to express the columns of $Q$ as linear combinations of the columns of $A$, then combine those coefficients to give $R^{-1}$, and finally invert to get $R$. However, there is an easier way: as in Example 4,
$$R = Q^T A = \begin{bmatrix} 5/6 & 1/6 & -3/6 & 1/6 \\ -1/6 & 5/6 & 1/6 & 3/6 \end{bmatrix} \begin{bmatrix} 5 & 9 \\ 1 & 7 \\ -3 & -5 \\ 1 & 5 \end{bmatrix} = \begin{bmatrix} 6 & 12 \\ 0 & 6 \end{bmatrix}.$$
To check:
$$QR = \begin{bmatrix} 5/6 & -1/6 \\ 1/6 & 5/6 \\ -3/6 & 1/6 \\ 1/6 & 3/6 \end{bmatrix} \begin{bmatrix} 6 & 12 \\ 0 & 6 \end{bmatrix} = \begin{bmatrix} 5 & -1 \\ 1 & 5 \\ -3 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 5 & 9 \\ 1 & 7 \\ -3 & -5 \\ 1 & 5 \end{bmatrix} = A.$$

20. Suppose $A = QR$, where $R$ is an invertible matrix. Show that $A$ and $Q$ have the same column space. [Hint: Given $\vec y$ in $\operatorname{Col} A$, show that $\vec y = Q\vec x$ for some $\vec x$. Also, given $\vec y$ in $\operatorname{Col} Q$, show that $\vec y = A\vec x$ for some $\vec x$.]

Following the hint, suppose $\vec y \in \operatorname{Col} A$. Then $\vec y = A\vec v$, where the coordinates of $\vec v$ are the weights used to represent $\vec y$ as a linear combination of the columns of $A$. But then $\vec y = QR\vec v$, so $\vec y$ is in the column space of $Q$, since the weights for writing $\vec y$ as a linear combination of the columns of $Q$ are the coordinates of $R\vec v$. Conversely, suppose that $\vec y$ is in $\operatorname{Col} Q$. Then $\vec y = Q\vec u$ for some $\vec u$ whose coordinates are the weights used to express $\vec y$ as a linear combination of the columns of $Q$. But then we have $Q = AR^{-1}$, so $\vec y = AR^{-1}\vec u$, and $\vec y$ lies in $\operatorname{Col} A$ since it can be expressed as a linear combination of the columns of $A$ using weights given by the coordinates of $R^{-1}\vec u$.
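As a further numerical cross-check (a minimal sketch, not part of the original solutions, assuming NumPy is available), the Gram-Schmidt step in Problem 3 and the factorization $R = Q^T A$ in Problem 13 can be reproduced directly:

```python
import numpy as np

# Problem 3 (Section 6.4): one Gram-Schmidt step on the given basis of W.
x1 = np.array([2.0, -5.0, 1.0])
x2 = np.array([4.0, -1.0, 2.0])
v1 = x1
v2 = x2 - (x2 @ v1) / (v1 @ v1) * v1
print(v2)        # [3.  1.5 1.5]
print(v1 @ v2)   # 0.0, so {v1, v2} is orthogonal

# Problem 13 (Section 6.4): compute R = Q^T A, then verify A = QR.
A = np.array([[5.0, 9.0], [1.0, 7.0], [-3.0, -5.0], [1.0, 5.0]])
Q = np.array([[5.0, -1.0], [1.0, 5.0], [-3.0, 1.0], [1.0, 3.0]]) / 6
R = Q.T @ A
print(R)                      # [[ 6. 12.] [ 0.  6.]]
print(np.allclose(Q @ R, A))  # True
```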