
2.5 Inverse Matrices

1. If the square matrix A has an inverse, then both A⁻¹A = I and AA⁻¹ = I.
2. The algorithm to test invertibility is elimination: A must have n (nonzero) pivots.
3. The algebra test for invertibility is the determinant of A: det A must not be zero.
4. The equation that tests for invertibility is Ax = 0: x = 0 must be the only solution.
5. If A and B (same size) are invertible then so is AB: (AB)⁻¹ = B⁻¹A⁻¹.
6. AA⁻¹ = I is n equations for the n columns of A⁻¹. Gauss-Jordan eliminates [A I] to [I A⁻¹].
7. The last page of the book gives 14 equivalent conditions for a square A to be invertible.

Suppose A is a square matrix. We look for an "inverse matrix" A⁻¹ of the same size, such that A⁻¹ times A equals I. Whatever A does, A⁻¹ undoes. Their product is the identity matrix, which does nothing to a vector, so A⁻¹Ax = x. But A⁻¹ might not exist.

What a matrix mostly does is multiply a vector x. Multiplying Ax = b by A⁻¹ gives A⁻¹Ax = A⁻¹b. This is x = A⁻¹b. The product A⁻¹A is like multiplying by a number and then dividing by that number. A number has an inverse if it is not zero; matrices are more complicated and more interesting. The matrix A⁻¹ is called "A inverse."

DEFINITION  The matrix A is invertible if there exists a matrix A⁻¹ that "inverts" A:

    Two-sided inverse    A⁻¹A = I   and   AA⁻¹ = I.                    (1)

Not all matrices have inverses. This is the first question we ask about a square matrix: Is A invertible? We don't mean that we immediately calculate A⁻¹. In most problems we never compute it! Here are six "notes" about A⁻¹.

Note 1  The inverse exists if and only if elimination produces n pivots (row exchanges are allowed). Elimination solves Ax = b without explicitly using the matrix A⁻¹.

Note 2  The matrix A cannot have two different inverses. Suppose BA = I and also AC = I. Then B = C, according to this "proof by parentheses":

    B(AC) = (BA)C   gives   BI = IC   or   B = C.                      (2)
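To make Note 1 concrete, here is a minimal sketch in Python (not from the book; the helper name `count_pivots` is my own). It runs forward elimination with row exchanges in exact rational arithmetic and counts the pivots; a square matrix is invertible exactly when the count reaches n:

```python
from fractions import Fraction

def count_pivots(A):
    """Forward elimination with row exchanges: return the number of pivots.

    By Note 1, a square matrix is invertible exactly when this equals n.
    """
    M = [[Fraction(x) for x in row] for row in A]  # exact arithmetic, no roundoff
    n = len(M)
    pivots = 0
    row = 0
    for col in range(n):
        # find a nonzero entry at or below the current row (row exchange if needed)
        pivot_row = next((r for r in range(row, n) if M[r][col] != 0), None)
        if pivot_row is None:
            continue  # no pivot in this column
        M[row], M[pivot_row] = M[pivot_row], M[row]
        for r in range(row + 1, n):  # eliminate the entries below the pivot
            factor = M[r][col] / M[row][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[row])]
        pivots += 1
        row += 1
    return pivots

assert count_pivots([[1, 2], [1, 2]]) == 1                  # only 1 pivot: singular
assert count_pivots([[2, -1, 0], [-1, 2, -1], [0, -1, 2]]) == 3  # n pivots: invertible
```

Using `Fraction` keeps every elimination step exact, so a pivot is "zero" only when it is truly zero, never because of floating-point roundoff.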
This shows that a left-inverse B (multiplying A from the left) and a right-inverse C (multiplying A from the right to give AC = I) must be the same matrix.

Note 3  If A is invertible, the one and only solution to Ax = b is x = A⁻¹b:

    Multiply Ax = b by A⁻¹.   Then   x = A⁻¹Ax = A⁻¹b.

Note 4 (Important)  Suppose there is a nonzero vector x such that Ax = 0. Then A cannot have an inverse. No matrix can bring 0 back to x.

If A is invertible, then Ax = 0 can only have the zero solution x = A⁻¹0 = 0.

Note 5  A 2 by 2 matrix is invertible if and only if ad - bc is not zero:

    2 by 2 Inverse    [ a  b ]⁻¹  =     1     [  d  -b ]               (3)
                      [ c  d ]        ad - bc [ -c   a ]

This number ad - bc is the determinant of A. A matrix is invertible if its determinant is not zero (Chapter 5). The test for n pivots is usually decided before the determinant appears.

Note 6  A diagonal matrix has an inverse provided no diagonal entries are zero:

    If  A = [ d_1         ]   then   A⁻¹ = [ 1/d_1           ]
            [     ...     ]               [       ...       ]
            [         d_n ]               [           1/d_n ]

Example 1  The 2 by 2 matrix A = [1 2; 1 2] is not invertible. It fails the test in Note 5, because ad - bc equals 2 - 2 = 0. It fails the test in Note 4, because Ax = 0 when x = (2, -1). It fails to have two pivots as required by Note 1. Elimination turns the second row of this matrix A into a zero row.

The Inverse of a Product AB

For two nonzero numbers a and b, the sum a + b might or might not be invertible. The numbers a = 3 and b = -3 have inverses 1/3 and -1/3. Their sum a + b = 0 has no inverse. But the product ab = -9 does have an inverse, which is 1/3 times -1/3. For two matrices A and B, the situation is similar. It is hard to say much about the invertibility of A + B. But the product AB has an inverse, if and only if the two factors A and B are separately invertible (and the same size). The important point is that A⁻¹ and B⁻¹ come in reverse order:

    If A and B are invertible then so is AB. The inverse of a product AB is

        (AB)⁻¹ = B⁻¹A⁻¹.                                               (4)
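The 2 by 2 formula in Note 5 can be written directly in code. This is a minimal sketch in pure Python (the helper name `inverse_2x2` is my own); exact fractions keep the division by ad - bc exact:

```python
from fractions import Fraction

def inverse_2x2(a, b, c, d):
    """Invert [[a, b], [c, d]] by Note 5: divide [[d, -b], [-c, a]] by ad - bc."""
    det = Fraction(a) * d - Fraction(b) * c
    if det == 0:
        raise ValueError("ad - bc = 0: this 2 by 2 matrix has no inverse")
    return [[d / det, -b / det], [-c / det, a / det]]

A_inv = inverse_2x2(2, 7, 1, 4)
assert A_inv == [[4, -7], [-1, 2]]   # exact, since ad - bc = 8 - 7 = 1
```

Calling `inverse_2x2(1, 2, 1, 2)` on the matrix of Example 1 raises the error, since its determinant 2 - 2 is zero.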
To see why the order is reversed, multiply AB times B⁻¹A⁻¹. Inside that is BB⁻¹ = I:

    Inverse of AB    (AB)(B⁻¹A⁻¹) = AIA⁻¹ = AA⁻¹ = I.

We moved parentheses to multiply BB⁻¹ first. Similarly B⁻¹A⁻¹ times AB equals I.

B⁻¹A⁻¹ illustrates a basic rule of mathematics: Inverses come in reverse order. It is also common sense: If you put on socks and then shoes, the first to be taken off are the ____. The same reverse order applies to three or more matrices:

    Reverse order    (ABC)⁻¹ = C⁻¹B⁻¹A⁻¹.                              (5)

Example 2  Inverse of an elimination matrix. If E subtracts 5 times row 1 from row 2, then E⁻¹ adds 5 times row 1 to row 2:

    E subtracts      E = [  1  0  0 ]        E⁻¹ = [ 1  0  0 ]
    E⁻¹ adds             [ -5  1  0 ]   and        [ 5  1  0 ]
                         [  0  0  1 ]              [ 0  0  1 ]

Multiply EE⁻¹ to get the identity matrix I. Also multiply E⁻¹E to get I. We are adding and subtracting the same 5 times row 1. If AC = I then automatically CA = I. For square matrices, an inverse on one side is automatically an inverse on the other side.

Example 3  Suppose F subtracts 4 times row 2 from row 3, and F⁻¹ adds it back:

    F = [ 1  0  0 ]        F⁻¹ = [ 1  0  0 ]
        [ 0  1  0 ]   and        [ 0  1  0 ]
        [ 0 -4  1 ]              [ 0  4  1 ]

Now multiply F by the matrix E in Example 2 to find FE. Also multiply E⁻¹ times F⁻¹ to find (FE)⁻¹. Notice the orders FE and E⁻¹F⁻¹!

    FE = [  1  0  0 ]   is inverted by   E⁻¹F⁻¹ = [ 1  0  0 ]          (6)
         [ -5  1  0 ]                             [ 5  1  0 ]
         [ 20 -4  1 ]                             [ 0  4  1 ]

The result is beautiful and correct. The product FE contains "20" but its inverse doesn't. E subtracts 5 times row 1 from row 2. Then F subtracts 4 times the new row 2 (changed by row 1) from row 3. In this order FE, row 3 feels an effect from row 1.

In the order E⁻¹F⁻¹, that effect does not happen. First F⁻¹ adds 4 times row 2 to row 3. After that, E⁻¹ adds 5 times row 1 to row 2. There is no 20, because row 3 doesn't change again. In this order E⁻¹F⁻¹, row 3 feels no effect from row 1.

This is why the next section chooses A = LU, to go back from the triangular U to A.
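The reverse-order rule (AB)⁻¹ = B⁻¹A⁻¹ and the matrices of Examples 2 and 3 can be checked by direct multiplication. A minimal sketch in pure Python (the helper name `matmul` is my own, not from the book):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I3   = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
E    = [[1, 0, 0], [-5, 1, 0], [0, 0, 1]]   # subtracts 5 times row 1 from row 2
Einv = [[1, 0, 0], [5, 1, 0], [0, 0, 1]]    # adds it back
F    = [[1, 0, 0], [0, 1, 0], [0, -4, 1]]   # subtracts 4 times row 2 from row 3
Finv = [[1, 0, 0], [0, 1, 0], [0, 4, 1]]    # adds it back

FE  = matmul(F, E)            # the extra "20" appears below the diagonal
inv = matmul(Einv, Finv)      # reverse order: no 20 appears

assert FE == [[1, 0, 0], [-5, 1, 0], [20, -4, 1]]
assert matmul(FE, inv) == I3 and matmul(inv, FE) == I3
```

The asserts confirm both points of the text: FE picks up the entry 20, yet its inverse E⁻¹F⁻¹ has only the multipliers 5 and 4 below the diagonal.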
The multipliers fall into place perfectly in the lower triangular L.

In elimination order F follows E. In reverse order E⁻¹ follows F⁻¹. E⁻¹F⁻¹ is quick. The multipliers 5, 4 fall into place below the diagonal of 1's.

Calculating A⁻¹ by Gauss-Jordan Elimination

I hinted that A⁻¹ might not be explicitly needed. The equation Ax = b is solved by x = A⁻¹b. But it is not necessary or efficient to compute A⁻¹ and multiply it times b. Elimination goes directly to x. And elimination is also the way to calculate A⁻¹, as we now show. The Gauss-Jordan idea is to solve AA⁻¹ = I, finding each column of A⁻¹.

A multiplies the first column of A⁻¹ (call that x_1) to give the first column of I (call that e_1). This is our equation Ax_1 = e_1 = (1, 0, 0). There will be two more equations. Each of the columns x_1, x_2, x_3 of A⁻¹ is multiplied by A to produce a column of I:

    3 columns of A⁻¹    AA⁻¹ = A [ x_1  x_2  x_3 ] = [ e_1  e_2  e_3 ] = I.    (7)

To invert a 3 by 3 matrix A, we have to solve three systems of equations: Ax_1 = e_1 and Ax_2 = e_2 = (0, 1, 0) and Ax_3 = e_3 = (0, 0, 1). Gauss-Jordan finds A⁻¹ this way.

The Gauss-Jordan method computes A⁻¹ by solving all n equations together. Usually the "augmented matrix" [A b] has one extra column b. Now we have three right sides e_1, e_2, e_3 (when A is 3 by 3). They are the columns of I, so the augmented matrix is really the block matrix [A I]. I take this chance to invert my favorite matrix K, with 2's on the main diagonal and -1's next to the 2's:

    Start Gauss-Jordan on K

    [ K  e_1  e_2  e_3 ] = [  2  -1   0 |  1   0   0 ]
                           [ -1   2  -1 |  0   1   0 ]
                           [  0  -1   2 |  0   0   1 ]

                         → [  2  -1   0 |  1   0   0 ]
                           [  0 3/2  -1 | 1/2  1   0 ]   (1/2 row 1 + row 2)
                           [  0  -1   2 |  0   0   1 ]

                         → [  2  -1   0 |  1   0   0 ]
                           [  0 3/2  -1 | 1/2  1   0 ]
                           [  0   0 4/3 | 1/3 2/3  1 ]   (2/3 row 2 + row 3)

We are halfway to K⁻¹.
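The hand elimination above can be carried to completion in code. The sketch below (pure Python, not from the book; `gauss_jordan_inverse` is my own name) reduces the augmented matrix [A I] all the way to [I A⁻¹], using exact fractions so the entries 3/2, 4/3, 1/3 appear exactly as in the hand computation:

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    """Reduce the augmented matrix [A I] to [I A^(-1)] and return A^(-1)."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # choose a pivot, exchanging rows if the diagonal entry is zero
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular: fewer than n pivots")
        M[col], M[pivot] = M[pivot], M[col]
        M[col] = [x / M[col][col] for x in M[col]]    # scale the pivot row to get a 1
        for r in range(n):                            # clear the column above and below
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]                     # right half is now A^(-1)

K = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
Kinv = gauss_jordan_inverse(K)
assert Kinv == [[0.75, 0.5, 0.25], [0.5, 1, 0.5], [0.25, 0.5, 0.75]]
```

The final assert shows K⁻¹ = (1/4)[3 2 1; 2 4 2; 1 2 3], exactly what the remaining (upward) half of the Gauss-Jordan steps produces by hand.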