
Lecture 6. Inverse of a Matrix

Recall that any linear system can be written as a matrix equation

$$A\vec{x} = \vec{b}.$$

In the one-dimensional case, i.e., when $A$ is $1 \times 1$, the equation $Ax = b$ can be easily solved as

$$x = \frac{b}{A} = \frac{1}{A}\,b = A^{-1}b, \quad \text{provided that } A \neq 0.$$

In this lecture, we intend to extend this simple method to matrix equations.

Definition 7.1. A matrix $A_{n \times n}$ is said to be invertible if there exists a unique matrix $C_{n \times n}$ of the same size such that

$$AC = CA = I_n.$$
The matrix $C$ is called the inverse of $A$, and is denoted by

$$C = A^{-1}.$$

Suppose now $A_{n \times n}$ is invertible and $C = A^{-1}$ is its inverse matrix. Then the matrix equation $A\vec{x} = \vec{b}$ can be easily solved as follows. Left-multiplying the matrix equation by the inverse matrix $C = A^{-1}$, we have
$$CA\vec{x} = C\vec{b}.$$
By definition, $CA = A^{-1}A = I_n$. It leads to
$$I_n\vec{x} = C\vec{b},$$
which is the same as
$$\vec{x} = A^{-1}\vec{b}. \qquad (1)$$
We have just solved the matrix equation and obtained the unique solution (1). The above discussion is summarized as

Theorem 7.1. Let $A$ be an invertible $n \times n$ matrix. Then the matrix equation

$$A\vec{x} = \vec{b}$$
has the unique solution
$$\vec{x} = A^{-1}\vec{b}.$$
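For readers who want to experiment, here is a minimal Python/NumPy sketch of Theorem 7.1 (an added illustration, not part of the lecture's development; the matrix and right-hand side are made-up values):

    import numpy as np

    # A small invertible matrix and a right-hand side (illustrative values only).
    A = np.array([[4.0, 7.0],
                  [2.0, 6.0]])
    b = np.array([1.0, 0.0])

    A_inv = np.linalg.inv(A)      # exists because det(A) = 10 != 0
    x = A_inv @ b                 # x = A^{-1} b, the unique solution of Theorem 7.1

    print(x)                      # [ 0.6 -0.2]
    print(np.allclose(A @ x, b))  # True: A x reproduces b

In numerical practice one would call np.linalg.solve(A, b) rather than forming the inverse explicitly, but the explicit inverse makes the theorem visible.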

Example 7.1. (a) Show that $A$ is invertible and $A^{-1} = C$, where
$$A = \begin{bmatrix} 2 & 5 \\ -3 & -7 \end{bmatrix}, \qquad C = \begin{bmatrix} -7 & -5 \\ 3 & 2 \end{bmatrix}.$$
(b) Solve

$$2x_1 + 5x_2 = 1, \qquad -3x_1 - 7x_2 = 4.$$
(c) Show that the matrix
$$B = \begin{bmatrix} 0 & 2 \\ 0 & 0 \end{bmatrix}$$
is NOT invertible.

Solution: (a) Direct calculations lead to

$$AC = \begin{bmatrix} 2 & 5 \\ -3 & -7 \end{bmatrix}\begin{bmatrix} -7 & -5 \\ 3 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I_2,$$
$$CA = \begin{bmatrix} -7 & -5 \\ 3 & 2 \end{bmatrix}\begin{bmatrix} 2 & 5 \\ -3 & -7 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I_2.$$
By definition, $C = A^{-1}$.

(b) The system has coefficient matrix $A$, i.e., the matrix equation is

$$A\vec{x} = \begin{bmatrix} 1 \\ 4 \end{bmatrix}.$$
According to Theorem 7.1, the solution is
$$\vec{x} = A^{-1}\begin{bmatrix} 1 \\ 4 \end{bmatrix} = \begin{bmatrix} -7 & -5 \\ 3 & 2 \end{bmatrix}\begin{bmatrix} 1 \\ 4 \end{bmatrix} = \begin{bmatrix} -27 \\ 11 \end{bmatrix}.$$
(c) By calculation, we find that

$$B^2 = \begin{bmatrix} 0 & 2 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 0 & 2 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.$$
Now, suppose that $B$ is invertible. Then there exists a matrix $D$ such that

$$BD = I_2.$$

Multiplying the above equation by $B$ from the left, we find

$$B(BD) = BI_2, \qquad B^2 D = B,$$
which implies $0 = B$, since $B^2 = 0$. This is obviously a contradiction. Therefore, the assumption made at the beginning, "suppose that $B$ is invertible," is false, and consequently, $B$ is NOT invertible.

Theorem 7.2. A $2 \times 2$ matrix
$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$
is invertible if and only if
$$\det(A) \stackrel{\text{def}}{=} ad - bc \neq 0.$$
When $ad - bc \neq 0$, the inverse is
$$A^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$

Example 7.2. (a) Find $A^{-1}$ if
$$A = \begin{bmatrix} 2 & 5 \\ -3 & -6 \end{bmatrix}.$$
(b) Solve
$$2x + 5y = 1, \qquad -3x - 6y = 2.$$

Solution: (a) Here $a = 2$, $b = 5$, $c = -3$, $d = -6$, so $ad - bc = -12 - (-15) = 3$. Therefore
$$A^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \frac{1}{3}\begin{bmatrix} -6 & -5 \\ 3 & 2 \end{bmatrix} = \begin{bmatrix} -2 & -5/3 \\ 1 & 2/3 \end{bmatrix}.$$
We may verify the above solution as follows:
$$\begin{bmatrix} 2 & 5 \\ -3 & -6 \end{bmatrix}\begin{bmatrix} -2 & -5/3 \\ 1 & 2/3 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$
(b) To solve the system, we write its matrix equation $A\vec{x} = \vec{b}$, where
$$A = \begin{bmatrix} 2 & 5 \\ -3 & -6 \end{bmatrix}, \qquad \vec{b} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}.$$
The solution is
$$\vec{x} = A^{-1}\vec{b} = \begin{bmatrix} -2 & -5/3 \\ 1 & 2/3 \end{bmatrix}\begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} -16/3 \\ 7/3 \end{bmatrix}.$$
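The $2 \times 2$ formula of Theorem 7.2 is easy to check on a computer. Below is a minimal Python/NumPy sketch (an added illustration; the helper name inverse_2x2 is ours), tested on the matrix of Example 7.2:

    import numpy as np

    def inverse_2x2(M):
        # Inverse of a 2x2 matrix via the formula of Theorem 7.2.
        a, b = M[0]
        c, d = M[1]
        det = a * d - b * c
        if np.isclose(det, 0.0):
            raise ValueError("matrix is not invertible (ad - bc = 0)")
        return (1.0 / det) * np.array([[d, -b],
                                       [-c, a]])

    A = np.array([[2.0, 5.0],
                  [-3.0, -6.0]])                           # matrix of Example 7.2
    print(inverse_2x2(A))                                  # [[-2, -5/3], [1, 2/3]]
    print(np.allclose(inverse_2x2(A), np.linalg.inv(A)))   # True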

• Properties of Invertible Matrices

Theorem 7.3. Suppose that $A$ is an invertible square matrix. Then

1. $A^{-1}$ is also invertible, and $(A^{-1})^{-1} = A$.

2. $A^T$ is also invertible, and $(A^T)^{-1} = (A^{-1})^T$.

3. If $B$ is another invertible matrix of the same size, then $AB$ is also invertible, and

$$(AB)^{-1} = B^{-1}A^{-1}.$$

4. The reduced echelon form of $A$ is the identity matrix $I$ of the same size, i.e., $A \to I$.

Proof. (1) By definition, $A^{-1}$ is invertible if we can find a matrix $C$ such that

$$A^{-1}C = CA^{-1} = I.$$

The above is indeed true if $C = A$.

(2) Take transposes of all three sides of $A^{-1}A = AA^{-1} = I$:

$$\big(A^{-1}A\big)^T = \big(AA^{-1}\big)^T = I^T \;\Longrightarrow\; A^T\big(A^{-1}\big)^T = \big(A^{-1}\big)^T A^T = I \;\Longrightarrow\; A^T C = C A^T = I,$$
where $C = \big(A^{-1}\big)^T$ is the inverse of $A^T$.

(3) Let $C = B^{-1}A^{-1}$. Since

$$C(AB) = (CA)B = \big(\big(B^{-1}A^{-1}\big)A\big)B = \big(B^{-1}\big(A^{-1}A\big)\big)B = \big(B^{-1}I\big)B = B^{-1}B = I,$$
$$(AB)C = A(BC) = A\big(B\big(B^{-1}A^{-1}\big)\big) = A\big(\big(BB^{-1}\big)A^{-1}\big) = A\big(IA^{-1}\big) = AA^{-1} = I,$$
it follows from the definition that this $C$ is the inverse of $AB$, i.e.,

$$(AB)^{-1} = C = B^{-1}A^{-1}.$$

(4) Since $A\vec{x} = \vec{0}$ has the only solution

$$\vec{x} = A^{-1}\vec{0} = \vec{0},$$
there is no non-trivial solution. Consequently, $A$ has no free variable, and all columns are pivot columns. Since $A$ is a square matrix, this means that $r(A) =$ number of columns $=$ number of rows. Therefore, the reduced echelon form of $A$ has a non-zero entry in each row and thus has to be the identity matrix.
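The identities in Theorem 7.3 can be spot-checked numerically. The following Python/NumPy sketch (an added illustration, using random matrices, which are invertible with probability one) verifies properties 1-3:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))
    A_inv = np.linalg.inv(A)

    # Property 1: (A^{-1})^{-1} = A
    print(np.allclose(np.linalg.inv(A_inv), A))
    # Property 2: (A^T)^{-1} = (A^{-1})^T
    print(np.allclose(np.linalg.inv(A.T), A_inv.T))
    # Property 3: (AB)^{-1} = B^{-1} A^{-1}  (note the reversed order)
    print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ A_inv))

All three lines print True.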

We next develop an algorithm to find inverse matrices.

Definition 7.2. A matrix is called an elementary matrix if it is obtained by performing one single elementary row operation on an identity matrix.

Example 7.3. Let us look at $3 \times 3$ elementary matrices for the corresponding row operations.

A type (1) elementary matrix $E_1$ is obtained by performing one type (1) row operation. For instance,
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \xrightarrow{\;R_2 + \lambda R_1 \to R_2\;} \begin{bmatrix} 1 & 0 & 0 \\ \lambda & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = E_1.$$

We call $E_1$ the elementary matrix associated with the row operation

$$R_2 + \lambda R_1 \to R_2.$$
For any matrix $A$, performing the above row operation is the same as left-multiplying by $E_1$. For instance, we see that
$$\begin{bmatrix} 1 & 3 & -1 \\ 2 & 1 & 0 \\ 4 & 0 & 1 \end{bmatrix} \xrightarrow{\;R_2 + (-2)R_1 \to R_2\;} \begin{bmatrix} 1 & 3 & -1 \\ 0 & -5 & 2 \\ 4 & 0 & 1 \end{bmatrix}.$$
On the other hand, left multiplication by $E_1$ with $\lambda = -2$ yields
$$E_1 A = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 3 & -1 \\ 2 & 1 & 0 \\ 4 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 3 & -1 \\ 0 & -5 & 2 \\ 4 & 0 & 1 \end{bmatrix}.$$
A type (2) elementary matrix $E_2$ is obtained by performing one type (2) row operation. For instance,
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \xrightarrow{\;R_2 \leftrightarrow R_1\;} \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} = E_2.$$

For any matrix $A$, performing the above row operation is the same as left-multiplying by $E_2$. For instance, we see that
$$\begin{bmatrix} 1 & 3 & -1 \\ 2 & 1 & 0 \\ 4 & 0 & 1 \end{bmatrix} \xrightarrow{\;R_2 \leftrightarrow R_1\;} \begin{bmatrix} 2 & 1 & 0 \\ 1 & 3 & -1 \\ 4 & 0 & 1 \end{bmatrix}.$$
On the other hand, left multiplication by $E_2$ yields
$$E_2 A = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 3 & -1 \\ 2 & 1 & 0 \\ 4 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 1 & 0 \\ 1 & 3 & -1 \\ 4 & 0 & 1 \end{bmatrix}.$$
A type (3) elementary matrix $E_3$ is obtained by performing one type (3) row operation. For instance,
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \xrightarrow{\;\lambda R_3 \to R_3\;} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \lambda \end{bmatrix} = E_3.$$
Similarly, performing the above row operation is the same as left-multiplying by $E_3$. For instance, we see that
$$\begin{bmatrix} 1 & 3 & -1 \\ 2 & 1 & 0 \\ 4 & 0 & 1 \end{bmatrix} \xrightarrow{\;\frac{1}{4}R_3 \to R_3\;} \begin{bmatrix} 1 & 3 & -1 \\ 2 & 1 & 0 \\ 1 & 0 & 1/4 \end{bmatrix}.$$
On the other hand, left multiplication by $E_3$ with $\lambda = 1/4$ leads to
$$E_3 A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1/4 \end{bmatrix}\begin{bmatrix} 1 & 3 & -1 \\ 2 & 1 & 0 \\ 4 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 3 & -1 \\ 2 & 1 & 0 \\ 1 & 0 & 1/4 \end{bmatrix}.$$
Conclusion: Performing an elementary row operation on an identity matrix produces the elementary matrix corresponding to that elementary row operation. Any elementary row operation is equivalent to left multiplication by the corresponding elementary matrix.
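This conclusion can be checked directly on the computer. The sketch below (an added illustration) builds $E_1$, $E_2$, $E_3$ by applying the three types of row operations to the identity and confirms that left multiplication by them reproduces the row operations on the matrix $A$ of Example 7.3:

    import numpy as np

    A = np.array([[1.0, 3.0, -1.0],
                  [2.0, 1.0,  0.0],
                  [4.0, 0.0,  1.0]])

    # Type (1): R2 + lambda*R1 -> R2 applied to the identity gives E1.
    lam = -2.0
    E1 = np.eye(3)
    E1[1, :] += lam * E1[0, :]

    # The same row operation applied directly to A...
    A_rowop = A.copy()
    A_rowop[1, :] += lam * A[0, :]

    # ...agrees with left multiplication by E1.
    print(np.allclose(E1 @ A, A_rowop))   # True

    # Type (2): interchange R1 and R2; type (3): scale R3 by 1/4.
    E2 = np.eye(3)[[1, 0, 2], :]          # permuted identity
    E3 = np.diag([1.0, 1.0, 0.25])
    print(E2 @ A)                         # rows 1 and 2 of A interchanged
    print(E3 @ A)                         # third row of A scaled by 1/4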

• Justification of the LU Decomposition Algorithm

Recall that in Lecture 2 we introduced the LU decomposition as follows: any matrix $A$ may be decomposed as the product
$$A_{m \times n} = L_{m \times m} U_{m \times n}$$
of a lower triangular matrix $L$ and an upper triangular matrix $U$ by the following algorithm:

1. Reduce $A$ to an echelon form $U$ by a sequence of type (1) row operations (row replacement operations).

2. Place entries in $L$ such that the same sequence of row operations reduces $L$ to the identity matrix.

We can now justify the algorithm. We know that reducing $A$ to $U$ by a sequence of row operations is equivalent to multiplying $A$ from the left by a sequence of elementary matrices, i.e.,
$$E_p \cdots E_2 E_1 A = U.$$
It follows that
$$A = (E_p \cdots E_2 E_1)^{-1} U.$$
This means that
$$L = (E_p \cdots E_2 E_1)^{-1},$$
or equivalently,
$$E_p \cdots E_2 E_1 L = I.$$
This shows that the same sequence of row operations reduces $L$ to $I$.
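As an added illustration of this justification (not part of the original notes), the sketch below encodes three row-replacement operations as elementary matrices for the $3 \times 3$ matrix of Example 7.3, forms $U = E_3E_2E_1A$ and $L = (E_3E_2E_1)^{-1}$, and checks that $A = LU$:

    import numpy as np

    A = np.array([[1.0, 3.0, -1.0],
                  [2.0, 1.0,  0.0],
                  [4.0, 0.0,  1.0]])

    # Row-replacement operations encoded as elementary matrices.
    E1 = np.eye(3); E1[1, 0] = -2.0       # R2 - 2*R1      -> R2
    E2 = np.eye(3); E2[2, 0] = -4.0       # R3 - 4*R1      -> R3
    E3 = np.eye(3); E3[2, 1] = -2.4       # R3 - (12/5)*R2 -> R3

    U = E3 @ E2 @ E1 @ A                  # echelon (upper triangular) form
    L = np.linalg.inv(E3 @ E2 @ E1)       # L = (E3 E2 E1)^{-1}, lower triangular

    print(np.allclose(L @ U, A))                      # True: A = L U
    print(np.allclose(E3 @ E2 @ E1 @ L, np.eye(3)))   # the same operations reduce L to I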

• Inverse Matrix Algorithm

Suppose now $A$ is invertible. Then its reduced echelon form is the identity matrix. In other words, by a series of successive row operations, the matrix $A$ is reduced to $I$. Since each row operation is equivalent to left multiplication by an elementary matrix, this means that there exist elementary matrices $E_1, E_2, \ldots, E_k$ such that

$$(E_k \cdots E_2 E_1)A = I.$$

By definition, this implies

$$A^{-1} = E_k \cdots E_2 E_1 = E_k \cdots E_2 E_1 (I).$$

The very last equation says that the exact same row operations that reduce $A$ to the identity $I$ also, at the same time, transform the identity matrix $I$ into $A^{-1}$. We create an $n \times (2n)$ matrix by adjoining the identity matrix to the right of $A$:

$$[A \;\; I] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} & 1 & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & a_{2n} & 0 & 1 & \cdots & 0 \\ \vdots & & & \vdots & \vdots & & & \vdots \\ a_{n1} & \cdots & \cdots & a_{nn} & 0 & \cdots & \cdots & 1 \end{bmatrix}$$
We then perform row operations until the first $n$ columns form the identity matrix. When the first $n$ columns form the identity matrix, the remaining $n$ columns form the inverse $A^{-1}$:

$$[A \;\; I] \xrightarrow{\;\text{elementary row operations}\;} [\,I \;\; A^{-1}\,].$$
Example 7.4. Find $A^{-1}$ if
$$A = \begin{bmatrix} 1 & 3 & -1 \\ 2 & 7 & 0 \\ -2 & 0 & 15 \end{bmatrix}.$$

Solution:
$$[A \;\; I] = \begin{bmatrix} 1 & 3 & -1 & 1 & 0 & 0 \\ 2 & 7 & 0 & 0 & 1 & 0 \\ -2 & 0 & 15 & 0 & 0 & 1 \end{bmatrix}$$
$$\xrightarrow{\;R_2 - 2R_1 \to R_2,\; R_3 + 2R_1 \to R_3\;} \begin{bmatrix} 1 & 3 & -1 & 1 & 0 & 0 \\ 0 & 1 & 2 & -2 & 1 & 0 \\ 0 & 6 & 13 & 2 & 0 & 1 \end{bmatrix}$$
$$\xrightarrow{\;R_1 - 3R_2 \to R_1,\; R_3 - 6R_2 \to R_3\;} \begin{bmatrix} 1 & 0 & -7 & 7 & -3 & 0 \\ 0 & 1 & 2 & -2 & 1 & 0 \\ 0 & 0 & 1 & 14 & -6 & 1 \end{bmatrix}$$
$$\xrightarrow{\;R_1 + 7R_3 \to R_1,\; R_2 - 2R_3 \to R_2\;} \begin{bmatrix} 1 & 0 & 0 & 105 & -45 & 7 \\ 0 & 1 & 0 & -30 & 13 & -2 \\ 0 & 0 & 1 & 14 & -6 & 1 \end{bmatrix}$$
Hence
$$A^{-1} = \begin{bmatrix} 105 & -45 & 7 \\ -30 & 13 & -2 \\ 14 & -6 & 1 \end{bmatrix}.$$
We may verify our answer directly:

$$\begin{bmatrix} 1 & 3 & -1 \\ 2 & 7 & 0 \\ -2 & 0 & 15 \end{bmatrix}\begin{bmatrix} 105 & -45 & 7 \\ -30 & 13 & -2 \\ 14 & -6 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$

• Extended Inverse Matrix Algorithm

Using the same idea, the inverse matrix algorithm can be generalized to find $A^{-1}B$. Let $A_{n \times n}$ be an $n \times n$ invertible square matrix and $B$ an $n \times m$ matrix. We combine the columns of $A$ and the columns of $B$ to create an $n \times (n + m)$ matrix $[A \;\; B]$, and perform row operations until the first $n$ columns of the augmented matrix $[A \;\; B]$ form the identity matrix of dimension $n$. When the first $n$ columns of $[A \;\; B]$ form the identity matrix, the last $m$ columns form $A^{-1}B$. In short,
$$[A \;\; B] \longrightarrow [\,I \;\; A^{-1}B\,].$$
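Both algorithms can be condensed into one short Gauss-Jordan routine. The following Python/NumPy sketch is an added illustration only: it assumes $A$ is invertible and, for brevity, omits the row interchanges and pivoting a robust implementation would need. Row-reducing $[A \;\; B]$ returns $A^{-1}B$, and taking $B = I$ returns $A^{-1}$ itself.

    import numpy as np

    def gauss_jordan(A, B):
        # Row-reduce [A | B] until the left block is I; return the right block.
        # With B = I this yields A^{-1}; with a general B it yields A^{-1} B.
        M = np.hstack([A.astype(float), B.astype(float)])
        n = A.shape[0]
        for i in range(n):
            M[i] = M[i] / M[i, i]                   # scale the pivot row
            for j in range(n):
                if j != i:
                    M[j] = M[j] - M[j, i] * M[i]    # clear the rest of column i
        return M[:, n:]

    A = np.array([[1, 3, -1],
                  [2, 7,  0],
                  [-2, 0, 15]])
    print(gauss_jordan(A, np.eye(3)))               # reproduces A^{-1} from Example 7.4

    B = np.array([[1.0], [0.0], [2.0]])             # an arbitrary right-hand block
    print(np.allclose(gauss_jordan(A, B), np.linalg.inv(A) @ B))   # True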

• Characterization of Invertible Matrices

We summarize this entire lecture by the following theorem.

Theorem 7.4 (Invertible Matrix Theorem). Let $A_{n \times n}$ be a square matrix. The following statements are equivalent:

1. The matrix A is invertible.

2. $A^T$ is invertible.

3. $AB = AC \Longrightarrow B = C$ (i.e., the cancellation law holds for $A$).

4. $A$ is left-invertible, i.e., there exists $C$ such that $CA = I$.

5. A is right-invertible, i.e., there exists D such that AD = I.

6. The reduced echelon form of $A$ is the identity matrix $I$.

7. $A$ is row-equivalent to the identity matrix $I$.

8. A has n pivot positions.

9. $A\vec{x} = \vec{0}$ has only the trivial solution $\vec{x} = \vec{0}$.

10. The column vectors of $A$ are linearly independent.

11. The column vectors of $A$ span $\mathbb{R}^n$.

12. The row vectors of $A$ span $\mathbb{R}^n$.

13. $A\vec{x} = \vec{b}$ is consistent for all $\vec{b}$ in $\mathbb{R}^n$.

14. The columns of $A$ form a basis for $\mathrm{Col}(A)$.

15. $\mathrm{Col}(A) = \mathbb{R}^n$.

16. $\mathrm{Rank}(A) = n$.

17. $\mathrm{Null}(A) = \{\vec{0}\}$, i.e., $\dim \mathrm{Null}(A) = 0$.

Proof. (Very briefly)
(1) $\Longleftrightarrow$ (2): because $(A^T)^{-1} = (A^{-1})^T$.
(1) $\Longrightarrow$ (3): $AB = AC \Rightarrow A(B - C) = 0 \Rightarrow A^{-1}A(B - C) = 0 \Rightarrow B - C = 0$.
(1) $\Longrightarrow$ (4) and (1) $\Longrightarrow$ (5): obvious.
(6) $\Longrightarrow$ (1): $E_p \cdots E_2 E_1 A = I \Rightarrow E_p \cdots E_2 E_1 = A^{-1}$.
(6) $\Longleftrightarrow$ (7): $A \sim I$.
(7) $\Longleftrightarrow$ (8): $I$ has $n$ pivots.
(8) $\Longleftrightarrow$ (9): no free variable.
(9) $\Longleftrightarrow$ (10): the coefficients of any linear relation among the columns form a solution $\vec{x}$ of $A\vec{x} = \vec{0}$.
(8) $\Longrightarrow$ (11): $A\vec{x} = \vec{b}$ is always consistent, since $A$ has a pivot in every row and the last column of the augmented matrix is therefore non-pivot.
(2) $\Longrightarrow$ (12): the columns of $A^T$ are exactly the rows of $A$.
(11) $\Longleftrightarrow$ (13): re-statements.
(11) $\Longleftrightarrow$ (15): re-statements.
(16) $\Longleftrightarrow$ (17): dimension theorem.
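A few of these equivalent statements are easy to test numerically. The sketch below (an added illustration) checks statements 9 and 16 for the matrix of Example 7.4 with NumPy:

    import numpy as np

    A = np.array([[1.0, 3.0, -1.0],
                  [2.0, 7.0,  0.0],
                  [-2.0, 0.0, 15.0]])
    n = A.shape[0]

    # Statement 16: Rank(A) = n.
    print(np.linalg.matrix_rank(A) == n)   # True

    # Statement 9: A x = 0 has only the trivial solution x = 0.
    x = np.linalg.solve(A, np.zeros(n))
    print(np.allclose(x, 0.0))             # True

    # Statement 1, for comparison: the inverse exists.
    print(np.allclose(np.linalg.inv(A) @ A, np.eye(n)))   # True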

• Homework #6

1. Let
$$A = \begin{bmatrix} 1 & 2 \\ 5 & 12 \end{bmatrix}, \quad \vec{b}_1 = \begin{bmatrix} 1 \\ -3 \end{bmatrix}, \quad \vec{b}_2 = \begin{bmatrix} 1 \\ 5 \end{bmatrix}, \quad \vec{b}_3 = \begin{bmatrix} 2 \\ 5 \end{bmatrix}, \quad \vec{b}_4 = \begin{bmatrix} 3 \\ 1 \end{bmatrix}.$$
(a) Find $A^{-1}$, and use it to solve $A\vec{x}_1 = \vec{b}_1$, $A\vec{x}_2 = \vec{b}_2$, $A\vec{x}_3 = \vec{b}_3$, $A\vec{x}_4 = \vec{b}_4$.

(b) Consider the augmented matrix

$$\big[A \;\; \vec{b}_1 \;\; \vec{b}_2 \;\; \vec{b}_3 \;\; \vec{b}_4\big] = \begin{bmatrix} 1 & 2 & 1 & 1 & 2 & 3 \\ 5 & 12 & -3 & 5 & 5 & 1 \end{bmatrix}.$$
Perform row operations to reduce the augmented matrix to a matrix whose first two columns form the identity. When the first two columns form the identity matrix, find the last four columns.

(c) Are the answers from parts (a) and (b) above the same? Can you explain why?

2. Use the Inverse Matrix Algorithm to find the inverse of

(a) $A = \begin{bmatrix} 1 & 0 & -1 \\ 3 & 1 & -2 \\ -2 & 3 & 2 \end{bmatrix}$

(b) $B = \begin{bmatrix} 1 & 0 & -1 & 1 \\ 1 & 1 & -2 & 2 \\ -2 & 3 & 2 & 0 \\ 0 & 0 & 1 & 2 \end{bmatrix}$

3. Given
$$A^{-1} = \begin{bmatrix} 1 & 3 & 2 \\ 0 & 1 & 3 \\ 1 & 0 & 1 \end{bmatrix}, \qquad B^{-1} = \begin{bmatrix} 2 & 1 & 1 \\ 0 & 2 & 1 \\ 3 & 1 & 0 \end{bmatrix},$$
find $(AB)^{-1}$ and $(BA)^{-1}$.

4. Find $A^{-1}B$ if
$$A = \begin{bmatrix} 1 & 2 & 1 \\ 4 & -7 & 3 \\ 0 & -6 & 4 \end{bmatrix}, \qquad B = \begin{bmatrix} 3 & 4 & 1 \\ 2 & 3 & -6 \\ 6 & 1 & 3 \end{bmatrix}.$$

5. For each statement below, determine whether it is true or false. If it is false, provide a counterexample. If it is true, explain why.

(a) Suppose $A$ is an $n \times n$ matrix, and $B$ and $C$ are $n \times m$ matrices. If $AB = AC$, then $B = C$.

(b) Let $A$ and $B$ be $n \times n$ matrices. Suppose that $AB$ is invertible. Then both $A$ and $B$ are invertible.

(c) $(AB)^{-1} = A^{-1}B^{-1}$.

(d) If a square matrix $A$ has an echelon form in which every column is a pivot column, then $A$ is invertible.

(e) If $A$ is a square matrix and $A\vec{x} = \vec{b}$ has a unique solution for some $\vec{b}$, then $A$ is invertible.

(f) Let $A$ be an $n \times n$ matrix and $B$ an $n \times m$ matrix. If $AB = 0$, then $B = 0$.
