
3.2, 3.3 Inverting Matrices
P. Danziger

Properties of Transpose

Transpose has higher precedence than multiplication and addition, so

$AB^T = A(B^T)$ and $A + B^T = A + (B^T)$, as opposed to the bracketed expressions

$(AB)^T$ and $(A + B)^T$.

Example 1
Let $A = \begin{pmatrix}1&2&1\\2&5&2\end{pmatrix}$ and $B = \begin{pmatrix}1&0&1\\1&1&0\end{pmatrix}$.

Find $AB^T$ and $(AB)^T$.

$$AB^T = \begin{pmatrix}1&2&1\\2&5&2\end{pmatrix}\begin{pmatrix}1&1\\0&1\\1&0\end{pmatrix} = \begin{pmatrix}2&3\\4&7\end{pmatrix}$$

Whereas $(AB)^T$ is undefined, since A and B are both $2 \times 3$ and so the product AB is not defined.
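A quick numerical check of this example (a small NumPy sketch; the arrays are just the matrices A and B above):

```python
import numpy as np

A = np.array([[1, 2, 1],
              [2, 5, 2]])
B = np.array([[1, 0, 1],
              [1, 1, 0]])

# A B^T is defined: (2x3)(3x2) gives a 2x2 matrix
print(A @ B.T)    # [[2 3]
                  #  [4 7]]

# A B is undefined: (2x3)(2x3) has mismatched inner dimensions
try:
    A @ B
except ValueError as err:
    print("AB is undefined:", err)
```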

Theorem 2 (Properties of Transpose) Given matrices A and B such that the operations can be performed:

1. $(A^T)^T = A$

2. $(A + B)^T = A^T + B^T$ and $(A - B)^T = A^T - B^T$

3. $(kA)^T = kA^T$

4. $(AB)^T = B^T A^T$

Theorem 3 (Algebraic Properties of Matrix Arithmetic)

1. $(k + \ell)A = kA + \ell A$ (Distributivity of scalar multiplication I)

2. $k(A + B) = kA + kB$ (Distributivity of scalar multiplication II)

3. $A(B + C) = AB + AC$ (Distributivity of matrix multiplication)

4. $A(BC) = (AB)C$ (Associativity of matrix multiplication)

5. $A + B = B + A$ (Commutativity of matrix addition)

6. $(A + B) + C = A + (B + C)$ (Associativity of matrix addition)

7. $k(AB) = A(kB)$ (Commutativity of scalar multiplication)
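A small NumPy spot-check of a few of these identities (a sketch; the matrix sizes and entries are chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))
B = rng.integers(-5, 5, size=(3, 4))
C = rng.integers(-5, 5, size=(3, 4))
k = 3

# (AB)^T = B^T A^T        (Theorem 2, part 4)
assert np.array_equal((A @ B).T, B.T @ A.T)

# A(B + C) = AB + AC      (Theorem 3, part 3)
assert np.array_equal(A @ (B + C), A @ B + A @ C)

# k(AB) = A(kB)           (Theorem 3, part 7)
assert np.array_equal(k * (A @ B), A @ (k * B))

print("all three identities hold for these matrices")
```

A check like this only confirms the identities for particular matrices, of course; the theorems assert them in general.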

The matrix 0 is the identity of matrix addition. That is, given a matrix A,

A + 0 = 0 + A = A. Further 0A = A0 = 0, where 0 is the appropriately sized 0 matrix.

Note that it is possible to have two non-zero matrices which multiply to 0.

Example 4

$$\begin{pmatrix}1&1\\-1&-1\end{pmatrix}\begin{pmatrix}1&1\\-1&-1\end{pmatrix} = \begin{pmatrix}1-1 & 1-1\\ -1+1 & -1+1\end{pmatrix} = \begin{pmatrix}0&0\\0&0\end{pmatrix}$$
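The same computation in NumPy (a minimal sketch):

```python
import numpy as np

M = np.array([[ 1,  1],
              [-1, -1]])

# M is non-zero, yet M @ M is the 2x2 zero matrix
print(M @ M)    # [[0 0]
                #  [0 0]]
```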

The matrix I is the identity of matrix multiplication. That is, given an $m \times n$ matrix A, $AI_n = I_mA = A$.

Theorem 5 If a square matrix R is in reduced row echelon form then either R = I, or R has a row of zeros.


Theorem 6 (Power Laws) For any square matrix A, $A^rA^s = A^{r+s}$ and $(A^r)^s = A^{rs}$.

Example 7

1. $\left[\begin{pmatrix}0&0&1\\1&0&1\\2&2&0\end{pmatrix}^2\right]^2 = \begin{pmatrix}0&0&1\\1&0&1\\2&2&0\end{pmatrix}^4$

2. Find $A^6$, where $A = \begin{pmatrix}1&0\\1&1\end{pmatrix}$.

$A^6 = A^2A^4 = A^2\left(A^2\right)^2$.

Now $A^2 = \begin{pmatrix}1&0\\2&1\end{pmatrix}$, so

$$A^2\left(A^2\right)^2 = \begin{pmatrix}1&0\\2&1\end{pmatrix}\begin{pmatrix}1&0\\2&1\end{pmatrix}\begin{pmatrix}1&0\\2&1\end{pmatrix} = \begin{pmatrix}1&0\\2&1\end{pmatrix}\begin{pmatrix}1&0\\4&1\end{pmatrix} = \begin{pmatrix}1&0\\6&1\end{pmatrix}$$

So $A^6 = \begin{pmatrix}1&0\\6&1\end{pmatrix}$.
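The same power computation in NumPy (a sketch; `np.linalg.matrix_power` is used only as an independent check):

```python
import numpy as np

A = np.array([[1, 0],
              [1, 1]])

A2 = A @ A                  # A^2
A6 = A2 @ (A2 @ A2)         # A^6 = A^2 (A^2)^2
print(A6)                            # [[1 0]
                                     #  [6 1]]
print(np.linalg.matrix_power(A, 6))  # same result
```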


Inverse of a matrix

Given a square matrix A, the inverse of A, denoted $A^{-1}$, is defined to be the matrix such that $AA^{-1} = A^{-1}A = I$. Note that inverses are only defined for square matrices.

Note: Not all matrices have inverses.

If A has an inverse, it is called invertible.

If A is not invertible it is called singular.


Example 8

1. $A = \begin{pmatrix}1&2\\2&5\end{pmatrix}$, $A^{-1} = \begin{pmatrix}5&-2\\-2&1\end{pmatrix}$.
Check: $\begin{pmatrix}1&2\\2&5\end{pmatrix}\begin{pmatrix}5&-2\\-2&1\end{pmatrix} = \begin{pmatrix}1&0\\0&1\end{pmatrix}$

2. $A = \begin{pmatrix}1&2\\2&4\end{pmatrix}$ has no inverse.

3. $A = \begin{pmatrix}1&1&1\\1&2&1\\1&1&2\end{pmatrix}$, $A^{-1} = \begin{pmatrix}3&-1&-1\\-1&1&0\\-1&0&1\end{pmatrix}$.
Check: $\begin{pmatrix}1&1&1\\1&2&1\\1&1&2\end{pmatrix}\begin{pmatrix}3&-1&-1\\-1&1&0\\-1&0&1\end{pmatrix} = \begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}$

4. $A = \begin{pmatrix}1&2&1\\2&1&3\\3&3&4\end{pmatrix}$ has no inverse.


Inverses of $2 \times 2$ Matrices

Given a $2 \times 2$ matrix
$$A = \begin{pmatrix}a&b\\c&d\end{pmatrix}$$
A is invertible if and only if $ad - bc \neq 0$, and
$$A^{-1} = \frac{1}{ad - bc}\begin{pmatrix}d&-b\\-c&a\end{pmatrix}$$

The quantity $ad - bc$ is called the determinant of the matrix and is written $\det(A)$, or $|A|$.

Example 9

$A = \begin{pmatrix}2&1\\3&3\end{pmatrix}$, so $\det(A) = 2\cdot 3 - 1\cdot 3 = 3$ and
$$A^{-1} = \frac{1}{3}\begin{pmatrix}3&-1\\-3&2\end{pmatrix} = \begin{pmatrix}1&-\tfrac{1}{3}\\-1&\tfrac{2}{3}\end{pmatrix}$$
Check: $\frac{1}{3}\begin{pmatrix}3&-1\\-3&2\end{pmatrix}\begin{pmatrix}2&1\\3&3\end{pmatrix} = \frac{1}{3}\begin{pmatrix}3&0\\0&3\end{pmatrix} = I$
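A direct implementation of the $2 \times 2$ formula (a sketch; the function name `inverse_2x2` is just for illustration), applied to the matrix of Example 9:

```python
import numpy as np

def inverse_2x2(A):
    """Invert a 2x2 matrix via the (1/(ad-bc)) [[d,-b],[-c,a]] formula."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular: ad - bc = 0")
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[2, 1],
              [3, 3]])
A_inv = inverse_2x2(A)
print(A_inv)       # [[ 1.    -0.333...]
                   #  [-1.     0.666...]]
print(A_inv @ A)   # the 2x2 identity (up to floating point rounding)
```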


Algebra of Invertibility

Theorem 10 Given an invertible matrix A:

1. $(A^{-1})^{-1} = A$,

2. $(A^n)^{-1} = (A^{-1})^n = A^{-n}$,

3. $(kA)^{-1} = \frac{1}{k}A^{-1}$ (for $k \neq 0$),

4. $(A^T)^{-1} = (A^{-1})^T$.


Theorem 11 Given two invertible matrices A and B, $(AB)^{-1} = B^{-1}A^{-1}$.

Proof: Let A and B be invertible matrices and let C = AB, so $C^{-1} = (AB)^{-1}$.

Consider C = AB. Multiply both sides on the left by $A^{-1}$: $A^{-1}C = A^{-1}AB = B$. Multiply both sides on the left by $B^{-1}$: $B^{-1}A^{-1}C = B^{-1}B = I$.

So $B^{-1}A^{-1}$ is the matrix you need to multiply C by to get the identity.

Thus, by the definition of inverse

$B^{-1}A^{-1} = C^{-1} = (AB)^{-1}$.
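A numerical check of Theorem 11 on the invertible matrices from Examples 8 and 9 (a sketch using NumPy's built-in inverse):

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 5.]])
B = np.array([[2., 1.],
              [3., 3.]])

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))    # True
```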


A Method for Inverses

Given a square matrix A and a vector $b \in \mathbb{R}^n$, consider the equation

Ax = b

This represents a system of linear equations with coefficient matrix A.

Multiply both sides by $A^{-1}$ on the left to get $A^{-1}Ax = A^{-1}b$.

But $A^{-1}A = I_n$ and $Ix = x$, so we have $x = A^{-1}b$. Note that we have a unique solution. The assumption that A is invertible is equivalent to the assumption that Ax = b has a unique solution.
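In code, solving Ax = b through the inverse looks like this (a sketch using the matrix from Example 12 below; in practice `np.linalg.solve` is preferred because it avoids forming $A^{-1}$ explicitly):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 5., 5.],
              [3., 5., 8.]])
b = np.array([2., 2., 4.])

x = np.linalg.inv(A) @ b        # x = A^{-1} b
print(x)                        # [-4.  0.  2.]
print(np.linalg.solve(A, b))    # same solution, computed without A^{-1}
```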


During the course of Gauss-Jordan elimination on the augmented matrix $(A \mid b)$ we reduce $A \to I$ and $b \to A^{-1}b$, so $(A \mid b) \to (I \mid A^{-1}b)$.

If we instead augment A with I, row reducing will produce (hopefully) I on the left and $A^{-1}$ on the right, so $(A \mid I) \to (I \mid A^{-1})$.

The Method:

1. Augment A with I

2. Use Gauss-Jordan elimination to obtain $(I \mid A^{-1})$.

3. If I does not appear on the left, A is not invertible.

Otherwise, $A^{-1}$ is given on the right.
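A sketch of this method in NumPy (the function name `invert_by_gauss_jordan` and the tolerance are illustrative; partial pivoting is added for numerical stability, which the hand method does not need):

```python
import numpy as np

def invert_by_gauss_jordan(A, tol=1e-12):
    """Row reduce (A | I); return A^{-1} if I appears on the left, else None."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])           # the augmented matrix (A | I)
    for col in range(n):
        pivot = col + int(np.argmax(np.abs(M[col:, col])))
        if abs(M[pivot, col]) < tol:
            return None                     # no pivot in this column: A is not invertible
        M[[col, pivot]] = M[[pivot, col]]   # Ri <-> Rj
        M[col] /= M[col, col]               # Ri -> (1/c) Ri
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]  # Rr -> Rr - c Ri
    return M[:, n:]                         # the right-hand block is A^{-1}

A = [[1, 2, 3],
     [2, 5, 5],
     [3, 5, 8]]
print(invert_by_gauss_jordan(A))   # [[-7.5  0.5  2.5]
                                   #  [ 0.5  0.5 -0.5]
                                   #  [ 2.5 -0.5 -0.5]]
```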


Example 12

1. Find $A^{-1}$, where
$$A = \begin{pmatrix}1&2&3\\2&5&5\\3&5&8\end{pmatrix}$$


Augment with I and row reduce:
$$\left(\begin{array}{ccc|ccc} 1&2&3&1&0&0\\ 2&5&5&0&1&0\\ 3&5&8&0&0&1 \end{array}\right) \quad \begin{array}{l} R2 \to R2 - 2R1 \\ R3 \to R3 - 3R1 \end{array}$$

$$\left(\begin{array}{ccc|ccc} 1&2&3&1&0&0\\ 0&1&-1&-2&1&0\\ 0&-1&-1&-3&0&1 \end{array}\right) \quad R3 \to R3 + R2$$

$$\left(\begin{array}{ccc|ccc} 1&2&3&1&0&0\\ 0&1&-1&-2&1&0\\ 0&0&-2&-5&1&1 \end{array}\right) \quad R3 \to -\tfrac{1}{2}R3$$

$$\left(\begin{array}{ccc|ccc} 1&2&3&1&0&0\\ 0&1&-1&-2&1&0\\ 0&0&1&\tfrac{5}{2}&-\tfrac{1}{2}&-\tfrac{1}{2} \end{array}\right) \quad \begin{array}{l} R1 \to R1 - 3R3 \\ R2 \to R2 + R3 \end{array}$$

$$\left(\begin{array}{ccc|ccc} 1&2&0&-\tfrac{13}{2}&\tfrac{3}{2}&\tfrac{3}{2}\\ 0&1&0&\tfrac{1}{2}&\tfrac{1}{2}&-\tfrac{1}{2}\\ 0&0&1&\tfrac{5}{2}&-\tfrac{1}{2}&-\tfrac{1}{2} \end{array}\right) \quad R1 \to R1 - 2R2$$

$$\left(\begin{array}{ccc|ccc} 1&0&0&-\tfrac{15}{2}&\tfrac{1}{2}&\tfrac{5}{2}\\ 0&1&0&\tfrac{1}{2}&\tfrac{1}{2}&-\tfrac{1}{2}\\ 0&0&1&\tfrac{5}{2}&-\tfrac{1}{2}&-\tfrac{1}{2} \end{array}\right)$$


So
$$A^{-1} = \frac{1}{2}\begin{pmatrix}-15&1&5\\1&1&-1\\5&-1&-1\end{pmatrix}$$
To check the inverse, multiply them together:

$$AA^{-1} = \frac{1}{2}\begin{pmatrix}1&2&3\\2&5&5\\3&5&8\end{pmatrix}\begin{pmatrix}-15&1&5\\1&1&-1\\5&-1&-1\end{pmatrix} = \frac{1}{2}\begin{pmatrix}2&0&0\\0&2&0\\0&0&2\end{pmatrix} = I$$

2. Solve Ax = b in the case where $b = (2, 2, 4)^T$.
$$x = A^{-1}b = \frac{1}{2}\begin{pmatrix}-15&1&5\\1&1&-1\\5&-1&-1\end{pmatrix}\begin{pmatrix}2\\2\\4\end{pmatrix} = \frac{1}{2}\begin{pmatrix}-8\\0\\4\end{pmatrix} = \begin{pmatrix}-4\\0\\2\end{pmatrix}$$


3. Solve Ax = b in the case where $b = (2, 0, 2)^T$.
$$x = A^{-1}b = \frac{1}{2}\begin{pmatrix}-15&1&5\\1&1&-1\\5&-1&-1\end{pmatrix}\begin{pmatrix}2\\0\\2\end{pmatrix} = \frac{1}{2}\begin{pmatrix}-20\\0\\8\end{pmatrix} = \begin{pmatrix}-10\\0\\4\end{pmatrix}$$

4. Give a solution to Ax = b in the general case where $b = (b_1, b_2, b_3)^T$.

$$x = \frac{1}{2}\begin{pmatrix}-15&1&5\\1&1&-1\\5&-1&-1\end{pmatrix}\begin{pmatrix}b_1\\b_2\\b_3\end{pmatrix} = \frac{1}{2}\begin{pmatrix}-15b_1 + b_2 + 5b_3\\ b_1 + b_2 - b_3\\ 5b_1 - b_2 - b_3\end{pmatrix}$$


Elementary Matrices

Definition 13 An elementary matrix is a matrix obtained by performing a single row operation on the identity matrix.

Example 14

1. $\begin{pmatrix}2&0&0\\0&1&0\\0&0&1\end{pmatrix}$ $(R1 \to 2R1)$

2. $\begin{pmatrix}1&0&0\\3&1&0\\0&0&1\end{pmatrix}$ $(R2 \to R2 + 3R1)$

3. $\begin{pmatrix}1&0&0\\0&0&1\\0&1&0\end{pmatrix}$ $(R2 \leftrightarrow R3)$


Theorem 15 If E is an elementary matrix obtained from $I_m$ by performing the row operation R, and A is any $m \times n$ matrix, then EA is the matrix obtained by performing the same row operation R on A.

Example 16

$$A = \begin{pmatrix}1&1&1\\2&1&0\\3&2&1\end{pmatrix}$$

1. $\begin{pmatrix}2&0&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}1&1&1\\2&1&0\\3&2&1\end{pmatrix} = \begin{pmatrix}2&2&2\\2&1&0\\3&2&1\end{pmatrix}$ $\sim$ $R1 \to 2R1$ on A

2. $\begin{pmatrix}1&0&0\\3&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}1&1&1\\2&1&0\\3&2&1\end{pmatrix} = \begin{pmatrix}1&1&1\\5&4&3\\3&2&1\end{pmatrix}$ $\sim$ $R2 \to R2 + 3R1$ on A

3. $\begin{pmatrix}1&0&0\\0&0&1\\0&1&0\end{pmatrix}\begin{pmatrix}1&1&1\\2&1&0\\3&2&1\end{pmatrix} = \begin{pmatrix}1&1&1\\3&2&1\\2&1&0\end{pmatrix}$ $\sim$ $R2 \leftrightarrow R3$ on A
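The same three products in NumPy (a sketch):

```python
import numpy as np

A = np.array([[1, 1, 1],
              [2, 1, 0],
              [3, 2, 1]])

E1 = np.diag([2, 1, 1])        # R1 -> 2 R1
E2 = np.array([[1, 0, 0],
               [3, 1, 0],
               [0, 0, 1]])     # R2 -> R2 + 3 R1
E3 = np.array([[1, 0, 0],
               [0, 0, 1],
               [0, 1, 0]])     # R2 <-> R3

print(E1 @ A)   # row 1 of A doubled
print(E2 @ A)   # 3*(row 1) added to row 2
print(E3 @ A)   # rows 2 and 3 swapped
```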

Inverses of Elementary Matrices

If E is an elementary matrix then E is invertible, and $E^{-1}$ is the elementary matrix corresponding to the row operation that undoes the one that generated E. Specifically:

- If E was generated by an operation of the form $R_i \to cR_i$, then $E^{-1}$ is generated by $R_i \to \frac{1}{c}R_i$.

- If E was generated by an operation of the form $R_i \to R_i + cR_j$, then $E^{-1}$ is generated by $R_i \to R_i - cR_j$.

- If E was generated by an operation of the form $R_i \leftrightarrow R_j$, then $E^{-1}$ is generated by $R_i \leftrightarrow R_j$.

Example 17

1. $E = \begin{pmatrix}2&0&0\\0&1&0\\0&0&1\end{pmatrix}$, $E^{-1} = \begin{pmatrix}\tfrac{1}{2}&0&0\\0&1&0\\0&0&1\end{pmatrix}$

2. $E = \begin{pmatrix}1&0&0\\3&1&0\\0&0&1\end{pmatrix}$, $E^{-1} = \begin{pmatrix}1&0&0\\-3&1&0\\0&0&1\end{pmatrix}$

3. $E = \begin{pmatrix}1&0&0\\0&0&1\\0&1&0\end{pmatrix}$, $E^{-1} = E$
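A quick check that each $E^{-1}$ really undoes its E (a sketch):

```python
import numpy as np

I = np.eye(3)

E1 = np.diag([2., 1., 1.])        # R1 -> 2 R1
E1_inv = np.diag([0.5, 1., 1.])   # R1 -> (1/2) R1

E2, E2_inv = I.copy(), I.copy()   # R2 -> R2 + 3 R1 and R2 -> R2 - 3 R1
E2[1, 0], E2_inv[1, 0] = 3., -3.

E3 = I[[0, 2, 1]]                 # R2 <-> R3 (its own inverse)

for E, E_inv in [(E1, E1_inv), (E2, E2_inv), (E3, E3)]:
    assert np.allclose(E @ E_inv, I)
print("each inverse undoes its elementary matrix")
```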


Elementary Matrices and Solving Equations

Consider the steps of Gauss-Jordan elimination to find the solution to a system of equations Ax = b. This consists of a series of row operations, each of which is equivalent to multiplying on the left by an elementary matrix $E_i$:
$$A \xrightarrow{\text{elementary row ops}} B,$$
where B is the RREF of A.

So $E_kE_{k-1}\cdots E_2E_1A = B$ for some appropriately defined elementary matrices $E_1, \ldots, E_k$. Thus $A = E_1^{-1}E_2^{-1}\cdots E_{k-1}^{-1}E_k^{-1}B$.

Now if B = I (so the RREF of A is I), then $A = E_1^{-1}E_2^{-1}\cdots E_{k-1}^{-1}E_k^{-1}$ and $A^{-1} = E_kE_{k-1}\cdots E_2E_1$.

Theorem 18 A is invertible if and only if it is the product of elementary matrices.
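This factorization can be checked numerically: record the elementary matrix for each row operation while reducing A to I, then multiply them together (a sketch; the helper `elementary_factors` is illustrative and uses partial pivoting, so the recorded operations may differ from a hand reduction):

```python
import numpy as np

def elementary_factors(A, tol=1e-12):
    """Reduce A to I, returning the elementary matrices E1, ..., Ek in the order applied."""
    A = np.asarray(A, dtype=float).copy()
    n = A.shape[0]
    factors = []

    def apply(E):
        nonlocal A
        factors.append(E)
        A = E @ A                               # performing a row op = left-multiplying by E

    for col in range(n):
        pivot = col + int(np.argmax(np.abs(A[col:, col])))
        if abs(A[pivot, col]) < tol:
            raise ValueError("A is singular")
        if pivot != col:                        # Ri <-> Rj
            E = np.eye(n)
            E[[col, pivot]] = E[[pivot, col]]
            apply(E)
        if A[col, col] != 1:                    # Ri -> (1/c) Ri
            E = np.eye(n)
            E[col, col] = 1 / A[col, col]
            apply(E)
        for r in range(n):
            if r != col and A[r, col] != 0:     # Rr -> Rr - c Ri
                E = np.eye(n)
                E[r, col] = -A[r, col]
                apply(E)
    return factors

A = np.array([[1., 2., 3.], [2., 5., 5.], [3., 5., 8.]])
Es = elementary_factors(A)

product = np.eye(3)
for E in Es:
    product = E @ product                       # builds Ek ... E2 E1
print(np.allclose(product, np.linalg.inv(A)))   # True: Ek ... E1 = A^{-1}
```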

Summing Up Theorem

Theorem 19 (Summing Up Theorem Version 1) For any square $n \times n$ matrix A, the following are equivalent statements:

1. A is invertible.

2. The RREF of A is the identity, $I_n$.

3. The equation Ax = b has a unique solution (namely $x = A^{-1}b$).

4. The homogeneous system Ax = 0 has only the trivial solution (x = 0)

5. The REF of A has exactly n pivots.

6. A is the product of elementary matrices.
