
Lecture Notes 1: Algebra Part C: Pivoting and the Reduced Row Echelon Form

Peter J. Hammond

revised 2018 September 23rd

University of Warwick, EC9A0 Maths for Economists, Peter J. Hammond

Lecture Outline

Pivoting to Reach the Reduced Row Echelon Form
- Example
- The Row Echelon Form
- The Reduced Row Echelon Form

Determinants and Inverses
- Properties of Determinants
- Eight Basic Rules for Determinants
- Verifying the Product Rule
- Cofactor Expansion
- Expansion by Alien Cofactors and the Adjugate Matrix
- Invertible Matrices

Minors, Dimensions, and Rank
- Column Rank, Row Rank, and Determinantal Rank
- Minor Determinants

Solutions to Linear Equations
- General Equation Systems


Three Simultaneous Equations

Consider the following system of three simultaneous equations in three unknowns, which depends upon two “exogenous” constants a and b:

x + y − z = 1
x − y + 2z = 2
x + 2y + az = b

It can be expressed, using an augmented 3 × 4 matrix, as:

[ 1   1  −1 │ 1 ]
[ 1  −1   2 │ 2 ]
[ 1   2   a │ b ]

Perhaps even more useful is the doubly augmented 3 × 7 matrix:

[ 1   1  −1   1   1  0  0 ]
[ 1  −1   2   2   0  1  0 ]
[ 1   2   a   b   0  0  1 ]

whose last 3 columns are those of the 3 × 3 identity matrix I3.

The First Pivot Step

Start with the doubly augmented 3 × 7 matrix:

[ 1   1  −1   1   1  0  0 ]
[ 1  −1   2   2   0  1  0 ]
[ 1   2   a   b   0  0  1 ]

First, we pivot about the element in row 1 and column 1 to eliminate or “zeroize” the other elements of column 1. This elementary row operation requires us to subtract row 1 from both rows 2 and 3. It is equivalent to premultiplying by the lower triangular matrix

       [  1  0  0 ]
E1  =  [ −1  1  0 ]
       [ −1  0  1 ]

Note: this is the result of applying the same row operations to I3. The resulting 3 × 7 matrix is:

[ 1   1    −1       1      1  0  0 ]
[ 0  −2     3       1     −1  1  0 ]
[ 0   1   a + 1   b − 1   −1  0  1 ]
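The first pivot step can be sketched numerically, here in Python with NumPy; the values a = 1 and b = 2 are illustrative placeholders, not part of the lecture.

```python
import numpy as np

# Doubly augmented 3x7 matrix [A | d | I3], with illustrative values a = 1, b = 2
# (a and b are the exogenous constants of the example system).
a, b = 1.0, 2.0
M = np.array([[1.0,  1.0, -1.0, 1.0, 1.0, 0.0, 0.0],
              [1.0, -1.0,  2.0, 2.0, 0.0, 1.0, 0.0],
              [1.0,  2.0,    a,   b, 0.0, 0.0, 1.0]])

# The first pivot: subtract row 1 from rows 2 and 3,
# i.e. premultiply by the lower triangular matrix E1.
E1 = np.array([[ 1.0, 0.0, 0.0],
               [-1.0, 1.0, 0.0],
               [-1.0, 0.0, 1.0]])
M1 = E1 @ M
print(M1[:, 0])   # column 1 is now (1, 0, 0): entries below the pivot are zeroized
```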

The Second Pivot Step

After augmenting again by the identity matrix, we have the 3 × 10 matrix:

[ 1   1    −1       1      1  0  0   1  0  0 ]
[ 0  −2     3       1     −1  1  0   0  1  0 ]
[ 0   1   a + 1   b − 1   −1  0  1   0  0  1 ]

Next, we pivot about the element in row 2 and column 2. Specifically, multiply the second row by −1/2, then subtract the new second row from the third to obtain:

[ 1   1      −1         1         1     0     0   1    0    0 ]
[ 0   1     −3/2      −1/2       1/2  −1/2    0   0  −1/2   0 ]
[ 0   0   a + 5/2   b − 1/2    −3/2   1/2    1   0   1/2   1 ]

Again, the pivot operation is equivalent to premultiplying by the lower triangular matrix

       [ 1    0    0 ]
E2  =  [ 0  −1/2   0 ]
       [ 0   1/2   1 ]

which is the result of applying the same row operations to I3.

Case 1: Dependent Equations

In case 1, when a + 5/2 = 0, the equation system reduces to:

x + y − z = 1
    y − (3/2)z = −1/2
             0 = b − 1/2

In case 1A, when b ≠ 1/2, neither the last equation, nor the system as a whole, has any solution.

In case 1B, when b = 1/2, the third equation is redundant. In this case, the first two equations have a general solution with y = (3/2)z − 1/2 and x = z + 1 − y = z + 1 − (3/2)z + 1/2 = 3/2 − (1/2)z, where z is an arbitrary real number. In particular, there is a one-dimensional set of solutions along the unique straight line in R³ that passes through both:
(i) (3/2, −1/2, 0), when z = 0;
(ii) (1, 1, 1), when z = 1.
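Case 1B can be spot-checked numerically. A minimal sketch (Python with NumPy, assuming a = −5/2 and b = 1/2 as above), sampling three points on the solution line:

```python
import numpy as np

# Case 1B: a = -5/2, b = 1/2, so the third equation is redundant.
# The general solution found above is x = 3/2 - z/2, y = 3z/2 - 1/2, z free.
a, b = -2.5, 0.5
A = np.array([[1.0,  1.0, -1.0],
              [1.0, -1.0,  2.0],
              [1.0,  2.0,    a]])
rhs = np.array([1.0, 2.0, b])

for z in (0.0, 1.0, 7.0):          # three points on the solution line
    x = 1.5 - 0.5 * z
    y = 1.5 * z - 0.5
    assert np.allclose(A @ np.array([x, y, z]), rhs)
print("all sampled points solve the system")
```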

Case 2: Three Independent Equations

[ 1   1      −1         1         1     0     0   1    0    0 ]
[ 0   1     −3/2      −1/2       1/2  −1/2    0   0  −1/2   0 ]
[ 0   0   a + 5/2   b − 1/2    −3/2   1/2    1   0   1/2   1 ]

Case 2 occurs when a + 5/2 ≠ 0, and so the reciprocal c := 1/(a + 5/2) is well defined. Now divide the last row by a + 5/2, or multiply it by c, to obtain (showing the first 7 columns):

[ 1   1    −1         1           1       0      0 ]
[ 0   1   −3/2      −1/2         1/2    −1/2     0 ]
[ 0   0    1      (b − 1/2)c   −(3/2)c  (1/2)c   c ]

The system has been reduced to row echelon form, in which the leading zeroes of each successive row form the steps (in French, échelons, meaning rungs) of a ladder (or échelle in French) which descends steadily as one goes from left to right.

Case 2: Three Independent Equations, Third Pivot

[ 1   1    −1         1           1       0      0 ]
[ 0   1   −3/2      −1/2         1/2    −1/2     0 ]
[ 0   0    1      (b − 1/2)c   −(3/2)c  (1/2)c   c ]

Next, we zeroize the elements in the third column above row 3. To do so, pivot about the element in row 3 and column 3. This requires adding 1 times the last row to the first, and 3/2 times the last row to the second. In effect, one premultiplies by the upper triangular matrix

       [ 1  0   1  ]
E3 :=  [ 0  1  3/2 ]
       [ 0  0   1  ]

The first three columns of the result are

[ 1  1  0 ]
[ 0  1  0 ]
[ 0  0  1 ]

Case 2: Three Independent Equations, Final Pivot

As already remarked, the first three columns of the matrix we are left with are

[ 1  1  0 ]
[ 0  1  0 ]
[ 0  0  1 ]

The final pivoting operation involves subtracting the second row from the first, so the first three columns become the identity matrix

[ 1  0  0 ]
[ 0  1  0 ]
[ 0  0  1 ]

This is a matrix in reduced row echelon form because, given the leading non-zero element of any row (if there is one), all elements above this element are zero.

Final Exercise

Exercise
1. Find the last 4 columns of each 3 × 7 matrix produced by these last two pivoting steps.
2. Check that the fourth column solves the original system of 3 simultaneous equations.
3. Check that the last 3 columns form the inverse of the original coefficient matrix.
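A sketch of how the exercise might be checked numerically, assuming the illustrative values a = 0 and b = 0 (any a with a + 5/2 ≠ 0 would do); the Gauss-Jordan loop here is a standard implementation, not taken from the lecture.

```python
import numpy as np

# Gauss-Jordan pivoting on the doubly augmented matrix [A | d | I3],
# with hypothetical values a = 0, b = 0.
a, b = 0.0, 0.0
A = np.array([[1.0,  1.0, -1.0],
              [1.0, -1.0,  2.0],
              [1.0,  2.0,    a]])
d = np.array([1.0, 2.0, b])
M = np.hstack([A, d[:, None], np.eye(3)])     # the 3 x 7 matrix

for k in range(3):                            # pivot about each diagonal element
    M[k] = M[k] / M[k, k]                     # scale the pivot row
    for i in range(3):
        if i != k:
            M[i] -= M[i, k] * M[k]            # zeroize the rest of column k

x, A_inv = M[:, 3], M[:, 4:]
assert np.allclose(A @ x, d)                  # column 4 solves the system
assert np.allclose(A @ A_inv, np.eye(3))      # last 3 columns give the inverse
print(x)
```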


Definition of Row Echelon Form

Definition
An m × n matrix A is in row echelon form just in case:

1. The first r ≤ m rows i ∈ Nr each have a non-zero leading entry a_{i,ℓ_i} in column ℓ_i such that a_{ij} = 0 for all j < ℓ_i.

2. Each successive leading entry is in a column to the right of the leading entry in the previous row. That is, given the leading element a_{i,ℓ_i} ≠ 0 of row i, one has a_{hj} = 0 for all h > i and all j ≤ ℓ_i.

3. If r < m, then any row i ∈ {r + 1, ..., m} = Nm \ Nr has no leading entry, because all its elements are zero. Every row without a leading entry must be below any row with a leading entry.

Examples

Assuming that α, β, γ ∈ R \ {0}, here are three examples of matrices in row echelon form:

     [ α  2  0  0 ]        [ α  2  0  0 ]        [ α  0 ]
A =  [ 0  0  β  0 ] ;  B = [ 0  0  β  0 ] ;  C = [ 0  β ]
     [ 0  0  0  γ ]        [ 0  0  0  γ ]        [ 0  0 ]
                           [ 0  0  0  0 ]

Here are three examples of matrices that are not in row echelon form:

     [ 0  1 ]        [ 1  2 ]        [ 1  0 ]
D =  [ 1  0 ] ;  E = [ 0  1 ] ;  F = [ 0  0 ]
     [ 0  0 ]        [ 0  1 ]        [ 0  1 ]
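The definition can be turned into a small checker; this is an illustrative sketch (Python with NumPy), with `is_row_echelon` a name invented here.

```python
import numpy as np

# Checker for the definition above: each row's leading entry must lie strictly
# to the right of the previous row's, and zero rows must come last.
def is_row_echelon(A, tol=1e-12):
    last_lead = -1
    seen_zero_row = False
    for row in np.asarray(A, dtype=float):
        nz = np.flatnonzero(np.abs(row) > tol)   # indices of non-zero entries
        if nz.size == 0:
            seen_zero_row = True
            continue
        if seen_zero_row or nz[0] <= last_lead:  # zero row above, or no rightward step
            return False
        last_lead = nz[0]
    return True

alpha, beta, gamma = 2.0, 3.0, 5.0      # any non-zero values
A = [[alpha, 2, 0, 0], [0, 0, beta, 0], [0, 0, 0, gamma]]
D = [[0, 1], [1, 0], [0, 0]]            # leading entries move left: not REF
print(is_row_echelon(A), is_row_echelon(D))
```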

Pivoting to Reach a Generalized Row Echelon Form

Any m × n matrix A can be transformed into row echelon form by applying a series of determinant-preserving row operations involving non-zero pivot elements.

1. Look for the first or leading non-zero column ℓ1 in the matrix.
2. Find within column ℓ1 an element a_{i1,ℓ1} ≠ 0 with a large absolute value |a_{i1,ℓ1}|; this will be the first pivot.
3. Interchange rows 1 and i1, moving the pivot to the top row.
4. To preserve the determinant, change the sign of either row 1 or row i1 (not both) by multiplying that entire row by −1.
5. Subtract a_{i,ℓ1}/a_{1,ℓ1} times the new row 1 from each new row i > 1. This first pivot operation will eliminate all the elements of the pivot column ℓ1 that lie below the new row 1.

The Intermediate Matrices and Pivot Steps

After k − 1 pivoting operations have been completed, and column ℓ_{k−1} (with ℓ_{k−1} ≥ k − 1) was the last to be used:

1. The first or “top” k − 1 rows of the m × n matrix form a (k − 1) × n submatrix in row echelon form.
2. The last or “bottom” m − k + 1 rows of the m × n matrix form an (m − k + 1) × n submatrix whose first ℓ_{k−1} columns are all zero.
3. Find the first column ℓ_k that has at least one non-zero element below row k − 1.
4. Choose as the kth pivot an element a_{i_k,ℓ_k} with i_k ≥ k which has a large absolute value |a_{i_k,ℓ_k}|.
5. Interchange rows k and i_k, moving the pivot up to row k, and change the sign of just one of these rows.
6. Subtract a_{i,ℓ_k}/a_{k,ℓ_k} times the new row k from each new row i > k. This kth pivot operation will eliminate all the elements of the pivot column ℓ_k that lie below the new row k.

Ending the Pivoting Process

1. Continue pivoting about successive pivot elements a_{i_k,ℓ_k} ≠ 0, moving row i_k ≥ k up to row k at each stage k, while leaving all rows above k unchanged.
2. Stop after r steps when either r = m, or else all elements in the remaining m − r rows are zero, so no further pivoting is possible.
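The whole pivoting procedure, including the partial-pivoting choice of a large |a_{i_k,ℓ_k}| and the sign change of step 4, can be sketched as follows; the tolerance 1e-12 is an arbitrary numerical choice.

```python
import numpy as np

# Row echelon form by pivoting, choosing the largest absolute value in each
# column as pivot, with the sign change of step 4 so that the determinant
# of a square input is preserved throughout.
def row_echelon(A):
    M = np.array(A, dtype=float)
    m, n = M.shape
    k = 0
    for col in range(n):
        if k == m:
            break
        i = k + np.argmax(np.abs(M[k:, col]))      # candidate pivot row
        if abs(M[i, col]) < 1e-12:
            continue                                # no pivot in this column
        if i != k:
            M[[k, i]] = M[[i, k]]                   # interchange rows k and i
            M[k] *= -1.0                            # sign change keeps the determinant
        M[k+1:] -= np.outer(M[k+1:, col] / M[k, col], M[k])
        k += 1
    return M

A = np.array([[0.0, 2.0, 1.0], [1.0, 1.0, 1.0], [2.0, 0.0, 3.0]])
R = row_echelon(A)
assert np.isclose(np.prod(np.diag(R)), np.linalg.det(A))
```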


Definition of Reduced Row Echelon Form

Definition
An m × n matrix A is in reduced row echelon form just in case it is in row echelon form, and the leading entry a_{i,ℓ_i} ≠ 0 in each row i is the only non-zero entry in its column. That is, a_{h,ℓ_i} = 0 for all h ≠ i.

When m = n, it is obvious that any diagonal matrix in which all diagonal elements are non-zero is in reduced row echelon form.

Assuming that α, β, γ ∈ R \ {0}, here are three more examples of matrices in reduced row echelon form:

     [ α  2  0  0 ]        [ α  2  0  0 ]        [ α  0 ]
A =  [ 0  0  β  0 ] ;  B = [ 0  0  β  0 ] ;  C = [ 0  β ]
     [ 0  0  0  γ ]        [ 0  0  0  γ ]        [ 0  0 ]
                           [ 0  0  0  0 ]

Reaching a Reduced Row Echelon Form

Consider an m × n matrix C that is already in row echelon form. Suppose it has r leading non-zero elements c_{k,ℓ_k} in rows k = 1, 2, ..., r, where ℓ_k is increasing in k.

Starting at the pivot element c_{r,ℓ_r} ≠ 0 in the last pivot row r, zeroize all the elements in column ℓ_r above this element by subtracting from each row k above r the multiple c_{k,ℓ_r}/c_{r,ℓ_r} of row r of the matrix C, while leaving row r itself unchanged.

Repeat this pivoting operation for each of the pivot elements c_{k,ℓ_k}, working from c_{r−1,ℓ_{r−1}} all the way back and up to c_{1,ℓ_1}.
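The backward pass just described can be sketched as follows (Python with NumPy); the input matrix C is a made-up example already in row echelon form, and pivot rows are left unscaled, as in the text.

```python
import numpy as np

# Starting from a matrix C already in row echelon form, zeroize the entries
# above each pivot, working from the last pivot row back up to the first.
def reduce_from_echelon(C, tol=1e-12):
    M = np.array(C, dtype=float)
    pivots = []                               # (row, column) of each pivot
    for k, row in enumerate(M):
        nz = np.flatnonzero(np.abs(row) > tol)
        if nz.size:
            pivots.append((k, nz[0]))
    for k, lk in reversed(pivots):            # from row r back up to row 1
        M[:k] -= np.outer(M[:k, lk] / M[k, lk], M[k])
    return M

C = np.array([[1.0, 1.0, -1.0, 1.0],
              [0.0, 1.0, -1.5, -0.5],
              [0.0, 0.0,  1.0, 0.2]])
print(reduce_from_echelon(C))
```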

Permuting the Columns

We have shown how to transform a general m × n matrix A into a matrix C = RA in reduced row echelon form by applying the row operation R that equals the product of several determinant-preserving row operations. Denote the leading non-zero elements in the first r rows of C by c_{k,ℓ_k}, where ℓ_k is increasing in k for k = 1, 2, ..., r.

Finally, postmultiply C by an n × n permutation matrix P that moves column ℓ_k to column k, for k = 1, 2, ..., r. It also partitions the matrix columns into two sets:

1. first, a complete set of r columns containing all the r pivots, with one pivot in each row and one in each column;
2. then second, the remaining n − r columns without any pivots.

So the resulting matrix CP = RAP has a diagonal sub-matrix D_{r×r} in its top left-hand corner; the diagonal elements of D_{r×r} are the pivots, all of which must be non-zero, by construction.

A Partially Diagonalized Matrix

Our constructions have led to the equality

RAP = [ D_{r×r}        B_{r×(n−r)}      ]
      [ 0_{(m−r)×r}    0_{(m−r)×(n−r)}  ]

The right-hand side is a partitioned m × n matrix, whose four sub-matrices have the indicated dimensions. We may call it a partially diagonalized matrix.

Because the diagonal of D_{r×r} = diag(d1, d2, ..., dr) consists of all the non-zero pivots, the inverse D_{r×r}^{−1} = diag(1/d1, 1/d2, ..., 1/dr) exists.

Provided that the non-negative integer r ≤ m is unique, independent of what pivots are chosen, we may want to call r the pivot rank of the matrix A.

Decomposing an m × n Matrix

Premultiplying the equality

RAP = [ D_{r×r}        B_{r×(n−r)}      ]
      [ 0_{(m−r)×r}    0_{(m−r)×(n−r)}  ]

by the inverse matrix R^{−1}, which certainly exists, gives

AP = R^{−1} [ D_{r×r}        B_{r×(n−r)}      ]
            [ 0_{(m−r)×r}    0_{(m−r)×(n−r)}  ]

Postmultiplying the result by P^{−1} leads to

A = R^{−1} [ D_{r×r}        B_{r×(n−r)}      ] P^{−1}
           [ 0_{(m−r)×r}    0_{(m−r)×(n−r)}  ]

This is a decomposition of A into the product of three matrices that are much easier to manipulate.

Three Special Cases

So far we have been writing out full partitioned matrices, as is required when the number of pivots satisfies r < min{m, n}. There are three other special cases when r = min{m, n}. In these three cases, the partially diagonalized m × n matrix

RAP = [ D_{r×r}        B_{r×(n−r)}      ]
      [ 0_{(m−r)×r}    0_{(m−r)×(n−r)}  ]

reduces to:

1. D_{n×n} in case r = m = n, so m − r = n − r = 0;

2. [ D_{m×m}  B_{m×(n−m)} ] in case r = m < n, so m − r = 0;

3. [ D_{n×n}     ]
   [ 0_{(m−n)×n} ] in case r = n < m, so n − r = 0.


Finding the Determinant of a Square Matrix

In the case of an n × n matrix A, our earlier equality becomes

RAP = [ D_{r×r}        B_{r×(n−r)}      ]
      [ 0_{(n−r)×r}    0_{(n−r)×(n−r)}  ]

The determinant of this upper triangular matrix is clearly 0 except in the special case when r = n.

When r = n, there is a complete set of n pivots. There are no missing columns, so no need to permute the columns by applying the permutation matrix P. Instead, we have the complete diagonalization RA = D. Because R is determinant preserving, one has |RA| = |A| = |D| = ∏_{i=1}^n d_i.

So, to calculate the determinant when r = n, it is enough:
1. to pivot to reduce A to row echelon form or diagonal form;
2. then multiply the diagonal elements.

Inverting a Square Matrix: Necessary Condition

Suppose that A is n × n, and consider the equation system AX = In. The corresponding system with a partially diagonalized matrix is

RAX = RAP P^{−1} X = [ D_{r×r}        B_{r×(n−r)}      ] P^{−1} X = R
                     [ 0_{(n−r)×r}    0_{(n−r)×(n−r)}  ]

This has a solution only if the last n − r rows of R are all zero. But |R| = 1, so this implies that r = n. That is, a necessary condition for A to be invertible is that r = n, implying that A has a full set of n pivots.

Inverting a Square Matrix: Sufficient Condition

Conversely, if r = n, then there is a complete set of pivots, so one can take P = I. Then the partially diagonalized system becomes fully diagonalized, so AX = I is equivalent to RAX = DX = R. The unique solution is X = A^{−1} = D^{−1}R.

In this case pivoting does virtually all the work of matrix inversion, because all that is left to do is:
1. invert the resulting diagonal matrix D;
2. postmultiply D^{−1} by the matrix R, which represents the product of all the pivoting operations.
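A sketch of X = A^{−1} = D^{−1}R, accumulating every pivoting operation into R; it assumes no zero pivots arise (no row interchanges needed), which holds for the sample matrix.

```python
import numpy as np

# Accumulate every determinant-preserving pivot operation into R,
# so that R A = D is diagonal; then A^{-1} = D^{-1} R.
def invert_by_pivoting(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    R = np.eye(n)                       # product of all row operations so far
    M = A.copy()
    for k in range(n):
        for i in range(n):
            if i != k:
                E = np.eye(n)
                E[i, k] = -M[i, k] / M[k, k]   # add a multiple of row k to row i
                M, R = E @ M, E @ R
    D_inv = np.diag(1.0 / np.diag(M))   # invert the resulting diagonal matrix
    return D_inv @ R

A = np.array([[2.0, 1.0], [1.0, 3.0]])
X = invert_by_pivoting(A)
assert np.allclose(A @ X, np.eye(2))
```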


Eight Basic Rules (Rules A–H of EMEA, Section 16.4)

Let |A| denote the determinant of any n × n matrix A.

1. |A| = 0 if all the elements in a row (or column) of A are 0.
2. |Aᵀ| = |A|, where Aᵀ is the transpose of A.
3. If all the elements in a single row (or column) of A are multiplied by a scalar α, so is its determinant.
4. If two rows (or two columns) of A are interchanged, the determinant changes sign, but not its absolute value.
5. If two of the rows (or columns) of A are proportional, then |A| = 0.
6. The value of the determinant of A is unchanged if any multiple of one row (or one column) is added to a different row (or column) of A.
7. The determinant |AB| of the product of two n × n matrices equals the product |A| · |B| of their determinants.
8. If α is any scalar, then |αA| = αⁿ|A|.
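Several of the rules can be spot-checked numerically on a random matrix; this is a sanity check, not a proof.

```python
import numpy as np

# Spot-check Rules 2, 4, 6, 7 and 8 on a random 3 x 3 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
det = np.linalg.det

assert np.isclose(det(A.T), det(A))                  # Rule 2: transposition
assert np.isclose(det(A[[1, 0, 2]]), -det(A))        # Rule 4: interchange two rows
C = A.copy()
C[2] += 5.0 * C[0]                                   # Rule 6: add a multiple of row 1 to row 3
assert np.isclose(det(C), det(A))
assert np.isclose(det(A @ B), det(A) * det(B))       # Rule 7: product rule
assert np.isclose(det(2.0 * A), 2.0 ** 3 * det(A))   # Rule 8: |2A| = 2^3 |A| when n = 3
print("all sampled rules hold")
```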

Verifying the Transpose Rule 2

The transpose rule 2 is very useful: it implies that for any statement S about how |A| depends on the rows of A, there is an equivalent “transpose” statement Sᵀ about how |A| depends on the columns of A.

Exercise
Verify Rule 2 directly for 2 × 2 and then for 3 × 3 matrices.

Proof of Rule 2
The expansion formula implies that

|A| = Σ_{π∈Π} sgn(π) ∏_{i=1}^n a_{i,π(i)} = Σ_{π∈Π} sgn(π) ∏_{j=1}^n a_{π^{−1}(j),j}

But we proved earlier that sgn(π^{−1}) = sgn(π). Also a_{π^{−1}(j),j} = aᵀ_{j,π^{−1}(j)} by definition of transpose. Hence, because π ↔ π^{−1} is a bijection on the set Π, the expansion formula with π replaced by π^{−1} implies that

|A| = Σ_{π^{−1}∈Π} sgn(π^{−1}) ∏_{j=1}^n aᵀ_{j,π^{−1}(j)} = |Aᵀ|
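The expansion formula used in this proof can be implemented directly, summing over all n! permutations; `det_by_expansion` is a name invented here, and the approach is only feasible for small n.

```python
import numpy as np
from itertools import permutations

# |A| = sum over all permutations pi of sgn(pi) * prod_i a_{i, pi(i)}.
def det_by_expansion(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    total = 0.0
    for pi in permutations(range(n)):
        # sign of pi via counting inversions
        inv = sum(pi[i] > pi[j] for i in range(n) for j in range(i + 1, n))
        total += (-1) ** inv * np.prod(A[np.arange(n), pi])
    return total

A = np.array([[1.0, 2.0, 3.0], [0.0, 4.0, 5.0], [1.0, 0.0, 6.0]])
assert np.isclose(det_by_expansion(A), np.linalg.det(A))
assert np.isclose(det_by_expansion(A.T), det_by_expansion(A))   # Rule 2
```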

Verifying the Alternation Rule 4

Recall the notation τ_{r,s} for the transposition of r, s ∈ Nn. Let A_{r↔s} denote the matrix that results from applying τ_{r,s} to the rows of the matrix A, i.e., interchanging rows r and s.

Theorem
Given any n × n matrix A and any transposition τ_{r,s}, one has det A_{r↔s} = − det A.

Proof.
Write τ for τ_{r,s}. Then, because π ↔ π∘τ^{−1} is a bijection on Πn and sgn(π∘τ^{−1}) = − sgn(π) for all π ∈ Πn, we have

det A_{r↔s} = Σ_{π∈Πn} sgn(π) ∏_{i=1}^n a_{τ(i),π(i)}
            = Σ_{π∈Πn} sgn(π) ∏_{i=1}^n a_{i,(π∘τ^{−1})(i)}
            = − Σ_{π∈Πn} sgn(π∘τ^{−1}) ∏_{i=1}^n a_{i,(π∘τ^{−1})(i)}
            = − Σ_{π∈Πn} sgn(π) ∏_{i=1}^n a_{i,π(i)} = − det A

The Duplication Rule, and Rule 8

The following duplication rule is a special case of Rule 5.

Proposition
If two different rows r and s of A are equal, then |A| = 0.

Proof.
Suppose that rows r and s of A are equal. Then A_{r↔s} = A, and so |A_{r↔s}| = |A|. Yet the alternation Rule 4 implies that |A_{r↔s}| = −|A|. Hence |A| = −|A|, implying that |A| = 0.

Rule 8: |αA| = αⁿ|A| for any α ∈ R.

Proof.
The expansion formula implies that

|αA| = Σ_{π∈Π} sgn(π) ∏_{i=1}^n (α a_{i,π(i)}) = αⁿ Σ_{π∈Π} sgn(π) ∏_{i=1}^n a_{i,π(i)} = αⁿ|A|

First Implications of Multilinearity: Rules 1 and 3

Recall the notation A/bᵀ_r for the matrix that results after the rth row aᵀ_r of A has been replaced by bᵀ_r. With this notation, the matrix A/αaᵀ_r is the result of replacing the rth row aᵀ_r of A by αaᵀ_r. That is, it is the result of multiplying the rth row aᵀ_r of A by the scalar α.

Rule 3: If all the elements in a single row of A are multiplied by a scalar α, so is its determinant.

Proof.
By multilinearity one has |A/αaᵀ_r| = α|A/aᵀ_r| = α|A|.

Rule 1: |A| = 0 if all the elements in a row of A are 0.

Proof.
This follows from putting α = 0 in Rule 3.

More Implications of Multilinearity: Rules 5 and 6

Rule 5: If two rows of A are proportional, then |A| = 0.

Proof.
Suppose that aᵀ_r = αaᵀ_s where r ≠ s. Then |A| = |A/(αaᵀ_s)_r| = α|A/(aᵀ_s)_r| = 0 by the duplication rule.

Rule 6: |A| is unchanged if any multiple of one row is added to a different row of A.

Proof.
For the matrix A/(aᵀ_r + αaᵀ_s)_r, where α times row s of A has been added to row r, row multilinearity implies that

|A/(aᵀ_r + αaᵀ_s)_r| = |A/(aᵀ_r)_r| + α|A/(aᵀ_s)_r|

But A/(aᵀ_r)_r = A, and A/(aᵀ_s)_r has a copy of row s in row r. By the duplication rule, it follows that

|A/(aᵀ_r + αaᵀ_s)_r| = |A/(aᵀ_r)_r| + α|A/(aᵀ_s)_r| = |A| + 0 = |A|


Verification of the Product Rule 7: Diagonal Case

Recall that Rule 7 is the product rule stating that |AB| = |A| · |B|. First we consider the special case when A is the n × n diagonal matrix D = diag(d1, d2, ..., dn).

Proposition
For any n × n matrix B, one has |DB| = |D| · |B| = (∏_{k=1}^n d_k)|B|.

Proof.
First, note that (DB)_{ij} = Σ_{k=1}^n d_i δ_{ik} b_{kj} = d_i b_{ij} for all (i, j) ∈ Nn × Nn. Then applying the expansion formula thrice implies that

|D| = Σ_{π∈Π} sgn(π) ∏_{i=1}^n d_i δ_{i,π(i)} = ∏_{i=1}^n d_i δ_{ii} = ∏_{i=1}^n d_i

because the only non-zero term comes when π = ι, and also

|DB| = Σ_{π∈Π} sgn(π) ∏_{i=1}^n d_i b_{i,π(i)} = (∏_{k=1}^n d_k) Σ_{π∈Π} sgn(π) ∏_{i=1}^n b_{i,π(i)} = |D| · |B|

Determinant Preserving Row Operations: Definition

Definition
Let M_{m×n} denote the family of all m × n matrices. Then any m × m matrix R induces, for every n ∈ N, a row operation M_{m×n} ∋ A ↦ RA ∈ M_{m×n}. The row operation represented by the m × m matrix R is determinant preserving just in case, given any m × m matrix X, one has |RX| = |X|.

Lemma
If the m × m matrix R is determinant preserving, then |R| = 1.

Proof.
Putting X = I in the definition gives |R| = |RI| = |I| = 1.

Determinant Preserving Row Operations: Examples

Let X denote an arbitrary n × n matrix. Recall the notation E_{r+αq} and E_{r+αq}X for the matrices which result from applying to I and X respectively the elementary row operation of adding α times row q to row r.

Recall too that T_{r↔s} denotes the elementary row operation of interchanging rows r and s; we know that |T_{r↔s}| = −1. To preserve determinants, define T*_{r→s→r−} as T_{r↔s} followed by changing the sign of the new row r (but not row s).

Evidently, for any m × m matrix X we have |E_{r+αq}X| = |T*_{r→s→r−}X| = |X|. So the row operations E_{r+αq} and T*_{r→s→r−} are all determinant preserving. This implies that their inverses all exist, with (E_{r+αq})^{−1} = E_{r−αq} and (T*_{r→s→r−})^{−1} = T*_{s→r→s−}.
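The two operations can be written out as explicit matrices; `E_add` and `T_star` are names invented here for E_{r+αq} and T*_{r→s→r−}.

```python
import numpy as np

# E_{r+aq}: add alpha times row q to row r (requires r != q).
def E_add(n, r, q, alpha):
    E = np.eye(n)
    E[r, q] = alpha
    return E

# T*_{r->s->r-}: interchange rows r and s, then flip the sign of the new row r.
def T_star(n, r, s):
    T = np.eye(n)
    T[[r, s]] = T[[s, r]]      # interchange rows r and s
    T[r] *= -1.0               # sign change keeps the determinant at +1
    return T

n = 4
assert np.isclose(np.linalg.det(E_add(n, 2, 0, 3.5)), 1.0)
assert np.isclose(np.linalg.det(T_star(n, 0, 3)), 1.0)
# inverse of E_{r+aq} is E_{r-aq}
assert np.allclose(E_add(n, 2, 0, 3.5) @ E_add(n, 2, 0, -3.5), np.eye(n))
# inverse of T*_{r->s->r-} is T*_{s->r->s-}
assert np.allclose(T_star(n, 3, 0) @ T_star(n, 0, 3), np.eye(n))
```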

The Subgroup of Determinant Preserving Row Operations

The set of all non-singular m × m matrices forms a group G_m under matrix multiplication, with identity I and inverse given by matrix inversion.

The set R_m of all determinant preserving row operations on m × n matrices is a subgroup of G_m because:

1. if the m × m matrix R is determinant preserving, then it is non-singular because |R| = |RI| = |I| = 1;
2. if the two m × m matrices R and S are both determinant preserving, then for every m × m matrix X one has

|(RS)X| = |R(SX)| = |SX| = |X|

implying that RS is also determinant preserving.

Verification of the Product Rule 7: Non-Singular Case

Proposition
For any two n × n matrices A and B where |A| ≠ 0, one has |AB| = |A| · |B|.

Proof.
Because |A| ≠ 0, there exist a non-singular diagonal matrix D and a sequence ⟨R_k⟩_{k=1}^m of determinant preserving row operations such that RA = D, where R = ∏_{q=1}^m R_q.

Because the family of all determinant preserving row operations is a subgroup, and so closed under matrix multiplication, the matrix R, as well as its inverse R^{−1}, are also determinant preserving row operations.

It follows that |A| = |R^{−1}D| = |D| and also |AB| = |R^{−1}DB| = |DB| = |D| · |B|. Hence |AB| = |D| · |B| = |A| · |B|.

Verification of the Product Rule 7: Singular Case

In case the n × n matrix A satisfies |A| = 0, pivoting reaches a partially diagonalized matrix of the form

RAP = [ D_{r×r}        C_{r×(n−r)}      ]
      [ 0_{(n−r)×r}    0_{(n−r)×(n−r)}  ]

where n − r ≥ 1, while P is an n × n permutation matrix, and the n × n matrix R is determinant preserving. So there exist matrices S, T, U, V of suitable dimensions such that RAB = (RAP)(P^{−1}B) takes the form

[ D_{r×r}        C_{r×(n−r)}      ] [ S_{r×r}        T_{r×(n−r)}      ]
[ 0_{(n−r)×r}    0_{(n−r)×(n−r)}  ] [ U_{(n−r)×r}    V_{(n−r)×(n−r)}  ]

Hence

|AB| = |RAB| = det [ DS + CU        DT + CV          ] = 0 = |A| · |B|
                   [ 0_{(n−r)×r}    0_{(n−r)×(n−r)}  ]

also in this case when |A| = 0.

Verification of the Product Rule 7: Summary

Finally, therefore, in view of the previous proposition when |A| ≠ 0, we have proved:

Theorem
For any n × n matrices A and B, one has |AB| = |A| · |B|.


Cofactor Expansion: Theorem

Definition
Given any element a_{rs} of the n × n matrix A, the associated (r, s)-cofactor |C_{rs}| is the determinant of the (n − 1) × (n − 1) matrix C_{rs} obtained by omitting row r and column s from A.

The cofactor expansions of |A| along any row r or column s are respectively Σ_{j=1}^n (−1)^{r+j} a_{rj} |C_{rj}| and Σ_{i=1}^n (−1)^{i+s} a_{is} |C_{is}|.

Theorem
For every row r and column s of any n × n matrix A, these cofactor expansions are valid, i.e., one has

|A| = Σ_{j=1}^n (−1)^{r+j} a_{rj} |C_{rj}| = Σ_{i=1}^n (−1)^{i+s} a_{is} |C_{is}|

The proof of this theorem will occupy the next 6 slides.
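Before the proof, the row-1 cofactor expansion can be checked numerically via a recursive determinant; this is an illustrative sketch, not an efficient algorithm.

```python
import numpy as np

# |A| = sum_j (-1)^(1+j) a_{1j} |C_{1j}|, applied recursively to each minor.
def det_cofactor(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # drop row 1, column j
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[1.0, 2.0, 0.0], [3.0, 1.0, 4.0], [0.0, 5.0, 6.0]])
assert np.isclose(det_cofactor(A), np.linalg.det(A))
```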

Cofactor Expansion: Proof, Part 1

Later we will prove the row expansion formula. If it is valid, then applying it to the transpose matrix Aᵀ gives

|Aᵀ| = Σ_{j=1}^n (−1)^{r+j} aᵀ_{rj} |Cᵀ_{rj}|

Taking transposes throughout gives

|A| = Σ_{j=1}^n (−1)^{r+j} a_{jr} |C_{jr}|

Replacing j by i and r by s, then s + i by i + s, one obtains

|A| = Σ_{i=1}^n (−1)^{i+s} a_{is} |C_{is}|

This is the column expansion formula. So we have proved that the column expansion formula is implied by the row expansion formula, leaving us to prove the latter.

Cofactor Expansion: Proof, Part 2

To verify the row expansion formula, first note that the rth row vector satisfies aᵀ_r = Σ_{j=1}^n a_{rj} eᵀ_j, where eᵀ_j is defined as the jth unit row vector in Rⁿ, equal to the jth row of the n × n identity matrix In. Because the determinant is multilinear, it follows that

|A| = Σ_{j=1}^n a_{rj} |A/(eᵀ_j)_r|

which is a linear combination of the n determinants |A/(eᵀ_j)_r| in which row aᵀ_r of A gets successively replaced by each corresponding jth unit row vector eᵀ_j. Therefore, to verify the row expansion formula

|A| = Σ_{j=1}^n (−1)^{r+j} a_{rj} |C_{rj}|

it suffices to verify that |A/(eᵀ_j)_r| = (−1)^{r+j} |C_{rj}| for each j ∈ Nn.

Cofactor Expansion: Proof, Part 3

Consider the bordered n × n matrix

Ĉ_{rj} = [ C_{rj}   (a_j)_{−r} ]
         [ 0ᵀ        1         ]

whose:
1. top left-hand corner is the (n − 1) × (n − 1) cofactor matrix C_{rj};
2. top right-hand border is the column vector (a_j)_{−r} ∈ R^{n−1} that is constructed by dropping the rth element from the jth column a_j of the original matrix A;
3. bottom left-hand border is the (n − 1)-dimensional row vector 0ᵀ of zeros;
4. bottom right-hand corner is the number 1.

Three lemmas will be used to show that, for each j ∈ Nn:
(i) there exist permutation matrices such that Ĉ_{rj} = P^{r%n} [A/(eᵀ_j)_r] P^{j%n};
(ii) |A/(eᵀ_j)_r| = (−1)^{r+j} |Ĉ_{rj}|; and
(iii) |Ĉ_{rj}| = |C_{rj}|.

This will complete the proof that |A/(eᵀ_j)_r| = (−1)^{r+j} |C_{rj}|.

Cofactor Expansion: Proof, Part 4

Given k ≤ ℓ ≤ n, recall that π^{k%ℓ} ∈ Πn moves k to ℓ, and then moves each q ∈ {k + 1, ..., ℓ} to q − 1. Let P^{k%ℓ} denote the corresponding permutation matrix P_{π^{k%ℓ}}.

Lemma
For each r, j ∈ Nn, one has

Ĉ_{rj} = [ C_{rj}   (a_j)_{−r} ] = P^{r%n} [A/(eᵀ_j)_r] P^{j%n}
         [ 0ᵀ        1         ]

Proof.
Premultiplying by P^{r%n} applies π^{r%n} to the rows, whereas postmultiplying by P^{j%n} applies π^{j%n} to the columns. Now the result follows immediately from the definitions of: (i) the matrix Ĉ_{rj}; (ii) the permutations π^{r%n} and π^{j%n}; (iii) the associated permutation matrices P^{r%n} and P^{j%n}.

Cofactor Expansion: Proof, Part 5

Lemma
For each r, j ∈ Nn one has |A/(eᵀ_j)_r| = (−1)^{r+j} |Ĉ_{rj}|.

Proof.
The previous Lemma implies that |Ĉ_{rj}| = |P^{r%n} [A/(eᵀ_j)_r] P^{j%n}|. In earlier results we showed that |P_π A| = |A P_π| = sgn(π)|A| and also that sgn(π^{k%ℓ}) = (−1)^{ℓ−k}. Hence we have |P^{r%n} [A/(eᵀ_j)_r]| = sgn(π^{r%n}) |A/(eᵀ_j)_r| and so

|P^{r%n} [A/(eᵀ_j)_r] P^{j%n}| = sgn(π^{r%n}) sgn(π^{j%n}) |A/(eᵀ_j)_r| = (−1)^{n−r} (−1)^{n−j} |A/(eᵀ_j)_r|

Because (−1)^{2n} = 1, it follows that

|A/(eᵀ_j)_r| = (−1)^{r+j−2n} |P^{r%n} [A/(eᵀ_j)_r] P^{j%n}| = (−1)^{r+j} |Ĉ_{rj}|

Cofactor Expansion: Proof, Part 6

Lemma
For each j ∈ Nn one has

|Ĉ_{rj}| = det [ C_{rj}   (a_j)_{−r} ] = |C_{rj}|
               [ 0ᵀ        1         ]

Proof.
Note that (Ĉ_{rj})_{n,π(n)} = δ_{n,π(n)}, so the expansion formula yields

|Ĉ_{rj}| = Σ_{π∈Πn} sgn(π) ∏_{i=1}^n (Ĉ_{rj})_{i,π(i)} = Σ_{π∈Πn−1} sgn(π) ∏_{i=1}^{n−1} (Ĉ_{rj})_{i,π(i)}

because all other terms are equal to zero. But then the definition of the bordered matrix Ĉ_{rj} implies that

|Ĉ_{rj}| = Σ_{π∈Πn−1} sgn(π) ∏_{i=1}^{n−1} (C_{rj})_{i,π(i)} = |C_{rj}|

This completes all the parts of the proof that the row r cofactor expansion of |A| is valid.

This completes all the parts of the proof that the row r cofactor expansion of |A| is valid. University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 51 of 88 Outline Pivoting to Reach the Reduced Row Echelon Form Example The Row Echelon Form The Reduced Row Echelon Form Determinants and Inverses Properties of Determinants Eight Basic Rules for Determinants Verifying the Product Rule Cofactor Expansion Expansion by Alien Cofactors and the Adjugate Matrix Invertible Matrices Minors, Dimensions, and Rank Column Rank, Row Rank, and Determinantal Rank Minor Determinants Solutions to Linear Equations General Equation Systems

Expansion by Alien Cofactors

Expanding along either row $r$ or column $s$ gives
$$|A| = \sum_{j=1}^{n} a_{rj}\, |C_{rj}| = \sum_{i=1}^{n} a_{is}\, |C_{is}|$$
when one uses matching cofactors.

Expanding by alien cofactors, however, from either the wrong row $i \neq r$ or the wrong column $j \neq s$, gives
$$0 = \sum_{j=1}^{n} a_{rj}\, |C_{ij}| = \sum_{i=1}^{n} a_{is}\, |C_{ij}|$$
This is because the answer will be the determinant of an alternative matrix in which:

▶ either row $i$ has been duplicated and put in row $r$;

▶ or column $j$ has been duplicated and put in column $s$.
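As a quick numerical sanity check, here is a pure-Python sketch (the matrix `A` is a made-up example, and `det` is a naive cofactor-expansion determinant): expanding row 0 with its own cofactors recovers $|A|$, while pairing row 0's entries with the cofactors of row 1 gives 0.

```python
def minor_matrix(M, r, c):
    """Submatrix obtained by deleting row r and column c of M."""
    return [[M[i][j] for j in range(len(M)) if j != c]
            for i in range(len(M)) if i != r]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor_matrix(M, 0, j))
               for j in range(len(M)))

A = [[2, 1, 3], [0, 4, 1], [5, 2, 2]]   # made-up example

# Matching cofactors: expanding row r = 0 with its own cofactors gives |A|
r = 0
matching = sum((-1) ** (r + j) * A[r][j] * det(minor_matrix(A, r, j))
               for j in range(3))
assert matching == det(A)

# Alien cofactors: row 0 entries paired with row i = 1 cofactors give 0,
# i.e. the determinant of A with row 1 replaced by a copy of row 0
i = 1
alien = sum((-1) ** (i + j) * A[r][j] * det(minor_matrix(A, i, j))
            for j in range(3))
assert alien == 0
```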

The Adjugate Matrix

Definition
The adjugate (or "classical adjoint") $\operatorname{adj} A$ of an order $n$ square matrix $A$ has elements given by $(\operatorname{adj} A)_{ij} = |C_{ji}|$. It is therefore the transpose $(C^+)^\top$ of the cofactor matrix $C^+$ whose elements $(C^+)_{ij} = |C_{ij}|$ are the respective cofactors of $A$.

Main Property of the Adjugate Matrix

Theorem
For every $n \times n$ square matrix $A$ one has
$$(\operatorname{adj} A)\, A = A\, (\operatorname{adj} A) = |A|\, I_n$$

Proof.
The $(i, j)$ elements of the two product matrices are respectively
$$[(\operatorname{adj} A)\, A]_{ij} = \sum_{k=1}^{n} |C_{ki}|\, a_{kj} \quad\text{and}\quad [A\, (\operatorname{adj} A)]_{ij} = \sum_{k=1}^{n} a_{ik}\, |C_{jk}|$$
These are both cofactor expansions, which are expansions by:

▶ alien cofactors in case $i \neq j$, implying that both equal 0;

▶ matching cofactors in case $i = j$, implying that both equal $|A|$.

Hence for each pair $(i, j)$ one has

$$[(\operatorname{adj} A)\, A]_{ij} = [A\, (\operatorname{adj} A)]_{ij} = |A|\,\delta_{ij} = |A|\,(I_n)_{ij}$$
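The definition and this identity can be checked numerically. A sketch in plain Python (the example matrix is made up): build the cofactor matrix, transpose it into the adjugate, and confirm $(\operatorname{adj} A)\,A = A\,(\operatorname{adj} A) = |A|\, I_3$.

```python
def minor_matrix(M, r, c):
    """Submatrix obtained by deleting row r and column c of M."""
    return [[M[i][j] for j in range(len(M)) if j != c]
            for i in range(len(M)) if i != r]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor_matrix(M, 0, j))
               for j in range(len(M)))

def adjugate(M):
    """Transpose of the cofactor matrix: (adj M)[i][j] is the (j, i) cofactor."""
    n = len(M)
    return [[(-1) ** (i + j) * det(minor_matrix(M, j, i)) for j in range(n)]
            for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2, 0], [3, 1, 4], [2, 0, 1]]   # made-up example
d = det(A)
adjA = adjugate(A)
identity_scaled = [[d if i == j else 0 for j in range(3)] for i in range(3)]
assert matmul(adjA, A) == identity_scaled
assert matmul(A, adjA) == identity_scaled
```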


Existence of the Inverse Matrix

Theorem
An $n \times n$ matrix $A$ has an inverse if and only if $|A| \neq 0$, which holds if and only if at least one of the two matrix equations $AX = I_n$ and $XA = I_n$ has a solution.

Proof.
Provided that $|A| \neq 0$, the identity $(\operatorname{adj} A)\, A = A\, (\operatorname{adj} A) = |A|\, I_n$ shows that the matrix $X := (1/|A|) \operatorname{adj} A$ is well defined and satisfies $XA = AX = I_n$, so $X$ is the inverse $A^{-1}$.

Conversely, if $XA = I_n$ has a solution, then the product rule for determinants implies that $1 = |I_n| = |XA| = |X|\,|A|$.

Similarly if $AX = I_n$ has a solution. In either case one has $|A| \neq 0$. The rest follows from the paragraph above.
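The construction $X := (1/|A|) \operatorname{adj} A$ in the proof translates directly into code. A minimal sketch using exact rational arithmetic with `fractions.Fraction` (the $2 \times 2$ test matrix is a small example chosen here):

```python
from fractions import Fraction

def minor_matrix(M, r, c):
    """Submatrix obtained by deleting row r and column c of M."""
    return [[M[i][j] for j in range(len(M)) if j != c]
            for i in range(len(M)) if i != r]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor_matrix(M, 0, j))
               for j in range(len(M)))

def inverse(M):
    """X := (1/|A|) adj A, defined whenever |A| != 0."""
    d = det(M)
    if d == 0:
        raise ValueError("matrix is singular")
    n = len(M)
    # (adj M)[i][j] is the (j, i) cofactor of M
    return [[Fraction((-1) ** (i + j) * det(minor_matrix(M, j, i)), d)
             for j in range(n)] for i in range(n)]

A = [[1, 1], [1, -1]]   # small invertible example, |A| = -2
Ainv = inverse(A)
assert Ainv == [[Fraction(1, 2), Fraction(1, 2)],
                [Fraction(1, 2), Fraction(-1, 2)]]
```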

Singularity versus Invertibility

So $A^{-1}$ exists if and only if $|A| \neq 0$.

Definition
1. In case $|A| = 0$, the matrix $A$ is said to be singular;
2. In case $|A| \neq 0$, the matrix $A$ is said to be non-singular or invertible.

Example and Application to Simultaneous Equations

Exercise
Verify that
$$A = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \implies A^{-1} = C := \begin{pmatrix} \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{2} & -\tfrac{1}{2} \end{pmatrix}$$

by using direct multiplication to show that $AC = CA = I_2$.

Example
Suppose that a system of $n$ simultaneous equations in $n$ unknowns is expressed in matrix notation as $Ax = b$. Of course, $A$ must be an $n \times n$ matrix. Suppose $A$ has an inverse $A^{-1}$. Premultiplying both sides of the equation $Ax = b$ by this inverse gives $A^{-1}Ax = A^{-1}b$, which simplifies to $Ix = A^{-1}b$. Hence the unique solution of the equation is $x = A^{-1}b$.
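A minimal sketch of this solution method (the right-hand side `b` is a made-up example; the inverse is computed via the adjugate formula from the preceding slides):

```python
from fractions import Fraction

def minor_matrix(M, r, c):
    return [[M[i][j] for j in range(len(M)) if j != c]
            for i in range(len(M)) if i != r]

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor_matrix(M, 0, j))
               for j in range(len(M)))

def inverse(M):
    """A^{-1} = (1/|A|) adj A; raises if A is singular."""
    d = det(M)
    if d == 0:
        raise ValueError("matrix is singular")
    n = len(M)
    return [[Fraction((-1) ** (i + j) * det(minor_matrix(M, j, i)), d)
             for j in range(n)] for i in range(n)]

A = [[1, 1], [1, -1]]   # the matrix from the exercise above
b = [3, 1]              # made-up right-hand side
Ainv = inverse(A)
x = [sum(Ainv[i][j] * b[j] for j in range(2)) for i in range(2)]
assert x == [2, 1]
# premultiplying recovered b: A x = b
assert [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)] == b
```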

Inverting Triangular Matrices

Theorem
If the inverse $U^{-1}$ of an upper triangular matrix $U$ exists, then it is upper triangular.

Taking transposes leads immediately to:

Corollary
If the inverse $L^{-1}$ of a lower triangular matrix $L$ exists, then it is lower triangular.
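A numerical illustration (the upper triangular `U` below is made up; the inverse is computed with the adjugate formula in exact rational arithmetic): every entry below the diagonal of $U^{-1}$ comes out zero.

```python
from fractions import Fraction

def minor_matrix(M, r, c):
    return [[M[i][j] for j in range(len(M)) if j != c]
            for i in range(len(M)) if i != r]

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor_matrix(M, 0, j))
               for j in range(len(M)))

def inverse(M):
    d = det(M)
    n = len(M)
    return [[Fraction((-1) ** (i + j) * det(minor_matrix(M, j, i)), d)
             for j in range(n)] for i in range(n)]

U = [[2, 1, 3], [0, 1, 4], [0, 0, 5]]   # made-up upper triangular matrix
Uinv = inverse(U)
# the inverse is again upper triangular: nothing below the diagonal
assert all(Uinv[i][j] == 0 for i in range(3) for j in range(3) if i > j)
# its diagonal holds the reciprocals of the diagonal of U
assert [Uinv[i][i] for i in range(3)] == [Fraction(1, 2), 1, Fraction(1, 5)]
```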

Inverting Triangular Matrices: Proofs

Recall the $(n-1) \times (n-1)$ cofactor matrix $C_{rs}$ that results from omitting row $r$ and column $s$ of $U = (u_{ij})$. When it exists, $U^{-1} = (1/|U|) \operatorname{adj} U$, so it is enough to prove that the $n \times n$ matrix $(|C_{rs}|)$ of cofactor determinants, whose transpose $(|C_{rs}|)^\top$ is the adjugate, is lower triangular.

In case $r < s$, every element below the diagonal of the matrix $C_{rs}$ is also below the diagonal of $U$, so must equal 0.

Hence $C_{rs}$ is upper triangular, with determinant equal to the product of its diagonal elements.

Yet $s - r$ of these diagonal elements are $u_{i+1,i}$ for $i = r, \ldots, s-1$. These elements are from below the diagonal of $U$, so equal zero.

Hence $r < s$ implies that $|C_{rs}| = 0$, so the $n \times n$ matrix $(|C_{rs}|)$ of cofactor determinants is indeed lower triangular, as required.

Cramer's Rule: Statement

Notation
Given any $m \times n$ matrix $A$, let $[A_{-j}, b]$ denote the new $m \times n$ matrix in which column $j$ has been replaced by the column vector $b$.

Evidently $[A_{-j}, a_j] = A$.

Theorem
Provided that the $n \times n$ matrix $A$ is invertible, the simultaneous equation system $Ax = b$ has a unique solution $x = A^{-1}b$ whose $i$th component is given by the ratio of determinants $x_i = |[A_{-i}, b]| / |A|$. This result is known as Cramer's rule.
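Cramer's rule is easy to implement directly from the statement. A hedged sketch in pure Python (the $2 \times 2$ system is a made-up example; exact arithmetic with `Fraction`):

```python
from fractions import Fraction

def minor_matrix(M, r, c):
    return [[M[i][j] for j in range(len(M)) if j != c]
            for i in range(len(M)) if i != r]

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor_matrix(M, 0, j))
               for j in range(len(M)))

def cramer_solve(A, b):
    """x_i = |[A_{-i}, b]| / |A| (Cramer's rule); requires |A| != 0."""
    d = det(A)
    n = len(A)
    xs = []
    for i in range(n):
        Ai = [row[:] for row in A]
        for r in range(n):
            Ai[r][i] = b[r]          # replace column i by b
        xs.append(Fraction(det(Ai), d))
    return xs

A = [[2, 1], [1, 3]]   # made-up invertible system
b = [5, 10]
x = cramer_solve(A, b)
# check that A x = b
assert all(sum(A[i][j] * x[j] for j in range(2)) == b[i] for i in range(2))
```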

Cramer's Rule: Proof

Proof.
Given the equation $Ax = b$, each cofactor $|C_{ij}|$ of the coefficient matrix $A$ is formed by dropping row $i$ and column $j$ of $A$.

It therefore equals the $(i, j)$ cofactor of the matrix $[A_{-j}, b]$. Expanding the determinant by cofactors along column $j$ therefore gives
$$|[A_{-j}, b]| = \sum_{i=1}^{n} b_i\, |C_{ij}| = \sum_{i=1}^{n} (\operatorname{adj} A)_{ji}\, b_i$$
by definition of the adjugate matrix. Hence the unique solution to the equation system has components

$$x_j = (A^{-1}b)_j = \frac{1}{|A|} \sum_{i=1}^{n} (\operatorname{adj} A)_{ji}\, b_i = \frac{1}{|A|}\, |[A_{-j}, b]|$$

for $j = 1, 2, \ldots, n$.


Definition of Dimension

The dimension of a linear space is the number of elements in the largest linearly independent subset.

Theorem
The dimension of $\mathbb{R}^m$ is $m$.

To prove this, we first construct a linearly independent set of $m$ vectors. Indeed, consider the list $(e_j)_{j=1}^{m}$ of $m$ unit column vectors in $\mathbb{R}^m$, with each $e_j$ equal to the $j$th column of the $m \times m$ identity matrix $I_m$.

Obviously $0 = I_m x$ implies that $x = 0$, so this list does form a linearly independent set.

Too Many Vectors Are Linearly Dependent

To complete the proof that $\mathbb{R}^m$ has dimension $m$, consider any list $(y_j)_{j=1}^{n}$ of $n > m$ vectors in $\mathbb{R}^m$. These $n$ vectors form the columns of an $m \times n$ matrix $Y$. After applying enough suitable pivoting operations, the matrix equation $Yx = 0$ reduces to

$$RYPz = \begin{pmatrix} D_{r \times r} & B_{r \times (n-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (n-r)} \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = 0$$

where $r \le m < n$ and $z = P^{-1}x$. This equation system has many non-trivial solutions of the form $z_1 = -D^{-1}B z_2$, where $z_2 \in \mathbb{R}^{n-r}$ is arbitrary.
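The construction $z_1 = -D^{-1}Bz_2$ can be illustrated concretely. In this sketch, three made-up vectors in $\mathbb{R}^2$ form $Y$; the leading $2 \times 2$ block happens to be invertible, so no permutation $P$ is needed, and any nonzero choice of $z_2$ yields a non-trivial solution of $Yx = 0$:

```python
from fractions import Fraction

# Three made-up vectors in R^2: more vectors than dimensions
Y = [[1, 2, 3],
     [0, 1, 1]]

# D is the leading 2x2 block, B the remaining column
D = [[Fraction(Y[i][j]) for j in range(2)] for i in range(2)]
B = [Fraction(Y[i][2]) for i in range(2)]

# invert the 2x2 block D directly
dd = D[0][0] * D[1][1] - D[0][1] * D[1][0]
Dinv = [[D[1][1] / dd, -D[0][1] / dd],
        [-D[1][0] / dd, D[0][0] / dd]]

z2 = Fraction(1)   # arbitrary nonzero choice of the free component
z1 = [-(Dinv[i][0] * B[0] + Dinv[i][1] * B[1]) * z2 for i in range(2)]
x = z1 + [z2]      # non-trivial solution of Y x = 0

assert any(v != 0 for v in x)
assert all(sum(Y[i][j] * x[j] for j in range(3)) == 0 for i in range(2))
```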

Linear Independence of Matrix Columns

The $n$ column vectors of the $m \times n$ matrix $A$ are linearly independent just in case the vector equation $0_m = \sum_{j=1}^{n} \xi_j a_j$ in $\mathbb{R}^m$ implies that $\xi_j = 0$ for each $j = 1, 2, \ldots, n$;

or equivalently, just in case the only solution of $0_m = Ax$ is the trivial solution $x = 0_n$.

Spanning

Definition
Given any finite set $S = \{x^j \in \mathbb{R}^n \mid j \in \mathbb{N}_m\}$ of $m$ vectors in $\mathbb{R}^n$, the set of vectors spanned by $S$, or the span of $S$, is the set

$$\operatorname{sp} S := \Big\{ z \in \mathbb{R}^n \;\Big|\; \exists\, y_1, \ldots, y_m \in \mathbb{R} : z = \sum_{j=1}^{m} y_j x^j \Big\}$$

Note that any vector z ∈ sp S is a linear combination of the vectors in S.

Exercise
Verify that $\operatorname{sp} S$ is a linear subspace of $\mathbb{R}^n$ — i.e., it satisfies the linear space axioms.

The Column and Row Spaces

In case the set $S = \{a_1, \ldots, a_n\} \subset \mathbb{R}^m$ consists of the $n$ columns of the $m \times n$ matrix $A$, one has

$$\operatorname{sp}(\{a_1, \ldots, a_n\}) = \{ y \in \mathbb{R}^m \mid \exists x \in \mathbb{R}^n : y = Ax \}$$
This is the column space of $A$; the row space spanned by its rows, which equals the column space of $A^\top$, is given by

$$\operatorname{sp}(\{a_1^\top, \ldots, a_m^\top\}) = \{ w^\top \in \mathbb{R}^n \mid \exists z \in \mathbb{R}^m : w^\top = z^\top A \}$$

Column and Row Rank

Definition
The column rank of the $m \times n$ matrix $A$ is the dimension $r_C \le m$ of its column space, which is the maximum number of linearly independent columns.

The row rank of the $m \times n$ matrix $A$ is the dimension $r_R \le n$ of its row space, which is the maximum number of linearly independent rows.

Obviously, the row rank of $A$ equals the column rank of the transpose $A^\top$.
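Both ranks can be computed by Gaussian elimination: the number of pivots found is the rank. A sketch in exact rational arithmetic (the test matrix is made up, with one duplicated row):

```python
from fractions import Fraction

def rank(M):
    """Rank as the number of pivots found by Gaussian elimination."""
    A = [[Fraction(v) for v in row] for row in M]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        # find a row at or below position r with a nonzero entry in column c
        pivot = next((i for i in range(r, m) if A[i][c] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(r + 1, m):
            f = A[i][c] / A[r][c]
            A[i] = [A[i][j] - f * A[r][j] for j in range(n)]
        r += 1
    return r

M = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]   # made-up: row 2 = 2 * row 1
assert rank(M) == 2
# row rank equals column rank: the transpose has the same rank
Mt = [list(col) for col in zip(*M)]
assert rank(Mt) == 2
```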

The Column Rank of a Partially Diagonalized Matrix

Theorem
The partially diagonalized $m \times n$ matrix
$$\begin{pmatrix} D_{r \times r} & B_{r \times (n-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (n-r)} \end{pmatrix}$$

where $D_{r \times r}$ is invertible, has column rank $r$.

Proof.
Given an arbitrary $z \in \mathbb{R}^r$ and $w \in \mathbb{R}^{m-r}$, the vector equation
$$\begin{pmatrix} D_{r \times r} & B_{r \times (n-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (n-r)} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} z \\ w \end{pmatrix}$$

has a solution given by $x = D^{-1}(z - By) \in \mathbb{R}^r$ iff $w = 0_{m-r}$. Hence the column space is $\mathbb{R}^r \times \{0_{m-r}\}$. It is isomorphic to $\mathbb{R}^r$, whose dimension is $r$, the number of pivots.

The Row Rank of a Partially Diagonalized Matrix

Theorem
The partially diagonalized $m \times n$ matrix
$$\begin{pmatrix} D_{r \times r} & B_{r \times (n-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (n-r)} \end{pmatrix}$$

where $D_{r \times r}$ is invertible, has row rank $r$.

Proof.
Given an arbitrary row vector $(z^\top, w^\top) \in \mathbb{R}^r \times \mathbb{R}^{n-r}$, the equation
$$(x^\top, y^\top) \begin{pmatrix} D_{r \times r} & B_{r \times (n-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (n-r)} \end{pmatrix} = (z^\top, w^\top)$$

has a solution given by $(x^\top, y^\top) = (z^\top D^{-1}, 0_{m-r}^\top)$ if and only if $w^\top = z^\top D^{-1} B$. Hence the row space is $\{ (z^\top, w^\top) \in \mathbb{R}^r \times \mathbb{R}^{n-r} \mid w^\top = z^\top D^{-1} B \}$. It is isomorphic to $\mathbb{R}^r$, whose dimension is $r$.

Invariance of Row Space

Theorem
Let $A$ be any $m \times n$ matrix and $R$ any determinant-preserving row operation. Then $A$ and $RA$ have the same row space.

Proof.
Suppose that $w^\top \in \mathbb{R}^n$ is in the row space of $A$, with $w^\top = z^\top A$ where $z \in \mathbb{R}^m$. Then $w^\top = (z^\top R^{-1}) RA$, so $w^\top$ is in the row space of $RA$.

Conversely, suppose $w^\top \in \mathbb{R}^n$ is in the row space of $RA$, with $w^\top = z^\top RA$ where $z \in \mathbb{R}^m$. Then $w^\top = (z^\top R) A$, so $w^\top$ is in the row space of $A$.

Isomorphism of Column Spaces

Theorem
Let $A$ be any $m \times n$ matrix and $R$ any determinant-preserving row operation. Then $A$ and $RA$ have isomorphic column spaces.

Proof.
Suppose that $y \in \mathbb{R}^m$ is in the column space of $A$, with $y = Ax$ where $x \in \mathbb{R}^n$. Then $Ry = (RA)x$, so $Ry$ is in the column space of $RA$.

Conversely, suppose $Ry$ is in the column space of $RA$, with $Ry = (RA)x$ where $x \in \mathbb{R}^n$. Because $R$ is determinant preserving, it is invertible. Then $y = R^{-1}(RA)x = Ax$, so $y$ is in the column space of $A$.

It follows that $y \leftrightarrow Ry$ is a linear bijection between the column spaces of $A$ and $RA$.

Column Rank Equals Row Rank

Theorem
Suppose the $m \times n$ matrix $A$ can be partially diagonalized as
$$RAP = \begin{pmatrix} D_{r \times r} & B_{r \times (n-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (n-r)} \end{pmatrix}$$
where $D_{r \times r}^{-1}$ exists, while $R$ is determinant preserving, and $P$ is a permutation. Then both the column and row rank of $A$ are equal to $r$.

Proof.
Because permuting the columns of a matrix makes no difference to its row or column rank, the row and column ranks of $RA$ are equal to those of $RAP$, both of which equal $r$. By the previous theorems, the two matrices $A$ and $RA$ have the same row space and isomorphic column spaces, with equal dimensions. So the row and column ranks of $A$ are equal to the row and column ranks of $RA$, both of which are $r$.


Minors and Minor Rank

Definition
Given any $m \times n$ matrix $A$, a minor (determinant) of order $k$ is the determinant $|A_{i_1 i_2 \ldots i_k,\, j_1 j_2 \ldots j_k}|$ of a $k \times k$ submatrix $(a_{ij})$, with $1 \le i_1 < i_2 < \cdots < i_k \le m$ and $1 \le j_1 < j_2 < \cdots < j_k \le n$.

The matrix $A_{i_1 i_2 \ldots i_k,\, j_1 j_2 \ldots j_k}$ is formed by selecting in the right order all the elements that lie in both:

▶ one of the chosen rows $i_r$ ($r = 1, 2, \ldots, k$);

▶ one of the chosen columns $j_s$ ($s = 1, 2, \ldots, k$).

Definition
The (minor) rank of a matrix is the order of its largest non-zero minor determinant.

Minors: Some Examples

Example
1. In case $A$ is an $n \times n$ matrix:

▶ the whole determinant $|A|$ is the only minor of order $n$;
▶ each of the $n^2$ cofactors $|C_{ij}|$ is a minor of order $n - 1$.
2. In case $A$ is an $m \times n$ matrix:

▶ each of the $mn$ elements of the matrix is a minor of order 1;
▶ the number of minors of order $k$ is
$$\binom{m}{k} \binom{n}{k} = \frac{m!}{k!(m-k)!} \cdot \frac{n!}{k!(n-k)!}$$
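The counting formula can be checked with `math.comb` (the matrix sizes below are made-up examples):

```python
from math import comb

def num_minors(m, n, k):
    """Number of order-k minors of an m x n matrix: C(m,k) * C(n,k)."""
    return comb(m, k) * comb(n, k)

# a made-up 3 x 4 matrix
assert num_minors(3, 4, 1) == 3 * 4     # one minor per element
assert num_minors(3, 4, 2) == 3 * 6     # C(3,2) * C(4,2) = 18
assert num_minors(3, 4, 3) == 1 * 4     # C(3,3) * C(4,3) = 4
```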

Exercise
Verify that the set of elements that make up the minor $|A_{i_1 i_2 \ldots i_k,\, j_1 j_2 \ldots j_k}|$ of order $k$ is completely determined by its $k$ diagonal elements $a_{i_h, j_h}$ ($h = 1, 2, \ldots, k$). (These need not be diagonal elements of $A$.)

Principal and Leading Principal Minors

Definition
If $A$ is an $n \times n$ matrix, the minor $|A_{i_1 i_2 \ldots i_k,\, j_1 j_2 \ldots j_k}|$ of order $k$ is:
▶ a principal minor if $i_h = j_h$ for $h = 1, 2, \ldots, k$, implying that its diagonal elements $a_{i_h j_h}$ are all on the (principal) diagonal of $A$;
▶ a leading principal minor if its diagonal elements are the leading elements $a_{hh}$ ($h = 1, 2, \ldots, k$) of the (principal) diagonal of $A$.

Exercise
Explain why an $n \times n$ determinant has:
1. $2^n - 1$ principal minors;
2. $n$ leading principal minors.
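The first count follows because a principal minor is determined by a non-empty subset of row indices, used for both rows and columns. A sketch enumerating these index sets for $n = 3$ (a made-up order):

```python
from itertools import combinations

n = 3   # made-up order
# non-empty index sets I, used for both rows and columns,
# are in bijection with the principal minors
index_sets = [s for k in range(1, n + 1) for s in combinations(range(n), k)]
assert len(index_sets) == 2 ** n - 1   # 7 principal minors when n = 3

# leading principal minors use the leading sets {0}, {0,1}, ..., {0,...,n-1}
leading = [tuple(range(k)) for k in range(1, n + 1)]
assert len(leading) == n
assert all(s in index_sets for s in leading)
```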

Determinantal Rank

Rank Condition for Existence of a Solution, I

Theorem
Let $A$ be an $m \times n$ matrix, and $b$ a column $m$-vector. Then the equation $Ax = b$ has a solution $x \in \mathbb{R}^n$ if and only if the rank of the $m \times (n+1)$ augmented matrix $(A, b)$ equals the rank of $A$.

Proof.
Necessity: Suppose that $Ax = b$ has a solution $x = (x_j)_{j=1}^{n}$. Now apply to $(A, b)$ the compound column operation of successively subtracting from its last column the multiple $x_j$ of each column $j$. This converts $(A, b)$ to $(A, 0)$ while preserving the column rank. Hence the ranks of $(A, b)$ and $(A, 0)$ are equal, with both equal to the rank of $A$.

Rank Condition for Existence of a Solution, II

Proof.
Sufficiency: Suppose the ranks of $A$ and $(A, b)$ are both $r$. Then there is an $m \times r$ submatrix $\tilde{A}$ consisting of $r$ linearly independent columns of $A$. Because the rank of $(A, b)$ equals $r$, and not $r + 1$, the $r + 1$ columns of $(\tilde{A}, b)$ cannot be linearly independent. This can only be true because there exists an $r$-vector $\tilde{x}$ such that $b = \tilde{A}\tilde{x}$. By augmenting $\tilde{x}$ with $n - r$ appropriately placed zero elements, one can construct $x \in \mathbb{R}^n$ to satisfy $Ax = b$.

Exercise
Let $A$ and $B$ be $m \times n$ and $m \times k$ matrices. Prove that the matrix equation $AX = B$ has one or more solutions for the $n \times k$ matrix $X$ if and only if both $A$ and the augmented matrix $(A, B)$ have the same rank.
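The rank criterion is easy to test numerically. In this sketch (a made-up rank-1 matrix `A`), one right-hand side lies in the column space of $A$ and the other does not; ranks are computed by Gaussian elimination:

```python
from fractions import Fraction

def rank(M):
    """Rank as the number of pivots found by Gaussian elimination."""
    A = [[Fraction(v) for v in row] for row in M]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        pivot = next((i for i in range(r, m) if A[i][c] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(r + 1, m):
            f = A[i][c] / A[r][c]
            A[i] = [A[i][j] - f * A[r][j] for j in range(n)]
        r += 1
    return r

def augmented(A, b):
    return [row + [v] for row, v in zip(A, b)]

A = [[1, 2], [2, 4]]   # made-up rank-1 matrix
b_good = [3, 6]        # lies in the column space of A
b_bad = [3, 5]         # does not

assert rank(A) == 1
assert rank(augmented(A, b_good)) == 1   # ranks agree: solvable
assert rank(augmented(A, b_bad)) == 2    # rank jumps: no solution
```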

Superfluous Equations and Degrees of Freedom, I

Theorem
Let $A$ be an $m \times n$ matrix, and $b$ a column $m$-vector. Suppose $A$ and the augmented matrix $(A, b)$ both have rank $r$.
1. If $r < m$, then $Ax = b$ has $m - r$ superfluous equations.
2. If $r < n$, then there are $n - r$ degrees of freedom in the solution to $Ax = b$.

In the following proof, we assume that the $m \times n$ matrix $A$ can be partially diagonalized as
$$RAP = \begin{pmatrix} D_{r \times r} & B_{r \times (n-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (n-r)} \end{pmatrix}$$
where $D_{r \times r}^{-1}$ exists, while $R$ is determinant preserving, and $P$ is a permutation.

Superfluous Equations and Degrees of Freedom, II

Proof.
Under the previous assumption, the equation system $Ax = b$ is equivalent to $RAPz = w$ where $z = P^{-1}x$ and $w = Rb$. This system can be written as

$$\begin{pmatrix} D_{r \times r} & B_{r \times (n-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (n-r)} \end{pmatrix} \begin{pmatrix} z_r^1 \\ z_{n-r}^2 \end{pmatrix} = \begin{pmatrix} w_r^1 \\ w_{m-r}^2 \end{pmatrix}$$

Here the rank of $(RAP, w)$ is $r$ iff $w_{m-r}^2 = 0_{m-r}$, in which case the last $m - r$ equations are superfluous. Then, for each $z_{n-r}^2 \in \mathbb{R}^{n-r}$ there is a unique solution given by $z_r^1 = D_{r \times r}^{-1}(w_r^1 - B_{r \times (n-r)} z_{n-r}^2)$. Hence there are $n - r$ degrees of freedom.
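The formula $z_r^1 = D^{-1}(w_r^1 - B z_{n-r}^2)$ can be exercised directly. This sketch fixes a made-up partially diagonalized system with $r = m = 2$ and $n = 3$, so there is $n - r = 1$ degree of freedom; every choice of the free component $z^2$ yields a valid solution:

```python
from fractions import Fraction

# Made-up partially diagonalized system D z1 + B z2 = w1
D = [[Fraction(2), Fraction(0)], [Fraction(0), Fraction(1)]]
B = [Fraction(1), Fraction(1)]
w1 = [Fraction(4), Fraction(3)]
Dinv = [[Fraction(1, 2), Fraction(0)], [Fraction(0), Fraction(1)]]

def solve(z2):
    """z1 = D^{-1} (w1 - B z2): one solution per free choice of z2."""
    rhs = [w1[i] - B[i] * z2 for i in range(2)]
    return [sum(Dinv[i][k] * rhs[k] for k in range(2)) for i in range(2)]

# n - r = 1 degree of freedom: each z2 gives a solution of the full system
for z2 in [Fraction(0), Fraction(1), Fraction(-5)]:
    z1 = solve(z2)
    assert [D[i][0] * z1[0] + D[i][1] * z1[1] + B[i] * z2
            for i in range(2)] == w1
```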


Existence of a Solution

Consider again the matrix equation $AX = Y$ in its equivalent form

$$RAX = RAP\, P^{-1}X = \begin{pmatrix} D_{r \times r} & B_{r \times (n-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (n-r)} \end{pmatrix} P^{-1}X = RY$$

Introduce the partitioned matrix $\begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix}$ as notation for $Z = P^{-1}X$, where the $r \times p$ matrix $Z_1$ consists of the first $r$ rows of $Z$, and the $(n-r) \times p$ matrix $Z_2$ consists of the other $n - r$ rows. The equation system takes the form

$$\begin{pmatrix} D_{r \times r} & B_{r \times (n-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (n-r)} \end{pmatrix} \begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix} = RY = \begin{pmatrix} V_{r \times p} \\ W_{(m-r) \times p} \end{pmatrix}$$

Because the matrix $D_{r \times r}$ of pivots is invertible, and the last $m - r$ rows of the left-hand side matrix are all zero, a solution exists if and only if $W_{(m-r) \times p} = 0_{(m-r) \times p}$.

The Solution Space

The necessary and sufficient condition for solutions to exist is $W_{(m-r) \times p} = 0_{(m-r) \times p}$.

In case this is met, the system reduces to $D Z_1 + B Z_2 = V_{r \times p}$, where $V_{r \times p}$ consists of the first $r$ rows of $RY$. The general solution is $Z_1 = D^{-1}(V_{r \times p} - B Z_2)$.

Because the $(n-r) \times p$ matrix $Z_2$ can be chosen arbitrarily, there are $n - r$ degrees of freedom in each equation system. The first $r$ rows of the row-permuted matrix $P^{-1}X$ have been expressed as a linear function of $Y$ and of the arbitrary last $n - r$ rows of $P^{-1}X$. The remaining $m - r$ equations are redundant.
