
Chapter 1 Lecture Notes

Section 1.2 Notes

1. The study of linear algebra, and matrices in particular, is motivated by the solution of systems of linear equations. An example of a linear system of equations is as follows:

      3x −  y + z = 6
       x + 2y − z = 3
      −x + 4y − z = 1

   Usually, the goal in such systems is to find the solution to these equations, i.e. the values of x, y, z that satisfy these equations, if they exist. In this particular case, the solution is x = 2, y = 1, z = 1, as you can verify by directly substituting these values into the equations. Part of the goal of this course is to find efficient algorithms for solving such equations.

2. Often, the variables, such as x, y and z in the above system, just get in the way in computations. The important information is contained in the coefficients, or numbers, that lie in front of the variables. This motivates the definition of a matrix, which is defined as any rectangular array of numbers. For example, the matrices which represent the left and right sides of the above system are

      left side = [ 3 −1  1 ]    right side = [ 6 ]
                  [ 1  2 −1 ]                 [ 3 ]
                  [−1  4 −1 ]                 [ 1 ]

3. Here are a few terms regarding matrices:

• The dimension is the number of rows and columns - an n × m matrix has n rows and m columns. In the above example, the left side is a 3 × 3 matrix and the right side is a 3 × 1 matrix.

• a_{i,j} is the usual way to designate the element of the matrix in the i-th row and the j-th column. In the above example, a_{2,3} = −1 and a_{3,2} = 4. A square matrix has the same number of rows and columns.

4. Before solving systems of equations with matrices, we'll have to understand some basic operations involving matrix arithmetic:

• Matrices can be added if the numbers of rows and columns are the same. Here is an example:

     [ 3 −1  1 ]   [ 1  2  4 ]   [ 4  1  5 ]
     [ 1  2 −1 ] + [−1  3  2 ] = [ 0  5  1 ]
     [−1  4 −1 ]   [ 3  7  2 ]   [ 2 11  1 ]

• A matrix can be multiplied by a number (or scalar) - here is an example:

         [ 3 −1  1 ]   [ 9 −3  3 ]
     3 · [ 1  2 −1 ] = [ 3  6 −3 ]
         [−1  4 −1 ]   [−3 12 −3 ]

• Two matrices can be multiplied provided their dimensions are compatible; if A is n × k and B is k × m, then AB is defined. So, for example, the following matrix products are well defined:

     [ 3 −1  1 ] [ 2  7 ]        [−1  1 ]
     [ 1  2 −1 ] [ 1 −3 ]        [ 2 −1 ] [ 3 −1  1 ]
     [−1  4 −1 ] [ 0  1 ]        [ 0  1 ] [ 1  2 −1 ]

whereas the following matrix product is not defined:

     [ 3 −1  1 ] [ 1  2 −1 ]
     [ 1  2 −1 ] [−1  4 −1 ]
     [−1  4 −1 ]

because the number of columns of the first matrix does not equal the number of rows of the second. The (i,j) entry of the product AB is obtained by multiplying the i-th row of A by the j-th column of B. This can be summarized mathematically as (AB)_{i,j} = Σ_{k=1}^{n} a_{i,k} b_{k,j} (assuming A has n columns and B has n rows); a code sketch of this formula appears after this list. See section 1.2 for various examples of matrix products. Note that matrix multiplication is not commutative. That is, in general, AB is not the same as BA - see page 7 of your text for some examples.

• The system of equations at the start of this section can be written as a matrix product, i.e.

     3x −  y + z = 6
      x + 2y − z = 3
     −x + 4y − z = 1

is equivalent to:

     [ 3 −1  1 ] [x]   [6]
     [ 1  2 −1 ] [y] = [3]
     [−1  4 −1 ] [z]   [1]

• There are a couple of special matrices: the zero matrix, 0, with all zero entries; and the identity matrix, I, with ones down the main diagonal and zeros elsewhere. You can verify that A + 0 = A = 0 + A and AI = A = IA. Also note that A0 = 0 = 0A. In ordinary arithmetic, if a and b are numbers such that ab = 0, then you can conclude that either a is zero or b is zero. One of your homework exercises (1.2.19) asks if the same is true for matrix multiplication, i.e. if AB = 0, is it then true that either A = 0 or B = 0? Try some simple 2 × 2 examples to see if you can figure out whether this is true or not (see the sketches after this list).

• Powers are defined just as they are in ordinary arithmetic; however, here the matrix must be square. So suppose that A is n × n; then A² is defined to be AA, and A³ is defined to be AAA, etc. In ordinary arithmetic, if a^k = 0, then a must be zero. You might try to consider whether the same is true in matrix arithmetic. Again, try some 2 × 2 examples.

• The trace of a matrix is defined as the sum of the diagonal entries, i.e. Σ_{k=1}^{n} a_{k,k}. HW # 1.2.32 asks you to verify that tr(AB) = tr(BA) (even though AB ≠ BA). To get you started, note that (AB)_{j,j} = Σ_{k=1}^{n} a_{j,k} b_{k,j} (now sum over j and do the same with BA).
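Both the multiplication formula and the 2 × 2 questions above are easy to experiment with. First, a minimal Python sketch of the entry formula (the function name matmul is my own; the loops deliberately mirror (AB)_{i,j} = Σ_k a_{i,k} b_{k,j} rather than calling a library routine), including a check that AB and BA differ in general:

```python
def matmul(A, B):
    # (AB)[i][j] = sum over k of A[i][k] * B[k][j];
    # requires: number of columns of A == number of rows of B.
    n, kk, m = len(A), len(A[0]), len(B[0])
    assert len(B) == kk, "dimensions are not compatible"
    return [[sum(A[i][k] * B[k][j] for k in range(kk)) for j in range(m)]
            for i in range(n)]

A = [[3, -1, 1], [1, 2, -1], [-1, 4, -1]]   # 3 x 3
B = [[2, 7], [1, -3], [0, 1]]               # 3 x 2
print(matmul(A, B))                         # [[5, 25], [4, 0], [2, -20]]

# Non-commutativity: AB != BA in general.
C = [[1, 1], [0, 1]]
D = [[1, 0], [1, 1]]
print(matmul(C, D))   # [[2, 1], [1, 1]]
print(matmul(D, C))   # [[1, 1], [1, 2]]
```

Second, a quick numerical experiment (the matrices are my own choices, not from the text) that settles the zero-divisor, power, and trace questions for 2 × 2 matrices:

```python
import numpy as np

# AB = 0 does NOT force A = 0 or B = 0: matrices have zero divisors.
A = np.array([[1, 0], [0, 0]])
B = np.array([[0, 0], [0, 1]])
print(A @ B)                    # the 2 x 2 zero matrix

# A^2 = 0 does not force A = 0 either: N is nonzero but N^2 = 0.
N = np.array([[0, 1], [0, 0]])
print(N @ N)                    # the 2 x 2 zero matrix

# tr(AB) = tr(BA), even though AB != BA.
C = np.array([[1, 2], [3, 4]])
D = np.array([[0, 1], [5, 6]])
print(np.trace(C @ D), np.trace(D @ C))   # 37 37
```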

Section 1.3 Notes

Elementary Operations

1. We wish to solve the following system for x, y, z:

       x + 2y − z = 3
      −x + 4y − z = 1
      3x −  y + z = 6

   The basic idea of this section is as follows: we first put this system into matrix form:

      [ 1  2 −1 | 3 ]
      [−1  4 −1 | 1 ]
      [ 3 −1  1 | 6 ]

   Next, we add multiples of one row to another in order to zero-out the entries below the diagonal. This operation is called an elementary operation of type 1 (see page 13). For example, adding the first row to the second results in a zero in the (2,1) entry. The result is

      [ 1  2 −1 | 3 ]
      [ 0  6 −2 | 4 ]
      [ 3 −1  1 | 6 ]

   Likewise, adding −3 times the first row to the 3rd row results in a zero in the (3,1) entry. The result is the following:

      [ 1  2 −1 |  3 ]
      [ 0  6 −2 |  4 ]
      [ 0 −7  4 | −3 ]

   Since adding a multiple of one equation to another does not change its solution, solving this simpler system is equivalent to solving the original system. Next, our goal is to zero-out the (3,2) entry (which is −7) by multiplying the 2nd row by 7/6 and adding the result to the third row; the result is:

      [ 1  2  −1  |  3  ]
      [ 0  6  −2  |  4  ]
      [ 0  0  5/3 | 5/3 ]

   Now we can solve the system by back substitution, starting with the last row, which is equivalent to the following equation: (5/3)z = 5/3. This gives z = 1. The second row is equivalent to 6y − 2z = 4; substituting z = 1 into this equation gives 6y − 2 = 4, or y = 1. The first equation is x + 2y − z = 3; substituting z = 1, y = 1 into this equation gives x + 2 − 1 = 3, or x = 2 as a solution.

2. To summarize, if we start with a system of equations in the form

   a_{1,1}x_1 + ... + a_{1,n}x_n = b_1
              ⋮
   a_{n,1}x_1 + ... + a_{n,n}x_n = b_n

then we put this into matrix form: Ax = b, which in augmented form (without the x) becomes

   [ a_{1,1} ... a_{1,n} | b_1 ]
   [    ⋮     ⋱     ⋮    |  ⋮  ]
   [ a_{n,1} ... a_{n,n} | b_n ]

Then we use elementary row operations of type 1 (i.e. adding multiples of one row to another) to transform this matrix into one which is upper triangular, that is, one with zeros below the diagonal:

   [ a_{1,1} a_{1,2} ... a_{1,n} | b_1 ]
   [    0    u_{2,2} ... u_{2,n} | c_2 ]
   [    ⋮       ⋱          ⋮    |  ⋮  ]
   [    0       0    ... u_{n,n} | c_n ]

Then, we back substitute: we start with the last row, u_{n,n}x_n = c_n, and solve this for x_n; then substitute this value of x_n into the (n−1)-st row to solve for x_{n−1}, and so forth, until we get to the first equation, which is solved for x_1 (see the code sketch below).

3. Note that the above process only reduces the matrix to upper triangular form and then performs back substitution. As an alternative, you could continue to use type 1 row operations to zero-out the elements of A that lie above the diagonal. This alternative then simplifies the back substitution process. However, the work required to zero-out the elements above the diagonal is usually greater (especially if the size is large) than the work required to perform back substitution on an upper triangular system (see Section 1.7 for more details on this).
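The whole procedure fits in a few lines. Below is a Python sketch of the naive algorithm (no row switches, so it assumes every pivot encountered is nonzero; the function names are mine), run on the system solved above:

```python
import numpy as np

def eliminate(A, b):
    # Reduce [A | b] to upper triangular form using type 1 row
    # operations only; assumes each pivot A[j, j] is nonzero.
    A, b = A.astype(float), b.astype(float)   # work on copies
    n = len(b)
    for j in range(n):
        for i in range(j + 1, n):
            m = A[i, j] / A[j, j]        # multiplier for the row operation
            A[i, j:] -= m * A[j, j:]     # row_i <- row_i - m * row_j
            b[i] -= m * b[j]
    return A, b

def back_substitute(U, c):
    # Solve Ux = c, starting from the last row and working up.
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[1, 2, -1], [-1, 4, -1], [3, -1, 1]])
b = np.array([3, 1, 6])
U, c = eliminate(A, b)
print(back_substitute(U, c))   # [2. 1. 1.]
```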

Elementary Matrices

4. An elementary operation of type 1 is equivalent to multiplying the matrix by an elementary matrix. For example, the first elementary

operation in the above example is equivalent to the following matrix product: E_1 A, where A is the original matrix and

   E_1 = [ 1 0 0 ]
         [ 1 1 0 ]
         [ 0 0 1 ]

In other words,

   E_1 A = [ 1 0 0 ] [ 1  2 −1 | 3 ]   [ 1  2 −1 | 3 ]
           [ 1 1 0 ] [−1  4 −1 | 1 ] = [ 0  6 −2 | 4 ]
           [ 0 0 1 ] [ 3 −1  1 | 6 ]   [ 3 −1  1 | 6 ]

As you can check, the second and third elementary operations, which zero-out the (3,1) and (3,2) entries, are equivalent to left-multiplying by the following elementary matrices, respectively:

   E_2 = [ 1 0 0 ]        E_3 = [ 1   0   0 ]
         [ 0 1 0 ]              [ 0   1   0 ]
         [−3 0 1 ]              [ 0  7/6  1 ]

The product E_2 A is equivalent to multiplying the first row of A by −3 and adding the result to the third row of A. The product E_3 A is equivalent to multiplying the second row of A by 7/6 and adding the result to the third row of A. We can describe the process of putting A into upper triangular form in terms of the following matrix products:

   E_3 E_2 E_1 A = U = [ 1  2  −1  |  3  ]
                       [ 0  6  −2  |  4  ]
                       [ 0  0  5/3 | 5/3 ]

The order of multiplication is important: the above order says that A is multiplied first by E_1 (on the left) and then by E_2 and then by E_3. The result is the upper triangular matrix U.

5. E_1, E_2 and E_3 are examples of special lower triangular matrices, since the diagonal consists of only 1's and the only other nonzero entries lie below the diagonal. Important Fact: the product of lower triangular matrices is again a lower triangular matrix; the same is true of upper triangular matrices (you should check this yourself).

LU-Decomposition

6. As described on page 17, we can invert the process of multiplying by elementary matrices. Let us consider E_1, which is the matrix that simulated addition of the first row to the second. The inverse of this process is equivalent to subtracting the first row from the second, which is given by the following elementary matrix:

   L_1 = [ 1 0 0 ]
         [−1 1 0 ]
         [ 0 0 1 ]

(note the −1 in the (2,1) entry). Likewise, the inverses of E_2 and E_3 are given by the following matrices:

   L_2 = [ 1 0 0 ]        L_3 = [ 1    0   0 ]
         [ 0 1 0 ]              [ 0    1   0 ]
         [ 3 0 1 ]              [ 0  −7/6  1 ]

Inverting the process E_3 E_2 E_1 A = U gives A = L_1 L_2 L_3 U. This is the so-called LU decomposition of the matrix A, with L being (special) lower triangular and U being upper triangular. Note that

   L_1 L_2 L_3 = L = [ 1    0   0 ]
                     [−1    1   0 ]
                     [ 3  −7/6  1 ]

In other words, the entries of L in the LU-decomposition are just the negatives of the corresponding values of the elementary E matrices.

7. What is so important about LU decompositions? If A = LU, then solving the system Ax = b is equivalent to the equation LUx = b, which in turn is equivalent to solving the following pair of equations: Ly = b and then Ux = y. Since L is lower triangular and U is upper triangular, each of these equations is much simpler to solve than the original Ax = b. As we have observed, an upper triangular system Ux = y can be solved by backward substitution (solving the last equation, then the next-to-last equation, etc.). Similarly, a lower triangular system Ly = b can be solved by forward substitution, which can be described as follows: suppose Ly = b is written out as

   [ l_{1,1}    0    ...    0     | b_1 ]
   [ l_{2,1} l_{2,2} ...    0     | b_2 ]
   [    ⋮       ⋮     ⋱     ⋮    |  ⋮  ]
   [ l_{n,1} l_{n,2} ... l_{n,n}  | b_n ]

then start with the first row, which represents the equation l_{1,1}y_1 = b_1, and solve it for y_1; then substitute this value into the second equation, which is l_{2,1}y_1 + l_{2,2}y_2 = b_2, and solve this for y_2; then substitute y_1 and y_2 into the third equation and solve it for y_3, and so forth. The real savings in work is realized if you have to solve lots of equations Ax = b with the same matrix A, but with many right hand sides, b. In this case, you can find the LU decomposition of A once (which takes a lot of work), and then solve each LUx = b for each right hand side, b, by forward and backward substitution, as described above (which is comparatively easy once the L and U have been found for A).
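In code, the two triangular solves look like this (a Python sketch using the L and U found above; back_substitute is as in the sketch from Section 1.3):

```python
import numpy as np

def forward_substitute(L, b):
    # Solve Ly = b for lower triangular L, from the first row down.
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_substitute(U, c):
    # Solve Ux = c for upper triangular U, from the last row up.
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

L = np.array([[1, 0, 0], [-1, 1, 0], [3, -7/6, 1]])
U = np.array([[1, 2, -1], [0, 6, -2], [0, 0, 5/3]])
b = np.array([3.0, 1.0, 6.0])

y = forward_substitute(L, b)   # solve Ly = b
x = back_substitute(U, y)      # then solve Ux = y
print(x)                       # [2. 1. 1.]
```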

8. Do all matrices have LU decompositions? No - for example, see exercise 1.3.29. The trouble here is that the (1,1) entry is zero and cannot be used to zero out the (2,1) entry with an elementary row operation of type 1. This is an example of a matrix which is not regular, which means that a zero appears on the diagonal at some point during the process of performing elementary operations of type 1. In this case, we can still decompose A into an LU-decomposition after some row switches are performed. This is the content of the next section.

Section 1.4 on Pivoting and Permutations

1. Consider the following matrix:

   [ 0  1  2 ]
   [−1  1  3 ]
   [ 2 −2  0 ]

The 0 in the (1,1) entry makes it impossible to zero-out the entries below it. So instead, we switch the first and second rows so that the (1,1) entry is nonzero:

   [−1  1  3 ]
   [ 0  1  2 ]
   [ 2 −2  0 ]

and then proceed to zeroing out the (3,1) entry as usual. This is called pivoting on the (2,1) entry, since we are using the (2,1) entry to zero-out the other entries in the first column. Note: we could have switched the

first and third row as well; that is, we could have pivoted on the (3,1) entry. In general, you can pivot on any non-zero entry in a column (although see the discussion on round-off error below).

2. In terms of equation-solving, switching rows has no effect on the solutions to a system of equations.

3. Switching two rows is called an elementary operation of type 2. This operation can be simulated by matrix multiplication with an elementary matrix of type 2, as seen in the following example. Let

   P_1 = [ 0 1 0 ]
         [ 1 0 0 ]
         [ 0 0 1 ]

Note that P_1 is obtained by switching the first and second rows of the identity matrix, I. The product P_1 A has the effect of switching the first and second rows of A. You can verify this with the above example by checking that

   [ 0 1 0 ] [ 0  1  2 ]   [−1  1  3 ]
   [ 1 0 0 ] [−1  1  3 ] = [ 0  1  2 ]
   [ 0 0 1 ] [ 2 −2  0 ]   [ 2 −2  0 ]

4. Consider the following matrix, which may occur in the middle of the process of elementary row operations:

   A = [−1  1  3 ]
       [ 0  0  5 ]
       [ 0 −2  0 ]

To get this matrix in upper triangular form, it is now necessary to switch the second and third rows. This elementary operation is the same as multiplying A by the following elementary matrix:

   P_2 = [ 1 0 0 ]
         [ 0 0 1 ]
         [ 0 1 0 ]

Indeed, you can verify that

   [ 1 0 0 ] [−1  1  3 ]   [−1  1  3 ]
   [ 0 0 1 ] [ 0  0  5 ] = [ 0 −2  0 ]
   [ 0 1 0 ] [ 0 −2  0 ]   [ 0  0  5 ]

5. A matrix is nonsingular if it is possible to find a collection of nonzero pivots to bring it to upper triangular form. The following example is not nonsingular (i.e. it is singular):

   [ 1  2 −2 ]
   [ 2  4 −1 ]
   [−2 −4  3 ]

After zeroing out the (2,1) and (3,1) entries by the usual elementary operations, we obtain

   [ 1  2 −2 ]
   [ 0  0  3 ]
   [ 0  0 −1 ]

In this case, there is no pivot we can use for the second column, so this matrix is singular. As we will see, systems of equations involving singular matrices may have no solutions at all or an infinite family of solutions.

6. If a matrix is nonsingular, then there is a permuted form of the LU-decomposition. In simple terms, after some row switches, the resulting matrix has an LU-decomposition. In matrix terms, there is some permutation matrix P, obtained by switching the rows of the identity matrix, such that PA can be put into an LU decomposition. It is possible to carefully keep track of the pivots and how they were permuted. Example 1.12 in the text gives an illustration of how to do this. Read this example carefully.

7. Pivoting to reduce round-off error. As indicated above, you can pivot on any nonzero element in a column. However, computer codes normally pivot on the largest entry in a column in order to reduce round-off error. The following example illustrates why this is important. Consider the matrix

   [ 10^{-6}  1.1 ]
   [   −2     0.4 ]

If we pivot on the (1,1) entry, then we must multiply the first row by 2(10^6) and add this to the second row to zero-out the (2,1) entry. Of course, a computer can only store and operate with approximate values for numbers. So the (1,2) entry of 1.1 can only be approximately represented inside the computer. Any round-off error in this approximation

will be multiplied by 2(10^6) in our first row operation. This greatly magnifies the round-off error in representing 1.1. By contrast, if we pivoted on the (2,1) entry, which is −2, then we would have the matrix

   [   −2     0.4 ]
   [ 10^{-6}  1.1 ]

In this case, we would multiply the first row by 0.5(10^{-6}) and add the result to the second row to zero-out the (2,1) entry. Multiplying by this small factor suppresses any round-off error present in the computer representation of 0.4. In sum, using the largest entries as pivots results in elementary operations that involve multiplications by the smallest numbers possible, which tends to reduce the problems posed by round-off. For additional examples and discussion, see pages 56-58 in your text. One final note - in professional computer code, rows are not physically switched. Instead, a counter index keeps track of the pivoting.
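The effect is easy to reproduce. The Python sketch below carries out both eliminations on the 2 × 2 example in single precision (float32, an assumption made only to make the round-off visible; in double precision the same phenomenon occurs at a smaller scale). The exact digits printed depend on the machine arithmetic, but the small-pivot answer comes out visibly off while the large-pivot one does not:

```python
import numpy as np

# System with exact solution x1 = 1, x2 = 1.
A = np.array([[1e-6, 1.1], [-2.0, 0.4]], dtype=np.float32)
b = np.array([1.100001, -1.6], dtype=np.float32)

# (a) Pivot on the tiny (1,1) entry: multiplier ~ 2e6 amplifies round-off.
m = -A[1, 0] / A[0, 0]                 # ~ 2e6
u22 = A[1, 1] + m * A[0, 1]            # round-off in 1.1 gets magnified
c2 = b[1] + m * b[0]
x2 = c2 / u22
x1 = (b[0] - A[0, 1] * x2) / A[0, 0]
print("small pivot:", x1, x2)

# (b) Pivot on -2 (rows switched): multiplier ~ 5e-7 suppresses round-off.
m = -A[0, 0] / A[1, 0]                 # ~ 5e-7
u22 = A[0, 1] + m * A[1, 1]
c2 = b[0] + m * b[1]
x2 = c2 / u22
x1 = (b[1] - A[1, 1] * x2) / A[1, 0]
print("large pivot:", x1, x2)
```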

Section 1.5 on Inverses

1. If a is a number, then its inverse x = 1/a satisfies the equation ax = 1 = xa. Likewise, the definition of the inverse of a matrix A is a matrix X with AX = I = XA, and it is denoted A^{-1}. Here is an example:

   A = [ 1 1 ]        A^{-1} = [−1  1 ]
       [ 2 1 ]                 [ 2 −1 ]

since you can verify directly that AA^{-1} = I = A^{-1}A. We'll find a way to compute inverses shortly.

2. Any nonzero number has an inverse, but there are plenty of nonzero matrices which do not have inverses, as we will see. Some non-square matrices have one-sided inverses - for example, a matrix X with AX = I but where XA does not equal the identity (such an X is called a right-inverse of A). Here is an example:

   A = [ 1 1 2 ]        X = [−1  1 ]
       [ 2 1 0 ]            [ 2 −1 ]
                            [ 0  0 ]

You can verify directly that AX = I (the 2 × 2 identity), but XA does not equal I (the 3 × 3 identity).
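You can also let the computer do this verification; a short Python sketch:

```python
import numpy as np

A = np.array([[1, 1, 2], [2, 1, 0]])       # 2 x 3
X = np.array([[-1, 1], [2, -1], [0, 0]])   # 3 x 2

print(A @ X)   # the 2 x 2 identity, so X is a right-inverse of A
print(X @ A)   # a 3 x 3 matrix that is not the identity
```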

3. If A^{-1} exists, then it is unique; that is, you cannot have more than one inverse to a matrix (see Lemma 1.19).

4. The inverse of the inverse of a matrix is the matrix itself, i.e. (A^{-1})^{-1} = A (see Lemma 1.20).

5. The inverse of AB is the product of the inverses in reverse order, i.e. (AB)^{-1} = B^{-1}A^{-1} (see Lemma 1.21).

6. As with usual arithmetic of numbers, (A + B)^{-1} is not A^{-1} + B^{-1}.

7. How to Compute Inverses. As indicated in Section 1.5, to find an inverse of the n × n matrix A, we augment this matrix by attaching the n × n identity matrix, I:

   A|I = [ a_{1,1} ... a_{1,n} | 1 ... 0 ]
         [    ⋮     ⋱     ⋮    | ⋮  ⋱  ⋮ ]
         [ a_{n,1} ... a_{n,n} | 0 ... 1 ]

and then apply elementary row operations of type 1 (add a multiple of one row to another), type 2 (switch any two rows), or type 3 (multiply any row by a number) until A becomes the identity. The key here is to apply the same elementary operations to the identity part (I) as you do to A. After reducing A to the identity, the result will be I|X, where X is some n × n matrix. We claim that X is the inverse of A. To see this, recall that each row operation (of type 1-3) is the same as multiplying by an elementary matrix. So the process of reducing A to I is equivalent to multiplying A and I by a sequence of elementary matrices:

   E_k ... E_1 (A|I) = (E_k ... E_1 A | E_k ... E_1 I) = (I|X)

Since (E_k ... E_1)A is the identity matrix, the product X = E_k ... E_1, which appears on the right side, is the inverse of A. This process is illustrated by Example 1.23.
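Here is a compact Python sketch of the procedure (the function name invert is mine; it adds partial pivoting, i.e. a type 2 row switch onto the largest available pivot, in the spirit of Section 1.4):

```python
import numpy as np

def invert(A):
    # Form [A | I] and row-reduce the left half to I; the same
    # operations turn the right half into the inverse of A.
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for j in range(n):
        p = j + np.argmax(np.abs(M[j:, j]))   # choose a pivot row
        if M[p, j] == 0:
            raise ValueError("matrix is singular")
        M[[j, p]] = M[[p, j]]                 # type 2: row switch
        M[j] /= M[j, j]                       # type 3: scale pivot to 1
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]        # type 1: clear column j
    return M[:, n:]

A = np.array([[1, 1], [2, 1]])
print(invert(A))    # [[-1.  1.], [ 2. -1.]], as in the example above
```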

8. (Prop. 1.27) If L is lower triangular, with nonzero entries on the diagonal, then the inverse of L is also lower triangular. This is because

in the above described process for finding the inverse, only elementary matrices of types 1 and 3 are needed (no row switches - type 2 - will be needed). This means that only elementary matrices which are lower triangular are used in the process of reducing L to the identity. Since the product of lower triangular matrices is a lower triangular matrix, the inverse X = E_k ... E_1 is lower triangular. A similar statement can be made for the inverse of an upper triangular matrix.

9. Here is an example of a nonzero matrix without any inverse:

   A = [ 1  2 ]
       [−2 −4 ]

Row reducing A|I leads to

   (A|I) = [ 1  2 | 1 0 ]  ↦  [ 1 2 | 1 0 ]
           [−2 −4 | 0 1 ]     [ 0 0 | 2 1 ]

and you cannot proceed any further. You can also show explicitly that there is no matrix X with AX = I (try writing out the 4 equations associated to AX = I and show they are contradictory).

10. If A^{-1} exists, then the system of equations Ax = b can be solved by multiplying each side by the inverse of A, since

   Ax = b  ↦  A^{-1}Ax = A^{-1}b  ↦  x = A^{-1}b

This is the same strategy used in simple algebra with numbers, i.e. the equation ax = b is equivalent to x = a^{-1}b. Solving systems of equations by computing inverses is an important theoretical device; in practice, however, using row operations and back substitution as in Section 1.3 is more efficient.

11. LDV factorization. Suppose A is regular. Recall this means that no row switches are necessary when reducing the equation Ax = b (or when finding the inverse of A). In this case, it is possible to find a special lower triangular matrix L, a special upper triangular matrix V and a diagonal matrix D (the only nonzero entries of D are on the diagonal) with

A = LDV. This is because we can row reduce A to an upper triangular matrix U by a sequence of elementary row operations of type 1 only (see Section 1.3). Thus, there is a lower triangular matrix E and an upper triangular matrix U with EA = U. Let D equal the diagonal part of U. This allows us to write U = DV, where V is special upper triangular. Now let L = E^{-1}, which is lower triangular (since E is), and we have A = LU = LDV, as desired. If A is nonsingular, then some row switches may be necessary in this process, and so the above result would then read that PA = LDV for some permutation matrix, P.

Section 1.6 on Transposes and Symmetric Matrices

1. The transpose of a matrix is the result of turning its rows into columns. The transpose of A is denoted by A^T. Here is an example:

   A = [ 1  2  4 ]        A^T = [ 1 −2 ]
       [−2  3  7 ]              [ 2  3 ]
                                [ 4  7 ]

Note that the (i,j)-th entry of A^T is the (j,i)-th entry of A (i.e. (A^T)_{i,j} = a_{j,i}).

2. Here are some simple rules: (A + B)^T = A^T + B^T; (A^T)^T = A; and (AB)^T = B^T A^T. Note the similarity of the latter two rules and their analogues with inverses. The proofs of these rules are in the exercises. To get started with the third rule (HW exercise 1.6.4), note that the (i,j)-th entry of AB is Σ_k a_{i,k} b_{k,j}; then take the transpose (switch i and j) and compare with the (i,j)-th entry of B^T A^T. As a final rule: (A^T)^{-1} = (A^{-1})^T (Lemma 1.32).

3. Note that the transpose of a lower triangular matrix is an upper triangular matrix and vice-versa.

4. Symmetric Matrices. A symmetric matrix is a square matrix A

where A^T = A. Here is an example:

   A = [ 1  3  7 ]
       [ 3 −2  4 ]
       [ 7  4  0 ]

Note that all diagonal matrices are symmetric.

5. The LDV representation of a regular matrix takes a special form when the matrix is symmetric. Suppose A is symmetric; then there is a special lower triangular matrix L and a diagonal matrix D with A = LDL^T. One way to see this is to recall the technique of reducing A to an upper triangular matrix U = DV, as in (11) above, by multiplying A by an appropriate lower triangular matrix E: EA = U = DV. Since A is symmetric, the part of A above the diagonal is a mirror image of the part of A below the diagonal. Therefore, multiplying A on the right by E^T reduces the part of A that lies above the diagonal in the same way that multiplying A on the left by E reduces the part of A below the diagonal. Therefore EAE^T = D. Letting L = E^{-1}, we have A = E^{-1}D(E^{-1})^T = LDL^T, as desired. Example 1.35 illustrates this idea.
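The factorization can be computed by a short recurrence. Here is a Python sketch (my own loop, following the standard LDL^T recurrence; it assumes the matrix is regular, so every pivot d_j it divides by is nonzero), applied to the symmetric example above:

```python
import numpy as np

def ldlt(A):
    # A = L D L^T with L special lower triangular and D diagonal.
    n = A.shape[0]
    L, d = np.eye(n), np.zeros(n)
    for j in range(n):
        d[j] = A[j, j] - (L[j, :j] ** 2) @ d[:j]
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    return L, np.diag(d)

A = np.array([[1.0, 3, 7], [3, -2, 4], [7, 4, 0]])
L, D = ldlt(A)
print(np.allclose(L @ D @ L.T, A))   # True
print(np.diag(D))                    # the diagonal pivots
```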

Section 1.7 on Practical Linear Algebra

1. The main point of this section is to count the number of operations (multiplications and additions) that are required to do certain matrix manipulations. For example, to multiply two n × n matrices requires about n³ multiplications. Why? The (i,j)-th entry of the product AB is Σ_{k=1}^{n} a_{i,k} b_{k,j}, which requires n multiplications. Since there are n² entries in AB, each requiring n multiplications, the total number of multiplications required to compute AB is about n³.

2. Similarly, the number of multiplications needed to reduce A to upper triangular form can be shown to be about n³/3; to back substitute is about n²/2; and to find an inverse of a matrix is about n³.

3. Thus, to compute a solution to Ax = b by multiplying both sides by the inverse of A is about 3 times as much work (if n is large) as row reducing A to upper triangular form and then back substituting.
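You can observe the difference directly. A rough timing sketch in Python (NumPy's solve uses an LU factorization internally, while inv computes the full inverse; the actual ratio you see depends on your LAPACK build and machine):

```python
import time
import numpy as np

rng = np.random.default_rng(0)
n = 2000
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

t0 = time.perf_counter()
x1 = np.linalg.solve(A, b)    # elimination + substitution, ~ n^3/3 work
t1 = time.perf_counter()
x2 = np.linalg.inv(A) @ b     # full inverse first, ~ n^3 work
t2 = time.perf_counter()

print(f"solve: {t1 - t0:.3f} s,  inv then multiply: {t2 - t1:.3f} s")
print(np.allclose(x1, x2))    # same solution, different cost
```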

Section 1.8 on Solving General Linear Systems

1. Elementary row operations of types 1, 2 and 3 allow you to reduce an n × m matrix to row echelon form, which looks as follows:

   [ 1 ∗ ∗ ... ... ... ∗ ]
   [ 0 0 1 ∗ ... ... ∗ ]
   [ ⋮                 ]
   [ 0 0 ... 0 0 1 ∗ ... ∗ ]
   [ 0 0 ... ... ... 0 ]
   [ ⋮                 ]
   [ 0 0 ... ... ... 0 ]

Each nonzero row begins with a leading 1, each leading 1 lies strictly to the right of the leading 1 in the row above it, and any all-zero rows come last.

The only difference between this definition and the one in the book (Def 1.38) is that the above definition has 1’s for leading nonzero terms in each row whereas the book just requires the leading terms to be nonzero. The book’s version of row echelon form can be transformed to the above definition just by dividing each row by the leading nonzero term (an elementary operation of type 3).

2. With a square nonsingular matrix, the row echelon form looks simpler, with ones down the main diagonal:

   [ 1 ∗ ∗ ... ∗ ]
   [ 0 1 ∗ ... ∗ ]
   [ ⋮     ⋱   ⋮ ]
   [ 0 0 ... 1 ∗ ]
   [ 0 0 ... 0 1 ]

If the matrix is regular, then it can be put in this form without row switches (type 1 and 3 operations only). If it is nonsingular, then row switches (type 2) may also be necessary.

3. Each leading 1 appearing in the row echelon form is called a pivot. Each variable corresponding to a pivot is called a basic variable. Each of the other variables, without pivots, is called a free variable. The number of pivots is called the rank of the matrix. Note that the rank of a nonsingular n × n matrix is equal to n and that there are no free variables.

4. When solving a system of equations Ax = b, where A is nonsingular, we reduce it to row echelon form Ux = c, which when written out becomes

   [ 1 ∗ ... ... ∗ | c_1     ]
   [ 0 1 ∗ ...  ∗ | c_2     ]
   [ ⋮      ⋱   ⋮ |  ⋮      ]
   [ 0 ... 0 1  ∗ | c_{n−1} ]
   [ 0 0 ... 0  1 | c_n     ]

The last equation reads x_n = c_n, and the other variables can be solved for x_{n−1}, x_{n−2}, ..., x_1 by back substitution as usual. The important observation is that the solution vector (x_1, x_2, ..., x_n) is unique - there can be only one solution vector, since x_n is forced to be c_n and the other values are determined from this one by back substitution. In particular, note that if b = 0, then row operations do not change this value, so c = 0; hence x_n = 0 and all the other x_i = 0 as well. That is, the only solution to Ax = 0 is x = 0.

5. If A is not square, or the rank of a square matrix is less than maximal, then the solution to Ax = b may not exist, and if it exists, it may not be unique. Here is a simple example of a 2 × 3 matrix in row echelon form:

   [ 1 2 −1 | 4 ]
   [ 0 0  1 | 3 ]

The last row represents the equation x_3 = 3; substituting this into the first row gives the equation x_1 + 2x_2 − 3 = 4, or x_1 + 2x_2 = 7. Note that this equation does not have a unique solution. The variable x_2 can be given any value, and then x_1 = 7 − 2x_2. Note that x_2 is a free variable, since it does not have a pivot (a leading 1 in the row echelon form). The name free means that it can be given any value in the process of solving equations.

6. In general, the free variables can be given any value in the process of solving equations. The pivot variables can then be solved in terms of the free variables.

7. A system of equations Ax = b may not have a solution. Here is an

example in row echelon form:

   [ 1 2 −1 | 4 ]
   [ 0 0  1 | 3 ]
   [ 0 0  0 | 2 ]

The last equation reads 0x_1 + 0x_2 + 0x_3 = 2. Since the left side is always zero, this equation cannot have a solution.

8. Homogeneous systems of equations are those where the right side of the equation is zero, i.e. Ax = 0. In this case, x = 0 is always a solution. In view of the above discussion, x = 0 is the only solution if the matrix is square and the rank is maximal - equal to n for an n × n matrix. If the rank of a square matrix is less than n, or more generally, if there are any free variables in its row echelon form, then x = 0 is not the only solution to Ax = 0. Any nonzero solution is called a nontrivial solution. Here is an example in row echelon form:

   [ 1 2 −1 | 0 ]
   [ 0 0  1 | 0 ]

The last row reads x_3 = 0 (no choice here); but the first equation then reads x_1 + 2x_2 + 0 = 0, or x_1 = −2x_2. We can let x_2 = c be any value, and then x_1 = −2c. The entire solution vector can be described as (−2c, c, 0), where c can be any value. Any nonzero value of c leads to a nontrivial solution.

Section 1.9 on Determinants

1. The determinant of a 2 × 2 matrix is defined as follows:

   det [ a b ] = ad − bc
       [ c d ]

If a is nonzero, then an elementary operation of type 1 reduces this matrix to:

   [ a      b     ]
   [ 0  (ad−bc)/a ]

Observe that the rank of this matrix is 2 (i.e. the matrix is nonsingular) precisely when the determinant ad − bc is not zero. Also notice that

this reduced matrix has the same determinant as the original matrix, i.e. a type 1 operation did not change the determinant.

2. The determinant of an n × n matrix is defined by a set of four axioms given in Theorem 1.50 (some texts define the determinant differently and then prove Theorem 1.50 as a consequence - this text takes Th 1.50 as the definition of the determinant). In a nutshell, the determinant is defined as the product of the diagonal entries in the case where the matrix is upper triangular (as in the 2 × 2 case). The more general determinant is computed by reducing the matrix to upper triangular form using type 1 row operations. If row switches are necessary, then each row switch changes the determinant by a factor of −1. Example 1.53 is illustrative in computing a determinant this way.
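This description is itself an algorithm. A Python sketch (the function det is my own; it uses partial pivoting, flips the sign once per row switch, and returns the signed product of the diagonal of the reduced matrix):

```python
import numpy as np

def det(A):
    U = A.astype(float)                   # work on a copy
    n = U.shape[0]
    sign = 1.0
    for j in range(n):
        p = j + np.argmax(np.abs(U[j:, j]))
        if U[p, j] == 0:
            return 0.0                    # no pivot: determinant is zero
        if p != j:
            U[[j, p]] = U[[p, j]]         # row switch: factor of -1
            sign = -sign
        for i in range(j + 1, n):
            U[i, j:] -= (U[i, j] / U[j, j]) * U[j, j:]   # type 1 op
    return sign * float(np.prod(np.diag(U)))

A = np.array([[3, -1, 1], [1, 2, -1], [-1, 4, -1]])
print(det(A), np.linalg.det(A))   # both ~ 10.0
```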

3. If a row of zeros appears at any stage during the row reduction process, then the determinant of the matrix is zero.

4. Proposition 1.54. det(AB) = det A det B. The proof of this is outlined in homework exercise 1.9.12. Note that there is no version of this for sums, that is - the determinant of A + B is usually different from det A + det B. Can you find an example to illustrate this?

5. Note that if A is n × n and is nonsingular, then A^{-1} exists and 1 = det I = det(AA^{-1}) = det A det A^{-1}; so

   det A^{-1} = 1 / det A

Conversely, if det A is nonzero, then its row echelon form has n pivots and no free variables (otherwise the zeros on the diagonal would make its determinant equal to zero). Therefore, a matrix with nonzero determinant is nonsingular (has rank equal to n).

6. Proposition 1.56. det A = det A^T, i.e. the transpose has no effect on the determinant. To see this, first row-reduce A to an upper triangular matrix U. This can be represented by a series of elementary row operations:

   E_n ... E_1 A = U    or    A = E_1^{-1} ... E_n^{-1} U

with the determinant of each E_i being either +1 or −1, depending on whether it is type 1 or type 2; and so det A = ± det U. Note that

   A^T = U^T (E_n^{-1})^T ... (E_1^{-1})^T

Since U^T is lower triangular, its determinant is equal to the product of its diagonal entries (why?), which are the same as those of U. So det U = det U^T. Also, the determinant of each E_i is the same as the determinant of its transpose (why?). So, det A = det A^T.
