Elementary Row Operations and Matrix Factorization
Let A =
    [ 1  3 ]
    [ 2  5 ]
Consider the result of the elementary row operation −2R1 + R2 → R2:

    [ 1  3 ]   −2R1 + R2 → R2   [ 1   3 ]
    [ 2  5 ]                    [ 0  −1 ]

Now consider the multiplication:

    [  1  0 ] [ 1  3 ]   [ 1   3 ]
    [ −2  1 ] [ 2  5 ] = [ 0  −1 ]

Multiplying A on the left by the matrix
    [  1  0 ]
    [ −2  1 ]
produces exactly the same effect as performing the elementary row operation −2R1 + R2 → R2.

Now a 3 by 3 example:

    [ −4   3  −1 ]            [  1  −1   3 ]                  [  1  −1   3 ]
    [ −2   2  −5 ]  R1 ↔ R3   [ −2   2  −5 ]  4R1 + R3 → R3   [ −2   2  −5 ]
    [  1  −1   3 ]            [ −4   3  −1 ]                  [  0  −1  11 ]

    [ 0  0  1 ] [ −4   3  −1 ]   [  1  −1   3 ]
    [ 0  1  0 ] [ −2   2  −5 ] = [ −2   2  −5 ]   (same effect as R1 ↔ R3)
    [ 1  0  0 ] [  1  −1   3 ]   [ −4   3  −1 ]

    [ 1  0  0 ] [  1  −1   3 ]   [  1  −1   3 ]
    [ 0  1  0 ] [ −2   2  −5 ] = [ −2   2  −5 ]   (same effect as 4R1 + R3 → R3)
    [ 4  0  1 ] [ −4   3  −1 ]   [  0  −1  11 ]

Observations: How am I constructing those multiplying matrices so that they have the same effect as performing the row operations?

Matrices that represent row operations (i.e., they're one row operation away from being the identity matrix) are called elementary matrices. They've got several useful properties, some of which we'll use later.

• First of all, they're easy to construct.
  – Write an elementary matrix that would operate on a 4 by 4 matrix and perform the row operation −5R2 + R4 → R4.
  – Write an elementary matrix that would operate on a 3 by 3 matrix and perform the row operation 4R3 → R3.
• They're easy to invert. Write, without doing any matrix computations, the inverses of the matrices above. Hint: think about how you'd reverse the operation.
• This gives us another way to see why performing Gaussian elimination produces an inverse. Suppose you start with a matrix A (and assume up front that A is invertible; if A were a coefficient matrix, the system would have a unique solution). You perform elementary row operations on A until it's in reduced row echelon form. What does the rref of A look like? Now, use elementary matrices to represent the row ops:
• Having that big chain of matrices above doesn't look particularly pleasant, except for one thing... elementary matrices are easy to multiply.

An upper triangular matrix U is one where all the entries below the diagonal are zero: u_{i,j} = 0 if i > j. A lower triangular matrix L is one where all the entries above the diagonal are zero: l_{i,j} = 0 if j > i. Any elementary matrix (other than a row swap) will be either upper or lower triangular.

Prove that the product of two upper triangular matrices will also be upper triangular:

That's a useful result to remember for the future. What it gives us here is that if you've got a couple of triangular matrices that need to be multiplied, it takes only half the storage and half the work, since the lower (or upper) parts of all the matrices involved are all zeros.

Multiplying elementary matrices in particular is even easier than that, though. Multiply

    [  1  0  0 ] [ 1  0  0 ]
    [  0  1  0 ] [ 3  1  0 ]
    [ −4  0  1 ] [ 0  0  1 ]

What do you notice?

Elementary matrices can also help us do matrix factorization: factoring a given matrix into a product of two matrices (usually with special properties). Here's an example that illustrates the process:

Example: Factor A =
    [  1  −1 ]
    [ −3   2 ]
into the product of a lower triangular and an upper triangular matrix.

Example: Factor A =
    [  1  −1  1 ]
    [ −3   2  5 ]
    [  2   1  2 ]
into the product of a lower triangular and an upper triangular matrix.
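If you want to check the products above by machine, here is a minimal sketch (not part of the original handout) that assumes NumPy is available; the names A, E, E_inv, B, P, and F are just labels I've chosen for the matrices that appear in the worked examples.

```python
import numpy as np

# The 2-by-2 example from the notes: A = [[1, 3], [2, 5]]
A = np.array([[1, 3],
              [2, 5]])

# Elementary matrix for the row operation -2*R1 + R2 -> R2:
# start from the identity and apply that operation to it.
E = np.array([[ 1, 0],
              [-2, 1]])
print(E @ A)          # [[1, 3], [0, -1]] -- same as doing the row operation on A

# Its inverse is the matrix of the reversing operation, +2*R1 + R2 -> R2.
E_inv = np.array([[1, 0],
                  [2, 1]])
print(E_inv @ E)      # the identity matrix

# The 3-by-3 example: a row swap followed by an elimination step.
B = np.array([[-4,  3, -1],
              [-2,  2, -5],
              [ 1, -1,  3]])
P = np.array([[0, 0, 1],
              [0, 1, 0],
              [1, 0, 0]])     # R1 <-> R3
F = np.array([[1, 0, 0],
              [0, 1, 0],
              [4, 0, 1]])     # 4*R1 + R3 -> R3
print(F @ (P @ B))    # [[1, -1, 3], [-2, 2, -5], [0, -1, 11]]
```

Running it prints the same right-hand sides as the worked examples, which is a quick way to convince yourself that left-multiplication by an elementary matrix really does perform the row operation.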
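The two factorization examples are meant to be worked by hand, but here is a rough sketch (again not from the handout, and assuming NumPy) of the same idea carried out numerically: eliminate below the diagonal with row operations, record each multiplier, and check that L times U gives back A. The helper name lu_no_pivot is mine, and the routine assumes no row swaps are needed, which happens to be true for both example matrices.

```python
import numpy as np

def lu_no_pivot(A):
    """Return L (unit lower triangular) and U (upper triangular) with A = L @ U."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for j in range(n - 1):
        for i in range(j + 1, n):
            m = U[i, j] / U[j, j]     # multiplier for the operation -m*Rj + Ri -> Ri
            U[i, :] -= m * U[j, :]    # the row operation itself
            L[i, j] = m               # undoing that operation puts +m into L
    return L, U

# Check the factorization on the handout's two example matrices.
for A in (np.array([[ 1, -1],
                    [-3,  2]]),
          np.array([[ 1, -1, 1],
                    [-3,  2, 5],
                    [ 2,  1, 2]])):
    L, U = lu_no_pivot(A)
    print(L)
    print(U)
    print(np.allclose(L @ U, A))      # True: L times U reproduces A
```

Because each multiplier m comes from a row operation of the form −mRj + Ri → Ri, undoing it contributes +m to L, which is exactly the "easy to invert" observation from earlier in the notes.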