Numerical Linear Algebra
Revised February 15, 2010

4.1 The LU Decomposition

Elementary Matrices and badgauss

In the previous chapters we talked a bit about solving systems of the form Lx = b and Ux = b, where L is lower triangular and U is upper triangular. In the exercises for Chapter 2 you were asked to write a program x=lusolver(L,U,b) which solves LUx = b using forsub and backsub. We now address the problem of representing a matrix A as a product of a lower triangular L and an upper triangular U. Recall our old friend badgauss:

    function B=badgauss(A)
    m=size(A,1);
    B=A;
    for i=1:m-1
        for j=i+1:m
            a=-B(j,i)/B(i,i);
            B(j,:)=a*B(i,:)+B(j,:);
        end
    end

The heart of badgauss is the elementary row operation of type 3:

    B(j,:)=a*B(i,:)+B(j,:);   where   a=-B(j,i)/B(i,i);

Note also that the index j is greater than i, since the loop is for j=i+1:m. As we know from linear algebra, an elementary row operation of type 3 can be viewed as a matrix multiplication EA, where E is an elementary matrix of type 3. E looks just like the identity matrix, except that E(j,i) = a, where j, i and a are as in the MATLAB code above. In particular, E is a lower triangular matrix; moreover, the entries on its diagonal are 1's. We call such a matrix unit lower triangular.

Theorem.
1. The product of (unit) lower triangular matrices is (unit) lower triangular.
2. The inverse of a (unit) lower triangular matrix is (unit) lower triangular.

From badgauss we can conclude the following:

    L_k ... L_1 A = U

where L_1, ..., L_k is a sequence of unit lower triangular matrices (in fact elementary matrices of type 3). The product of these is unit lower triangular, and so is its inverse; thus, letting L = (L_k ... L_1)^(-1), we get A = LU.

There is an easier and more efficient way to get L: it turns out that L(j,i) = -a, where a is given in the badgauss code above.

Row Switching

One necessity of row switching is already visible in badgauss: it avoids division by 0.
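To make the relation L(j,i) = -a concrete, here is a NumPy sketch of the badgauss elimination that also records the negated multipliers (the name lu_nopivot and the 3 x 3 test matrix are ours, not from the text; like badgauss, it assumes no zero pivot appears):

```python
import numpy as np

def lu_nopivot(A):
    """Gaussian elimination without pivoting, recording the
    multipliers: each step does B(j,:) = a*B(i,:) + B(j,:) with
    a = -B(j,i)/B(i,i), and stores L[j,i] = -a."""
    m = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(m)
    for i in range(m - 1):
        for j in range(i + 1, m):
            a = -U[j, i] / U[i, i]
            U[j, :] = a * U[i, :] + U[j, :]
            L[j, i] = -a          # L(j,i) = -a, as claimed above
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_nopivot(A)
print(np.allclose(L @ U, A))       # A = LU
print(np.allclose(L, np.tril(L)))  # L is unit lower triangular
```

Note that L is assembled with no extra arithmetic at all: the multipliers computed during elimination are simply saved.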
It turns out that it is necessary to perform row interchanges for numerical reasons as well. Consider the following example of Forsythe and Moler:

    [ ulp  1 ] [ x ]   [ 1 ]
    [  1   1 ] [ y ] = [ 2 ]

where ulp is the unit roundoff (MATLAB's eps). The exact answer is

    x = 1/(1 - ulp),   y = (1 - 2*ulp)/(1 - ulp)

Rounding gives the respectable answer x = 1 and y = 1. See if MATLAB agrees with this answer. Now suppose that we run badgauss on [A b]; then we get

    [ ulp     1        1    ]
    [  0   -1/ulp   -1/ulp  ]

which gives the answer x = 0 and y = 1 by back substitution. What is the relative error of this answer? It is

    ||(0,1) - (1,1)||_2 / ||(1,1)||_2 = 1/sqrt(2)

Now switch the first and second rows of [A b] and run badgauss again. You should get

    [ 1  1  2 ]
    [ 0  1  1 ]

which gives x = 1 and y = 1 by back substitution, an excellent answer.

An approach to correcting this type of error is the partial pivoting strategy. Look at the badgauss code above. On the ith iteration of the Gaussian elimination, instead of automatically pivoting on B(i,i), choose k >= i where |B(k,i)| = max(|B(i:m,i)|) and pivot on B(k,i) instead (this will be B(i,i) after a row switch).

We are faced with the prospect of row interchanges, i.e. multiplication by elementary matrices of type 1. This will kill the possibility of L being lower triangular. The way out of this is to keep track of all the row switches. A permutation matrix is a product of row switching matrices. Permutation matrices are easily identifiable: each row and each column has exactly one 1, and the other entries in the row or column are 0. The PLU Decomposition combines the row switching with the LU Decomposition.

Theorem (The PLU Decomposition). If A is an n x n matrix, then there is a permutation matrix P, a unit lower triangular matrix L and an upper triangular matrix U such that

    PA = LU

MATLAB has a function which computes the LU Decomposition:

    A=ones(5)+diag(1:5)
    [L,U]=lu(A), A-L*U

The lower triangular matrix L obtained always has 1's on the main diagonal.
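The pivoting effect above is easy to reproduce. The following NumPy sketch solves the 2 x 2 Forsythe-Moler system with and without a row switch; solve2 is a hypothetical helper that eliminates and back-substitutes as badgauss plus backsub would, and we take u = 1e-20 rather than eps so the rounding failure is clearly visible in double precision:

```python
import numpy as np

def solve2(A, b):
    """Eliminate blindly on A[0,0], then back-substitute."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    a = -A[1, 0] / A[0, 0]
    A[1, :] += a * A[0, :]
    b[1] += a * b[0]
    y = b[1] / A[1, 1]
    x = (b[0] - A[0, 1] * y) / A[0, 0]
    return np.array([x, y])

u = 1e-20                      # a tiny pivot, standing in for ulp
A = np.array([[u, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])

print(solve2(A, b))              # roughly [0, 1]: badly wrong
print(solve2(A[::-1], b[::-1]))  # rows switched: [1, 1], excellent
```

Without the switch, 1 - 1/u and 2 - 1/u both round to -1/u, so the computed y absorbs all the information and x collapses to 0; switching rows makes the pivot large and the answer accurate.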
This permits a storage method which holds U on and above the diagonal and L below the diagonal. The call B=lu(A) will return a matrix B which stores L and U in this manner. The call [L,U]=lu(A) will return two matrices, with U an upper triangular matrix, though L will not necessarily be lower triangular: its rows need to be permuted. But it is the case that A=L*U.

    B=lu(A)
    triu(B)-U
    (tril(B,-1)+eye(5))-L

But notice that L may not be lower triangular:

    A=magic(5);
    [L,U]=lu(A)

By permuting the rows of L we can make it into a lower triangular matrix. MATLAB will do a PLU Decomposition with

    [L,U,P]=lu(A)
    P*A-L*U

We solve a system of equations using the PLU Decomposition in basically the same way as with an LU Decomposition. Suppose that PA = LU and Ax = b is to be solved; then A = P^(-1)LU. Now solve Ly = Pb and Ux = y. Then

    Ax = P^(-1)LUx = P^(-1)Ly = P^(-1)Pb = b.

4.2 The QR Decomposition

The QR Decomposition factors an m x n matrix A as A = QR, where Q is orthogonal and R is upper triangular. If the matrix A has linearly independent columns, then m >= n, and using the Gram-Schmidt process we can find an m x n matrix Q with orthonormal columns using B=grmsch(A) and an n x n upper triangular matrix R. In general, given any A, we can find, using either Householder reflections or Givens rotations, an m x m orthogonal matrix Q and an m x n upper triangular matrix R. We will do both versions of the QR decomposition.

Gram-Schmidt QR

The Gram-Schmidt process is the following: given vectors v_1, ..., v_n which are linearly independent,

    u_1 = v_1/||v_1||
    for i = 2:n
        w = v_i
        for j = 1:i-1
            w = w - (v_i . u_j) u_j
        end
        u_i = w/||w||
    end

We have deliberately written Gram-Schmidt in this pseudo-code rather than MATLAB to highlight the dot product in the inner loop.
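The pseudo-code above translates almost line for line into NumPy; the following sketch (gs_qr is our name, and it assumes the columns of A are linearly independent) also accumulates the upper triangular R with A = QR:

```python
import numpy as np

def gs_qr(A):
    """Classical Gram-Schmidt QR: orthonormalize the columns of A,
    storing the dot products (v_i . u_j) and the norms ||w|| in R."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for i in range(n):
        w = A[:, i].copy()
        for j in range(i):
            R[j, i] = A[:, i] @ Q[:, j]   # (v_i . u_j)
            w -= R[j, i] * Q[:, j]
        R[i, i] = np.linalg.norm(w)       # ||w||
        Q[:, i] = w / R[i, i]
    return Q, R

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q, R = gs_qr(A)
print(np.allclose(Q @ R, A))            # A = QR
print(np.allclose(Q.T @ Q, np.eye(2)))  # orthonormal columns
```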
Now if we solve for v_i in this we see that

    v_i = ||w|| u_i + (v_i . u_1) u_1 + ... + (v_i . u_{i-1}) u_{i-1}

Or in matrix form, if we let Q = [u_1, ..., u_n] and define

    R(i,j) = u_i . v_j   if j > i
    R(i,j) = ||w||       if j = i   (w as computed at step i)
    R(i,j) = 0           if j < i

then it is easy to check that A = QR. Note that if we know Q, then it is easy to find R by multiplying both sides by Q^T to get R = Q^T A. This would, however, be an inefficient way to compute R.

Using the Gram-Schmidt QR we can solve the Least Squares problem for Ax = b by looking at the normal equations A^T A x = A^T b:

    A^T A x = R^T Q^T Q R x = R^T R x = R^T Q^T b

And clearly the solution to Rx = Q^T b will solve this equation. Use backsub to solve Rx = Q^T b.

Modified Gram-Schmidt

The Gram-Schmidt algorithm is unstable for certain matrices (see the Project), but a rather minor rearrangement of the computation will greatly improve the accuracy. The Modified Gram-Schmidt method is the following, again assuming v_1, ..., v_n are linearly independent:

    for i = 1:n-1
        u_i = v_i/||v_i||
        for j = i+1:n
            v_j = v_j - (v_j . u_i) u_i
        end
    end
    u_n = v_n/||v_n||

The result of the modified Gram-Schmidt method is again an m x n matrix Q with orthonormal columns. The upper triangular matrix R comes from the basic projection relation

    w = v_i - sum_{k=1}^{i-1} (v_i . u_k) u_k,    u_i = w/||w||

which under the Modified Gram-Schmidt arrangement becomes

    v_i = sum_{k=1}^{i-1} (v_i . u_k) u_k + ||v_i|| u_i

Thus R(i,j) = v_j . u_i for j > i (with v_j the updated vector at that step), while R(i,i) = ||v_i|| (taken at the step where v_i is normalized).

Givens QR

A Givens rotation is arrived at by starting with an identity matrix G = I and assigning

    G(i,i) = c,   G(i,j) = s,   G(j,i) = -s,   G(j,j) = c

where c^2 + s^2 = 1. Call this matrix givrot(n,i,j,c,s). It produces a rotation in the ij-plane.

Theorem. G = givrot(n,i,j,c,s) is an orthogonal matrix.

The method is based on the fact that a Givens rotation affects only entries in rows i and j.

Theorem. For any vector v and integers i < j there is a Givens rotation G such that, for w = Gv, w(j) = 0 and w(k) = v(k) for all k != i, j.
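Returning for a moment to Modified Gram-Schmidt: the rearrangement above can be sketched in NumPy as follows (mgs_qr is our name; the nearly dependent test matrix, an assumption of ours, illustrates the stability claim, losing orthogonality only at the level of the conditioning rather than catastrophically):

```python
import numpy as np

def mgs_qr(A):
    """Modified Gram-Schmidt QR: subtract each projection from the
    *updated* later columns, so R(i,j) = v_j . u_i uses the current
    v_j and R(i,i) = ||v_i|| at normalization time."""
    m, n = A.shape
    V = A.astype(float).copy()
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for i in range(n):
        R[i, i] = np.linalg.norm(V[:, i])   # R(i,i) = ||v_i||
        Q[:, i] = V[:, i] / R[i, i]
        for j in range(i + 1, n):
            R[i, j] = V[:, j] @ Q[:, i]     # R(i,j) = v_j . u_i
            V[:, j] -= R[i, j] * Q[:, i]
    return Q, R

# Nearly dependent columns: a stress test for orthogonality.
d = 1e-8
A = np.array([[1.0, 1.0, 1.0],
              [d,   0.0, 0.0],
              [0.0, d,   0.0],
              [0.0, 0.0, d  ]])
Q, R = mgs_qr(A)
print(np.allclose(Q @ R, A))                # A = QR
print(np.linalg.norm(Q.T @ Q - np.eye(3)))  # small: near-orthonormal
```

The only change from classical Gram-Schmidt is the order of the subtractions, yet the computed Q stays far closer to orthogonal on ill-conditioned inputs.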
Proof: Since Gv changes only the i and j entries, we get w(k) = v(k) for k != i, j. To get w(j) = 0 we simply need to solve these equations for c and s:

    -s v(i) + c v(j) = 0,    c^2 + s^2 = 1.

Let

    s = v(j)/sqrt(v(i)^2 + v(j)^2)   and   c = v(i)/sqrt(v(i)^2 + v(j)^2).

Suppose that we have v=rand(5,1) and we want to zero out the entry in the third position using the entry in the second position.
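In NumPy this looks as follows (0-based indexing, so MATLAB's second and third entries are indices 1 and 2; givrot mirrors the matrix described above):

```python
import numpy as np

def givrot(n, i, j, c, s):
    """Givens rotation: the n x n identity with G(i,i)=c, G(i,j)=s,
    G(j,i)=-s, G(j,j)=c (0-based indices here)."""
    G = np.eye(n)
    G[i, i] = c
    G[i, j] = s
    G[j, i] = -s
    G[j, j] = c
    return G

rng = np.random.default_rng(0)
v = rng.random(5)              # the analogue of v=rand(5,1)

# Zero the third entry using the second: MATLAB's (i,j) = (2,3).
i, j = 1, 2
r = np.hypot(v[i], v[j])       # sqrt(v(i)^2 + v(j)^2)
c, s = v[i] / r, v[j] / r
w = givrot(5, i, j, c, s) @ v

print(np.isclose(w[j], 0.0))   # the j-th entry is zeroed
print(np.isclose(w[i], r))     # the i-th entry becomes r
```

Note that w(i) = c v(i) + s v(j) = sqrt(v(i)^2 + v(j)^2), and every entry other than i and j is untouched, exactly as the theorem states.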