ECE 3040 Lecture 16: Systems of Linear Equations II © Prof. Mohamad Hassoun
This lecture covers the following topics:

- Introduction
- LU factorization and the solution of linear algebraic equations
- Matlab's lu function and the left-division operator "\"
- More on Matlab's left-division operator
- Iterative methods: Jacobi and Gauss-Seidel algorithms

Introduction

In the previous lecture, Gauss elimination with pivoting was used to solve a system of linear algebraic equations of the form Ax = d. This method becomes inefficient for solving problems where the coefficient matrix A is constant but the right-hand-side vector d changes. Recall that Gauss elimination involves a forward-elimination step (which dominates the computation time) that couples the vector d and the matrix A through the augmented matrix [A d]. This means that every time d changes, the elimination step must be repeated from scratch. The LU factorization-based solution method separates (decouples) the time-consuming elimination of the matrix A from the manipulations of the vector d. Thus, once the matrix A has been factored as the product of two square triangular matrices, A = LU, multiple right-hand-side vectors d (as they become available) can be processed in an efficient manner.

Note: If multiple right-hand-side vectors d_1, d_2, ..., d_m are available simultaneously, then we can still apply Gauss elimination efficiently to the augmented system [A d_1 d_2 ... d_m] and obtain [A' d'_1 d'_2 ... d'_m]. The solution vectors x_1, x_2, ..., x_m can then be determined, respectively, by back substitution using the systems A'x_1 = d'_1, A'x_2 = d'_2, ..., A'x_m = d'_m. The complication arises when the d_i vectors are not available simultaneously. In that case, the elimination step would have to be repeated m times, which renders the Gauss elimination method impractical for solving large systems of equations.
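The single-elimination-pass idea in the note above can be sketched in code. The following is a minimal Python illustration (the lecture itself uses Matlab; the function name `gauss_multi` and the no-pivoting simplification are ours, chosen for clarity):

```python
# Gauss elimination shared by several right-hand sides: one forward-elimination
# pass on the augmented matrix [A d1 ... dm], then back substitution per column.
# A sketch without pivoting, assuming nonzero pivots throughout.

def gauss_multi(A, ds):
    n = len(A)
    m = len(ds)
    # build the augmented matrix [A d1 ... dm]
    aug = [A[i][:] + [d[i] for d in ds] for i in range(n)]
    for k in range(n):                      # forward elimination (done once)
        for i in range(k + 1, n):
            f = aug[i][k] / aug[k][k]
            for j in range(k, n + m):
                aug[i][j] -= f * aug[k][j]
    xs = []
    for c in range(m):                      # back substitution for each d'
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            s = sum(aug[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (aug[i][n + c] - s) / aug[i][i]
        xs.append(x)
    return xs
```

The expensive elimination loop runs once regardless of how many d vectors are supplied; only the cheap back-substitution loop repeats.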
LU Factorization and the Solution of Linear Algebraic Equations

The LU factorization of an n×n matrix A requires pivoting (i.e., row permutations), just as was the case for Gauss elimination. If matrix A is nonsingular (i.e., |A| ≠ 0), then it can be shown that PA = LU, where P is a permutation matrix, L is a lower triangular matrix with 1's on the diagonal, and U is an upper triangular matrix. For simplicity, the factorization method will be described for a 3×3 matrix that requires no pivoting (i.e., P = I, so that A = LU), where

$$L = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix}, \qquad U = \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}$$

The coefficients l_ij and u_ij can be computed from the a_ij coefficients as follows. Set A equal to the product LU (note that, in general, matrix multiplication is not commutative: LU ≠ UL):

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}$$

or,

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ l_{21}u_{11} & l_{21}u_{12} + u_{22} & l_{21}u_{13} + u_{23} \\ l_{31}u_{11} & l_{31}u_{12} + l_{32}u_{22} & l_{31}u_{13} + l_{32}u_{23} + u_{33} \end{bmatrix}$$

By matching coefficients in the above matrix equation we obtain the following formulas:

$$u_{11} = a_{11}, \quad u_{12} = a_{12}, \quad u_{13} = a_{13}$$

$$l_{21} = \frac{a_{21}}{u_{11}} = \frac{a_{21}}{a_{11}}$$

$$u_{22} = a_{22} - l_{21}u_{12} = a_{22} - \frac{a_{21}a_{12}}{a_{11}}$$

$$u_{23} = a_{23} - l_{21}u_{13} = a_{23} - \frac{a_{21}a_{13}}{a_{11}}$$

$$l_{31} = \frac{a_{31}}{u_{11}} = \frac{a_{31}}{a_{11}}$$

$$l_{32} = \frac{a_{32} - l_{31}u_{12}}{u_{22}} = \frac{a_{32} - \frac{a_{31}a_{12}}{a_{11}}}{a_{22} - \frac{a_{21}a_{12}}{a_{11}}} = \frac{a_{11}a_{32} - a_{31}a_{12}}{a_{11}a_{22} - a_{21}a_{12}}$$

$$u_{33} = a_{33} - l_{31}u_{13} - l_{32}u_{23} = a_{33} - \frac{a_{31}a_{13}}{a_{11}} - \frac{(a_{11}a_{32} - a_{31}a_{12})(a_{11}a_{23} - a_{21}a_{13})}{a_{11}(a_{11}a_{22} - a_{21}a_{12})}$$

Regrouping the above results gives

$$u_{11} = a_{11}, \quad u_{12} = a_{12}, \quad u_{13} = a_{13}$$

$$u_{22} = a_{22} - \frac{a_{21}a_{12}}{a_{11}}, \qquad u_{23} = a_{23} - \frac{a_{21}a_{13}}{a_{11}}$$

$$u_{33} = a_{33} - \frac{a_{31}a_{13}}{a_{11}} - \frac{(a_{11}a_{32} - a_{31}a_{12})(a_{11}a_{23} - a_{21}a_{13})}{a_{11}(a_{11}a_{22} - a_{21}a_{12})}$$

$$l_{21} = \frac{a_{21}}{a_{11}}, \qquad l_{31} = \frac{a_{31}}{a_{11}}, \qquad l_{32} = \frac{a_{11}a_{32} - a_{31}a_{12}}{a_{11}a_{22} - a_{21}a_{12}}$$

Notice that a_11 ≠ 0 and a_11 a_22 − a_21 a_12 ≠ 0 are required for the decomposition to exist. However, if A is nonsingular and we allow row permutations, these conditions can be relaxed.
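The coefficient-matching procedure above generalizes to any n (this scheme, with 1's on the diagonal of L, is known as Doolittle's method). A minimal Python sketch follows, for illustration only (the lecture works in Matlab, and the function name `lu_factor` is ours); it assumes no row permutations are needed:

```python
# Doolittle LU factorization without pivoting: L is unit lower triangular,
# U is upper triangular, and A = L U. Assumes nonzero pivots (no row swaps).

def lu_factor(A):
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # Row i of U: u_ij = a_ij - sum_{k<i} l_ik * u_kj
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        # Column i of L: l_ji = (a_ji - sum_{k<i} l_jk * u_ki) / u_ii
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U
```

For n = 3 these loops reduce exactly to the closed-form expressions derived above.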
Your turn: Derive the above formulas for a 4×4 matrix, assuming that row permutations are not necessary.

Example. Perform LU factorization for the following matrix.

$$A = \begin{bmatrix} 1/4 & 1/2 & 1 \\ 1 & 1 & 1 \\ 16 & 4 & 1 \end{bmatrix}$$

$$l_{21} = \frac{a_{21}}{a_{11}} = \frac{1}{1/4} = 4$$

$$l_{31} = \frac{a_{31}}{a_{11}} = \frac{16}{1/4} = 64$$

$$l_{32} = \frac{a_{11}a_{32} - a_{31}a_{12}}{a_{11}a_{22} - a_{21}a_{12}} = \frac{1 - 8}{\frac{1}{4} - \frac{1}{2}} = +28$$

Therefore the lower triangular matrix L is

$$L = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 64 & 28 & 1 \end{bmatrix}$$

And,

$$u_{11} = a_{11} = \frac{1}{4}, \quad u_{12} = a_{12} = \frac{1}{2}, \quad u_{13} = a_{13} = 1$$

$$u_{22} = a_{22} - \frac{a_{21}a_{12}}{a_{11}} = 1 - \frac{(1)\left(\frac{1}{2}\right)}{\frac{1}{4}} = -1$$

$$u_{23} = a_{23} - \frac{a_{21}a_{13}}{a_{11}} = 1 - \frac{(1)(1)}{\frac{1}{4}} = -3$$

$$u_{33} = a_{33} - \frac{a_{31}a_{13}}{a_{11}} - \frac{(a_{11}a_{32} - a_{31}a_{12})(a_{11}a_{23} - a_{21}a_{13})}{a_{11}(a_{11}a_{22} - a_{21}a_{12})} = 1 - \frac{(16)(1)}{\frac{1}{4}} - \frac{(1 - 8)\left(\frac{1}{4} - 1\right)}{\frac{1}{4}\left(\frac{1}{4} - \frac{1}{2}\right)} = 1 - 64 + 84 = 21$$

leading to the upper triangular matrix,

$$U = \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix} = \begin{bmatrix} 1/4 & 1/2 & 1 \\ 0 & -1 & -3 \\ 0 & 0 & 21 \end{bmatrix}$$

Verification: multiplying L by U reproduces the original matrix A.

Now, let us go back to the system Ax = d. We may rewrite it as LUx = d, or L(Ux) = d. Define the intermediate vector z as z = Ux. Then we may write the original system as Lz = d, or explicitly as

$$\begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \begin{bmatrix} d_1 \\ d_2 \\ d_3 \end{bmatrix}$$

The above formulation can be solved by forward substitution,

$$z_1 = d_1, \qquad z_2 = d_2 - l_{21}z_1, \qquad z_3 = d_3 - l_{31}z_1 - l_{32}z_2$$

Once z is determined, we can solve for x using back substitution with Ux = z, or explicitly,

$$\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix}$$

The solution for x, assuming nonzero diagonal elements (u_ii ≠ 0), is

$$x_3 = \frac{z_3}{u_{33}}, \qquad x_2 = \frac{z_2 - u_{23}x_3}{u_{22}}, \qquad x_1 = \frac{z_1 - u_{12}x_2 - u_{13}x_3}{u_{11}}$$

Returning to our original 3×3 matrix A and its associated LU factorization, and assuming a right-hand-side vector d = [2 1 8]^T, we can then apply the forward-substitution step to the system

$$\begin{bmatrix} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 64 & 28 & 1 \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \\ 8 \end{bmatrix}$$

to obtain the intermediate vector z,

$$z_1 = d_1 = 2$$

$$z_2 = d_2 - l_{21}z_1 = 1 - 4(2) = -7$$

$$z_3 = d_3 - l_{31}z_1 - l_{32}z_2 = 8 - 64(2) - 28(-7) = 76$$

leading to the intermediate solution z = [2 −7 76]^T.
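The forward-substitution step just carried out by hand can be sketched as follows (Python used here for illustration; the lecture itself works in Matlab, and the function name `forward_substitute` is ours):

```python
# Forward substitution for L z = d, where L is unit lower triangular
# (1's on the diagonal, so no division is needed).

def forward_substitute(L, d):
    n = len(d)
    z = [0.0] * n
    for i in range(n):
        # z_i = d_i - sum_{j<i} l_ij * z_j
        z[i] = d[i] - sum(L[i][j] * z[j] for j in range(i))
    return z

# The worked example: L from the factorization above, d = [2 1 8]^T
L = [[1.0, 0.0, 0.0], [4.0, 1.0, 0.0], [64.0, 28.0, 1.0]]
print(forward_substitute(L, [2.0, 1.0, 8.0]))  # → [2.0, -7.0, 76.0]
```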
Finally, the back-substitution step is applied to the system

$$\begin{bmatrix} 1/4 & 1/2 & 1 \\ 0 & -1 & -3 \\ 0 & 0 & 21 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \begin{bmatrix} 2 \\ -7 \\ 76 \end{bmatrix}$$

and returns the solution x,

$$x_3 = \frac{z_3}{u_{33}} = \frac{76}{21} \cong 3.6190$$

$$x_2 = \frac{z_2 - u_{23}x_3}{u_{22}} = \frac{-7 - (-3)\left(\frac{76}{21}\right)}{-1} = -\frac{27}{7} \cong -3.8571$$

$$x_1 = \frac{z_1 - u_{12}x_2 - u_{13}x_3}{u_{11}} = \frac{2 - \left(\frac{1}{2}\right)\left(-\frac{27}{7}\right) - (1)\left(\frac{76}{21}\right)}{\frac{1}{4}} = \frac{26}{21} \cong 1.2381$$

or,

$$\mathbf{x} \cong \begin{bmatrix} 1.2381 \\ -3.8571 \\ 3.6190 \end{bmatrix}$$

The forward-substitution step can be represented concisely, for a system of n equations, as

$$z_i = d_i - \sum_{j=1}^{i-1} l_{ij}z_j, \qquad i = 1, 2, \ldots, n$$

The back-substitution step can be written concisely as

$$x_n = \frac{z_n}{u_{nn}}, \qquad x_i = \frac{z_i - \sum_{j=i+1}^{n} u_{ij}x_j}{u_{ii}}, \qquad i = n-1, n-2, \ldots, 1$$

Matlab's lu Function and the Left-Division Operator "\"

Matlab has a built-in function lu that generates the LU factorization. It has the syntax [L,U,P] = lu(A). This function employs row permutations in order to pivot. These permutations are captured in the matrix P, so that PA = LU.

Example: Notice how the P matrix permutes (swaps) the first and last rows of matrix A. That happens because the last row has the largest pivot in column one. No swapping of the last two rows occurs because, after the first swap, the second-row pivot (in column two), 1.0, is larger than the one in the last row, 0.5.

The forward- and back-substitution steps are implemented by the left-division operator "\" (see the next section for more discussion of this operator). The following example illustrates the use of lu and \ to solve a system of linear equations.

Example. Use Matlab to solve the following system of equations using the LU factorization method.

$$\begin{bmatrix} 30 & -1 & -2 \\ 1 & 70 & -3 \\ 3 & -2 & 100 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 8 \\ -20 \\ 80 \end{bmatrix}$$

LU factorization step: no row swapping occurs in this case (P = I) because the first row has the largest pivot in column one, 30, and the second row has the largest pivot in column two, 70, compared to |−2| = 2 in the last row.
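The concise back-substitution formulas above can be sketched in code as well (again in Python for illustration; Matlab's "\" operator carries out this step internally, and the function name `back_substitute` is ours):

```python
# Back substitution for U x = z, where U is upper triangular with
# nonzero diagonal elements u_ii.

def back_substitute(U, z):
    n = len(z)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # x_i = (z_i - sum_{j>i} u_ij * x_j) / u_ii
        x[i] = (z[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# The worked example: U and z from the factorization above
U = [[0.25, 0.5, 1.0], [0.0, -1.0, -3.0], [0.0, 0.0, 21.0]]
x = back_substitute(U, [2.0, -7.0, 76.0])
print([round(v, 4) for v in x])  # → [1.2381, -3.8571, 3.619]
```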
Forward- and back-substitution steps: in Matlab these are carried out with the left-division operator, z = L\d followed by x = U\z. Alternatively, the two substitution steps can be computed in one instruction, x = U\(L\d) (the use of parentheses is required).

The above example had the permutation matrix P = I (the identity matrix). When P ≠ I, we must take it into account in our formulation: PAx = LUx = Pd. In this case, we must first permute the vector d by the matrix P. The solution is then obtained as x = U\[L\(Pd)]. The following example illustrates this situation.

Example. Use Matlab to solve the following system of equations using the LU factorization method.

$$\begin{bmatrix} 1/4 & 1/2 & 1 \\ 1 & 1 & 1 \\ 16 & 4 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \\ 8 \end{bmatrix}$$

Since P ≠ I, the proper way to compute the solution is x = U\(L\(P*d)).

More on Matlab's Left-Division Operator

The left-division Matlab operator was used earlier to solve a system of linear equations, Ax = d, using the syntax A\d. Matlab's left-division is a highly sophisticated algorithm. When used, Matlab examines the structure of the coefficient matrix and then implements an optimal method to obtain the solution.
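The full pipeline with a nontrivial permutation, factor with partial pivoting and then solve via the permuted right-hand side, can be sketched as follows. This is a plain-Python illustration that mirrors (but does not reproduce) the behavior of Matlab's lu and "\"; the function names `lu_pivot` and `solve` are ours:

```python
# LU factorization with partial pivoting (PA = LU), followed by the two-step
# solve x = U \ (L \ (P d)). The permutation is stored as a row-order list.

def lu_pivot(A):
    n = len(A)
    A = [row[:] for row in A]          # working copy; becomes U
    perm = list(range(n))              # row order; encodes P
    L = [[0.0] * n for _ in range(n)]
    for k in range(n):
        # pick the row with the largest pivot magnitude in column k
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        L[k], L[p] = L[p], L[k]
        perm[k], perm[p] = perm[p], perm[k]
        L[k][k] = 1.0
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            L[i][k] = m
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
    return L, A, perm                  # L, U, and the row permutation

def solve(L, U, perm, d):
    n = len(d)
    dp = [d[p] for p in perm]          # apply P to d first
    z = [0.0] * n                      # forward substitution: L z = P d
    for i in range(n):
        z[i] = dp[i] - sum(L[i][j] * z[j] for j in range(i))
    x = [0.0] * n                      # back substitution: U x = z
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# The P ≠ I example: pivoting swaps the first and last rows (16 is the
# largest entry in column one), yet the same solution is recovered.
A = [[0.25, 0.5, 1.0], [1.0, 1.0, 1.0], [16.0, 4.0, 1.0]]
L, U, perm = lu_pivot(A)
x = solve(L, U, perm, [2.0, 1.0, 8.0])
print([round(v, 4) for v in x])  # → [1.2381, -3.8571, 3.619]
```

Note that the factorization is computed once; only `solve`, which is cheap, needs to be repeated for each new right-hand side d.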