CS321 Numerical Analysis
CS321 Numerical Analysis — Lecture 5: Systems of Linear Equations
Professor Jun Zhang
Department of Computer Science, University of Kentucky, Lexington, KY 40506-0046

Systems of Linear Equations

A system of n linear equations in n unknowns has the form

    a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1
    a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2
        \vdots
    a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n = b_n

where the a_{ij} are coefficients, the x_i are unknowns, and the b_i are right-hand sides. Written in compact form, this is

    \sum_{j=1}^{n} a_{ij} x_j = b_i,  i = 1, \ldots, n

The system can also be written in matrix form as Ax = b, where

    A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}

and x = [x_1, x_2, \ldots, x_n]^T, b = [b_1, b_2, \ldots, b_n]^T.

An Upper Triangular System

An upper triangular system

    a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + \cdots + a_{1n} x_n = b_1
                 a_{22} x_2 + a_{23} x_3 + \cdots + a_{2n} x_n = b_2
                              a_{33} x_3 + \cdots + a_{3n} x_n = b_3
        \vdots
                       a_{n-1,n-1} x_{n-1} + a_{n-1,n} x_n = b_{n-1}
                                                 a_{nn} x_n = b_n

is much easier to solve: we compute

    x_n = b_n / a_{nn}

from the last equation, substitute its value into the other equations, and repeat the process:

    x_i = \frac{1}{a_{ii}} \left( b_i - \sum_{j=i+1}^{n} a_{ij} x_j \right),  i = n-1, n-2, \ldots, 1

Karl Friedrich Gauss (April 30, 1777 – February 23, 1855), German mathematician and scientist.

Gaussian Elimination

Linear systems are solved by Gaussian elimination, a repeated procedure of multiplying one row by a number and adding it to another row in order to eliminate a certain variable. At step k, for each row i below the pivot row, this amounts to

    a_{ij} \leftarrow a_{ij} - \frac{a_{ik}}{a_{kk}} a_{kj}  (k \le j \le n)
    b_i \leftarrow b_i - \frac{a_{ik}}{a_{kk}} b_k

After this step, the variable x_k is eliminated from the (k+1)-th and later equations. Gaussian elimination thus modifies the matrix into upper triangular form, with a_{ij} = 0 for all i > j.
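As a minimal sketch of the two procedures just described (the function names are illustrative, not from the lecture), the elimination update and the back-substitution formula translate directly into Python:

```python
# Naive Gaussian elimination (no pivoting): reduce A to upper triangular
# form in place, applying the same row operations to b.
def gaussian_eliminate(A, b):
    n = len(A)
    for k in range(n - 1):               # pivot row k
        for i in range(k + 1, n):        # rows below the pivot
            m = A[i][k] / A[k][k]        # multiplier a_ik / a_kk
            for j in range(k, n):
                A[i][j] -= m * A[k][j]   # a_ij <- a_ij - (a_ik/a_kk) a_kj
            b[i] -= m * b[k]             # b_i  <- b_i  - (a_ik/a_kk) b_k

# Back substitution on the resulting upper triangular system.
def back_substitute(A, b):
    n = len(A)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]      # x_i = (b_i - sum) / a_ii
    return x
```

Note the inner loop starts at j = k, so the eliminated entry a_{ik} is set to exactly zero; production codes usually skip it and store the multiplier there instead.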
The solution of the resulting upper triangular system is then easily obtained by a back substitution procedure.

[Figure: illustration of Gaussian elimination.]

Back Substitution

The upper triangular system obtained is

    a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + \cdots + a_{1n} x_n = b_1
                 a_{22} x_2 + a_{23} x_3 + \cdots + a_{2n} x_n = b_2
                              a_{33} x_3 + \cdots + a_{3n} x_n = b_3
        \vdots
                       a_{n-1,n-1} x_{n-1} + a_{n-1,n} x_n = b_{n-1}
                                                 a_{nn} x_n = b_n

We compute

    x_n = b_n / a_{nn}

from the last equation, substitute its value into the other equations, and repeat the process:

    x_i = \frac{1}{a_{ii}} \left( b_i - \sum_{j=i+1}^{n} a_{ij} x_j \right),  i = n-1, n-2, \ldots, 1

Condition Number and Error

A quantity used to measure the quality of a matrix is the condition number, defined as

    \kappa(A) = \|A\| \, \|A^{-1}\|

The condition number measures the transfer of error from the matrix A and the right-hand side vector b into the solution. If A has a large condition number, a small error in b may yield a large error in the solution x = A^{-1} b. Such a matrix is called ill-conditioned.

The error e is defined as the difference between the exact solution x and a computed solution x̃:

    e = x - \tilde{x}

Since the exact solution is generally unknown, we measure the residual

    r = b - A \tilde{x}

as an indicator of the size of the error.

[Slide: photo — "Is this BMW ill-conditioned?"]

Small Pivot

Consider the system

    \epsilon x_1 + x_2 = 1
             x_1 + x_2 = 2

for some small ε. After one step of Gaussian elimination,

    \epsilon x_1 + x_2 = 1
    (1 - 1/\epsilon) x_2 = 2 - 1/\epsilon

We have

    x_2 = \frac{2 - 1/\epsilon}{1 - 1/\epsilon},  x_1 = \frac{1 - x_2}{\epsilon}

For very small ε, the computed result will be x_2 = 1 and x_1 = 0. The correct results are

    x_1 = \frac{1}{1 - \epsilon} \approx 1,  x_2 = \frac{1 - 2\epsilon}{1 - \epsilon} \approx 1

Scaled Partial Pivoting

We need to choose as the pivot an element which is large relative to the other elements of its row. Let L = (l_1, l_2, \ldots, l_n) be an index array of integers. We first compute an array of scale factors S = (s_1, s_2, \ldots, s_n), where

    s_i = \max_{1 \le j \le n} |a_{ij}|  (1 \le i \le n)

The first pivot row i is chosen so that the ratio |a_{i,1}| / s_i is the greatest.
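The scale-factor computation and the first pivot choice just described can be sketched in a few lines of Python (the helper name `first_pivot_row` is hypothetical, not from the lecture):

```python
# Sketch of the first pivot choice in scaled partial pivoting:
# compute scale factors s_i = max_j |a_ij|, then pick the row
# that maximizes |a_i1| / s_i.
def first_pivot_row(A):
    n = len(A)
    s = [max(abs(a) for a in row) for row in A]     # scale factors s_i
    ratios = [abs(A[i][0]) / s[i] for i in range(n)]
    return max(range(n), key=lambda i: ratios[i])   # index of pivot row
```

For the small-pivot example above, the first row's ratio is ε while the second row's is 1, so the second row is (correctly) chosen as the pivot row.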
Suppose this index is l_1; then appropriate multiples of equation l_1 are subtracted from the other equations to eliminate x_1 from them.

Suppose initially L = (l_1, l_2, \ldots, l_n) = (1, 2, \ldots, n). If our first choice is l_j, we interchange l_j and l_1 in the index array; we do not actually interchange rows 1 and l_j, in order to avoid moving data around in memory. The remaining subsystem uses the same scale factors.

Example

Straightforward Gaussian elimination does not work well (it is not robust) on

    \epsilon x_1 + x_2 = 1
             x_1 + x_2 = 2

The scale factors are S = (1, 1). In the first step, the array of scale-factor ratios is (ε, 1), so the 2nd row is the pivot row. After eliminating x_1 from the 1st equation, we have

    (1 - \epsilon) x_2 = 1 - 2\epsilon
             x_1 + x_2 = 2

It follows that

    x_2 = \frac{1 - 2\epsilon}{1 - \epsilon},  x_1 = 2 - x_2 = \frac{1}{1 - \epsilon}

We obtain the correct results by using the scaled partial pivoting strategy.

[Slide: algorithm listing — Gaussian elimination with scaled partial pivoting.]

Long Operation Count

We count the number of multiplications and divisions, ignoring additions and subtractions, in scaled Gaussian elimination.

In the 1st step, finding a pivot costs n divisions. For each of the n - 1 eliminations, an additional n operations are needed to multiply a factor into the pivot row, costing n(n - 1) operations. The total cost of this step is therefore n^2 operations. The computation is then repeated on the remaining n - 1 equations, so the total cost of Gaussian elimination with scaled partial pivoting is

    n^2 + (n-1)^2 + \cdots + 4^2 + 3^2 + 2^2 = \frac{n(n+1)(2n+1)}{6} - 1 \approx \frac{n^3}{3}

Back substitution costs n(n - 1)/2 operations.

Tridiagonal and Banded Systems

A banded system has a coefficient matrix such that a_{ij} = 0 if |i - j| \ge w. For a tridiagonal system, w = 2:

    \begin{pmatrix}
    d_1 & c_1 &        &         &         \\
    a_1 & d_2 & c_2    &         &         \\
        & a_2 & d_3    & c_3     &         \\
        &     & \ddots & \ddots  & \ddots  \\
        &     & a_{n-2} & d_{n-1} & c_{n-1} \\
        &     &         & a_{n-1} & d_n
    \end{pmatrix}
    \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_{n-1} \\ x_n \end{pmatrix}
    =
    \begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_{n-1} \\ b_n \end{pmatrix}

The general elimination procedure reduces to

    d_i \leftarrow d_i - \frac{a_{i-1}}{d_{i-1}} c_{i-1}
    b_i \leftarrow b_i - \frac{a_{i-1}}{d_{i-1}} b_{i-1}

The array c_i is not modified.
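When no pivoting is needed, the three-array elimination above together with the corresponding back substitution is commonly called the Thomas algorithm. A minimal sketch (the function name is illustrative; d and b are overwritten in place):

```python
# Solve a tridiagonal system stored as three arrays:
# sub-diagonal a (length n-1), diagonal d (length n),
# super-diagonal c (length n-1), right-hand side b (length n).
def solve_tridiagonal(a, d, c, b):
    n = len(d)
    for i in range(1, n):                  # forward elimination
        m = a[i - 1] / d[i - 1]
        d[i] -= m * c[i - 1]               # d_i <- d_i - (a_{i-1}/d_{i-1}) c_{i-1}
        b[i] -= m * b[i - 1]               # b_i <- b_i - (a_{i-1}/d_{i-1}) b_{i-1}
    x = [0.0] * n
    x[n - 1] = b[n - 1] / d[n - 1]         # back substitution
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
    return x
```

Both phases touch each entry a constant number of times, so the whole solve is O(n) operations rather than the O(n^3) of dense elimination.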
No additional nonzero entries are created, so the matrix can be stored in three vector arrays.

Tridiagonal Systems

The back substitution is straightforward:

    x_n = b_n / d_n
    x_i = \frac{b_i - c_i x_{i+1}}{d_i},  i = n-1, \ldots, 1

No pivoting is performed; otherwise the procedure would be quite different because of fill-in (the array c would be modified).

Diagonal dominance: a matrix A = (a_{ij})_{n \times n} is diagonally dominant if

    |a_{ii}| > \sum_{j=1, j \ne i}^{n} |a_{ij}|  (1 \le i \le n)

For a diagonally dominant tridiagonal system, no pivoting is needed, i.e., no division by zero will happen. We want to show that Gaussian elimination preserves diagonal dominance, which for the tridiagonal case reads

    |d_i| > |a_{i-1}| + |c_i|

The new coefficient matrix has zero elements in the a_i's places. The new diagonal elements are determined recursively as

    \hat{d}_1 = d_1
    \hat{d}_i = d_i - \frac{a_{i-1}}{\hat{d}_{i-1}} c_{i-1}  (2 \le i \le n)

We assume that |d_i| > |a_{i-1}| + |c_i| and want to show that

    |\hat{d}_i| > |c_i|

We use induction to prove the inequality. It is obviously true for i = 1, since \hat{d}_1 = d_1 and |d_1| > |c_1|.

If we assume that |\hat{d}_{i-1}| > |c_{i-1}|, we can prove it for index i:

    |\hat{d}_i| = \left| d_i - \frac{a_{i-1}}{\hat{d}_{i-1}} c_{i-1} \right| \ge |d_i| - |a_{i-1}| \frac{|c_{i-1}|}{|\hat{d}_{i-1}|} > |d_i| - |a_{i-1}| > |c_i|

It follows that the new diagonal entries are never zero, so the Gaussian elimination procedure can be carried out without any problem.

[Slide: example of a pentadiagonal matrix. It is nearly tridiagonal: the only nonzero entries lie on the main diagonal and the first two bi-diagonals above and below it.]

LU Factorization 1

As we showed before, an n × n system of linear equations can be written in matrix form as Ax = b, where the coefficient matrix A has the form

    A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}

x is the unknown vector and b is the known right-hand side vector. We also assume that A is of full rank and that most entries of A are nonzero.

LU Factorization 2

There are two special forms of matrices.
One is called (unit) lower triangular:

    L = \begin{pmatrix}
    1      & 0      & \cdots & 0 \\
    l_{21} & 1      &        & 0 \\
    \vdots &        & \ddots &   \\
    l_{n1} & l_{n2} & \cdots & 1
    \end{pmatrix}

The other is upper triangular:

    U = \begin{pmatrix}
    u_{11} & u_{12} & \cdots & u_{1n} \\
    0      & u_{22} & \cdots & u_{2n} \\
           &        & \ddots & \vdots \\
    0      & 0      & \cdots & u_{nn}
    \end{pmatrix}

We want to find a pair of matrices L and U such that

    A = LU

Example

Take the system of linear equations

    \begin{pmatrix}
     6 &  -2 & 2 &   4 \\
    12 &  -8 & 6 &  10 \\
     3 & -13 & 9 &   3 \\
    -6 &   4 & 1 & -18
    \end{pmatrix}
    \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}
    =
    \begin{pmatrix} 16 \\ 26 \\ -19 \\ -34 \end{pmatrix}

The Gaussian elimination process finally yields the upper triangular system

    \begin{pmatrix}
    6 & -2 & 2 &  4 \\
    0 & -4 & 2 &  2 \\
    0 &  0 & 2 & -5 \\
    0 &  0 & 0 & -3
    \end{pmatrix}
    \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}
    =
    \begin{pmatrix} 16 \\ -6 \\ -9 \\ -3 \end{pmatrix}

This could be achieved by multiplying the original system by a matrix M, so that MAx = Mb. We want the matrix M to be special, so that MA is upper triangular:

    MA = U = \begin{pmatrix}
    6 & -2 & 2 &  4 \\
    0 & -4 & 2 &  2 \\
    0 &  0 & 2 & -5 \\
    0 &  0 & 0 & -3
    \end{pmatrix}

The question is: can we find such a matrix M? Look at the first step of the Gaussian elimination:

    \begin{pmatrix}
    6 &  -2 & 2 &   4 \\
    0 &  -4 & 2 &   2 \\
    0 & -12 & 8 &   1 \\
    0 &   2 & 3 & -14
    \end{pmatrix}
    \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}
    =
    \begin{pmatrix} 16 \\ -6 \\ -27 \\ -18 \end{pmatrix}

This step can be achieved by multiplying the original system by a lower triangular matrix M_1, giving M_1 A x = M_1 b. Here the lower triangular matrix M_1 is

    M_1 = \begin{pmatrix}
       1 & 0 & 0 & 0 \\
      -2 & 1 & 0 & 0 \\
    -1/2 & 0 & 1 & 0 \\
       1 & 0 & 0 & 1
    \end{pmatrix}

This matrix is nonsingular, because it is lower triangular with a main diagonal containing all 1's.
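The claimed effect of M_1 can be checked directly; a small sketch with a plain-Python matrix product, using the example's matrices with the minus signs written out explicitly:

```python
# Verify that multiplying A by the elimination matrix M1 zeroes out the
# first column below the diagonal, reproducing step one of the elimination.
def matmul(P, Q):
    rows, inner, cols = len(P), len(Q), len(Q[0])
    return [[sum(P[i][k] * Q[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

A = [[6, -2, 2, 4],
     [12, -8, 6, 10],
     [3, -13, 9, 3],
     [-6, 4, 1, -18]]

M1 = [[1, 0, 0, 0],
      [-2, 1, 0, 0],
      [-0.5, 0, 1, 0],   # -1/2 times row 1 added to row 3
      [1, 0, 0, 1]]

M1A = matmul(M1, A)      # first column is now (6, 0, 0, 0)
```

Each entry below the diagonal in M_1's first column is the negated multiplier a_{i1}/a_{11}, which is exactly why applying M_1 reproduces the first elimination step.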