
Numerical Methods in Linear Algebra, Part Two (September 25, 2017)

Numerical Methods in Linear Algebra, Part Two
Larry Caretto, Mechanical Engineering 501A: Seminar in Engineering Analysis
September 25, 2017

Outline
• Review last class
• Pivoting strategies to reduce round-off error in solutions
• Gauss-Jordan for matrix inverses
• The LU method
• Software


Review last class
• Finite representation of numbers in a binary computer gives round-off error
• Machine epsilon is a measure of the precision of the floating-point representation
  – $1 + \varepsilon = 1$ for $\varepsilon$ below "machine epsilon"
  – epsilon is usually 1.19x10^-7 for single and 2.22x10^-16 for double precision
• Round-off error growth in calculations is called error propagation

Review Error Propagation
• Addition and subtraction errors:
  $\varepsilon_{rel} = \dfrac{(x \pm y) - (\tilde{x} \pm \tilde{y})}{x \pm y} = \dfrac{\varepsilon_x \pm \varepsilon_y}{x \pm y}$
• Approximate result for both multiplication and division errors:
  $\varepsilon_{rel} = \dfrac{xy - \tilde{x}\tilde{y}}{xy} \approx \dfrac{\varepsilon_x}{\tilde{x}} + \dfrac{\varepsilon_y}{\tilde{y}}$
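The machine-epsilon values quoted above can be checked directly. A minimal sketch, using Python's built-in floats for double precision and NumPy's float32 for single precision (NumPy is assumed to be available):

```python
import numpy as np

# Halve eps until adding half of it to 1.0 no longer changes the result
eps = 1.0
while 1.0 + eps / 2.0 != 1.0:
    eps /= 2.0
print(eps)                       # about 2.22e-16 for double precision

print(np.finfo(np.float32).eps)  # about 1.19e-07 for single precision
```

The loop converges to 2^-52, the double-precision machine epsilon cited in the slide.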


Review Matrix Vector Norm
• $\|A\| = \max_{x} \dfrac{\|Ax\|}{\|x\|}$; both $Ax$ and $x$ are column vectors, so any vector norm can be used to compute $\|A\|$
• Choosing the infinity norm as the vector norm gives the row-sum norm: $\|A\|_\infty = \max_i \sum_{j=1}^{n} |a_{ij}|$
• Choosing the one norm as the vector norm gives the column-sum norm: $\|A\|_1 = \max_j \sum_{i=1}^{n} |a_{ij}|$

Review Problem Condition
• In ill-conditioned problems small relative changes in data cause large relative changes in results
• Matrix condition number $\kappa(A) = \|A\|\,\|A^{-1}\|$; values of 100 or above indicate ill-conditioning
• With the residual $r = b - A\tilde{x}$:
  $\dfrac{\|x - \tilde{x}\|}{\|x\|} \le \kappa(A)\,\dfrac{\|r\|}{\|b\|} \qquad \dfrac{\|\delta x\|}{\|x\|} \le \kappa(A)\,\dfrac{\|\delta b\|}{\|b\|} \qquad \dfrac{\|\delta x\|}{\|x\|} \le \kappa(A)\,\dfrac{\|\delta A\|}{\|A\|}$
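The row-sum and column-sum norms and the condition number can be checked numerically. A sketch using NumPy; the 3x3 matrix here is only an illustration, not from the notes:

```python
import numpy as np

A = np.array([[4.0, -2.0, 1.0],
              [1.0,  5.0, -3.0],
              [2.0, -1.0, 6.0]])

# Infinity norm: maximum absolute row sum
norm_inf = max(np.sum(np.abs(A), axis=1))

# One norm: maximum absolute column sum
norm_one = max(np.sum(np.abs(A), axis=0))

# Condition number kappa = ||A|| * ||A^-1|| in the infinity norm
kappa = norm_inf * max(np.sum(np.abs(np.linalg.inv(A)), axis=1))

print(norm_inf, norm_one)   # 9.0 10.0
```

The hand-computed `kappa` agrees with the library call `np.linalg.cond(A, np.inf)`, which uses the same definition.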

ME 501A Seminar in Engineering Analysis, Page 1

Review Gauss Elimination
• Use each row from row 1 to row n-1 as the "pivot" row
  – Work on each row below the pivot row
    • Multiply the pivot row by $a_{row,pivot}/a_{pivot,pivot}$
    • Subtract the result from row r to make $a_{row,pivot} = 0$
    • The operation requires a subtraction for each column of A to the right of the pivot column and for b
  – Repeat for each row below the pivot
• Repeat for rows 1 to n-1 as pivot rows
• Use back substitution for the x values

Review Round-off Error Example
• The same set of equations solved in the original order and in reverse order:
  $\begin{bmatrix} 0.00003 & 3 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 1.0002 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 1 & 1 \\ 0.00003 & 3 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 1 \\ 1.0002 \end{bmatrix}$
• In the original order the elimination multiplier is $1/0.00003 \approx 33333$; subtracting these large products swamps the low-order digits of the original coefficients, while in the reverse order the multiplier 0.00003 is small and no digits are lost
• The original-order solution is less accurate by about four significant figures


Review Round-off Conclusions
• Showed that the order of the equations was an important factor in round-off error
• Problems are caused by small pivot elements (the diagonal element in the pivot row)
• Found a large loss of significant figures with the original order but no error when the order was reversed
• Want to use this idea in algorithms for reducing round-off error

Zero and Small Pivots
• Gauss elimination may give zero for the pivot element in one row, but swapping equations can give a nonzero pivot
• Always check for a small number, not just zero
• Use scaled equations so that the maximum absolute element in each row is one
• Find the row with the maximum (scaled) element in the pivot column and then swap
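The scaled-pivoting strategy above can be sketched as plain Gauss elimination with scaled partial pivoting. This is a minimal pure-Python sketch (the function name is ours, and the 3x3 test system with solution (0, 2, 1) is just an illustration):

```python
def gauss_solve(A, b):
    """Gauss elimination with scaled partial pivoting, then back substitution."""
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # Scale factor for each row: its maximum absolute element
    s = [max(abs(v) for v in row) for row in A]
    for p in range(n - 1):
        # Pick the row whose scaled pivot-column element is largest, then swap
        r = max(range(p, n), key=lambda i: abs(A[i][p]) / s[i])
        A[p], A[r] = A[r], A[p]
        b[p], b[r] = b[r], b[p]
        s[p], s[r] = s[r], s[p]
        # Eliminate the pivot column from all rows below the pivot row
        for i in range(p + 1, n):
            m = A[i][p] / A[p][p]
            for j in range(p, n):
                A[i][j] -= m * A[p][j]
            b[i] -= m * b[p]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

print(gauss_solve([[2.0, -4.0, -26.0],
                   [-3.0, 2.0, 9.0],
                   [7.0, 3.0, 8.0]], [-34.0, 13.0, 14.0]))  # close to [0, 2, 1]
```

Dividing by the row scale when choosing the pivot keeps a row of uniformly large coefficients from masquerading as a good pivot, which is exactly the failure mode in the round-off example above.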

Finding Inverse Matrices
• For $B = A^{-1}$, $AB = I$ gives

$\begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} & b_{13} & \cdots & b_{1n} \\ b_{21} & b_{22} & b_{23} & \cdots & b_{2n} \\ b_{31} & b_{32} & b_{33} & \cdots & b_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b_{n1} & b_{n2} & b_{n3} & \cdots & b_{nn} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix}$

• This requires n solutions of n equations in n unknowns (one solution for each b column)

Finding Inverse Matrices II
• E.g., for the second column of B we solve

$\begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{bmatrix} \begin{bmatrix} b_{12} \\ b_{22} \\ b_{32} \\ \vdots \\ b_{n2} \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$

• We can solve for all n columns of the inverse matrix at one time


Finding Inverse Matrices III
• Start with the augmented [A, I] matrix:

$\left[\begin{array}{ccccc|ccccc} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} & 1 & 0 & 0 & \cdots & 0 \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} & 0 & 1 & 0 & \cdots & 0 \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} & 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} & 0 & 0 & 0 & \cdots & 1 \end{array}\right]$

• Gauss elimination with the diagonals set to 1 and the pivot row subtracted from all rows

Gauss-Jordan Matrix Inversion
• Gauss-Jordan subtracts the pivot row from all rows above and below the pivot to give the diagonal form shown below (here for the second column of the inverse)

$\left[\begin{array}{ccccc|c} 1 & 0 & 0 & \cdots & 0 & \beta_{12} \\ 0 & 1 & 0 & \cdots & 0 & \beta_{22} \\ 0 & 0 & 1 & \cdots & 0 & \beta_{32} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & \beta_{n2} \end{array}\right]$

• The diagonal form gives the solutions by inspection: $b_{12} = \beta_{12}$, $b_{22} = \beta_{22}$, $b_{32} = \beta_{32}$, ..., $b_{n2} = \beta_{n2}$

Gauss-Jordan Result
• Augmented matrix at the end of the process:

$\left[\begin{array}{ccccc|ccccc} 1 & 0 & 0 & \cdots & 0 & b_{11} & b_{12} & b_{13} & \cdots & b_{1n} \\ 0 & 1 & 0 & \cdots & 0 & b_{21} & b_{22} & b_{23} & \cdots & b_{2n} \\ 0 & 0 & 1 & \cdots & 0 & b_{31} & b_{32} & b_{33} & \cdots & b_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & b_{n1} & b_{n2} & b_{n3} & \cdots & b_{nn} \end{array}\right]$

• The inverse is read from the augmented (right-hand) part of the matrix

Gauss-Jordan Pseudocode

Loop over all rows as pivots (1 to N)
    Find the row with the maximum (scaled) element in the
        pivot column and swap it with the current pivot row
    Divide all elements of the pivot row by a[pivot,pivot]
    Loop over all rows (1 to N except the pivot row),
        subtracting the pivot row times a[row,pivot] from each row
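The pseudocode above can be sketched in Python. The function name is ours, and plain (unscaled) partial pivoting is used for brevity:

```python
def gauss_jordan_inverse(A):
    """Invert A by Gauss-Jordan elimination on the augmented [A | I] matrix."""
    n = len(A)
    # Build the augmented matrix [A | I]
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(A)]
    for p in range(n):
        # Swap in the row with the largest pivot-column element
        r = max(range(p, n), key=lambda i: abs(aug[i][p]))
        aug[p], aug[r] = aug[r], aug[p]
        # Divide the pivot row by its pivot element
        piv = aug[p][p]
        aug[p] = [v / piv for v in aug[p]]
        # Subtract the pivot row from all other rows, above and below
        for i in range(n):
            if i != p:
                m = aug[i][p]
                aug[i] = [v - m * w for v, w in zip(aug[i], aug[p])]
    # The inverse is the right-hand half of the augmented matrix
    return [row[n:] for row in aug]

print(gauss_jordan_inverse([[4.0, 3.0],
                            [6.0, 3.0]]))  # close to [[-0.5, 0.5], [1.0, -0.667]]
```

Because the pivot row is subtracted from rows above as well as below, the left half finishes as the identity and no back substitution is needed.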


Gauss-Jordan Example
• Solve the set of equations
  $2x_1 - 4x_2 - 26x_3 = -34$ (i)
  $-3x_1 + 2x_2 + 9x_3 = 13$ (ii)
  $7x_1 + 3x_2 + 8x_3 = 14$ (iii)
• Divide equation (i) by $a_{11} = 2$:
  $1x_1 - 2x_2 - 13x_3 = -17$ (i)
• Subtract -3 times (i) from equation (ii) and 7 times (i) from (iii) to replace (ii) and (iii)

Gauss-Jordan Example II
• The operations are
  $[-3 - (-3)(1)]x_1 + [2 - (-3)(-2)]x_2 + [9 - (-3)(-13)]x_3 = 13 - (-3)(-17)$
  $[7 - 7(1)]x_1 + [3 - 7(-2)]x_2 + [8 - 7(-13)]x_3 = 14 - 7(-17)$
• Result of the first set of operations:
  $1x_1 - 2x_2 - 13x_3 = -17$ (i)
  $0x_1 - 4x_2 - 30x_3 = -38$ (ii)
  $0x_1 + 17x_2 + 99x_3 = 133$ (iii)
• Divide equation (ii) by $a_{22} = -4$:
  $0x_1 + 1x_2 + 7.5x_3 = 9.5$ (ii)


Gauss-Jordan Example III
• Subtract -2 times (ii) from equation (i) and 17 times (ii) from (iii) to replace (i) and (iii):
  $[1 - (-2)(0)]x_1 + [-2 - (-2)(1)]x_2 + [-13 - (-2)(7.5)]x_3 = -17 - (-2)(9.5)$
  $[0 - 17(0)]x_1 + [17 - 17(1)]x_2 + [99 - 17(7.5)]x_3 = 133 - 17(9.5)$
• Result of the second set of operations:
  $1x_1 + 0x_2 + 2x_3 = 2$ (i)
  $0x_1 + 1x_2 + 7.5x_3 = 9.5$ (ii)
  $0x_1 + 0x_2 - 28.5x_3 = -28.5$ (iii)

Gauss-Jordan Example IV
• Divide equation (iii) by $a_{33} = -28.5$:
  $0x_1 + 0x_2 + 1x_3 = 1$ (iii)
• Subtract 2 times (iii) from equation (i) and 7.5 times (iii) from (ii) to replace (i) and (ii):
  $[1 - 2(0)]x_1 + [0 - 2(0)]x_2 + [2 - 2(1)]x_3 = 2 - 2(1)$
  $[0 - 7.5(0)]x_1 + [1 - 7.5(0)]x_2 + [7.5 - 7.5(1)]x_3 = 9.5 - 7.5(1)$


Gauss-Jordan Example V
• Results after using row (iii) as the pivot row:
  $1x_1 + 0x_2 + 0x_3 = 0$ (i)
  $0x_1 + 1x_2 + 0x_3 = 2$ (ii)
  $0x_1 + 0x_2 + 1x_3 = 1$ (iii)
• The solutions are seen to be $x_1 = 0$, $x_2 = 2$, and $x_3 = 1$; no back substitution is required
• Not generally used for solving a single system because more computations are required than for standard Gauss elimination

Gauss-Jordan Inverse Example
• Use the previous matrix to get the inverse:

$[A, I] = \left[\begin{array}{ccc|ccc} 2 & -4 & -26 & 1 & 0 & 0 \\ -3 & 2 & 9 & 0 & 1 & 0 \\ 7 & 3 & 8 & 0 & 0 & 1 \end{array}\right]$

• After the first row as pivot row:

$\left[\begin{array}{ccc|ccc} 1 & -2 & -13 & 0.5 & 0 & 0 \\ 0 & -4 & -30 & 1.5 & 1 & 0 \\ 0 & 17 & 99 & -3.5 & 0 & 1 \end{array}\right]$

Gauss-Jordan Inverse Example II
• After the second row as pivot row:

$\left[\begin{array}{ccc|ccc} 1 & 0 & 2 & -0.25 & -0.5 & 0 \\ 0 & 1 & 7.5 & -0.375 & -0.25 & 0 \\ 0 & 0 & -28.5 & 2.875 & 4.25 & 1 \end{array}\right]$

• The final step (row 3 as pivot) gives $[I, A^{-1}]$:

$\left[\begin{array}{ccc|ccc} 1 & 0 & 0 & -0.0482456 & -0.201754 & 0.0701754 \\ 0 & 1 & 0 & 0.381579 & 0.868421 & 0.263158 \\ 0 & 0 & 1 & -0.100877 & -0.149123 & -0.0350877 \end{array}\right]$

LU Methods
• Based on factoring the original A matrix into a product LU = A
  – L is lower triangular
  – U is upper triangular
• There are different ways to do this (Doolittle, Crout and Cholesky)
• We can get L and U without knowing b
• Useful when several b solutions (for the same A) are required at different times



Crout Algorithm
• The L and U elements are stored in the space used for the A elements
  – Here the lower triangular matrix, L, has values of 1 on the principal diagonal, which do not have to be stored (this unit-lower-diagonal variant is usually called the Doolittle form; the Crout form puts the 1s on the diagonal of U)
  – All upper triangular matrix elements, $u_{ij}$, including the diagonal elements, are stored in place of the upper a elements
  – This storage pattern for the L and U matrices is shown on the next slide

A = LU

$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{bmatrix} = LU = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ l_{21} & 1 & 0 & \cdots & 0 \\ l_{31} & l_{32} & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & l_{n3} & \cdots & 1 \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ 0 & u_{22} & u_{23} & \cdots & u_{2n} \\ 0 & 0 & u_{33} & \cdots & u_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & u_{nn} \end{bmatrix}$


How do we get L and U?
• Conventional matrix multiplication gives $a_{ij} = \sum_{k=1}^{n} l_{ik} u_{kj}$
• The L and U structure gives effective upper limits on the sums that are less than n
• With $l_{11} = 1$: $a_{1j} = \sum_{k=1}^{n} l_{1k} u_{kj} = (1)u_{1j} + (0)u_{2j} + (0)u_{3j} + \cdots + (0)u_{nj} = u_{1j}$
• Similarly $a_{i1} = \sum_{k=1}^{n} l_{ik} u_{k1} = l_{i1}u_{11} + l_{i2}(0) + l_{i3}(0) + \cdots = l_{i1}u_{11}$
• These two equations give $u_{1j} = a_{1j}$ and $l_{i1} = a_{i1}/u_{11}$ (the first u row and l column)

General L and U Equations
• For row m, $a_{mj} = \sum_{k=1}^{m-1} l_{mk} u_{kj} + l_{mm} u_{mj}$ (the terms from k = m+1 to n are zero)
• Use the following equations (with $l_{mm} = 1$) to compute the components of the L and U matrices:
  $u_{mj} = a_{mj} - \sum_{k=1}^{m-1} l_{mk} u_{kj} \qquad j = m, \ldots, n$
  $l_{im} = \dfrac{a_{im} - \sum_{k=1}^{m-1} l_{ik} u_{km}}{u_{mm}} \qquad i = m+1, \ldots, n$
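The two equations above map directly onto code. A minimal sketch in pure Python (our function name), with no pivoting and the unit diagonal of L implied, using the compact storage of the previous slide:

```python
def lu_decompose(A):
    """Factor A = LU with l[m][m] = 1, storing L and U together in one array."""
    n = len(A)
    F = [row[:] for row in A]   # work on a copy; becomes the combined L\U array
    for m in range(n):
        # u[m][j] = a[m][j] - sum(l[m][k] * u[k][j], k < m),  j = m..n-1
        for j in range(m, n):
            F[m][j] -= sum(F[m][k] * F[k][j] for k in range(m))
        # l[i][m] = (a[i][m] - sum(l[i][k] * u[k][m], k < m)) / u[m][m]
        for i in range(m + 1, n):
            F[i][m] = (F[i][m] - sum(F[i][k] * F[k][m] for k in range(m))) / F[m][m]
    return F
```

For the 3x3 matrix of the Gauss-Jordan example this produces $u_{33} = -28.5$, the same final pivot found there, since both methods perform the same eliminations.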

Solving Ax = b
• Define y such that y = Ux
• Then b = Ax = LUx = Ly
• So we first have to solve Ly = b for y:

$\begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ l_{21} & 1 & 0 & \cdots & 0 \\ l_{31} & l_{32} & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & l_{n3} & \cdots & 1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_n \end{bmatrix}$

Solving Ax = b Continued
• We find y from Ly = b by forward substitution: $y_1 = b_1$, $y_2 = b_2 - l_{21}y_1$, etc.
  $y_i = b_i - \sum_{k=1}^{i-1} l_{ik} y_k \qquad i = 1, 2, \ldots, n$
• The solution for $y_i$ requires only the $y_k$ with k < i
• Once y is known, we have to solve Ux = y for x


Solving Ax = b Concluded
• Solving Ux = y is just back substitution:

$\begin{bmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ 0 & u_{22} & u_{23} & \cdots & u_{2n} \\ 0 & 0 & u_{33} & \cdots & u_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & u_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{bmatrix}$

LU with Pivoting
• We can do the usual pivoting in the LU method by selecting the maximum $u_{mm}$ for division
• How do we account for the row swaps when we later get a new b vector?
• Maintain a one-dimensional integer array to keep track of the row swaps (see the notes for details)
• Library programs use such an array
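The forward and back substitution sweeps described above can be sketched as one solver. It assumes L and U share a single array F with the unit diagonal of L implied, as in the compact storage discussed earlier; the sample factor below (of the 3x3 example matrix) is illustrative:

```python
def lu_solve(F, b):
    """Solve LUx = b: forward substitution for y in Ly = b, then back substitution."""
    n = len(b)
    # Forward sweep: y[i] = b[i] - sum(l[i][k] * y[k], k < i); l[i][i] = 1
    y = b[:]
    for i in range(n):
        y[i] -= sum(F[i][k] * y[k] for k in range(i))
    # Backward sweep: x[i] = (y[i] - sum(u[i][k] * x[k], k > i)) / u[i][i]
    x = y[:]
    for i in range(n - 1, -1, -1):
        x[i] = (x[i] - sum(F[i][k] * x[k] for k in range(i + 1, n))) / F[i][i]
    return x

# Combined L\U storage: L below the diagonal (unit diagonal implied), U on and above
F = [[2.0, -4.0, -26.0],
     [-1.5, -4.0, -30.0],
     [3.5, -4.25, -28.5]]
print(lu_solve(F, [-34.0, 13.0, 14.0]))   # close to [0, 2, 1]
```

Only these two O(n^2) sweeps are repeated for each new b; the O(n^3) factorization is done once, which is the payoff the slide describes.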


Tridiagonal Systems
• Such systems have the following general form: $A_i x_{i-1} + B_i x_i + C_i x_{i+1} = D_i$
• They occur in special cases such as ordinary differential equation boundary-value problems and fitting cubic spline polynomials
• A simplified solution procedure, the Thomas Algorithm (also called the TriDiagonal Matrix Algorithm, TDMA), is really Gauss elimination for this simple system

Thomas Algorithm
• Loop over all rows from k = 1 to k = N-1; for each k value compute new B and D values
  $B_{k+1} \leftarrow B_{k+1} - A_{k+1} C_k / B_k \qquad D_{k+1} \leftarrow D_{k+1} - A_{k+1} D_k / B_k$
• Compute $x_N = D_N / B_N$
• Loop over all rows from k = N-1 down to k = 1 in reverse order; for each row k compute $x_k$ from
  $x_k = \dfrac{D_k - C_k x_{k+1}}{B_k}$
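The two sweeps above can be sketched directly. In this illustrative version the lists a, b, c, d hold the $A_i$, $B_i$, $C_i$, $D_i$ values, with a[0] and c[-1] unused:

```python
def thomas(a, b, c, d):
    """Solve the tridiagonal system a[k]x[k-1] + b[k]x[k] + c[k]x[k+1] = d[k]."""
    n = len(d)
    b = b[:]   # work on copies so the caller's diagonals are unchanged
    d = d[:]
    # Forward sweep: eliminate the subdiagonal
    for k in range(1, n):
        m = a[k] / b[k - 1]
        b[k] -= m * c[k - 1]
        d[k] -= m * d[k - 1]
    # Back substitution
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for k in range(n - 2, -1, -1):
        x[k] = (d[k] - c[k] * x[k + 1]) / b[k]
    return x

# Sample system: [[2, 1, 0], [1, 2, 1], [0, 1, 2]] x = [4, 8, 8]
print(thomas([0.0, 1.0, 1.0], [2.0, 2.0, 2.0],
             [1.0, 1.0, 0.0], [4.0, 8.0, 8.0]))   # close to [1, 2, 3]
```

Because only the three diagonals are stored and each sweep touches each row once, the cost is O(N) rather than the O(N^3) of full Gauss elimination.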

Iterative Methods
• Used for large sparse systems of simultaneous linear equations
  – May have thousands of equations, but any one equation will have only a few (five to seven) nonzero coefficients
  – Such systems are associated with the numerical solution of partial differential equations
  – Covered in the ME 501B discussion of the numerical solution of partial differential equations

QR Method
• Based on the idea of transforming the A matrix of Ax = b into A = QR, where
  – Q is an orthogonal matrix ($Q^{-1} = Q^T$), and
  – R is an upper triangular matrix
• Then $A^{-1} = (QR)^{-1} = R^{-1}Q^{-1} = R^{-1}Q^T$
• $x = A^{-1}b = R^{-1}Q^Tb$, where premultiplication by $R^{-1}$ is simple (back substitution) because R is upper triangular
• Formation of the orthogonal Q is known as the modified Gram-Schmidt process
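The modified Gram-Schmidt process mentioned above can be sketched with NumPy (the function names are ours; a library routine such as numpy.linalg.qr would normally be used instead):

```python
import numpy as np

def mgs_qr(A):
    """Factor A = QR by modified Gram-Schmidt: Q orthogonal, R upper triangular."""
    Q = np.array(A, dtype=float)
    n = Q.shape[1]
    R = np.zeros((n, n))
    for j in range(n):
        # Normalize column j to get the next orthonormal direction
        R[j, j] = np.linalg.norm(Q[:, j])
        Q[:, j] /= R[j, j]
        # Remove that direction from every remaining column (the "modified" step)
        for k in range(j + 1, n):
            R[j, k] = Q[:, j] @ Q[:, k]
            Q[:, k] -= R[j, k] * Q[:, j]
    return Q, R

def qr_solve(A, b):
    """Solve Ax = b as Rx = Q^T b (triangular solve done by NumPy here)."""
    Q, R = mgs_qr(A)
    return np.linalg.solve(R, Q.T @ np.array(b, dtype=float))
```

Removing each new direction from all remaining columns immediately, rather than projecting against the original columns, is what distinguishes modified from classical Gram-Schmidt and makes it much less sensitive to round-off.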


Singular Value Decomposition
• Based on the idea of transforming the A matrix of Ax = b into $A = USV^T$, where
  – U and V are orthogonal matrices ($U^{-1} = U^T$ and $V^{-1} = V^T$) and S is a diagonal matrix
• Singular value decomposition (SVD) and the QR method are also used in solving least-squares problems, where experimental data are used to get the best fit to a theoretical model

Software
• Excel has the array functions minverse, mmult, and mdeterm
• Matlab has many functions for matrices, including eigenvectors and eigenvalues
• The Linear Algebra PACKage (LAPACK) is available on many computers
• Visual Numerics (IMSL library)
• http://gams.nist.gov/
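As a small illustration of library use, NumPy (which wraps LAPACK routines) covers the operations discussed in these notes in a few calls; the matrix below is the 3x3 example system:

```python
import numpy as np

A = np.array([[2.0, -4.0, -26.0],
              [-3.0, 2.0, 9.0],
              [7.0, 3.0, 8.0]])
b = np.array([-34.0, 13.0, 14.0])

x = np.linalg.solve(A, b)          # LU solve with partial pivoting
Ainv = np.linalg.inv(A)            # matrix inverse
kappa = np.linalg.cond(A, np.inf)  # condition number in the infinity norm
U, s, Vt = np.linalg.svd(A)        # singular value decomposition A = U S V^T
print(x)                           # close to [0, 2, 1]
```

For one-off problems these calls (or their Matlab and Excel counterparts) are preferable to hand-written elimination code; the routines in this document are for understanding what the libraries do.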


Software Issues
• What do you want to do?
  – Solve a problem
  – Develop a general method
• To solve one problem (even several times), use Excel or Matlab
• To develop a general method, get libraries from LAPACK, GAMS, or IMSL (Visual Numerics)

Conclusions
• Solving problems in linear algebra has a strong background and literature, and much software is available
• Be sure that you have sufficient precision to avoid round-off error, and use pivoting
• Check the matrix condition number if you suspect near linear dependence
• Matlab is a versatile tool for matrix equations, eigenvalues and eigenvectors

