Solution of Linear Systems
AX = B in Matrix Form
• AX = B can be transformed into an equivalent system which may be easier to solve.
• An equivalent system has the same solution as the original system.
• The allowable operations during the transformation are the elementary row operations:
  – interchange two rows,
  – multiply a row by a nonzero constant,
  – add a multiple of one row to another row.
Gaussian Elimination for solving [A][X] = [C] consists of two steps:
1. Forward Elimination of unknowns. The goal of Forward Elimination is to transform the coefficient matrix into an upper triangular matrix:
\[
\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}
\rightarrow
\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}
\]
2. Back Substitution The goal of Back Substitution is to solve each of the equations using the upper triangular matrix.
Gaussian Elimination
Example 3.16
Forward Elimination
Linear Equations A set of n equations and n unknowns
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n &= b_2 \\
&\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \cdots + a_{nn}x_n &= b_n
\end{aligned}
\]
Forward Elimination
Transform to an Upper Triangular Matrix
Step 1: Eliminate x_1 in the 2nd equation using equation 1 as the pivot equation (pivot row). Multiply equation 1 by a_{21}/a_{11}:
\[
\left[\frac{\text{Eqn } 1}{a_{11}}\right] \times a_{21}
\]
which yields
\[
a_{21}x_1 + \frac{a_{21}}{a_{11}}a_{12}x_2 + \cdots + \frac{a_{21}}{a_{11}}a_{1n}x_n = \frac{a_{21}}{a_{11}}b_1
\]
(a_{11}: pivot element; row 1: pivot row)
Forward Elimination
Zeroing out the coefficient of x_1 in the 2nd equation: subtract this equation from the 2nd equation:
\[
\left(a_{22} - \frac{a_{21}}{a_{11}}a_{12}\right)x_2 + \cdots + \left(a_{2n} - \frac{a_{21}}{a_{11}}a_{1n}\right)x_n = b_2 - \frac{a_{21}}{a_{11}}b_1
\]
or
\[
a'_{22}x_2 + \cdots + a'_{2n}x_n = b'_2
\]
where
\[
a'_{22} = a_{22} - \frac{a_{21}}{a_{11}}a_{12}, \quad \ldots, \quad
a'_{2n} = a_{2n} - \frac{a_{21}}{a_{11}}a_{1n}, \quad
b'_2 = b_2 - \frac{a_{21}}{a_{11}}b_1
\]
Forward Elimination
Repeating this procedure for the remaining equations reduces the set of equations to:
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1 \\
a'_{22}x_2 + a'_{23}x_3 + \cdots + a'_{2n}x_n &= b'_2 \\
a'_{32}x_2 + a'_{33}x_3 + \cdots + a'_{3n}x_n &= b'_3 \\
&\;\vdots \\
a'_{n2}x_2 + a'_{n3}x_3 + \cdots + a'_{nn}x_n &= b'_n
\end{aligned}
\]
Forward Elimination
Step 2: Eliminate x_2 in the 3rd equation. This is equivalent to eliminating x_1 in the 2nd equation, now using equation 2 as the pivot equation:
\[
\text{Eqn } 3 - \left[\frac{\text{Eqn } 2}{a'_{22}}\right] \times a'_{32}
\]
Forward Elimination
Repeating this procedure for the remaining equations reduces the set of equations to:
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1 \\
a'_{22}x_2 + a'_{23}x_3 + \cdots + a'_{2n}x_n &= b'_2 \\
a''_{33}x_3 + \cdots + a''_{3n}x_n &= b''_3 \\
&\;\vdots \\
a''_{n3}x_3 + \cdots + a''_{nn}x_n &= b''_n
\end{aligned}
\]
Forward Elimination
Continue this procedure by using the third equation as the pivot equation and so on. At the end of (n-1) Forward Elimination steps, the system of equations will look like:
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1 \\
a'_{22}x_2 + a'_{23}x_3 + \cdots + a'_{2n}x_n &= b'_2 \\
a''_{33}x_3 + \cdots + a''_{3n}x_n &= b''_3 \\
&\;\vdots \\
a_{nn}^{(n-1)}x_n &= b_n^{(n-1)}
\end{aligned}
\]
Forward Elimination
At the end of the Forward Elimination steps
\[
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
 & a'_{22} & a'_{23} & \cdots & a'_{2n} \\
 & & a''_{33} & \cdots & a''_{3n} \\
 & & & \ddots & \vdots \\
 & & & & a_{nn}^{(n-1)}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b'_2 \\ b''_3 \\ \vdots \\ b_n^{(n-1)} \end{bmatrix}
\]
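The (n−1) Forward Elimination steps can be sketched in Python. This is a minimal version with no pivoting (pivoting is covered later), and the function name is illustrative:

```python
def forward_eliminate(A, b):
    """Reduce [A][X] = [b] to upper triangular form in place.

    Naive sketch: assumes every pivot A[k][k] is non-zero
    (no row switching).
    """
    n = len(A)
    for k in range(n - 1):              # pivot row k
        for i in range(k + 1, n):       # rows below the pivot
            m = A[i][k] / A[k][k]       # multiplier a_ik / a_kk
            for j in range(k, n):
                A[i][j] -= m * A[k][j]  # zero out column k of row i
            b[i] -= m * b[k]
    return A, b

# The 3x3 example shown earlier:
A = [[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]]
b = [106.8, 177.2, 279.2]
U, c = forward_eliminate(A, b)
# U reproduces the upper triangular matrix [25 5 1; 0 -4.8 -1.56; 0 0 0.7]
```

Running it on the 3×3 example reproduces, up to round-off, the upper triangular matrix shown at the start of this section.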
Back Substitution
The goal of Back Substitution is to solve each of the equations using the upper triangular matrix.
\[
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}
\]
Example of a system of 3 equations
Back Substitution
Start with the last equation because it has only one unknown
\[
x_n = \frac{b_n^{(n-1)}}{a_{nn}^{(n-1)}}
\]
Then solve the second-to-last equation (the (n−1)th) using the value of x_n found previously; this gives x_{n−1}.
Back Substitution
Back Substitution for all equations can be represented by the formula:
\[
x_i = \frac{b_i^{(i-1)} - \sum_{j=i+1}^{n} a_{ij}^{(i-1)} x_j}{a_{ii}^{(i-1)}}
\quad \text{for } i = n-1, n-2, \ldots, 1
\]
and
\[
x_n = \frac{b_n^{(n-1)}}{a_{nn}^{(n-1)}}
\]
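The Back Substitution formula translates directly to code. A minimal sketch, assuming the system is already upper triangular with non-zero diagonal entries:

```python
def back_substitute(U, b):
    """Solve an upper triangular system [U][X] = [b].

    Implements x_n = b_n / u_nn, then
    x_i = (b_i - sum_{j>i} u_ij * x_j) / u_ii for i = n-1, ..., 1.
    """
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):      # last equation first
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

# Upper triangular system from the forward-elimination example
# (right-hand side in full precision):
U = [[25.0, 5.0, 1.0], [0.0, -4.8, -1.56], [0.0, 0.0, 0.7]]
b = [106.8, -96.208, 0.76]
x = back_substitute(U, b)
```

The loop runs from the last equation up to the first, exactly as the formula prescribes.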
Potential Pitfalls
• Division by zero: may occur during the forward elimination steps when a pivot element is zero.
• Round-off error: the method is prone to round-off error, which propagates through the elimination steps.
Improvements
Increase the number of significant digits
• Decreases round-off error
• Does not avoid division by zero

Gaussian Elimination with Pivoting
• Avoids division by zero
• Reduces round-off error
Division by zero
Trivial pivoting
Round-off error
Pivoting to reduce error
• Partial pivoting • Scaled partial pivoting
Partial Pivoting
Gaussian Elimination with partial pivoting applies row switching to normal Gaussian Elimination. How? At the beginning of the kth step of forward elimination, find the maximum of
\[
|a_{kk}|, \; |a_{k+1,k}|, \; \ldots, \; |a_{nk}|
\]
(the absolute values of all elements in column k on or below the main diagonal)
If the maximum of these values is |a_{pk}| in the pth row, k ≤ p ≤ n, then switch rows p and k.
Partial Pivoting
What does it Mean?
Gaussian Elimination with Partial Pivoting ensures that each step of Forward Elimination is performed with the pivot element a_{kk} having the largest possible absolute value.
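The row-switching rule can be combined with forward elimination and back substitution into one routine. A sketch (the function name is illustrative, and it uses plain double-precision arithmetic rather than the five-digit chopping of the example that follows):

```python
def gauss_partial_pivot(A, b):
    """Gaussian elimination with partial pivoting, then back substitution.

    At step k, the row with the largest |a_ik| (i >= k) is switched
    into the pivot position before eliminating column k.
    """
    n = len(A)
    for k in range(n - 1):
        # find row p with the largest |a_pk| on or below the diagonal
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:                       # switch rows p and k
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# The worked example from the next slides:
x = gauss_partial_pivot([[10.0, -7.0, 0.0], [-3.0, 2.099, 6.0],
                         [5.0, -1.0, 5.0]], [7.0, 3.901, 6.0])
# x is close to the exact solution [0, -1, 1]
```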
Partial Pivoting: Example
Consider the system of equations
\[
\begin{aligned}
10x_1 - 7x_2 &= 7 \\
-3x_1 + 2.099x_2 + 6x_3 &= 3.901 \\
5x_1 - x_2 + 5x_3 &= 6
\end{aligned}
\]
In matrix form
\[
\begin{bmatrix} 10 & -7 & 0 \\ -3 & 2.099 & 6 \\ 5 & -1 & 5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} 7 \\ 3.901 \\ 6 \end{bmatrix}
\]
Solve using Gaussian Elimination with Partial Pivoting using five significant digits with chopping
Partial Pivoting: Example
Forward Elimination: Step 1. Examining the values in the first column: |10|, |−3|, and |5|, or 10, 3, and 5. The largest absolute value is 10, which is already in row 1, so the rules of Partial Pivoting require no row switch.
Performing Forward Elimination
\[
\begin{bmatrix} 10 & -7 & 0 \\ -3 & 2.099 & 6 \\ 5 & -1 & 5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} 7 \\ 3.901 \\ 6 \end{bmatrix}
\;\Rightarrow\;
\begin{bmatrix} 10 & -7 & 0 \\ 0 & -0.001 & 6 \\ 0 & 2.5 & 5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} 7 \\ 6.001 \\ 2.5 \end{bmatrix}
\]
Partial Pivoting: Example
Forward Elimination: Step 2. Examining the values on or below the diagonal in the second column: |−0.001| and |2.5|, or 0.001 and 2.5. The largest absolute value is 2.5, so row 2 is switched with row 3.
Performing the row swap
\[
\begin{bmatrix} 10 & -7 & 0 \\ 0 & -0.001 & 6 \\ 0 & 2.5 & 5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} 7 \\ 6.001 \\ 2.5 \end{bmatrix}
\;\Rightarrow\;
\begin{bmatrix} 10 & -7 & 0 \\ 0 & 2.5 & 5 \\ 0 & -0.001 & 6 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} 7 \\ 2.5 \\ 6.001 \end{bmatrix}
\]
Partial Pivoting: Example
Forward Elimination: Step 2
Performing the Forward Elimination results in:
\[
\begin{bmatrix} 10 & -7 & 0 \\ 0 & 2.5 & 5 \\ 0 & 0 & 6.002 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} 7 \\ 2.5 \\ 6.002 \end{bmatrix}
\]
Partial Pivoting: Example
Back Substitution. Solving the equations through back substitution:
\[
\begin{bmatrix} 10 & -7 & 0 \\ 0 & 2.5 & 5 \\ 0 & 0 & 6.002 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} 7 \\ 2.5 \\ 6.002 \end{bmatrix}
\]
\[
x_3 = \frac{6.002}{6.002} = 1
\qquad
x_2 = \frac{2.5 - 5x_3}{2.5} = -1
\qquad
x_1 = \frac{7 + 7x_2 - 0 \cdot x_3}{10} = 0
\]
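As a quick check, the exact solution of this system can be verified with NumPy. This uses full double precision, not the five-digit chopping of the hand calculation:

```python
import numpy as np

# Coefficient matrix and right-hand side of the example above
A = np.array([[10.0, -7.0, 0.0],
              [-3.0, 2.099, 6.0],
              [5.0, -1.0, 5.0]])
b = np.array([7.0, 3.901, 6.0])

x = np.linalg.solve(A, b)
# x is close to [0, -1, 1], matching the hand computation above
```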
Scaled partial pivoting
Scaled partial pivoting example
LU Decomposition (Triangular Factorization)
LU Decomposition
A non-singular matrix [A] has a triangular factorization if it can be expressed as [A] = [L][U], where [L] is a lower triangular matrix and [U] is an upper triangular matrix.
LU Decomposition
Method: [A] Decompose to [L] and [U]
\[
[A] = [L][U] =
\begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix}
\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}
\]
[U] is the same as the coefficient matrix at the end of the forward elimination step. [L] is obtained using the multipliers that were used in the forward elimination process
Example
LU Decomposition
Given [A][X] = [C], decompose [A] into [L] and [U], giving [L][U][X] = [C].
Define [Z] = [U][X].
Then solve [L][Z] = [C] for [Z],
and then solve [U][X] = [Z] for [X].
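The two triangular solves can be sketched as follows. The [L], [U], and [C] values are taken from the worked example that follows; the full-precision results differ slightly from the slide values, which are chopped to five significant digits:

```python
def forward_substitute(L, c):
    """Solve a lower triangular system [L][Z] = [C], first row down."""
    n = len(L)
    z = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * z[j] for j in range(i))
        z[i] = (c[i] - s) / L[i][i]
    return z

def back_substitute(U, z):
    """Solve an upper triangular system [U][X] = [Z], last row up."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (z[i] - s) / U[i][i]
    return x

# [L], [U], and [C] from the worked example:
L = [[1.0, 0.0, 0.0], [2.56, 1.0, 0.0], [5.76, 3.5, 1.0]]
U = [[25.0, 5.0, 1.0], [0.0, -4.8, -1.56], [0.0, 0.0, 0.7]]
c = [106.8, 177.2, 279.2]

z = forward_substitute(L, c)   # step 1: [L][Z] = [C]
x = back_substitute(U, z)      # step 2: [U][X] = [Z]
```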
LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Solve the following set of linear equations using LU Decomposition:
\[
\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}
\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}
=
\begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}
\]
Using the procedure for finding the [L] and [U] matrices:
\[
[A] = [L][U] =
\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix}
\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}
\]
LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Set [L][Z] = [C]:
\[
\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix}
\begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix}
=
\begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}
\]
Solve for [Z]:
\[
[Z] = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix}
= \begin{bmatrix} 106.8 \\ -96.21 \\ 0.735 \end{bmatrix}
\]
LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Set [U][X] = [Z]:
\[
\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}
\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}
=
\begin{bmatrix} 106.8 \\ -96.21 \\ 0.735 \end{bmatrix}
\]
Solve for [X]:
\[
\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}
= \begin{bmatrix} 0.2900 \\ 19.70 \\ 1.050 \end{bmatrix}
\]
Factorization with Pivoting
Factorization with Pivoting
• Theorem. If A is a nonsingular matrix, then there exists a permutation matrix P such that PA has an LU factorization: PA = LU.
• Theorem (PA = LU: Factorization with Pivoting). Given that A is nonsingular, the solution X of the linear system AX = B is found in four steps:
1. Construct the matrices L, U, and P.
2. Compute the column vector PB.
3. Solve LY = PB for Y using forward substitution.
4. Solve UX = Y for X using back substitution.
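Step 1 (constructing L, U, and P with partial pivoting) can be sketched in NumPy. This is a teaching sketch, not production code, and `plu` is an illustrative name:

```python
import numpy as np

def plu(A):
    """Factor PA = LU with partial pivoting.

    Returns (P, L, U) with P a permutation matrix, L unit lower
    triangular, and U upper triangular. Assumes A is nonsingular.
    """
    A = A.astype(float).copy()
    n = A.shape[0]
    P = np.eye(n)
    L = np.eye(n)
    for k in range(n - 1):
        # row with the largest |a_ik| on or below the diagonal
        p = k + int(np.argmax(np.abs(A[k:, k])))
        if p != k:                         # switch rows p and k
            A[[k, p], :] = A[[p, k], :]
            P[[k, p], :] = P[[p, k], :]
            L[[k, p], :k] = L[[p, k], :k]  # swap stored multipliers too
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]    # multiplier goes into L
            A[i, k:] -= L[i, k] * A[k, k:]
    return P, L, A

# Factor the partial-pivoting example from earlier:
A = np.array([[10.0, -7.0, 0.0], [-3.0, 2.099, 6.0], [5.0, -1.0, 5.0]])
P, L, U = plu(A)
# P @ A equals L @ U up to round-off
```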
Is LU Decomposition better or faster than Gauss Elimination?
Let’s look at computational time. n = number of equations
To decompose [A], the time is proportional to \( \frac{n^3}{3} \).
To solve [L][Z] = [C] and [U][X] = [Z], the time is proportional to \( \frac{n^2}{2} \) each.
The total computational time for LU Decomposition is proportional to
\[
\frac{n^3}{3} + 2\left(\frac{n^2}{2}\right) \quad \text{or} \quad \frac{n^3}{3} + n^2
\]
Gauss Elimination computation time is proportional to
\[
\frac{n^3}{3} + \frac{n^2}{2}
\]
How is this better?
LU Decomposition
What about a situation where the [C] vector changes? The LU decomposition of [A] is independent of the [C] vector, so it only needs to be done once. Let m = the number of times the [C] vector changes. The computational times are proportional to:
\[
\text{Gauss Elimination: } m\left(\frac{n^3}{3} + \frac{n^2}{2}\right)
\qquad
\text{LU Decomposition: } \frac{n^3}{3} + m\,n^2
\]
Consider a set of 100 equations with 50 right-hand-side vectors:
\[
\text{LU Decomposition} \propto 8.33 \times 10^5
\qquad
\text{Gauss Elimination} \propto 1.69 \times 10^7
\]
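Plugging n = 100 and m = 50 into the two proportionality expressions reproduces this comparison:

```python
# Operation-count comparison for n = 100 equations, m = 50 RHS vectors.
# These are the proportionality expressions from the text, not exact
# flop counts.
n, m = 100, 50

gauss = m * (n**3 / 3 + n**2 / 2)   # full elimination repeated m times
lu = n**3 / 3 + m * n**2            # factor once, then m triangular solves

print(f"Gauss Elimination ~ {gauss:.3g}")   # ~1.69e7
print(f"LU Decomposition  ~ {lu:.3g}")      # ~8.33e5
```

LU Decomposition wins by roughly a factor of 20 here, because the expensive n³/3 factorization is done only once.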
Simultaneous Linear Equations: Iterative Methods
Jacobi and Gauss-Seidel Method
• Algebraically solve each linear equation for x_i.
• Assume an initial guess.
• Solve for each x_i and repeat.
• Check whether the error is within a pre-specified tolerance.
Jacobi vs. Gauss-Seidel: the Jacobi method updates every x_i using only values from the previous iteration, while the Gauss-Seidel method uses each newly computed x_i immediately in the remaining updates of the same iteration.
Algorithm
A set of n equations and n unknowns:
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n &= b_2 \\
&\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \cdots + a_{nn}x_n &= b_n
\end{aligned}
\]
If the diagonal elements are non-zero, rewrite each equation solving for the corresponding unknown: solve the first equation for x_1, the second equation for x_2, and so on.
Algorithm
Rewriting each equation:
\[
x_1 = \frac{b_1 - a_{12}x_2 - a_{13}x_3 - \cdots - a_{1n}x_n}{a_{11}}
\quad \text{(from equation 1)}
\]
\[
x_2 = \frac{b_2 - a_{21}x_1 - a_{23}x_3 - \cdots - a_{2n}x_n}{a_{22}}
\quad \text{(from equation 2)}
\]
\[
\vdots
\]
\[
x_{n-1} = \frac{b_{n-1} - a_{n-1,1}x_1 - a_{n-1,2}x_2 - \cdots - a_{n-1,n-2}x_{n-2} - a_{n-1,n}x_n}{a_{n-1,n-1}}
\quad \text{(from equation } n-1\text{)}
\]
\[
x_n = \frac{b_n - a_{n1}x_1 - a_{n2}x_2 - \cdots - a_{n,n-1}x_{n-1}}{a_{nn}}
\quad \text{(from equation } n\text{)}
\]
Stopping criterion
Absolute relative approximate error:
\[
|\epsilon_a|_i = \left| \frac{x_i^{\text{new}} - x_i^{\text{old}}}{x_i^{\text{new}}} \right| \times 100
\]
The iterations are stopped when the absolute relative error is less than a prespecified tolerance for all unknowns.
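The iteration and the stopping test can be sketched as follows. This version is Gauss-Seidel (each new value is used immediately); `tol` is the pre-specified tolerance in percent, and non-zero diagonal entries are assumed:

```python
def gauss_seidel(A, b, x0, tol=0.05, max_iter=100):
    """Gauss-Seidel iteration with the absolute-relative-error stop test.

    Each x_i is updated using the most recently computed values of the
    other unknowns; iteration stops when every |(new - old)/new| * 100
    falls below tol (in percent). Assumes non-zero diagonal entries.
    """
    n = len(A)
    x = list(x0)
    for _ in range(max_iter):
        max_err = 0.0
        for i in range(n):
            old = x[i]
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
            max_err = max(max_err, abs((x[i] - old) / x[i]) * 100)
        if max_err < tol:
            break
    return x

# The example system that follows, starting from the guess [1, 0, 1]:
A = [[12.0, 3.0, -5.0], [1.0, 5.0, 3.0], [3.0, 7.0, 13.0]]
b = [1.0, 28.0, 76.0]
x = gauss_seidel(A, b, [1.0, 0.0, 1.0])
# x approaches the exact solution [1, 3, 4]
```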
Example: Given the system of equations
\[
\begin{aligned}
12x_1 + 3x_2 - 5x_3 &= 1 \\
x_1 + 5x_2 + 3x_3 &= 28 \\
3x_1 + 7x_2 + 13x_3 &= 76
\end{aligned}
\qquad
\begin{bmatrix} 12 & 3 & -5 \\ 1 & 5 & 3 \\ 3 & 7 & 13 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} 1 \\ 28 \\ 76 \end{bmatrix}
\]
with an initial guess of
\[
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}
\]
Rewriting each equation and substituting (Gauss-Seidel: the most recent values are used immediately):
\[
x_1 = \frac{1 - 3x_2 + 5x_3}{12} = \frac{1 - 3(0) + 5(1)}{12} = 0.50000
\]
\[
x_2 = \frac{28 - x_1 - 3x_3}{5} = \frac{28 - 0.50000 - 3(1)}{5} = 4.9000
\]
\[
x_3 = \frac{76 - 3x_1 - 7x_2}{13} = \frac{76 - 3(0.50000) - 7(4.9000)}{13} = 3.0923
\]
The absolute relative approximate errors, comparing the initial guess [1, 0, 1] with the values after iteration #1, [0.50000, 4.9000, 3.0923]:
\[
|\epsilon_a|_1 = \left| \frac{0.50000 - 1.0000}{0.50000} \right| \times 100 = 100.00\%
\]
\[
|\epsilon_a|_2 = \left| \frac{4.9000 - 0}{4.9000} \right| \times 100 = 100.00\%
\]
\[
|\epsilon_a|_3 = \left| \frac{3.0923 - 1.0000}{3.0923} \right| \times 100 = 67.662\%
\]
The maximum absolute relative error after the first iteration is 100%
Repeating more iterations, the following values are obtained:
Iteration   x1         |εa|1 (%)   x2        |εa|2 (%)   x3        |εa|3 (%)
    1       0.50000    100.00      4.9000    100.00      3.0923    67.662
    2       0.14679    240.62      3.7153    31.887      3.8118    18.876
    3       0.74275    80.23       3.1644    17.409      3.9708    4.0042
    4       0.94675    21.547      3.0281    4.5012      3.9971    0.65798
    5       0.99177    4.5394      3.0034    0.82240     4.0001    0.07499
    6       0.99919    0.74260     3.0001    0.11000     4.0001    0.00000
The solution obtained is
\[
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} 0.99919 \\ 3.0001 \\ 4.0001 \end{bmatrix}
\]
and the exact solution is
\[
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix}
\]
Here the iterations converge toward the exact solution. But convergence is not guaranteed: even when every step is carried out correctly, the Jacobi/Gauss-Seidel iterations can diverge for some systems, for example if the same equations are reordered so that small coefficients end up on the diagonal. Not all systems of equations will converge.
Is there a fix?
Diagonally dominant: [A] in [A] [X] = [C] is diagonally dominant if:
\[
|a_{ii}| > \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}|
\]
The absolute value of the diagonal coefficient must be greater than the sum of the absolute values of the other coefficients in that row.
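The diagonal dominance test translates directly to code (a minimal sketch):

```python
def is_diagonally_dominant(A):
    """Check |a_ii| > sum of |a_ij| (j != i) for every row of [A]."""
    n = len(A)
    return all(
        abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
        for i in range(n)
    )

# The example system that converged is indeed diagonally dominant:
A = [[12.0, 3.0, -5.0], [1.0, 5.0, 3.0], [3.0, 7.0, 13.0]]
print(is_diagonally_dominant(A))   # True: 12 > 8, 5 > 4, 13 > 10
```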