Last Time - Point Iterative Methods
$[A]\,U = v$ for the system $\nabla^2 U = g$, with the computational molecule ($B_3$ above, $B_2$, $B_0$, $B_1$ across, $B_4$ below):

$$B_0\,U_{i,j} + B_1\,U_{i+1,j} + B_2\,U_{i-1,j} + B_3\,U_{i,j+1} + B_4\,U_{i,j-1} = h^2 g + \text{B.C.'s}$$
$[A]$ partitioned into $[R] + [D] + [S]$ (below-diagonal, diagonal, above-diagonal).
$$U_{i,j}^{\,n+1} = \frac{\omega}{B_0}\left[B_1\,U_{i+1,j}^{\,n} + B_2\,U_{i-1,j}^{\,n+1} + B_3\,U_{i,j+1}^{\,n} + B_4\,U_{i,j-1}^{\,n+1} - \text{Rhs}\right] + (1-\omega)\,U_{i,j}^{\,n}$$
Jacobi ($G_J$):

$$U^{n+1} = -D^{-1}(R+S)\,U^n + D^{-1}\,v$$

Gauss-Seidel ($G_{GS}$):

$$U^{n+1} = -(R+D)^{-1}S\,U^n + (R+D)^{-1}\,v$$

S.O.R. ($G_\omega$):

$$U^{n+1} = (D+\omega R)^{-1}\left[(1-\omega)D - \omega S\right]U^n + \omega\,(D+\omega R)^{-1}\,v$$

ME 525 (Sullivan) Point Iterative Techniques Continued - Lecture 5
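As a concrete sketch of the splitting and the resulting iteration matrices, here is a hypothetical 4x4 diagonally dominant system (the matrix values are illustrative, not from the notes):

```python
import numpy as np

# Hypothetical diagonally dominant test system (not from the notes).
A = np.array([[4., -1., 0., 0.],
              [-1., 4., -1., 0.],
              [0., -1., 4., -1.],
              [0., 0., -1., 4.]])
v = np.array([1., 2., 2., 1.])

# Split [A] = [R] + [D] + [S]: below-diagonal, diagonal, above-diagonal.
D = np.diag(np.diag(A))
R = np.tril(A, k=-1)
S = np.triu(A, k=1)

# Jacobi:       U^{n+1} = -D^{-1}(R + S) U^n + D^{-1} v
G_J = -np.linalg.solve(D, R + S)
r_J = np.linalg.solve(D, v)

# Gauss-Seidel: U^{n+1} = -(R + D)^{-1} S U^n + (R + D)^{-1} v
G_GS = -np.linalg.solve(R + D, S)
r_GS = np.linalg.solve(R + D, v)

# Iterate Jacobi to (near) convergence and compare with a direct solve.
U = np.zeros(4)
for _ in range(200):
    U = G_J @ U + r_J
print(np.allclose(U, np.linalg.solve(A, v)))  # converges since rho(G_J) < 1
```

The same `G` matrices are what the spectral-radius arguments below are applied to.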
Basic rule: Type I (Dirichlet) boundary: do not use the PDE on the boundary. Type II or III boundaries: use the PDE plus the B.C. together.
The spectral radius, $\rho$, of the iteration matrix $[G]$ is the largest-magnitude eigenvalue of $[G]$; $\rho < 1$ is required for convergence.
Bare Essentials of Iterative Methods
Computational estimate for $\rho$: define $\delta^n = \|U^n - U^{n-1}\|$; then

$$\rho \approx \frac{\delta^n}{\delta^{n-1}} = \frac{\left[\sum_{i=1}^{M}\left(U_i^{\,n} - U_i^{\,n-1}\right)^2\right]^{1/2}}{\left[\sum_{i=1}^{M}\left(U_i^{\,n-1} - U_i^{\,n-2}\right)^2\right]^{1/2}}$$
Now to prove that an iteration scheme can converge, consider the following worst-case situations. Recall the definition of strict diagonal dominance:

$$|a_{ii}| > \sum_{j \neq i} |a_{ij}|$$
Expanding on the iteration handout
Define $\varepsilon^n = U^n - U$, where $U = A^{-1}v$ is the exact algebraic solution (unknown). Since

$$U^n = G\,U^{n-1} + r \quad\text{and}\quad U = G\,U + r,$$

$$\varepsilon^n = G\,U^{n-1} + r - G\,U - r = G\left(U^{n-1} - U\right),$$

or

$$\varepsilon^n = G\,\varepsilon^{n-1} = G\,G\,\varepsilon^{n-2} = \cdots = G^n\,\varepsilon^0.$$

However, we still don't know $\varepsilon^0$. But define $\Delta^n = U^n - U^{n-1}$. This incremental error can be determined for all $n$:

$$\Delta^n = G\,U^{n-1} + r - G\,U^{n-2} - r = G\left(U^{n-1} - U^{n-2}\right),$$

or

$$\Delta^n = G\,\Delta^{n-1} = \cdots = G^n\,\Delta^0.$$
Finally we can examine Residuals:
Normally $A\,U = v$, so $0 = A\,U - v$, while $A\,U^n - v \neq 0$. Define the residual

$$R^n = A\,U^n - v = A\,U^n - A\,A^{-1}v = A\left(U^n - A^{-1}v\right) = A\,\varepsilon^n$$

(remember $\varepsilon^n = U^n - U$). Then

$$R^n = A\,\varepsilon^n = A\,G\,\varepsilon^{n-1} = A\,G\,A^{-1}\,A\,\varepsilon^{n-1} = A\,G\,A^{-1}\,R^{n-1},$$

so

$$R^n = A\,G\,A^{-1}\,R^{n-1} = \cdots = A\,G^n\,A^{-1}\,R^0.$$
Therefore, we have the following error measures
$$\varepsilon^n = G\,\varepsilon^{n-1} = \cdots = G^n\,\varepsilon^0 \qquad \text{(numerical vs. algebraic)}$$

$$\Delta^n = G\,\Delta^{n-1} = \cdots = G^n\,\Delta^0 \qquad \text{(incremental errors)}$$

$$R^n = A\,G\,A^{-1}\,R^{n-1} = \cdots = A\,G^n\,A^{-1}\,R^0 \qquad \text{(residual error)}$$
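These three recursions can be verified numerically. A minimal sketch, using a hypothetical 2x2 system with the Jacobi splitting (all values illustrative, not from the notes):

```python
import numpy as np

# Verify eps^n = G^n eps^0 and R^n = A G^n A^{-1} R^0 for n = 2.
A = np.array([[4., -1.], [-1., 4.]])
v = np.array([3., 3.])
D = np.diag(np.diag(A))
G = -np.linalg.solve(D, A - D)       # Jacobi iteration matrix -D^{-1}(R+S)
r = np.linalg.solve(D, v)

U_exact = np.linalg.solve(A, v)      # exact algebraic solution (here [1, 1])
U0 = np.array([5., -2.])             # arbitrary starting guess
U1 = G @ U0 + r
U2 = G @ U1 + r

eps0, eps2 = U0 - U_exact, U2 - U_exact
assert np.allclose(eps2, G @ G @ eps0)                      # eps^n = G^n eps^0
assert np.allclose(U2 - U1, G @ (U1 - U0))                  # Delta^n = G Delta^{n-1}
R0, R2 = A @ U0 - v, A @ U2 - v
assert np.allclose(R2, A @ G @ G @ np.linalg.solve(A, R0))  # R^n = A G^n A^{-1} R^0
print("all three recursions hold")
```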
Each of these error indicators converges to zero if and only if the spectral radius, $\rho$ (the largest-magnitude eigenvalue), of the iteration matrix is less than 1. Therefore, for large $n$,

$$\|\varepsilon^n\| \approx \rho\,\|\varepsilon^{n-1}\|, \qquad \|\Delta^n\| \approx \rho\,\|\Delta^{n-1}\|, \qquad \|R^n\| \approx \rho\,\|R^{n-1}\|,$$

and one can estimate the spectral radius of the system via

$$\rho \approx \frac{\delta^n}{\delta^{n-1}} = \frac{\left[\sum_{i=1}^{M}\left(U_i^{\,n} - U_i^{\,n-1}\right)^2\right]^{1/2}}{\left[\sum_{i=1}^{M}\left(U_i^{\,n-1} - U_i^{\,n-2}\right)^2\right]^{1/2}}$$
If one measures $\rho$ this way, expect the estimate to settle toward a constant value below 1 as the iterations proceed. (Figure: estimated $\rho$ vs. iterations.)
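A minimal sketch of this estimate, using Jacobi on a hypothetical 1-D Laplace system (tridiagonal $-1, 2, -1$), for which the Jacobi spectral radius is known to be $\cos(\pi/(M+1))$; the system and sizes are illustrative, not from the notes:

```python
import numpy as np

# Estimate rho from successive displacement norms delta^n = ||U^n - U^{n-1}||_2.
M = 20
A = 2*np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)   # 1-D Laplace matrix
v = np.ones(M)

G = np.eye(M) - A/2.0          # Jacobi: G_J = I - D^{-1} A, with D = 2I here
r = v/2.0                      # D^{-1} v
U_prev, U = np.zeros(M), r.copy()

for n in range(300):
    U_next = G @ U + r
    delta_n = np.linalg.norm(U_next - U)
    delta_nm1 = np.linalg.norm(U - U_prev)
    U_prev, U = U, U_next

rho_est = delta_n / delta_nm1         # ratio of successive deltas
rho_true = np.cos(np.pi/(M + 1))      # known Jacobi spectral radius for this A
print(rho_est, rho_true)              # the estimate settles at the true value
```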
Given $A\,U = v$ where $[A]$ has strict diagonal dominance, prove convergence. Define

$$\mu_i = \frac{1}{|a_{ii}|}\sum_{j \neq i} |a_{ij}|$$

Recall $\varepsilon^{n+1} = G\,\varepsilon^n$, and for Jacobi $G_J = -D^{-1}(R+S)$, whose entries are $-a_{ij}/a_{ii}$ for $j \neq i$. Then

$$\varepsilon_i^{\,n+1} = -\frac{1}{a_{ii}}\sum_{j \neq i} a_{ij}\,\varepsilon_j^{\,n}$$

so

$$\left|\varepsilon_i^{\,n+1}\right| \le \sum_{j \neq i} \frac{|a_{ij}|}{|a_{ii}|}\left|\varepsilon_j^{\,n}\right| \le \mu_i\left|\varepsilon^n\right|_{max}, \quad\text{where } \left|\varepsilon^n\right|_{max} = \max_j \left|\varepsilon_j^{\,n}\right|.$$

Worst case:

$$\left|\varepsilon^{n+1}\right|_{max} \le \mu_{max}\left|\varepsilon^n\right|_{max}$$

so $\mu_{max} < 1$ is sufficient for convergence.

Note: for the elliptic equation $\mu_{max} = 1$, so this argument shows only that Jacobi will not diverge.
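The worst-case contraction can be observed directly. A sketch with a hypothetical strictly diagonally dominant 3x3 system (chosen so that $\mu_{max} = 1/2$ and the exact solution is $[1,1,1]$; not from the notes):

```python
import numpy as np

# Check mu_max = max_i sum_{j != i} |a_ij| / |a_ii| < 1 and watch the
# max-norm error contract by at least mu_max on every Jacobi sweep.
A = np.array([[5., 1., 1.],
              [2., 6., 1.],
              [1., 2., 7.]])
v = np.array([7., 9., 10.])

mu = (np.abs(A).sum(axis=1) - np.abs(np.diag(A))) / np.abs(np.diag(A))
mu_max = mu.max()                 # 0.5 here: strictly diagonally dominant

U_exact = np.linalg.solve(A, v)   # = [1, 1, 1]
U = np.zeros(3)
for n in range(60):
    err_old = np.abs(U - U_exact).max()
    U = (v - A @ U + np.diag(A) * U) / np.diag(A)   # one Jacobi sweep
    err_new = np.abs(U - U_exact).max()
    assert err_new <= mu_max * err_old + 1e-12      # worst-case bound holds
print(U)   # converged to [1, 1, 1]
```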
Examine Gauss-Seidel:

$$\varepsilon_1^{\,n+1} = -\frac{1}{a_{11}}\sum_{j=2}^{N} a_{1j}\,\varepsilon_j^{\,n} \quad\Rightarrow\quad \left|\varepsilon_1^{\,n+1}\right| \le \mu_1\left|\varepsilon^n\right|_{max} < \left|\varepsilon^n\right|_{max} \quad\text{since } \mu_1 < 1.$$

$$\varepsilon_2^{\,n+1} = -\frac{1}{a_{22}}\left[a_{21}\,\varepsilon_1^{\,n+1} + \sum_{j=3}^{N} a_{2j}\,\varepsilon_j^{\,n}\right]$$

$$\left|\varepsilon_2^{\,n+1}\right| \le \frac{|a_{21}|}{|a_{22}|}\left|\varepsilon_1^{\,n+1}\right| + \sum_{j=3}^{N}\frac{|a_{2j}|}{|a_{22}|}\left|\varepsilon_j^{\,n}\right| \le \mu_2\left|\varepsilon^n\right|_{max}, \quad\text{etc.}$$

In general, since

$$\frac{|a_{21}|}{|a_{22}|}\left|\varepsilon_1^{\,n+1}\right| < \frac{|a_{21}|}{|a_{22}|}\left|\varepsilon^n\right|_{max},$$

Gauss-Seidel will converge faster (or diverge faster) than Jacobi.
From S.O.R. theory, for $[A]$ symmetric and consistently ordered ("Property A"):

$$\rho_{GS} = \rho_J^{\,2}, \qquad \omega_{opt} = \frac{2}{1 + \sqrt{1 - \rho_J^{\,2}}} = \frac{2}{1 + \sqrt{1 - \rho_{GS}}}$$

Recall: self-adjoint implies symmetry.
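A sketch of the $\omega_{opt}$ formula, assuming $\rho_J = \cos(\pi h)$ for the Laplace molecule on a unit square; the grid sizes $N$ are illustrative:

```python
import math

# omega_opt = 2 / (1 + sqrt(1 - rho_J^2)), valid for symmetric,
# consistently ordered [A] ("Property A").
def omega_opt(rho_J):
    return 2.0 / (1.0 + math.sqrt(1.0 - rho_J**2))

for N in (10, 50, 100):
    h = 1.0 / N
    rho_J = math.cos(math.pi * h)   # Jacobi spectral radius on a unit square
    print(N, omega_opt(rho_J))
# omega_opt approaches 2 as h -> 0
```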
Rates of Convergence
In the limit of large $n$, recall that

$$\delta^{n+M} = \rho^M\,\delta^n \quad\text{and therefore}\quad \frac{\delta^{n+M}}{\delta^n} = \rho^M.$$

If you wish to reduce the existing error by a factor of $K$ (with $K < 1$, since $\rho < 1$), one can write

$$\frac{\delta^{n+M}}{\delta^n} = K \quad\Rightarrow\quad \rho^M = K,$$

and solve for $M$, the number of iterations required to reach the desired accuracy:

$$M = \ln(K)/\ln(\rho)$$
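A sketch of this estimate, using the Jacobi approximation $\rho \approx 1 - \pi^2 h^2/2$ on a unit square; the grid size $h = 1/50$ and target $K = 10^{-4}$ are chosen for illustration:

```python
import math

# M = ln(K) / ln(rho), with K < 1 the desired error-reduction factor
# (matching delta^{n+M} = K * delta^n).
def iterations_needed(K, rho):
    return math.ceil(math.log(K) / math.log(rho))

h = 1.0 / 50
rho_J = 1.0 - (math.pi * h)**2 / 2.0      # Jacobi estimate on a unit square
print(iterations_needed(1e-4, rho_J))     # thousands of sweeps for h = 1/50
```

Both logarithms are negative, so $M$ comes out positive; the slow $h^2$ scaling of $1-\rho$ is what makes Jacobi expensive on fine grids.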
When solving $\nabla^2 U = 0$ on a square,

Jacobi: $\rho_J = \mu_{max} = \cos(\pi h) \sim 1 - \dfrac{\pi^2 h^2}{2}$ as $h \to 0$.

The rate of convergence of a linear iteration

$$U^{n+1} = G\,U^n + r,$$

characterized by the matrix $[G]$, is

$$R(G) = -\log \rho(G)$$

The $(-)$ sign, since $\rho < 1$, gives a $(+)$ convergence rate.
Ref: Young, D.M., Trans. Am. Math. Soc., 76, #92, 1954; Ames (on reserve); Westlake (listed in handout, class 1), Appendix B, Eigenvalue Bounds.
$$-\log \rho \sim -\log\left(1 - \frac{\pi^2 h^2}{2}\right) = \frac{\pi^2 h^2}{2} + O\!\left(h^4\right)$$

using the expansion

$$\log(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots, \qquad -1 < x \le 1.$$

Thus, the convergence rate for Jacobi iterations is approximately $\pi^2 h^2/2$, which is slow for small values of $h$.
The largest-magnitude eigenvalue of $G_{GS}$ is $\cos^2(\pi h) \sim 1 - \pi^2 h^2$ as $h \to 0$, so

$$R(G_{GS}) = -\log\left(\cos^2 \pi h\right) \sim \pi^2 h^2 + O\!\left(h^4\right),$$

or the Gauss-Seidel iterations will converge twice as fast as Jacobi.
Finally, the largest-magnitude eigenvalue of $G_\omega$ behaves as $1 - 2\pi h$ as $h \to 0$, so

$$R(G_\omega) \sim 2\pi h + O\!\left(h^2\right)$$

for optimal S.O.R. Since

$$\frac{2\pi h}{\pi^2 h^2} = \frac{2}{\pi h},$$

optimal S.O.R. is $2/(\pi h)$ times faster than Gauss-Seidel, which for small $h$ is significant.
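These three rates can be compared numerically. A sketch at an illustrative $h = 1/100$, using the spectral radii quoted above:

```python
import math

# Asymptotic convergence rates R = -ln(rho) for the Laplace problem
# on a unit square: Jacobi, Gauss-Seidel, and optimal S.O.R.
h = 1.0 / 100
R_J   = -math.log(math.cos(math.pi * h))       # ~ (pi*h)^2 / 2
R_GS  = -math.log(math.cos(math.pi * h)**2)    # exactly 2 * R_J
R_SOR = 2.0 * math.pi * h                      # leading-order estimate

print(R_GS / R_J)      # 2: Gauss-Seidel twice as fast as Jacobi
print(R_SOR / R_GS)    # ~ 2/(pi*h): the S.O.R. advantage grows as h -> 0
```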