Comparison of Two Stationary Iterative Methods
Oleksandra Osadcha
Faculty of Applied Mathematics, Silesian University of Technology, Gliwice, Poland
Email: [email protected]

Abstract—This paper illustrates a comparison of two stationary iterative methods for solving systems of linear equations. The aim of the research is to analyze which method solves these equations faster and how many iterations each method requires. The paper presents some ways to deal with this problem.

Index Terms—comparison, analysis, stationary methods, Jacobi method, Gauss-Seidel method

I. INTRODUCTION

The term "iterative method" refers to a wide range of techniques that use successive approximations to obtain more accurate solutions to a linear system at each step. In this publication we will cover two types of iterative methods. Stationary methods are older, simpler to understand and implement, but usually not as effective. Nonstationary methods are a relatively recent development; their analysis is usually harder to understand, but they can be highly effective. The nonstationary methods we present are based on the idea of sequences of orthogonal vectors. (An exception is the Chebyshev iteration method, which is based on orthogonal polynomials.)

The rate at which an iterative method converges depends greatly on the spectrum of the coefficient matrix. Hence, iterative methods usually involve a second matrix that transforms the coefficient matrix into one with a more favorable spectrum. The transformation matrix is called a preconditioner. A good preconditioner improves the convergence of the iterative method sufficiently to overcome the extra cost of constructing and applying it. Indeed, without a preconditioner the iterative method may even fail to converge [1].

There are many publications on stationary iterative methods. As is well known, a real symmetric matrix can be transformed iteratively into diagonal form through a sequence of appropriately chosen elementary orthogonal transformations (in the following called Jacobi rotations): A_k \to A_{k+1} = U_k^T A_k U_k (A_0 = given matrix); these transformations are presented in [2]. The special cyclic Jacobi method for computing the eigenvalues and eigenvectors of a real symmetric matrix annihilates the off-diagonal elements of the matrix successively, in the natural order, by rows or columns; the cyclic Jacobi method is presented in [4]. A convergence result for the block Gauss-Seidel method, for problems where the feasible set is the Cartesian product of m closed convex sets, under the assumption that the sequence generated by the method has limit points, is presented in [5]. In [9] an improvement of the modified Gauss-Seidel method for Z-matrices is presented. In [11] the convergence of the Gauss-Seidel iterative method is shown for a linear system whose matrix is positive definite; this alone does not afford a conclusion on the convergence of the Jacobi method, but it is also proved there that the Jacobi method converges. Both methods are also described in [12].

This publication presents a comparison of the Jacobi and Gauss-Seidel methods. These methods are used for solving systems of linear equations. We analyze how many iterations each method requires and which method is faster.
Copyright held by the author(s).

II. JACOBI METHOD

We consider the system of linear equations

Ax = b

where A \in M_{n \times n} and \det A \neq 0.

We write the matrix in the form of a sum of three matrices:

A = L + D + U

where:
• L - the strictly lower-triangular matrix
• D - the diagonal matrix
• U - the strictly upper-triangular matrix

If the matrix has the form

A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix}

then

L = \begin{pmatrix} 0 & 0 & \dots & 0 \\ a_{21} & 0 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & 0 \end{pmatrix}, \quad D = \begin{pmatrix} a_{11} & 0 & \dots & 0 \\ 0 & a_{22} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & a_{nn} \end{pmatrix}, \quad U = \begin{pmatrix} 0 & a_{12} & \dots & a_{1n} \\ 0 & 0 & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 0 \end{pmatrix}
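To make the splitting concrete, here is a minimal Python sketch. It is not part of the original paper; the function name split_ldu and the list-of-lists matrix representation are our own illustrative choices.

```python
def split_ldu(A):
    """Split a square matrix A into a strictly lower-triangular L,
    a diagonal D and a strictly upper-triangular U, so A = L + D + U."""
    n = len(A)
    L = [[A[i][j] if i > j else 0 for j in range(n)] for i in range(n)]
    D = [[A[i][j] if i == j else 0 for j in range(n)] for i in range(n)]
    U = [[A[i][j] if i < j else 0 for j in range(n)] for i in range(n)]
    return L, D, U
```

For the 3-by-3 system used in the worked example below, split_ldu([[4, -1, 0], [-1, 4, -1], [0, -1, -4]]) returns the subdiagonal part as L, diag(4, 4, -4) as D and the superdiagonal part as U.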
If the matrix A is nonsingular (\det A \neq 0), then we can rearrange its rows so that there are no zeros on the diagonal, i.e. a_{ii} \neq 0 for all i \in \{1, 2, \dots, n\}.

If the diagonal matrix D is nonsingular (so a_{ii} \neq 0 for i \in 1, \dots, n), then its inverse has the following form:

D^{-1} = \begin{pmatrix} \frac{1}{a_{11}} & 0 & \dots & 0 \\ 0 & \frac{1}{a_{22}} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{1}{a_{nn}} \end{pmatrix}

By substituting the above decomposition into the system of equations, we obtain:

(L + D + U)x = b

After further transformations, we have:

Dx = -(L + U)x + b

x = -D^{-1}(L + U)x + D^{-1}b

Using this, we get the iterative scheme of the Jacobi method:

x^{i+1} = -D^{-1}(L + U)x^{i} + D^{-1}b, \quad i = 0, 1, 2, \dots

with x^{0} the initial approximation. The matrix -D^{-1}(L + U) and the vector D^{-1}b do not change during the calculations.

For the worked example we have:

A = \begin{pmatrix} 4 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & -4 \end{pmatrix}, \quad b = \begin{pmatrix} 2 \\ 6 \\ 2 \end{pmatrix}

L + U = \begin{pmatrix} 0 & -1 & 0 \\ -1 & 0 & -1 \\ 0 & -1 & 0 \end{pmatrix}, \quad D = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & -4 \end{pmatrix}

We count:

M = -D^{-1}(L + U) = \begin{pmatrix} -\frac{1}{4} & 0 & 0 \\ 0 & -\frac{1}{4} & 0 \\ 0 & 0 & \frac{1}{4} \end{pmatrix} \cdot \begin{pmatrix} 0 & -1 & 0 \\ -1 & 0 & -1 \\ 0 & -1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & \frac{1}{4} & 0 \\ \frac{1}{4} & 0 & \frac{1}{4} \\ 0 & -\frac{1}{4} & 0 \end{pmatrix}

w = D^{-1} \cdot b = \begin{pmatrix} \frac{1}{4} & 0 & 0 \\ 0 & \frac{1}{4} & 0 \\ 0 & 0 & -\frac{1}{4} \end{pmatrix} \cdot \begin{pmatrix} 2 \\ 6 \\ 2 \end{pmatrix} = \begin{pmatrix} \frac{1}{2} \\ \frac{3}{2} \\ -\frac{1}{2} \end{pmatrix}

The iterative scheme has the form:

v^{i+1} = Mv^{i} + w = \begin{pmatrix} 0 & \frac{1}{4} & 0 \\ \frac{1}{4} & 0 & \frac{1}{4} \\ 0 & -\frac{1}{4} & 0 \end{pmatrix} \cdot \begin{pmatrix} x^{i} \\ y^{i} \\ z^{i} \end{pmatrix} + \begin{pmatrix} \frac{1}{2} \\ \frac{3}{2} \\ -\frac{1}{2} \end{pmatrix}

i = 0 (with the zero vector v^{0} = 0 as the initial approximation):

v^{1} = Mv^{0} + w = \begin{pmatrix} \frac{1}{2} \\ \frac{3}{2} \\ -\frac{1}{2} \end{pmatrix}

Av^{1} - b = \begin{pmatrix} 4 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & -4 \end{pmatrix} \cdot \begin{pmatrix} \frac{1}{2} \\ \frac{3}{2} \\ -\frac{1}{2} \end{pmatrix} - \begin{pmatrix} 2 \\ 6 \\ 2 \end{pmatrix} = \begin{pmatrix} \frac{1}{2} \\ 6 \\ \frac{1}{2} \end{pmatrix} - \begin{pmatrix} 2 \\ 6 \\ 2 \end{pmatrix} = \begin{pmatrix} -\frac{3}{2} \\ 0 \\ -\frac{3}{2} \end{pmatrix}
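The scheme above can be sketched in a few lines of Python. This is our own hedged illustration, not the paper's code: it implements the componentwise form of x^{i+1} = -D^{-1}(L + U)x^{i} + D^{-1}b, in which every coordinate is computed from the previous iterate only, and it stops on the residual test; the names jacobi, eps and max_iter are our own.

```python
def jacobi(A, b, x0, eps, max_iter=100):
    """Jacobi iteration: componentwise form of x <- -D^{-1}(L+U)x + D^{-1}b.
    Stops when the Euclidean norm of the residual Ax - b is at most eps."""
    n = len(A)
    x = list(x0)
    for k in range(1, max_iter + 1):
        # the comprehension reads only the previous iterate x, then rebinds it
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 <= eps:
            return x, k
    return x, max_iter

x, k = jacobi([[4, -1, 0], [-1, 4, -1], [0, -1, -4]], [2, 6, 2],
              [0, 0, 0], 1e-12)
# x = [0.875, 1.5, -0.875] after k = 2 iterations
```

On the example system this reproduces the hand computation: the iterate (7/8, 3/2, -7/8) is reached after two steps, with zero residual.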
Algorithm of the Jacobi method:
1) Data:
• matrix A
• vector b
• vector x^{0}
• accuracy \varepsilon (\varepsilon > 0)
2) We set the matrices L + U and D^{-1}.
3) We count the matrix M = -D^{-1}(L + U) and the vector w = D^{-1}b.
4) We count x^{i+1} = Mx^{i} + w for as long as \|Ax^{i+1} - b\| > \varepsilon.
5) The result is the last counted x^{i+1}.

Example. Use Jacobi's method to solve the system of equations; the zero vector is assumed as the initial approximation (v^{0} = 0):

4x - y = 2
-x + 4y - z = 6
-y - 4z = 2

For i = 0 the scheme gives v^{1} = w = (\frac{1}{2}, \frac{3}{2}, -\frac{1}{2})^{T}, with residual norm

r = \sqrt{\left(-\tfrac{3}{2}\right)^2 + 0^2 + \left(-\tfrac{3}{2}\right)^2} = \tfrac{3}{2}\sqrt{2}

i = 1:

v^{2} = Mv^{1} + w = \begin{pmatrix} 0 & \frac{1}{4} & 0 \\ \frac{1}{4} & 0 & \frac{1}{4} \\ 0 & -\frac{1}{4} & 0 \end{pmatrix} \cdot \begin{pmatrix} \frac{1}{2} \\ \frac{3}{2} \\ -\frac{1}{2} \end{pmatrix} + \begin{pmatrix} \frac{1}{2} \\ \frac{3}{2} \\ -\frac{1}{2} \end{pmatrix} = \begin{pmatrix} \frac{3}{8} \\ 0 \\ -\frac{3}{8} \end{pmatrix} + \begin{pmatrix} \frac{1}{2} \\ \frac{3}{2} \\ -\frac{1}{2} \end{pmatrix} = \begin{pmatrix} \frac{7}{8} \\ \frac{3}{2} \\ -\frac{7}{8} \end{pmatrix}

Av^{2} - b = \begin{pmatrix} 4 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & -4 \end{pmatrix} \cdot \begin{pmatrix} \frac{7}{8} \\ \frac{3}{2} \\ -\frac{7}{8} \end{pmatrix} - \begin{pmatrix} 2 \\ 6 \\ 2 \end{pmatrix} = \begin{pmatrix} 2 \\ 6 \\ 2 \end{pmatrix} - \begin{pmatrix} 2 \\ 6 \\ 2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}

so the residual is zero and v^{2} = (\frac{7}{8}, \frac{3}{2}, -\frac{7}{8})^{T} is the exact solution.

III. GAUSS-SEIDEL METHOD

Analogously to the Jacobi method, in the Gauss-Seidel method we write the matrix A in the form of a sum

A = L + D + U

where the matrices L, D and U are defined in the same way as in the Jacobi method. Next we get:

Dx = -Lx - Ux + b

whence

x = -D^{-1}Lx - D^{-1}Ux + D^{-1}b

Based on the above formula, we can write the iterative scheme of the Gauss-Seidel method:

x^{i+1} = -D^{-1}Lx^{i+1} - D^{-1}Ux^{i} + D^{-1}b, \quad i = 0, 1, 2, \dots

with x^{0} the initial approximation. In the Jacobi method, we use the corrected values of subsequent coordinates of the solution only in the next step of the iteration; in the Gauss-Seidel method, each corrected coordinate is used immediately, within the same iteration step.

For the example we count:

M_{1} = -D^{-1} \cdot L = \begin{pmatrix} -\frac{1}{3} & 0 & 0 \\ 0 & \frac{1}{4} & 0 \\ 0 & 0 & \frac{1}{2} \end{pmatrix} \cdot \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & -1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ \frac{1}{4} & 0 & 0 \\ 0 & -\frac{1}{2} & 0 \end{pmatrix}

M_{2} = -D^{-1} \cdot U = \begin{pmatrix} -\frac{1}{3} & 0 & 0 \\ 0 & \frac{1}{4} & 0 \\ 0 & 0 & \frac{1}{2} \end{pmatrix} \cdot \begin{pmatrix} 0 & -1 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & \frac{1}{3} & -\frac{1}{3} \\ 0 & 0 & \frac{1}{4} \\ 0 & 0 & 0 \end{pmatrix}

w = D^{-1} \cdot b = \begin{pmatrix} \frac{1}{3} & 0 & 0 \\ 0 & -\frac{1}{4} & 0 \\ 0 & 0 & -\frac{1}{2} \end{pmatrix} \cdot \begin{pmatrix} 3 \\ -2 \\ -3 \end{pmatrix} = \begin{pmatrix} 1 \\ \frac{1}{2} \\ \frac{3}{2} \end{pmatrix}

The iterative scheme has the following form:

\begin{pmatrix} x_{1}^{i+1} \\ x_{2}^{i+1} \\ x_{3}^{i+1} \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ \frac{1}{4} & 0 & 0 \\ 0 & -\frac{1}{2} & 0 \end{pmatrix} \cdot \begin{pmatrix} x_{1}^{i+1} \\ x_{2}^{i+1} \\ x_{3}^{i+1} \end{pmatrix} + \begin{pmatrix} 0 & \frac{1}{3} & -\frac{1}{3} \\ 0 & 0 & \frac{1}{4} \\ 0 & 0 & 0 \end{pmatrix} \cdot \begin{pmatrix} x_{1}^{i} \\ x_{2}^{i} \\ x_{3}^{i} \end{pmatrix} + \begin{pmatrix} 1 \\ \frac{1}{2} \\ \frac{3}{2} \end{pmatrix}
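For comparison, a matching Gauss-Seidel sketch (again our own illustrative code, not the paper's): the only change from the Jacobi sketch is that the coordinates are updated in place, so x[j] for j < i already holds the new value when x[i] is computed. That in-place use of updated coordinates is exactly the x^{i+1} term under -D^{-1}L in the scheme above.

```python
def gauss_seidel(A, b, x0, eps, max_iter=100):
    """Gauss-Seidel iteration: componentwise form of
    x^{i+1} = -D^{-1} L x^{i+1} - D^{-1} U x^{i} + D^{-1} b.
    Stops when the Euclidean norm of the residual Ax - b is at most eps."""
    n = len(A)
    x = list(x0)
    for k in range(1, max_iter + 1):
        for i in range(n):
            # x[j] for j < i already holds the freshly updated values
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 <= eps:
            return x, k
    return x, max_iter
```

On the strictly diagonally dominant example system from Section II, gauss_seidel([[4, -1, 0], [-1, 4, -1], [0, -1, -4]], [2, 6, 2], [0, 0, 0], 1e-10) converges to the same solution (7/8, 3/2, -7/8), which allows the iteration counts of the two methods to be compared directly.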