ECE 3040 Lecture 16: Systems of Linear Equations II © Prof. Mohamad Hassoun


This lecture covers the following topics:

  • Introduction
  • LU factorization and the solution of linear algebraic equations
  • Matlab's lu function and the left-division operator "\"
  • More on Matlab's left-division operator
  • Iterative methods: Jacobi and Gauss-Seidel algorithms

Introduction

In the previous lecture, Gauss elimination with pivoting was used to solve a system of linear algebraic equations of the form Ax = d. This method becomes inefficient for solving problems where the coefficient matrix A is constant, but with different right-hand-side vectors d. Recall that Gauss elimination involves a forward elimination step (which dominates the computation time) that couples the vector d and the matrix A through the augmented matrix [A d]. This means that every time d changes we have to repeat the elimination step all over. The LU factorization-based solution method separates (decouples) the time-consuming elimination of the matrix A from the manipulations of the vector d. Thus, once matrix A has been factored as the product of two square matrices, A = LU, multiple right-hand-side vectors d (as they become available) can be processed in an efficient manner.

Note: If multiple right-hand-side vectors d_1, d_2, ..., d_m are available simultaneously, then we can still apply (efficiently) Gauss elimination to the augmented system [A d_1 d_2 ... d_m] and obtain [A' d'_1 d'_2 ... d'_m]. The solution vectors x_1, x_2, ..., x_m can then be determined, respectively, by back substitution using the systems A'x_1 = d'_1, A'x_2 = d'_2, ..., A'x_m = d'_m. However, a complication arises when the d_i vectors are not available simultaneously. Here, the elimination step would have to be repeated m times, which renders the Gauss elimination method impractical for solving large systems of equations.

LU Factorization and the Solution of Linear Algebraic Equations

The LU factorization of an n×n matrix A requires pivoting (i.e., row permutations), just as was the case for Gauss elimination. If matrix A is nonsingular (i.e., |A| ≠ 0), then it can be shown that PA = LU, where P is a permutation matrix, L is a lower triangular matrix with 1's on the diagonal, and U is an upper triangular matrix. For simplicity, the factorization method will be described for a 3×3 matrix that requires no pivoting (i.e., P = I, so A = LU), where

L = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix}, \qquad
U = \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}

The coefficients l_{ij} and u_{ij} can be computed from the a_{ij} coefficients, as follows.
Set A equal to the product LU (note that, in general, matrix multiplication is not commutative: LU ≠ UL):

\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} =
\begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix}
\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}

or,

\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} =
\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ l_{21}u_{11} & l_{21}u_{12} + u_{22} & l_{21}u_{13} + u_{23} \\ l_{31}u_{11} & l_{31}u_{12} + l_{32}u_{22} & l_{31}u_{13} + l_{32}u_{23} + u_{33} \end{bmatrix}

By matching coefficients in the above matrix equation we obtain the following formulas:

u_{11} = a_{11}, \quad u_{12} = a_{12}, \quad u_{13} = a_{13}
l_{21} = a_{21}/u_{11} = a_{21}/a_{11}
u_{22} = a_{22} − l_{21}u_{12} = a_{22} − a_{21}a_{12}/a_{11}
u_{23} = a_{23} − l_{21}u_{13} = a_{23} − a_{21}a_{13}/a_{11}
l_{31} = a_{31}/u_{11} = a_{31}/a_{11}
l_{32} = (a_{32} − l_{31}u_{12})/u_{22} = (a_{32} − a_{31}a_{12}/a_{11})/(a_{22} − a_{21}a_{12}/a_{11}) = (a_{11}a_{32} − a_{31}a_{12})/(a_{11}a_{22} − a_{21}a_{12})
u_{33} = a_{33} − l_{31}u_{13} − l_{32}u_{23} = a_{33} − a_{31}a_{13}/a_{11} − (a_{11}a_{32} − a_{31}a_{12})(a_{11}a_{23} − a_{21}a_{13})/[a_{11}(a_{11}a_{22} − a_{21}a_{12})]

Regrouping the above results gives

u_{11} = a_{11}, \quad u_{12} = a_{12}, \quad u_{13} = a_{13}
u_{22} = a_{22} − a_{21}a_{12}/a_{11}
u_{23} = a_{23} − a_{21}a_{13}/a_{11}
u_{33} = a_{33} − a_{31}a_{13}/a_{11} − (a_{11}a_{32} − a_{31}a_{12})(a_{11}a_{23} − a_{21}a_{13})/[a_{11}(a_{11}a_{22} − a_{21}a_{12})]
l_{21} = a_{21}/a_{11}
l_{31} = a_{31}/a_{11}
l_{32} = (a_{11}a_{32} − a_{31}a_{12})/(a_{11}a_{22} − a_{21}a_{12})

Notice that a_{11}a_{22} − a_{21}a_{12} ≠ 0 and a_{11} ≠ 0 are required for the decomposition to exist. However, if A is nonsingular and we allow row permutations, we may relax those conditions.

Your turn: Derive the above formulas for a 4×4 matrix, assuming that row permutations are not necessary.

Example. Perform LU factorization for the following matrix.

A = \begin{bmatrix} 1/4 & 1/2 & 1 \\ 1 & 1 & 1 \\ 16 & 4 & 1 \end{bmatrix}

l_{21} = a_{21}/a_{11} = 1/(1/4) = 4
l_{31} = a_{31}/a_{11} = 16/(1/4) = 64
l_{32} = (a_{11}a_{32} − a_{31}a_{12})/(a_{11}a_{22} − a_{21}a_{12}) = (1 − 8)/(1/4 − 1/2) = +28

Therefore the lower triangular matrix L is

L = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 64 & 28 & 1 \end{bmatrix}

And,

u_{11} = a_{11} = 1/4
u_{12} = a_{12} = 1/2
u_{13} = a_{13} = 1
u_{22} = a_{22} − a_{21}a_{12}/a_{11} = 1 − (1)(1/2)/(1/4) = −1
u_{23} = a_{23} − a_{21}a_{13}/a_{11} = 1 − (1)(1)/(1/4) = −3
u_{33} = a_{33} − a_{31}a_{13}/a_{11} − (a_{11}a_{32} − a_{31}a_{12})(a_{11}a_{23} − a_{21}a_{13})/[a_{11}(a_{11}a_{22} − a_{21}a_{12})]
      = 1 − (16)(1)/(1/4) − (1 − 8)(1/4 − 1)/[(1/4)(1/4 − 1/2)] = 1 − 64 + 84 = 21

leading to the upper triangular matrix

U = \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix} = \begin{bmatrix} 1/4 & 1/2 & 1 \\ 0 & −1 & −3 \\ 0 & 0 & 21 \end{bmatrix}

Verification: multiplying L and U above indeed reproduces the original matrix A.

Now, let us go back to the system Ax = d. We may rewrite it as LUx = d, or L(Ux) = d. Define the intermediate vector z as z = Ux. Then we may write the original system as Lz = d, or explicitly as

\begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix}
\begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} =
\begin{bmatrix} d_1 \\ d_2 \\ d_3 \end{bmatrix}

The above formulation can be solved by forward substitution,

z_1 = d_1
z_2 = d_2 − l_{21}z_1
z_3 = d_3 − l_{31}z_1 − l_{32}z_2

Once z is determined, we can solve for x using back substitution with Ux = z, or explicitly,

\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix}

The solution for x, assuming non-zero diagonal elements (u_{ii} ≠ 0), is

x_3 = z_3/u_{33}
x_2 = (z_2 − u_{23}x_3)/u_{22}
x_1 = (z_1 − u_{12}x_2 − u_{13}x_3)/u_{11}

Returning to our original 3×3 A matrix and its associated LU factorization, and assuming a right-hand vector d = [2 1 8]^T, we can then apply the forward-substitution step to the system

\begin{bmatrix} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 64 & 28 & 1 \end{bmatrix}
\begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} =
\begin{bmatrix} 2 \\ 1 \\ 8 \end{bmatrix}

to obtain the intermediate vector z,

z_1 = d_1 = 2
z_2 = d_2 − l_{21}z_1 = 1 − 4(2) = −7
z_3 = d_3 − l_{31}z_1 − l_{32}z_2 = 8 − 64(2) − 28(−7) = 76

leading to the intermediate solution z = [2 −7 76]^T. Finally, the back-substitution step is applied to the system

\begin{bmatrix} 1/4 & 1/2 & 1 \\ 0 & −1 & −3 \\ 0 & 0 & 21 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 2 \\ −7 \\ 76 \end{bmatrix}

and returns the solution x,

x_3 = z_3/u_{33} = 76/21 ≅ 3.6190
x_2 = (z_2 − u_{23}x_3)/u_{22} = (−7 − (−3)(76/21))/(−1) = −27/7 ≅ −3.8571
x_1 = (z_1 − u_{12}x_2 − u_{13}x_3)/u_{11} = (2 − (1/2)(−27/7) − (1)(76/21))/(1/4) = 26/21 ≅ 1.2381

or, x ≅ [1.2381  −3.8571  3.6190]^T.
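As an illustration (not part of the original lecture notes), the following short Matlab sketch reproduces the worked example above with explicit forward- and back-substitution loops; the variable names are chosen for this sketch only, and the loops implement the general formulas summarized next.

```matlab
% Illustrative sketch: LU solve of the 3x3 example above by explicit
% forward and back substitution (A = L*U assumed, no pivoting).
L = [1 0 0; 4 1 0; 64 28 1];
U = [1/4 1/2 1; 0 -1 -3; 0 0 21];
d = [2; 1; 8];
n = length(d);

% Forward substitution: solve L*z = d
z = zeros(n,1);
for i = 1:n
    z(i) = d(i) - L(i,1:i-1)*z(1:i-1);
end

% Back substitution: solve U*x = z
x = zeros(n,1);
for i = n:-1:1
    x(i) = (z(i) - U(i,i+1:n)*x(i+1:n)) / U(i,i);
end

disp(z')   % expected: 2  -7  76
disp(x')   % expected: 1.2381  -3.8571  3.6190
```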
The forward-substitution step can be represented concisely, for a system of n equations, as

z_i = d_i − \sum_{j=1}^{i-1} l_{ij} z_j, \quad i = 1, 2, \ldots, n

The back-substitution step can be written concisely as

x_n = z_n / u_{nn}, \qquad x_i = \frac{z_i − \sum_{j=i+1}^{n} u_{ij} x_j}{u_{ii}}, \quad i = n−1, n−2, \ldots, 1

Matlab's lu Function and the Left-Division Operator "\"

Matlab has a built-in function lu that generates the LU factorization. It has the syntax [L,U,P] = lu(A). This function employs row permutations in order to pivot. These permutations are captured in the matrix P, so that PA = LU.

Example: Notice how the P matrix permutes (swaps) the first and last rows of matrix A. That happens because the last row has the largest pivot in column one. No swapping of the last two rows occurs because, after the first swap, the second-row pivot (in column two), 1.0, is larger than the one in the last row, 0.5.

The forward- and back-substitution steps are implemented by the left-division operator "\" (see the next section for more discussion of this operator). The following is an example that illustrates the use of lu and \ to solve a system of linear equations.

Example. Use Matlab to solve the following system of equations using the LU factorization method.

\begin{bmatrix} 30 & −1 & −2 \\ 1 & 70 & −3 \\ 3 & −2 & 100 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 8 \\ −20 \\ 80 \end{bmatrix}

LU factorization step ([L,U,P] = lu(A)): no row swapping occurs in this case (P = I) because the first row has the largest pivot (in column one), 30, and the second row has the largest pivot (in column two), 70, compared to that in the last row, |−2| = 2.

Forward and back substitution steps: z = L\d followed by x = U\z. Alternatively, the substitution steps can be computed in one instruction as x = U\(L\d) (the use of parentheses is required).

The above example had the permutation matrix P = I (the identity matrix). When P ≠ I, we must take it into account in our formulation: PAx = LUx = Pd. In such a case, we must first permute the vector d by matrix P. The solution is then obtained as x = U\(L\(Pd)). The following example illustrates this situation.

Example. Use Matlab to solve the following system of equations using the LU factorization method.

\begin{bmatrix} 1/4 & 1/2 & 1 \\ 1 & 1 & 1 \\ 16 & 4 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 2 \\ 1 \\ 8 \end{bmatrix}

Since P ≠ I here, the proper way to compute the solution is x = U\(L\(P*d)).

More on Matlab's Left-Division Operator

The left-division Matlab operation was used earlier to solve a system of linear equations, Ax = d, using the syntax A\d. Matlab's left division is a highly sophisticated algorithm. When used, Matlab examines the structure of the coefficient matrix and then implements an optimal method to obtain the solution.
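A minimal Matlab sketch of how the last example above might be entered (illustrative only; the lecture's actual command-window output is not reproduced here):

```matlab
% Illustrative sketch: solving the P ~= I example with lu and "\".
A = [1/4 1/2 1; 1 1 1; 16 4 1];
d = [2; 1; 8];

[L, U, P] = lu(A);   % P*A = L*U; P swaps the first and last rows here
x = U \ (L \ (P*d)); % permute d first, then forward/back substitute

x_check = A \ d;     % Matlab's left division gives the same solution
disp([x x_check])    % both columns should be approx [1.2381; -3.8571; 3.6190]
```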
Recommended publications
  • Overview of Iterative Linear System Solver Packages
    Overview of Iterative Linear System Solver Packages Victor Eijkhout July, 1998 Abstract Description and comparison of several packages for the iterative solution of linear systems of equations. 1 Introduction There are several freely available packages for the iterative solution of linear systems of equations, typically derived from partial differential equation problems. In this report I will give a brief description of a number of packages, and give an inventory of their features and defining characteristics. The most important features of the packages are which iterative methods and preconditioners they supply; the most relevant defining characteristics are the interface they present to the user's data structures, and their implementation language. 2 Discussion Iterative methods are subject to several design decisions that affect ease of use of the software and the resulting performance. In this section I will give a global discussion of the issues involved, and how certain points are addressed in the packages under review. 2.1 Preconditioners A good preconditioner is necessary for the convergence of iterative methods as the problem to be solved becomes more difficult. Good preconditioners are hard to design, and this especially holds true in the case of parallel processing. Here is a short inventory of the various kinds of preconditioners found in the packages reviewed. 2.1.1 About incomplete factorisation preconditioners Incomplete factorisations are among the most successful preconditioners developed for single-processor computers. Unfortunately, since they are implicit in nature, they cannot immediately be used on parallel architectures.
  • Jacobi-Davidson, Gauss-Seidel and Successive Over-Relaxation for Solving Systems of Linear Equations
    Applied Mathematics and Computational Intelligence Volume 6, 2017 [41-52] Jacobi‐Davidson, Gauss‐Seidel and Successive Over‐Relaxation for Solving Systems of Linear Equations. Fatini Dalili Mohammed and Mohd Rivaie, Department of Computer Sciences and Mathematics, Universiti Teknologi MARA (UiTM) Terengganu, Campus Kuala Terengganu, Malaysia. ABSTRACT Linear systems are applied in many applications such as calculating variables, rates, budgets, making a prediction and others. Generally, there are two techniques of solving system of linear equation including direct methods and iterative methods. Some basic solution methods known as direct methods are ineffective in solving many equations in large systems due to slower computation. Due to inability of direct methods, iterative methods are practical to be used in large systems of linear equations as they do not need much storage. In this project, three indirect methods are used to solve large system of linear equations. The methods are Jacobi-Davidson, Gauss-Seidel and Successive Over-Relaxation (SOR) which are well known in the field of numerical analysis. The comparative results analysis of the three methods is considered. These three methods are compared based on number of iterations, CPU time and error. The numerical results show that Gauss-Seidel method and SOR method with ω=1.25 are more efficient than others. This research allows researcher to appreciate the use of iterative techniques for solving systems of linear equations that is widely used in industrial applications. Keywords: system of linear equation, iterative method, Jacobi-Davidson, Gauss-Seidel, Successive Over-Relaxation 1. INTRODUCTION Linear systems are important in our real life. They are applied in various applications such as in calculating variables, rates, budgets, making a prediction and others.
  • A Study on Comparison of Jacobi, Gauss-Seidel and Sor Methods for the Solution in System of Linear Equations
    International Journal of Mathematics Trends and Technology (IJMTT) – Volume 56 Issue 4 - April 2018. A Study on Comparison of Jacobi, Gauss-Seidel and Sor Methods for the Solution in System of Linear Equations. Dr. S. Karunanithi, N. Gajalakshmi, M. Malarvizhi (Assistant Professors) and M. Saileshwari (Research Scholar), Thiruvalluvar University, Vellore; PG & Research Department of Mathematics, Govt. Thirumagal Mills College, Gudiyattam, Vellore Dist, Tamilnadu, India - 632602. Abstract — This paper presents three iterative methods for the solution of system of linear equations that have been evaluated in this work. The result shows that the Successive Over-Relaxation method is more efficient than the other two iterative methods, considering the number of iterations required to converge to an exact solution. This research will enable analysts to appreciate the use of iterative techniques for understanding the system of linear equations. Keywords — The system of linear equations, Iterative methods, Initial approximation, Jacobi method, Gauss-Seidel method, Successive Over-Relaxation method. 1. INTRODUCTION AND PRELIMINARIES Numerical analysis is the area of mathematics and computer science that creates, analyses, and implements algorithms for solving numerically the problems of continuous mathematics. Such problems originate generally from real-world applications of algebra, geometry and calculus, and they involve variables which vary continuously. These problems occur throughout the natural sciences, social science, engineering, medicine, and business. The solution of system of linear equations can be accomplished by a numerical method which falls in one of two categories: direct or iterative methods. We have so far discussed some direct methods for the solution of system of linear equations and we have seen that these methods yield the solution after an amount of computation that is known in advance.
  • Solving Linear Systems: Iterative Methods and Sparse Systems
    Solving Linear Systems: Iterative Methods and Sparse Systems COS 323 Last time • Linear system: Ax = b • Singular and ill-conditioned systems • Gaussian Elimination: A general purpose method – Naïve Gauss (no pivoting) – Gauss with partial and full pivoting – Asymptotic analysis: O(n^3) • Triangular systems and LU decomposition • Special matrices and algorithms: – Symmetric positive definite: Cholesky decomposition – Tridiagonal matrices • Singularity detection and condition numbers Today: Methods for large and sparse systems • Rank-one updating with Sherman-Morrison • Iterative refinement • Fixed-point and stationary methods – Introduction – Iterative refinement as a stationary method – Gauss-Seidel and Jacobi methods – Successive over-relaxation (SOR) • Solving a system as an optimization problem • Representing sparse systems Problems with large systems • Gaussian elimination, LU decomposition (factoring step) take O(n^3) • Expensive for big systems! • Can get by more easily with special matrices – Cholesky decomposition: for symmetric positive definite A; still O(n^3) but halves storage and operations – Band-diagonal: O(n) storage and operations • What if A is big? (And not diagonal?) Special Example: Cyclic Tridiagonal • Interesting extension: cyclic tridiagonal • Could derive yet another special case algorithm, but there's a better way Updating Inverse • Suppose we have some fast way of finding A^{-1} for some matrix A • Now A changes in a special way: A* = A + uv^T for some n×1 vectors u and v • Goal: find a fast way of computing (A*)^{-1}
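The rank-one update goal in the outline above is the setting for the Sherman-Morrison formula, (A + uv^T)^{-1} = A^{-1} − (A^{-1}u)(v^T A^{-1})/(1 + v^T A^{-1}u). A small illustrative Matlab sketch (not from the slides themselves; the matrix and vectors are arbitrary choices, and it assumes 1 + v^T A^{-1}u ≠ 0):

```matlab
% Illustrative sketch: rank-one update of a known inverse (Sherman-Morrison).
A = [4 1 0; 1 4 1; 0 1 4];
u = [1; 0; 2];
v = [0; 1; 1];

Ainv = inv(A);                                   % suppose A^{-1} is already available
w = Ainv*u;
Astar_inv = Ainv - (w*(v'*Ainv)) / (1 + v'*w);   % (A + u*v')^{-1} without refactoring

disp(norm(Astar_inv - inv(A + u*v')))            % should be near machine precision
```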
  • Numerical Mathematics
    4 Iterative Methods for Solving Linear Systems Iterative methods formally yield the solution x of a linear system after an infinite number of steps. At each step they require the computation of the residual of the system. In the case of a full matrix, their computational cost is therefore of the order of n^2 operations for each iteration, to be compared with an overall cost of the order of (2/3)n^3 operations needed by direct methods. Iterative methods can therefore become competitive with direct methods provided the number of iterations that are required to converge (within a prescribed tolerance) is either independent of n or scales sublinearly with respect to n. In the case of large sparse matrices, as discussed in Section 3.9, direct methods may be inconvenient due to the dramatic fill-in, although extremely efficient direct solvers can be devised on sparse matrices featuring special structures like, for example, those encountered in the approximation of partial differential equations (see Chapters 12 and 13). Finally, we notice that, when A is ill-conditioned, a combined use of direct and iterative methods is made possible by preconditioning techniques that will be addressed in Section 4.3.2. 4.1 On the Convergence of Iterative Methods The basic idea of iterative methods is to construct a sequence of vectors x^{(k)} that enjoy the property of convergence x = \lim_{k \to \infty} x^{(k)} (4.1), where x is the solution to (3.2). In practice, the iterative process is stopped at the minimum value of n such that ||x^{(n)} − x|| < ε, where ε is a fixed tolerance and ||·|| is any convenient vector norm. However, since the exact solution is obviously not available, it is necessary to introduce suitable stopping criteria to monitor the convergence of the iteration (see Section 4.6).
  • Linear Iterative Methods
    Chapter 6 Linear Iterative Methods 6.1 Motivation In Chapter 3 we learned that, in general, solving the linear system of equations Ax = b with A ∈ C^{n×n} and b ∈ C^n requires O(n^3) operations. This is too expensive in practice. The high cost begs the following questions: Are there lower cost options? Is an approximation of x good enough? How would such an approximation be generated? Often times we can find schemes that have a much lower cost of computing an approximate solution to x. As an alternative to the direct methods that we studied in the previous chapters, in the present chapter we will describe so-called iteration methods for constructing sequences, {x_k}_{k=1}^∞ ⊂ C^n, with the desire that x_k → x := A^{-1}b, as k → ∞. The idea is that, given some ε > 0, we look for a k ∈ N such that ||x_k − x|| ≤ ε with respect to some norm. In this context, ε is called the stopping tolerance. In other words, we want to make certain the error is small in norm. But a word of caution. Usually, we do not have a direct way of approximating the error. The residual is more readily available. Suppose that x_k is an approximation of x = A^{-1}b. The error is e_k = x − x_k and the residual is r_k = b − Ax_k = Ae_k. Recall that ||e_k||/||x|| ≤ κ(A) ||r_k||/||b||. Thus, when κ(A) is large, ||r_k||/||b||, which is easily computable, may not be a good indicator of the size of the relative error ||e_k||/||x||, which is not directly computable.
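A quick numerical illustration of this caution (not from the chapter itself; the Hilbert matrix is used only as a convenient ill-conditioned example):

```matlab
% Illustrative sketch: a small residual need not mean a small error when kappa(A) is large.
n = 12;
A = hilb(n);                 % notoriously ill-conditioned
x_true = ones(n,1);
b = A*x_true;

x_k = A\b;                   % computed approximation of x
r_k = b - A*x_k;             % residual
e_k = x_true - x_k;          % error

rel_res = norm(r_k)/norm(b);
rel_err = norm(e_k)/norm(x_true);
fprintf('kappa(A) = %.2e, rel. residual = %.2e, rel. error = %.2e\n', ...
        cond(A), rel_res, rel_err);
% rel_err is typically orders of magnitude larger than rel_res here,
% consistent with the bound rel_err <= kappa(A)*rel_res.
```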
  • Topic 3 Iterative Methods for Ax = b
    Topic 3 Iterative methods for Ax = b 3.1 Introduction Earlier in the course, we saw how to reduce the linear system Ax = b to echelon form using elementary row operations. Solution methods that rely on this strategy (e.g. LU factorization) are robust and efficient, and are fundamental tools for solving the systems of linear equations that arise in practice. These are known as direct methods, since the solution x is obtained following a single pass through the relevant algorithm. We are now going to look at some alternative approaches that fall into the category of iterative methods. These techniques can only be applied to square linear systems (n equations in n unknowns), but this is of course a common and important case. Iterative methods for Ax = b begin with an approximation to the solution, x_0, then seek to provide a series of improved approximations x_1, x_2, … that converge to the exact solution. For the engineer, this approach is appealing because it can be stopped as soon as the approximations x_i have converged to an acceptable precision, which might be something as crude as 10^{-3}. With a direct method, bailing out early is not an option; the process of elimination and back-substitution has to be carried right through to completion, or else abandoned altogether. By far the main attraction of iterative methods, however, is that for certain problems (particularly those where the matrix A is large and sparse) they are much faster than direct methods. On the other hand, iterative methods can be unreliable; for some problems they may exhibit very slow convergence, or they may not converge at all.
  • Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods
    Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods [1]. Richard Barrett [2], Michael Berry [3], Tony F. Chan [4], James Demmel [5], June M. Donato [6], Jack Dongarra [3,2], Victor Eijkhout [7], Roldan Pozo [8], Charles Romine [9], and Henk Van der Vorst [10]. This document is the electronic version of the 2nd edition of the Templates book, which is available for purchase from the Society for Industrial and Applied Mathematics (http://www.siam.org/books). [1] This work was supported in part by DARPA and ARO under contract number DAAL03-91-C-0047, the National Science Foundation Science and Technology Center Cooperative Agreement No. CCR-8809615, the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy, under Contract DE-AC05-84OR21400, and the Stichting Nationale Computer Faciliteit (NCF) by Grant CRG 92.03. [2] Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37830-6173. [3] Department of Computer Science, University of Tennessee, Knoxville, TN 37996. [4] Applied Mathematics Department, University of California, Los Angeles, CA 90024-1555. [5] Computer Science Division and Mathematics Department, University of California, Berkeley, CA 94720. [6] Science Applications International Corporation, Oak Ridge, TN 37831. [7] Texas Advanced Computing Center, The University of Texas at Austin, Austin, TX 78758. [8] National Institute of Standards and Technology, Gaithersburg, MD. [9] Office of Science and Technology Policy, Executive Office of the President. [10] Department of Mathematics, Utrecht University, Utrecht, the Netherlands. How to Use This Book We have divided this book into five main chapters. Chapter 1 gives the motivation for this book and the use of templates. Chapter 2 describes stationary and nonstationary iterative methods.
  • On Some Iterative Methods for Solving Systems of Linear Equations
    Computational and Applied Mathematics Journal 2015; 1(2): 21-28. Published online February 20, 2015 (http://www.aascit.org/journal/camj). On Some Iterative Methods for Solving Systems of Linear Equations. Fadugba Sunday Emmanuel, Department of Mathematical Sciences, Ekiti State University, Ado Ekiti, Nigeria. Email address: [email protected], [email protected]. Citation: Fadugba Sunday Emmanuel. On Some Iterative Methods for Solving Systems of Linear Equations. Computational and Applied Mathematics Journal. Vol. 1, No. 2, 2015, pp. 21-28. Keywords: Iterative Method, Jacobi Method, Modified Jacobi Method. Received: January 17, 2015; Revised: January 27, 2015; Accepted: January 28, 2015. Abstract: This paper presents some iterative methods for solving system of linear equations namely the Jacobi method and the modified Jacobi method. The Jacobi method is an algorithm for solving system of linear equations with largest absolute values in each row and column dominated by the diagonal elements. The modified Jacobi method also known as the Gauss Seidel method or the method of successive displacement is useful for the solution of system of linear equations. The comparative results analysis of the two methods was considered. We also discussed the rate of convergence of the Jacobi method and the modified Jacobi method. Finally, the results showed that the modified Jacobi method is more efficient, accurate and converges faster than its counterpart "the Jacobi Method". 1. Introduction In computational mathematics, an iterative method attempts to solve a system of linear equations by finding successive approximations to the solution starting from an initial guess. This approach is in contrast to direct methods, which attempt to solve the problem by a finite sequence of some operations and in the absence of rounding errors.
  • Numerical Linear Algebra Contents
    WS 2009/2010 Numerical Linear Algebra, Professor Dr. Christoph Pflaum. Contents: 1 Linear Equation Systems in the Numerical Solution of PDE's (p. 5); 1.1 Examples of PDE's (5); 1.2 Finite-Difference-Discretization of Poisson's Equation (7); 1.3 FD Discretization for Convection-Diffusion (8); 1.4 Irreducible and Diagonal Dominant Matrices (9); 1.5 FE (Finite Element) Discretization (12); 1.6 Discretization Error and Algebraic Error (15); 1.7 Basic Theory for Linear Iterative Solvers (15); 1.8 Effective Convergence Rate (18); 1.9 Jacobi and Gauss-Seidel Iteration (20); 1.9.1 Ideas of Both Methods (20); 1.9.2 Description of Jacobi and Gauss-Seidel Iteration by Matrices (22); 1.10 Convergence Rate of Jacobi and Gauss-Seidel Iteration (24); 1.10.1 General Theory for Weak Dominant Matrices (24); 1.10.2 Special Theory for the FD-Upwind (26); 1.10.3 FE analysis, Variational approach (30); 1.10.4 Analysis of the Convergence of the Jacobi Method (33); 1.10.5 Iteration Method with Damping Parameter (34); 1.10.6 Damped Jacobi Method (35); 1.10.7 Analysis of the Damped Jacobi method (35); 1.10.8 Heuristic approach (37); 2 Multigrid Algorithm (38); 2.1 Multigrid algorithm on a Simple Structured Grid (38); 2.1.1 Multigrid (38); 2.1.2 Idea of Multigrid Algorithm (39); 2.1.3 Two-grid Multigrid Algorithm (40); 2.1.4 Restriction and Prolongation Operators (41); 2.1.5 Prolongation or Interpolation (41); 2.1.6 Pointwise Restriction (41); 2.1.7 Weighted Restriction (42); 2.2 Iteration Matrix of the Two-Grid Multigrid Algorithm.
  • Comparison of Jacobi and Gauss-Seidel Iterative Methods for the Solution of Systems of Linear Equations
    Asian Research Journal of Mathematics 8(3): 1-7, 2018; Article no.ARJOM.34769, ISSN: 2456-477X. Comparison of Jacobi and Gauss-Seidel Iterative Methods for the Solution of Systems of Linear Equations. A. I. Bakari and I. A. Dahiru, Department of Mathematics, Federal University, Dutse, Nigeria. Authors' contributions: This work was carried out in collaboration between both authors. Author AIB analyzed the basic computational methods while author IAD implemented the method on some systems of linear equations of six-variable problems with the aid of the MATLAB programming language. Both authors read and approved the final manuscript. Article Information: DOI: 10.9734/ARJOM/2018/34769. Editor(s): (1) Danilo Costarelli, Department of Mathematics and Computer Science, University of Perugia, Italy. Reviewers: (1) Najmuddin Ahmad, Integral University, India. (2) El Akkad Abdeslam, Morocco. Complete Peer review History: http://www.sciencedomain.org/review-history/23003. Received: 10th June 2017; Accepted: 11th January 2018; Published: 3rd February 2018. Original Research Article. Abstract: In this research work two iterative methods for solving systems of linear equations have been compared; the iterative methods are used for solving sparse and dense systems of linear equations, and the methods considered are the Jacobi method and the Gauss-Seidel method. The results show that the Gauss-Seidel method is more efficient than the Jacobi method by considering the maximum number of iterations required to converge and accuracy. Keywords: Iterative methods; Linear equations problem; convergence; square matrix. 1 Introduction The development of numerical methods on a daily basis is to find the right solution techniques for solving problems in the field of applied science and pure science, such as weather forecasts, population, the spread of the disease, chemical reactions, physics, optics and others.
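For readers who want to reproduce this kind of comparison, a minimal Matlab sketch of the two iterations is given below (the small diagonally dominant test system is illustrative only and is not one of the six-variable problems from the paper):

```matlab
% Illustrative sketch: Jacobi vs. Gauss-Seidel on a small diagonally dominant system.
A = [10 -1 2; -1 11 -1; 2 -1 10];
b = [6; 25; -11];
x_j = zeros(3,1);  x_gs = zeros(3,1);
n = length(b);  tol = 1e-8;  maxit = 200;

for k = 1:maxit
    % Jacobi: every component is updated from the previous iterate
    x_old = x_j;
    for i = 1:n
        x_j(i) = (b(i) - A(i,[1:i-1, i+1:n])*x_old([1:i-1, i+1:n])) / A(i,i);
    end
    % Gauss-Seidel: newly updated components are used immediately
    for i = 1:n
        x_gs(i) = (b(i) - A(i,[1:i-1, i+1:n])*x_gs([1:i-1, i+1:n])) / A(i,i);
    end
    if norm(A*x_j - b) < tol && norm(A*x_gs - b) < tol
        break
    end
end
disp([x_j x_gs A\b])   % all three columns should agree closely
```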
  • Open Lu-Dissertation.Pdf
    The Pennsylvania State University, The Graduate School. THE AUXILIARY SPACE SOLVERS AND THEIR APPLICATIONS. A Dissertation in Mathematics by Lu Wang. © 2014 Lu Wang. Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy, December 2014. The dissertation of Lu Wang was reviewed and approved* by the following: Jinchao Xu, Professor of Mathematics, Dissertation Advisor, Chair of Committee; James Brannick, Associate Professor of the Department of Mathematics; Ludmil Zikatanov, Professor of the Department of Mathematics; Chao-Yang Wang, Professor of Materials Science and Engineering; Yuxi Zheng, Professor of the Department of Mathematics, Department Head. *Signatures are on file in the Graduate School. Abstract Developing efficient iterative methods and parallel algorithms for solving sparse linear systems discretized from partial differential equations (PDEs) is still a challenging task in scientific computing and practical applications. Although many mathematically optimal solvers, such as the multigrid methods, have been analyzed and developed, the unfortunate reality is that these solvers have not been used much in practical applications. In order to narrow the gap between theory and practice, we develop, formulate, and analyze mathematically optimal solvers that are robust and easy to use in practice based on the methodology of Fast Auxiliary Space Preconditioning (FASP). We develop a multigrid method on unstructured shape-regular grids by the construction of an auxiliary coarse grid hierarchy on which the multigrid method can be applied by using the FASP technique. Such a construction is realized by a cluster tree which can be obtained in O(N log N) operations for a grid of N nodes.