Comparison of Two Stationary Iterative Methods


Oleksandra Osadcha
Faculty of Applied Mathematics, Silesian University of Technology, Gliwice, Poland
Email: [email protected]

Abstract—This paper presents a comparison of two stationary iterative methods for solving systems of linear equations. The aim of the research is to analyze which method solves these systems faster and how many iterations each method requires. The paper presents some ways to deal with this problem.

Index Terms—comparison, analysis, stationary methods, Jacobi method, Gauss-Seidel method

I. INTRODUCTION

The term "iterative method" refers to a wide range of techniques that use successive approximations to obtain more accurate solutions to a linear system at each step. In this publication we cover two types of iterative methods. Stationary methods are older, simpler to understand and implement, but usually not as effective. Nonstationary methods are a relatively recent development; their analysis is usually harder to understand, but they can be highly effective. The nonstationary methods are based on the idea of sequences of orthogonal vectors. (An exception is the Chebyshev iteration method, which is based on orthogonal polynomials.)

The rate at which an iterative method converges depends greatly on the spectrum of the coefficient matrix. Hence, iterative methods usually involve a second matrix that transforms the coefficient matrix into one with a more favorable spectrum. The transformation matrix is called a preconditioner. A good preconditioner improves the convergence of the iterative method sufficiently to overcome the extra cost of constructing and applying the preconditioner. Indeed, without a preconditioner the iterative method may even fail to converge [1].

There are many publications on stationary iterative methods. As is well known, a real symmetric matrix can be transformed iteratively into diagonal form through a sequence of appropriately chosen elementary orthogonal transformations (in the following called Jacobi rotations), $A_k \to A_{k+1} = U_k^T A_k U_k$ ($A_0$ = given matrix); these transformations are presented in [2]. The special cyclic Jacobi method for computing the eigenvalues and eigenvectors of a real symmetric matrix annihilates the off-diagonal elements of the matrix successively, in the natural order, by rows or columns; the cyclic Jacobi method is presented in [4]. Some convergence results for the block Gauss-Seidel method, for problems where the feasible set is the Cartesian product of m closed convex sets, under the assumption that the sequence generated by the method has limit points, are presented in [5]. In [9] an improvement of the modified Gauss-Seidel method for Z-matrices is presented. In [11] the convergence of the Gauss-Seidel iterative method is established for a linear system whose matrix is positive definite; this by itself does not afford a conclusion on the convergence of the Jacobi method, but it is also proved there that the Jacobi method converges. These methods are also presented in [12], where each method is described.

This publication presents a comparison of the Jacobi and Gauss-Seidel methods, which are used for solving systems of linear equations. We analyze how many iterations each method requires and which method is faster.

II. JACOBI METHOD

We consider the system of linear equations

$$Ax = b$$

where $A \in M_{n \times n}$ and $\det A \neq 0$. We write the matrix as a sum of three matrices:

$$A = L + D + U$$

where:
• L – the strictly lower-triangular matrix
• D – the diagonal matrix
• U – the strictly upper-triangular matrix

If the matrix has the form

$$A = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{bmatrix}$$

then

$$L = \begin{bmatrix} 0 & 0 & \dots & 0 \\ a_{21} & 0 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & 0 \end{bmatrix}, \quad
D = \begin{bmatrix} a_{11} & 0 & \dots & 0 \\ 0 & a_{22} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & a_{nn} \end{bmatrix}, \quad
U = \begin{bmatrix} 0 & a_{12} & \dots & a_{1n} \\ 0 & 0 & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 0 \end{bmatrix}$$

If the matrix A is nonsingular ($\det A \neq 0$), then we can rearrange its rows so that there are no zeros on the diagonal, i.e. $a_{ii} \neq 0$ for all $i \in \{1, 2, \dots, n\}$. If the diagonal matrix D is nonsingular (so $a_{ii} \neq 0$, $i \in \{1, \dots, n\}$), then its inverse has the form

$$D^{-1} = \begin{bmatrix} \tfrac{1}{a_{11}} & 0 & \dots & 0 \\ 0 & \tfrac{1}{a_{22}} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \tfrac{1}{a_{nn}} \end{bmatrix}$$

Substituting the above decomposition into the system of equations, we obtain

$$(L + D + U)x = b$$

After further transformations we have

$$Dx = -(L + U)x + b$$

$$x = -D^{-1}(L + U)x + D^{-1}b$$

Using this, we get the iterative scheme of the Jacobi method:

$$x^{i+1} = -D^{-1}(L + U)x^{i} + D^{-1}b, \qquad i = 0, 1, 2, \dots$$

where $x^{0}$ is an initial approximation. The matrix $-D^{-1}(L + U)$ and the vector $D^{-1}b$ do not change during the calculations.

Algorithm of the Jacobi method:
1) Data:
   • matrix A
   • vector b
   • vector $x^{0}$
   • accuracy $\varepsilon$ ($\varepsilon > 0$)
2) We set the matrices $L + U$ and $D^{-1}$.
3) We compute the matrix $M = -D^{-1}(L + U)$ and the vector $w = D^{-1}b$.
4) We compute $x^{i+1} = Mx^{i} + w$ as long as $\lVert Ax^{i+1} - b \rVert > \varepsilon$.
5) The result is the last computed $x^{i+1}$.
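The algorithm above maps directly to a few lines of code. The following NumPy sketch is our illustration, not part of the paper: the function name, the use of the Euclidean norm in the stopping test, and the iteration cap are assumptions made here.

```python
import numpy as np

def jacobi(A, b, x0, eps=1e-10, max_iter=1000):
    """Jacobi iteration x^{i+1} = -D^{-1}(L+U) x^i + D^{-1} b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D_inv = np.diag(1.0 / np.diag(A))        # D^{-1}, requires a_ii != 0
    LU = A - np.diag(np.diag(A))             # L + U, the off-diagonal part of A
    M = -D_inv @ LU                          # iteration matrix, fixed during the run
    w = D_inv @ b                            # fixed vector
    x = np.asarray(x0, dtype=float)
    for i in range(max_iter):
        x = M @ x + w                        # step 4 of the algorithm
        if np.linalg.norm(A @ x - b) <= eps: # stop when ||Ax^{i+1} - b|| <= eps
            return x, i + 1
    return x, max_iter
```

Calling jacobi(A, b, [0, 0, 0]) with the data of the worked example below returns the exact solution after two iterations.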
Example. Use the Jacobi method to solve the following system of equations; the zero vector is assumed as the initial approximation, $v^{0} = 0$.

$$\begin{cases} 4x - y = 2 \\ -x + 4y - z = 6 \\ -y - 4z = 2 \end{cases}$$

We have:

$$A = \begin{bmatrix} 4 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & -4 \end{bmatrix}, \qquad b = \begin{bmatrix} 2 \\ 6 \\ 2 \end{bmatrix}$$

$$L + U = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 0 & -1 \\ 0 & -1 & 0 \end{bmatrix}, \qquad D^{-1} = \begin{bmatrix} \tfrac{1}{4} & 0 & 0 \\ 0 & \tfrac{1}{4} & 0 \\ 0 & 0 & -\tfrac{1}{4} \end{bmatrix}$$

We compute

$$M = -D^{-1}(L + U) = \begin{bmatrix} -\tfrac{1}{4} & 0 & 0 \\ 0 & -\tfrac{1}{4} & 0 \\ 0 & 0 & \tfrac{1}{4} \end{bmatrix} \cdot \begin{bmatrix} 0 & -1 & 0 \\ -1 & 0 & -1 \\ 0 & -1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & \tfrac{1}{4} & 0 \\ \tfrac{1}{4} & 0 & \tfrac{1}{4} \\ 0 & -\tfrac{1}{4} & 0 \end{bmatrix}$$

$$w = D^{-1}b = \begin{bmatrix} \tfrac{1}{4} & 0 & 0 \\ 0 & \tfrac{1}{4} & 0 \\ 0 & 0 & -\tfrac{1}{4} \end{bmatrix} \cdot \begin{bmatrix} 2 \\ 6 \\ 2 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2} \\ \tfrac{3}{2} \\ -\tfrac{1}{2} \end{bmatrix}$$

The iterative scheme has the form

$$v^{i+1} = Mv^{i} + w = \begin{bmatrix} 0 & \tfrac{1}{4} & 0 \\ \tfrac{1}{4} & 0 & \tfrac{1}{4} \\ 0 & -\tfrac{1}{4} & 0 \end{bmatrix} \cdot \begin{bmatrix} x^{i} \\ y^{i} \\ z^{i} \end{bmatrix} + \begin{bmatrix} \tfrac{1}{2} \\ \tfrac{3}{2} \\ -\tfrac{1}{2} \end{bmatrix}$$

For $i = 0$:

$$v^{1} = \begin{bmatrix} 0 & \tfrac{1}{4} & 0 \\ \tfrac{1}{4} & 0 & \tfrac{1}{4} \\ 0 & -\tfrac{1}{4} & 0 \end{bmatrix} \cdot \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} \tfrac{1}{2} \\ \tfrac{3}{2} \\ -\tfrac{1}{2} \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2} \\ \tfrac{3}{2} \\ -\tfrac{1}{2} \end{bmatrix}$$

$$Av^{1} - b = \begin{bmatrix} 4 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & -4 \end{bmatrix} \cdot \begin{bmatrix} \tfrac{1}{2} \\ \tfrac{3}{2} \\ -\tfrac{1}{2} \end{bmatrix} - \begin{bmatrix} 2 \\ 6 \\ 2 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2} \\ 6 \\ \tfrac{1}{2} \end{bmatrix} - \begin{bmatrix} 2 \\ 6 \\ 2 \end{bmatrix} = \begin{bmatrix} -\tfrac{3}{2} \\ 0 \\ -\tfrac{3}{2} \end{bmatrix}$$

$$\lVert Av^{1} - b \rVert = \sqrt{\left(-\tfrac{3}{2}\right)^{2} + 0^{2} + \left(-\tfrac{3}{2}\right)^{2}} = \frac{3\sqrt{2}}{2}$$

For $i = 1$:

$$v^{2} = \begin{bmatrix} 0 & \tfrac{1}{4} & 0 \\ \tfrac{1}{4} & 0 & \tfrac{1}{4} \\ 0 & -\tfrac{1}{4} & 0 \end{bmatrix} \cdot \begin{bmatrix} \tfrac{1}{2} \\ \tfrac{3}{2} \\ -\tfrac{1}{2} \end{bmatrix} + \begin{bmatrix} \tfrac{1}{2} \\ \tfrac{3}{2} \\ -\tfrac{1}{2} \end{bmatrix} = \begin{bmatrix} \tfrac{3}{8} \\ 0 \\ -\tfrac{3}{8} \end{bmatrix} + \begin{bmatrix} \tfrac{1}{2} \\ \tfrac{3}{2} \\ -\tfrac{1}{2} \end{bmatrix} = \begin{bmatrix} \tfrac{7}{8} \\ \tfrac{3}{2} \\ -\tfrac{7}{8} \end{bmatrix}$$

$$Av^{2} - b = \begin{bmatrix} 4 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & -4 \end{bmatrix} \cdot \begin{bmatrix} \tfrac{7}{8} \\ \tfrac{3}{2} \\ -\tfrac{7}{8} \end{bmatrix} - \begin{bmatrix} 2 \\ 6 \\ 2 \end{bmatrix} = \begin{bmatrix} 2 \\ 6 \\ 2 \end{bmatrix} - \begin{bmatrix} 2 \\ 6 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}, \qquad \lVert Av^{2} - b \rVert = 0$$
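As a quick numerical check of the hand computation, the short snippet below (an illustrative sketch; it simply rebuilds M and w for this system and performs the two steps) reproduces the iterates and residual norms obtained above.

```python
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0, -4.0]])
b = np.array([2.0, 6.0, 2.0])

D_inv = np.diag(1.0 / np.diag(A))      # diag(1/4, 1/4, -1/4)
M = -D_inv @ (A - np.diag(np.diag(A)))
w = D_inv @ b

v = np.zeros(3)                        # v^0 = 0
for i in range(2):
    v = M @ v + w
    print(i + 1, v, np.linalg.norm(A @ v - b))
# step 1: v = (1/2, 3/2, -1/2),  ||Av - b|| = 3*sqrt(2)/2 ≈ 2.1213
# step 2: v = (7/8, 3/2, -7/8),  ||Av - b|| = 0
```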
III. GAUSS-SEIDEL METHOD

Analogously to the Jacobi method, in the Gauss-Seidel method we write the matrix A in the form of a sum

$$A = L + D + U$$

where the matrices L, D and U are defined in the same way as in the Jacobi method. Next we get

$$Dx = -Lx - Ux + b$$

hence

$$x = -D^{-1}Lx - D^{-1}Ux + D^{-1}b$$

Based on the above formula, we can write the iterative scheme of the Gauss-Seidel method:

$$x^{i+1} = -D^{-1}Lx^{i+1} - D^{-1}Ux^{i} + D^{-1}b, \qquad i = 0, 1, 2, \dots$$

where $x^{0}$ is an initial approximation. In the Jacobi method, the corrected values of the subsequent coordinates of the solution are used only in the next step of the iteration; in the Gauss-Seidel method, each corrected coordinate is used as soon as it is available.

For the example system determined by

$$L = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & -1 & 0 \end{bmatrix}, \quad
D = \begin{bmatrix} 3 & 0 & 0 \\ 0 & -4 & 0 \\ 0 & 0 & -2 \end{bmatrix}, \quad
U = \begin{bmatrix} 0 & -1 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, \quad
b = \begin{bmatrix} 3 \\ -2 \\ -3 \end{bmatrix}$$

we compute

$$M_1 = -D^{-1} \cdot L = \begin{bmatrix} -\tfrac{1}{3} & 0 & 0 \\ 0 & \tfrac{1}{4} & 0 \\ 0 & 0 & \tfrac{1}{2} \end{bmatrix} \cdot \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & -1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ \tfrac{1}{4} & 0 & 0 \\ 0 & -\tfrac{1}{2} & 0 \end{bmatrix}$$

$$M_2 = -D^{-1} \cdot U = \begin{bmatrix} -\tfrac{1}{3} & 0 & 0 \\ 0 & \tfrac{1}{4} & 0 \\ 0 & 0 & \tfrac{1}{2} \end{bmatrix} \cdot \begin{bmatrix} 0 & -1 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & \tfrac{1}{3} & -\tfrac{1}{3} \\ 0 & 0 & \tfrac{1}{4} \\ 0 & 0 & 0 \end{bmatrix}$$

$$w = D^{-1} \cdot b = \begin{bmatrix} \tfrac{1}{3} & 0 & 0 \\ 0 & -\tfrac{1}{4} & 0 \\ 0 & 0 & -\tfrac{1}{2} \end{bmatrix} \cdot \begin{bmatrix} 3 \\ -2 \\ -3 \end{bmatrix} = \begin{bmatrix} 1 \\ \tfrac{1}{2} \\ \tfrac{3}{2} \end{bmatrix}$$

The iterative scheme then has the form

$$\begin{bmatrix} x_1^{i+1} \\ x_2^{i+1} \\ x_3^{i+1} \end{bmatrix} = M_1 \begin{bmatrix} x_1^{i+1} \\ x_2^{i+1} \\ x_3^{i+1} \end{bmatrix} + M_2 \begin{bmatrix} x_1^{i} \\ x_2^{i} \\ x_3^{i} \end{bmatrix} + w$$
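In code, the Gauss-Seidel update is most naturally written in component form, solving for each coordinate using the already-updated values of the preceding coordinates; this is equivalent to the scheme $x^{i+1} = M_1 x^{i+1} + M_2 x^{i} + w$ above, with the lower-triangular part resolved by forward substitution. The sketch below is our illustration under the same assumptions as the Jacobi sketch (function name, stopping test and iteration cap are choices made here, not taken from the paper).

```python
import numpy as np

def gauss_seidel(A, b, x0, eps=1e-10, max_iter=1000):
    """Gauss-Seidel iteration: each new coordinate is used as soon as it is computed."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    n = len(b)
    for i in range(max_iter):
        for j in range(n):
            # x[:j] already holds the values of step i+1, x[j+1:] still holds step i
            s = A[j, :j] @ x[:j] + A[j, j + 1:] @ x[j + 1:]
            x[j] = (b[j] - s) / A[j, j]
        if np.linalg.norm(A @ x - b) <= eps:
            return x, i + 1
    return x, max_iter
```

Running jacobi and gauss_seidel on the same system with the same tolerance makes the comparison in the title concrete; for diagonally dominant matrices such as those in the examples above, Gauss-Seidel typically reaches the tolerance in fewer iterations than Jacobi.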
Recommended publications
  • Overview of Iterative Linear System Solver Packages
  • Chebyshev and Fourier Spectral Methods 2000
  • Iterative Solution Methods and Their Rate of Convergence
  • Iterative Methods for Linear Systems of Equations: a Brief Historical Journey
  • Linear Algebra Libraries
  • Implementing a Preconditioned Iterative Linear Solver Using Massively Parallel Graphics Processing Units
  • Acceleration Methods
  • Explicit Iterative Methods of Second Order and Approximate Inverse Preconditioners for Solving Complex Computational Problems
  • Solution of General Linear Systems of Equations Using Block Krylov Based
  • LSRN: A Parallel Iterative Solver for Strongly Over- or Under-Determined Systems
  • A Parallel Iterative Solver for Strongly Over- or Underdetermined Systems
  • IML++ V.1.2 Iterative Methods Library, Reference Guide