Implementation in ScaLAPACK of Divide-and-Conquer Algorithms for Banded and Tridiagonal Linear Systems

A. Cleary
Department of Computer Science, University of Tennessee

J. Dongarra
Department of Computer Science, University of Tennessee
Mathematical Sciences Section, Oak Ridge National Laboratory

Abstract

Described here are the design and implementation of a family of algorithms for a variety of classes of narrowly banded linear systems. The classes of matrices include symmetric and positive definite, nonsymmetric but diagonally dominant, and general nonsymmetric; and all these types are addressed for both general band and tridiagonal matrices. The family of algorithms captures the general flavor of existing divide-and-conquer algorithms for banded matrices in that they have three distinct phases, the first and last of which are completely parallel, and the second of which is the parallel bottleneck. The algorithms have been modified so that they have the desirable property that they are the same mathematically as existing factorizations (Cholesky, Gaussian elimination) of suitably reordered matrices. This approach represents a departure in the nonsymmetric case from existing methods, but has the practical benefits of a smaller and more easily handled reduced system. All codes implement a block odd-even reduction for the reduced system that allows the algorithms to scale far better than existing codes that use variants of sequential solution methods for the reduced system. A cross section of results is displayed that supports the predicted performance results for the algorithms. Comparison with existing dense-type methods shows that for areas of the problem parameter space with low bandwidth and/or high number of processors, the family of algorithms described here is superior.

1 Introduction

We are concerned in this work with the solution of banded linear systems of equations Ax = b. The matrix A is n × n.
In general, x and b are n × nrhs matrices, but it is sufficient to consider nrhs = 1 for explanation of the algorithms. Here, nrhs is the number of right-hand sides in the system of equations. The matrix A is banded, with lower bandwidth β_l and upper bandwidth β_u. This has the following meaning: if i > j and i - j > β_l, then A_{i,j} = 0; if j > i and j - i > β_u, then A_{i,j} = 0. Thus, the matrix entries are identically zero outside of a certain distance from the main diagonal. We are concerned here with narrowly banded matrices, that is, β_l << n and β_u << n. Note that while, in general, entries within the band may also be numerically zero, it is assumed in the code that all entries within the band are nonzero. As a special case of narrowly banded matrices, we are also concerned with tridiagonal matrices, specifically β_l = β_u = 1.

Three classes of matrix are of interest: general nonsymmetric; symmetric and positive definite; and nonsymmetric with conditions that allow for stable computations without pivoting, such as diagonal dominance. In this article we focus on the last two cases and indicate how the family of algorithms we present can apply to other classes of matrix. The first class, general nonsymmetric, is the subject of a forthcoming report.

This article concentrates on issues related to the inclusion in ScaLAPACK [CDPW93, BCC+97] of library-quality implementations of the algorithms discussed here. ScaLAPACK is a public-domain portable software library that provides broad functionality in linear algebra mathematical software to a wide variety of distributed-memory parallel systems. Details on retrieving the codes discussed here, as well as the rest of the ScaLAPACK package, are given at the end of this article.

Algorithms for factoring a single banded matrix A fall into two distinct classes.

Dense-type: Fine-grained parallelism such as is exploited in dense matrix algorithms is used. The banded structure is used to reduce operation counts but not to provide parallelism.
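The bandwidth definitions above can be made concrete with a small sketch. The function name and the pure-Python list-of-rows representation are illustrative only, not part of the ScaLAPACK codes:

```python
# Illustrative check of the banded structure defined above:
# A[i][j] == 0 whenever i - j > bl (below the band) or j - i > bu (above it).
def is_banded(A, bl, bu):
    """Return True if square matrix A (list of rows, 0-indexed) has
    lower bandwidth <= bl and upper bandwidth <= bu."""
    n = len(A)
    for i in range(n):
        for j in range(n):
            if (i - j > bl or j - i > bu) and A[i][j] != 0:
                return False
    return True

# A tridiagonal matrix is the special case bl = bu = 1.
T = [[2, -1, 0, 0],
     [-1, 2, -1, 0],
     [0, -1, 2, -1],
     [0, 0, -1, 2]]
print(is_banded(T, 1, 1))   # True
print(is_banded(T, 0, 0))   # False: the off-diagonals are nonzero
```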
Divide and Conquer: Medium/large-grained parallelism resulting from the structure of banded matrices is used to divide the matrix into large chunks that, for the most part, can be dealt with independently.

The key parameter for choosing between the two classes of algorithms is the bandwidth. Parallelism of the dense methods depends on the bandwidth in exactly the same way that parallelism in dense matrix algorithms depends on the matrix size. For small bandwidths, dense methods are very inefficient. Reordering methods are the opposite: the smaller the bandwidth, the greater the potential level of parallelism. However, reordering methods have an automatic penalty that reduces the spectrum over which they are the methods of choice: because of fill-in, they incur an operation-count penalty of approximately four compared with sequential algorithms. They are therefore limited to an efficiency of no greater than 25%. Nonetheless, these algorithms scale well and must be included in a parallel library.

Many studies of parallel computing reported in the literature [Wri91, AG95, GGJT96, BCD+94, Joh87, LS84] address either the general banded problem or special cases such as the tridiagonal problem. Wright [Wri91] presented an ambitious attempt at incorporating pivoting, both row and column, throughout the entire calculation. His single-width separator algorithm without pivoting is similar to the work described here. However, Wright targeted relatively small parallel systems and, in particular, used a sequential method for the reduced system. Arbenz and Gander [AG95] present experimental results demonstrating that sequential solution of the reduced system seriously impacts scalability. They discuss the basic divide-and-conquer algorithms used here, but do not give the level of detail on implementation that we present. Gustavson et al. [GGJT96] tackle the wide-banded case with a systolic approach that involves an initial remapping of the data.
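The 25% efficiency ceiling follows from simple arithmetic. A minimal sketch of that bound, under the idealized assumptions of perfect load balance and zero communication cost (the function name is invented for illustration):

```python
# Idealized model: the reordering-based algorithm performs about 4x the
# operations of the best sequential banded factorization, so even a
# perfectly parallelized run takes (4 * W) / p time for sequential work W
# on p processors.
def max_efficiency(work_ratio, p):
    """Upper bound on parallel efficiency, T_seq / (p * T_par)."""
    t_seq = 1.0                      # normalized sequential work W
    t_par = work_ratio * t_seq / p   # ideal parallel time with inflated work
    return t_seq / (p * t_par)

for p in (4, 16, 64):
    print(p, max_efficiency(4.0, p))   # 0.25 regardless of p
```

The processor count cancels out of the bound, which is why the 25% ceiling holds at any scale; real runs sit below it once communication is charged.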
Cleary [BCD+94] presents an in-place wide-banded algorithm that is much closer to the ScaLAPACK style of algorithm [CDPW93], utilizing fine-grained parallelism within BLAS-3 for parallel performance.

2 Divide-and-Conquer Algorithms

In this article, we are concerned only with divide-and-conquer methods that are appropriate for narrowly banded matrices. The family of divide-and-conquer algorithms used in ScaLAPACK performs the following algebraic steps. Here, P is an n × n permutation matrix specified in Section 3 that reorders A to allow exploitation of parallelism. First, Ax = b is multiplied on the left by P to produce the equation

    (P A P^{-1}) (P x) = P b.    (1)

The reordered matrix P A P^{-1} is factored via Gaussian elimination (if A is symmetric and positive definite, Cholesky decomposition is used and U = L^T):

    P A P^{-1} = L U.

Substituting this factorization and the following definitions,

    x' = P x,   b' = P b,

we are left with the system

    L U x' = b'.

This is solved in the traditional fashion by using triangular solutions:

    L z = b',   U x' = z.

The final step is recovery of x from x':

    x = P^{-1} x'.

3 The Symmetric Positive Definite Case

Dongarra and Johnsson [DJ87] showed that their divide-and-conquer algorithm, when applied correctly to a symmetric positive definite matrix, can take advantage of symmetry throughout, including the solution of the reduced system. In fact, the reduced system is itself symmetric positive definite. This fact can be explained easily in terms of sparse matrix theory: the algorithm is equivalent to applying a symmetric permutation to the original matrix and performing Cholesky factorization on the reordered matrix. This follows from the fact that the matrix P A P^T resulting from a symmetric permutation P applied to a symmetric positive definite matrix A is also symmetric positive definite and thus has a unique Cholesky factorization L.
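The algebraic steps above rest on the identity (P A P^{-1})(P x) = P b whenever Ax = b. A minimal pure-Python check of that identity on a 4 × 4 example (the matrix values, permutation, and helper names are invented for illustration; for a permutation matrix, P^{-1} = P^T):

```python
def matmul(X, Y):
    """Naive dense matrix-matrix product for lists of rows."""
    n, m, k = len(X), len(Y[0]), len(Y)
    return [[sum(X[i][t] * Y[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def matvec(X, v):
    """Naive matrix-vector product."""
    return [sum(X[i][j] * v[j] for j in range(len(v))) for i in range(len(X))]

# Permutation matrix for the ordering (0, 2, 1, 3).
perm = [0, 2, 1, 3]
P = [[1 if j == perm[i] else 0 for j in range(4)] for i in range(4)]
Pt = [[P[j][i] for j in range(4)] for i in range(4)]  # P^T = P^{-1}

# An arbitrary small banded test system with known solution x.
A = [[4, 1, 0, 0],
     [1, 4, 1, 0],
     [0, 1, 4, 1],
     [0, 0, 1, 4]]
x = [1, 2, 3, 4]
b = matvec(A, x)

PAPinv = matmul(matmul(P, A), Pt)   # the reordered matrix P A P^{-1}
lhs = matvec(PAPinv, matvec(P, x))  # (P A P^{-1}) (P x)
rhs = matvec(P, b)                  # P b
print(lhs == rhs)   # True
```

In the ScaLAPACK codes the permutation is never formed explicitly; this sketch only verifies the algebra that the factor-and-solve steps rely on.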
The reordering is applied to the right-hand sides as well, and the solution must be permuted back to the original ordering to give the final solution. The key point is that, mathematically, the whole process can be analyzed and described in terms of the Cholesky factorization of a specially structured sparse matrix. We take this approach in the sequel.

Figure 1 pictorially illustrates the user matrix upon input, assuming the user chooses to input the matrix in lower triangular form (analogous to the banded routines in LAPACK [ABB+95], we provide the option of entering the matrix in either lower or upper form). Each processor stores a contiguous set of columns of the matrix, denoted by the thicker lines in the figure. We partition each processor's matrix into blocks, as shown. The sizes of the A_i are given respectively by O_i. The matrices B_i, C_i, D_i are all β × β. Note that the last processor has only its A_i block.

[Figure 1: Divide-and-conquer partitioning of the lower triangular portion of a symmetric matrix, showing diagonal blocks A_1 through A_4 with coupling blocks B_i, C_i, D_i.]

The specific reordering can be described as follows:

- Number the equations in the A_i first, keeping the same relative order of these equations.

- Number the equations in the C_i next, again keeping the same relative order of these equations.

This results in the lower triangular matrix shown in Figure 2. We stress that we do not physically reorder the matrix but, rather, base our block operations on this mathematical reordering.
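The two numbering steps above can be sketched as the construction of a permutation vector. The layout here is a simplifying assumption (equal-size chunks per processor, a single separator of width beta at the end of each chunk but the last), not the exact ScaLAPACK partitioning, and the function name is invented:

```python
def separator_reordering(n, p, beta):
    """Return new_order, where new_order[k] is the original index of the
    equation numbered k after reordering: interior (A_i) equations first,
    then the beta-wide separator (C_i) equations, each group keeping its
    original relative order."""
    interior, separators = [], []
    chunk = n // p
    for i in range(p):
        start = i * chunk
        stop = (i + 1) * chunk if i < p - 1 else n
        if i < p - 1:
            # Last beta equations of this chunk form the separator C_i.
            interior.extend(range(start, stop - beta))
            separators.extend(range(stop - beta, stop))
        else:
            # The last processor has no trailing separator.
            interior.extend(range(start, stop))
    return interior + separators

order = separator_reordering(12, 3, 1)
print(order)                              # interior indices, then 3 and 7
print(sorted(order) == list(range(12)))   # a valid permutation: True
```

As the text stresses, the real codes never apply this permutation to the stored matrix; the block operations are simply organized as if it had been applied.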