
Purdue University, Purdue e-Pubs
Department of Computer Science Technical Reports

Parallel Solution of Large Sparse Linear Systems by a Balance Scheme Preconditioner

Tzvetan Ostromsky, Ahmed Sameh, Vivek Sarin
Department of Computer Sciences, Purdue University, West Lafayette, IN 47907-1398

Report Number: CSD-TR #98-030, September 1998
https://docs.lib.purdue.edu/cstech/1418

Abstract

A parallel algorithm for preconditioning large and sparse linear systems is proposed. Both structural and numerical dropping are used to construct a preconditioner with the proper structure. The Balance method is used to solve the linear system involving this preconditioner in each iteration. The algorithm can be used together with any iterative method (GMRES is used for the experiments in this paper). As shown by numerical experiments, this approach is quite robust and has attractive parallel performance. It has been successful in solving quickly, and accurately, some ill-conditioned systems which proved to be difficult for other preconditioned iterative methods.

Keywords: linear system, iterative method, preconditioner, sparse matrix, drop-tolerance, block partitioning, bandwidth, parallel computations.
1 Introduction

Iterative methods are most commonly used for solving large and sparse linear problems [9], [13], [16]. They are often faster and require less storage than direct solvers [4]. The main concern about their use, however, is their robustness as well as the achievable accuracy. Without proper preconditioning they often fail or stagnate and, even worse, can converge far from the actual solution (see the results for GRE216B in Table 1 in the last section of this paper). This is why the preconditioning technique is at least as important as the iterative method, and a lot of research has been conducted in this area [3, 6, 8, 14, 15, 16, 17]. More details regarding the construction of our preconditioners are given in Section 2.

The Balance method (its original projection-based version is described in [10]) is used to perform the preconditioning. After block-row partitioning and factorization of the blocks, it eventually leads to the solution of a reduced system of smaller size. Rather than factorizing the entire preconditioner, as in the well-known Incomplete LU-factorization (ILU) and similar preconditioning techniques, we factorize only its blocks and the reduced system, which are normally much smaller. In addition, there is a lot of natural parallelism in this task, which is highly desirable when using multiprocessors. To take full advantage of the features of high-performance supercomputers, dense LAPACK [1] kernels are used in most of the floating-point computations. This results in a larger amount of arithmetic compared to classical sparse techniques ([4], [7], [17]), but also in more efficiency on multiprocessors due to higher data locality. The approach generally gives good results for matrices in which most of the nonzeros are packed in a band around the main diagonal (or matrices that can be reordered into such a form).
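The paper assumes matrices whose nonzeros can be clustered into a band by reordering. One standard bandwidth-reduction heuristic (a common choice for illustration; the paper does not name the specific algorithm it uses) is reverse Cuthill-McKee, sketched here with SciPy:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    """Maximum distance of a nonzero from the main diagonal."""
    rows, cols = A.nonzero()
    return int(np.max(np.abs(rows - cols))) if rows.size else 0

# A small sparse matrix with nonzeros far from the diagonal.
A = csr_matrix(np.array([
    [4., 0., 0., 1.],
    [0., 3., 1., 0.],
    [0., 1., 5., 0.],
    [1., 0., 0., 2.],
]))

# Symmetric reordering that tends to cluster nonzeros in a band.
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_band = A[perm, :][:, perm]

print(bandwidth(A), bandwidth(A_band))  # bandwidth shrinks after reordering
```

The same symmetric permutation must be applied to the right-hand side of the system, as described for the skeleton matrix in Section 2.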
The bandwidth imposes certain restrictions on the number of blocks that can be used by the Balance scheme, which in turn limits the parallel performance of the method. In other words, the structure of the preconditioner is an important issue in our approach. The Balance method, which is responsible for the preconditioning operations in our scheme, is described in more detail in Section 3. In Section 4 some results of numerical experiments on an SGI Origin-2000 are presented. The SPARSKIT version of GMRES ([13], [14]) is used as the underlying iterative algorithm. CG and BiCGSTAB [16] have also been tested and give similar results, which are not presented in this paper. For comparison, GMRES has also been run with some ILU-type preconditioners available in SPARSKIT. The main conclusions of the experiments are summarized in Section 5.

2 Constructing the preconditioner

Generally speaking, a good preconditioner $\tilde{A}$ of the given matrix $A$ must satisfy the following conditions:

1. It should be nonsingular ($\mathrm{rank}(\tilde{A}) = n$);
2. It should be easy to invert/factorize;
3. The product $\tilde{A}^{-1}A$ should be better conditioned than $A$.

It is very difficult to satisfy all of these requirements, especially for general sparse matrices. In our case, a mix of structural and numerical dropping strategies, combined with a proper reordering algorithm for bandwidth minimization, is used to obtain an appropriate preconditioner. The nonzero structure of the preconditioner has lower and upper staircase-like boundaries that contain the main diagonal (or, in general, a narrow band with lower and upper bandwidths $k_1$ and $k_2$ specified by the user). The structure is obtained in the following way:

1. For each $i = 1, \ldots, n$:

   (a) The $i$-th row is scanned to find the maximal absolute value of its elements. The drop-tolerance for this row, $\tau_i = \tau \max_j |a_{ij}|$, is then determined, where $\tau$ is the relative drop-tolerance specified by the user.
   (b) Another scan is performed in order to extract all the elements $\{a_{ij} : |a_{ij}| \ge \tau_i\}$. These form the $i$-th row of the skeleton matrix of $A$.

2. The skeleton matrix is reordered in order to minimize its bandwidth. The same permutation is applied to the original system (matrix $A$ and right-hand side $b$).

3. For each $i = 1, \ldots, n$:

   (a) The $i$-th row of $A$ is scanned to find the leftmost and the rightmost column indices, $l_i$ and $r_i$, of a nonzero larger (in absolute value) than $\tau_i$. If nonzero lower and upper bandwidths $k_1$ and $k_2$ are specified, then $l_i = i - k_1$ and $r_i = i + k_2$ are taken as initial values of $l_i$ and $r_i$, in order to preserve all the nonzeros within the band.

   (b) The $i$-th row is scanned again and all the nonzeros outside the interval $[l_i, r_i]$ are dropped (i.e., considered to be zeros).

This procedure is consistent with the classical numerical dropping strategy (see [7], [17]), as all the nonzeros larger (in absolute value) than $\tau_i$ are kept. The elements between the boundaries, however, are never dropped, irrespective of their numerical value. As the matrix will later be processed as dense blocks, no saving would be realized even if these elements were dropped. Keeping them, however, results in a more robust preconditioner compared to the incomplete LU-factorization (see Table 1 in Section 4, Numerical results). The "pseudo-banded" structure of the preconditioners is illustrated in Figure 1 and is exploited by the Balance method, described in the next section.

Figure 1: Sparsity structure of a "pseudo-banded" preconditioner. (Legend: nonzeros larger than the tolerance; elements smaller than the tolerance inside the selected band, which are kept; elements of $A$ dropped from the preconditioner because they are smaller than the tolerance.)

3 The Balance method

The Balance method, introduced in [10], is a powerful block-parallel algorithm for solving sparse linear systems.
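The construction above can be sketched in dense NumPy form as follows (a simplified illustration under our own naming; step 2, the bandwidth-minimizing reordering of the skeleton matrix, is omitted):

```python
import numpy as np

def pseudo_banded_preconditioner(A, tau, k1=0, k2=0):
    """Sketch of the Section 2 dropping procedure (dense, reordering omitted).

    tau    -- relative drop-tolerance
    k1, k2 -- optional lower/upper bandwidths whose band is always preserved
    """
    n = A.shape[0]
    M = np.zeros_like(A)
    for i in range(n):
        # Step 1: per-row drop-tolerance tau_i = tau * max_j |a_ij|.
        tau_i = tau * np.max(np.abs(A[i]))
        cols = np.nonzero(np.abs(A[i]) >= tau_i)[0]   # "large" entries of row i
        # Step 3a: leftmost/rightmost large entries bound the row's interval,
        # initialized to [i-k1, i+k2] when bandwidths are specified.
        l_i = min(cols[0], i - k1) if (k1 or k2) else cols[0]
        r_i = max(cols[-1], i + k2) if (k1 or k2) else cols[-1]
        l_i, r_i = max(l_i, 0), min(r_i, n - 1)
        # Step 3b: keep everything inside [l_i, r_i], drop the rest.
        M[i, l_i:r_i + 1] = A[i, l_i:r_i + 1]
    return M

A = np.array([[10., 0.2, 0.,  5.],
              [0.1, 8.,  3.,  0.],
              [0.,  2.,  9.,  0.01],
              [4.,  0.,  0.3, 7.]])
M = pseudo_banded_preconditioner(A, tau=0.1)
```

In this toy example the small entries at positions (1,0) and (2,3) fall outside their rows' intervals and are dropped, while the small entry 0.2 in row 0 lies between two large entries and is kept, exactly as the text prescribes.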
Its main idea is to reduce the original problem into several subproblems of smaller size. After the partitioning, it computes the QR-factorizations of non-overlapping block-rows (these are independent tasks), followed by the solution of a system that corresponds to the unknowns common to neighboring blocks (called the reduced system). For its block-partitioning, the Balance method uses the specific structure of the matrix (banded, block-banded or "pseudo-banded", as described in the previous section). For the sake of efficiency the band should be as narrow as possible.

The band of such a matrix, split into several block-rows, has the block structure shown in (1). To a certain extent one is free to vary the number and size of the block-rows, which in turn strictly determine the entire block structure. The smaller the bandwidth, the larger the number of blocks into which the matrix can be partitioned, and the smaller the size of the reduced system for a partitioning into a given number of blocks. The corresponding block-matrix expression of the original problem (solving the sparse system $Ax = b$) is given in (1). The overlapping parts (sets of columns) of the neighboring block-rows should be separated by non-overlapping parts (the blocks $A_i$), as shown below:

$$
\begin{pmatrix}
A_1 & B_1 &     &        &        &     \\
    & C_2 & A_2 & B_2    &        &     \\
    &     & C_3 & A_3    & B_3    &     \\
    &     &     & \ddots & \ddots & \ddots \\
    &     &     &        & C_p    & A_p
\end{pmatrix}
\begin{pmatrix} x_1 \\ \xi_1 \\ x_2 \\ \xi_2 \\ \vdots \\ \xi_{p-1} \\ x_p \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_p \end{pmatrix}
\tag{1}
$$

where $x_i$ and $\xi_i$ are the unknowns corresponding to $A_i$ and to $B_i$ (or $C_{i+1}$), respectively.

Splitting the above system by block-rows (and temporarily duplicating the unknowns $\xi_i$, common to both $B_i$ and $C_{i+1}$), we obtain:

$$ (A_1 \;\; B_1) \begin{pmatrix} x_1 \\ \xi_1 \end{pmatrix} = b_1 \tag{2} $$

$$ (C_i \;\; A_i \;\; B_i) \begin{pmatrix} \xi_{i-1} \\ x_i \\ \xi_i \end{pmatrix} = b_i, \quad i = 2, \ldots, p-1 \tag{3} $$

$$ (C_p \;\; A_p) \begin{pmatrix} \xi_{p-1} \\ x_p \end{pmatrix} = b_p \tag{4} $$

Denote by $E_i$ the non-trivial part of the $i$-th block-row:

$$ E_1 = (A_1 \;\; B_1) \tag{5} $$

$$ E_i = (C_i \;\; A_i \;\; B_i), \quad i = 2, \ldots, p-1 \tag{6} $$

$$ E_p = (C_p \;\; A_p) \tag{7} $$

The solution of each of the underdetermined subsystems (2)-(4) can be obtained via QR-factorization, as follows:

$$ E_i z_i = b_i, \quad i = 1, \ldots, p \tag{8} $$
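To make the mechanics concrete, here is a toy two-block ($p = 2$) sketch of the scheme: solve each underdetermined block-row independently, then use a small reduced system to make the two duplicated copies of the overlap unknowns $\xi$ agree. The sizes and variable names are ours, and an SVD is used to obtain the null-space basis where the paper uses QR factorizations; this is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-block instance of (1): block-rows E1 = (A1, B1) and E2 = (C2, A2)
# share the overlap unknowns xi (size q). Sizes are chosen so the full
# system is square: (m + m) x (n1 + q + n2) = 10 x 10.
n1, n2, q, m = 4, 4, 2, 5
A1, B1 = rng.standard_normal((m, n1)), rng.standard_normal((m, q))
C2, A2 = rng.standard_normal((m, q)), rng.standard_normal((m, n2))
E1 = np.hstack([A1, B1])            # acts on (x1, xi)
E2 = np.hstack([C2, A2])            # acts on (xi, x2)

x_true = rng.standard_normal(n1 + q + n2)
b1 = E1 @ x_true[:n1 + q]
b2 = E2 @ x_true[n1:]

def particular_and_nullspace(E, b):
    """Min-norm solution of the underdetermined system E z = b and an
    orthonormal null-space basis of E (via SVD; the paper uses QR)."""
    zp, *_ = np.linalg.lstsq(E, b, rcond=None)
    _, s, Vt = np.linalg.svd(E)
    return zp, Vt[len(s):].T        # E has full row rank here

z1, N1 = particular_and_nullspace(E1, b1)
z2, N2 = particular_and_nullspace(E2, b2)

# Reduced system: pick null-space coefficients y1, y2 so that the two
# temporarily duplicated copies of xi coincide:
#   z1_xi + N1_xi y1 = z2_xi + N2_xi y2.
d1 = N1.shape[1]                              # null-space dimension of E1
lhs = np.hstack([N1[n1:], -N2[:q]])           # rows of N1, N2 touching xi
y = np.linalg.solve(lhs, z2[:q] - z1[n1:])    # square reduced system here

x1_xi = z1 + N1 @ y[:d1]            # (x1, xi)
xi_x2 = z2 + N2 @ y[d1:]            # (xi, x2)
x = np.concatenate([x1_xi, xi_x2[q:]])
print(np.allclose(x, x_true))       # the glued solution matches
```

The reduced system has as many unknowns as the combined null-space dimensions of the block-rows, which is why a narrow band (small overlaps, many blocks) keeps it small, as the text explains.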