Implementation of a preconditioned eigensolver using Hypre

Andrew V. Knyazev* and Merico E. Argentati
Department of Mathematics, University of Colorado at Denver, USA

SUMMARY

This paper describes a parallel preconditioned algorithm for the solution of partial eigenvalue problems for large sparse symmetric matrices on massively parallel computers, taking advantage of advances in the scalable linear solvers, in particular in multigrid technology and in incomplete factorizations, developed under the Hypre project at the Lawrence Livermore National Laboratory (LLNL) Center for Applied Scientific Computing. The algorithm implements a "matrix-free" locally optimal block preconditioned conjugate gradient method (LOBPCG), suggested earlier by the first author, to compute one or more of the smallest eigenvalues and the corresponding eigenvectors of a symmetric matrix. We discuss our Hypre-specific implementation approach for a flexible parallel algorithm and the capabilities of the developed software. We demonstrate parallel performance on a set of test problems, using Hypre algebraic multigrid and other preconditioners. The code is written in MPI-based C and uses the Hypre and LAPACK libraries. It has been included in Hypre starting with revision 1.8.0b, which is publicly available on the Internet at the LLNL web site. Copyright © 2005 John Wiley & Sons, Ltd.

KEY WORDS: eigenvalue; eigenvector; symmetric eigenvalue problem; preconditioning; sparse matrices; parallel computing; conjugate gradients; algebraic multigrid; additive Schwarz; incomplete factorization; Hypre

AMS(MOS) subject classifications: 65F15, 65N25, 65Y05.

1. Introduction

We implement a parallel preconditioned algorithm of the Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG) [3, 4] for the solution of eigenvalue problems Ax = λx for large sparse symmetric matrices A on massively parallel computers.
We take advantage of advances in the scalable linear solvers, in particular in multigrid technology and in incomplete factorizations, developed under the High Performance Preconditioners (Hypre) project [5] at the Lawrence Livermore National Laboratory (LLNL) Center for Applied Scientific Computing. The solver allows the use of a set of Hypre preconditioners for the solution of an eigenvalue problem. We discuss the implementation approach for a flexible "matrix-free" parallel algorithm and the capabilities of the developed software; see [1] for details. We test the performance of the software on a few simple test problems. The LOBPCG software has been integrated into the Hypre software at LLNL and has been included in the recent beta release Hypre-1.8.0b. Hypre is publicly available for download on the Internet at the LLNL web site. Software for the preconditioned eigensolvers is available through the Internet page http://www-math.cudenver.edu/~aknyazev/software/CG/, which contains, in particular, the MATLAB code of LOBPCG.

The rest of the paper is organized as follows. Section 2 contains a complete and detailed description of the LOBPCG algorithm as implemented in Hypre. We discuss our Hypre-specific implementation approach for a flexible parallel algorithm and the capabilities of the developed software in Sections 3-4. We demonstrate parallel performance on a set of test problems, using Hypre-supported preconditioners, in Section 5. The final Section 6 contains the conclusions.

*Correspondence to: Department of Mathematics, University of Colorado at Denver, P.O. Box 173364, Campus Box 170, Denver, CO 80217-3364. E-mail: [email protected]
Contract/grant sponsor: Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
Contract/grant sponsor: National Science Foundation; contract/grant number: DMS 0208773

2. The Detailed Description of the LOBPCG Algorithm

Below is a detailed description of the LOBPCG algorithm that has been implemented in the Hypre code. The earlier published LOBPCG description in [4] is not nearly as complete as the description below.

LOBPCG Algorithm in Hypre

Input: m starting linearly independent vectors in X ∈ R^{n×m}; devices to compute A*x and T*x.

1. Allocate memory for the six matrices X, AX, W, AW, P, AP ∈ R^{n×m}.
2. Initial settings for the loop: [Q,R] = qr(X); X = Q; AX = A*X; k = 0.
3. Compute the initial Ritz vectors and the initial residual:
   [TMP,Λ] = eig(X^T * AX); X = X*TMP; AX = AX*TMP; W = AX - X*Λ.
4. Get the norm of each individual residual and store it in normR ∈ R^m.
5. do while (max(normR) > tol)
6.   Find the index set I of normR for which normR > tol. The index set I defines those residual vectors whose norm is greater than the tolerance. Refer to the column vectors defined by I, which are a subset of the columns of W and P, as WI and PI.
7.   Apply the preconditioner: WI = T*WI.
8.   Orthonormalize WI: [Q,R] = qr(WI); WI = Q.
9.   Update AWI: AWI = A*WI.
10.  if k > 0
11.    Orthonormalize PI: [Q,R] = qr(PI); PI = Q.
12.    Update API: API = API*R^{-1}.
13.  end if
14.  Complete the Rayleigh-Ritz procedure:
15.  if k > 0
16.    Compute
           G = [ Λ          X^T*AWI     X^T*API  ]
               [ WI^T*AX    WI^T*AWI    WI^T*API ]
               [ PI^T*AX    PI^T*AWI    PI^T*API ].
17.    Compute
           M = [ I          X^T*WI      X^T*PI  ]
               [ WI^T*X     I           WI^T*PI ]
               [ PI^T*X     PI^T*WI     I       ].
18.  else
19.    Compute
           G = [ Λ          X^T*AWI  ]
               [ WI^T*AX    WI^T*AWI ].
20.    Compute
           M = [ I          X^T*WI ]
               [ WI^T*X     I      ].
21.  end if
22.  Solve the generalized eigenvalue problem G*y = λ*M*y; store the first m eigenvalues in increasing order in the diagonal matrix Λ and the corresponding m eigenvectors in Y.
23.  if k > 0
24.    Partition Y = [Y1; Y2; Y3] according to the number of columns in X, WI, and PI, respectively.
25.    Compute X = X*Y1 + WI*Y2 + PI*Y3.
26.    Compute AX = AX*Y1 + AWI*Y2 + API*Y3.
27.    Compute PI = WI*Y2 + PI*Y3.
28.    Compute API = AWI*Y2 + API*Y3.
29.  else
30.    Partition Y = [Y1; Y2] according to the number of columns in X and WI, respectively.
31.    Compute X = X*Y1 + WI*Y2.
32.    Compute AX = AX*Y1 + AWI*Y2.
33.    Compute PI = WI.
34.    Compute API = AWI.
35.  end if
36.  Compute the new residuals: WI = AXI - XI*ΛI.
37.  Update the residual norms normR for the columns of W in WI.
38.  k = k + 1
39. end do

Output: eigenvectors X and eigenvalues Λ.

Copyright © 2005 John Wiley & Sons, Ltd. Numer. Linear Algebra Appl. 2005; 00:0–0. Prepared using nlaauth.cls

3. LOBPCG Hypre Implementation Strategy

The Hypre eigensolver code is written in MPI-based [2] C and uses the Hypre and LAPACK libraries. It was originally tested with Hypre version 1.6.0 and is being incorporated into Hypre starting with the 1.8.0b release. The user interface or Application Programming Interface (API) to the solver is implemented using Hypre-style function calls. The matrix-vector multiply and the preconditioned solve are done through user-supplied functions. This approach provides significant flexibility, and the implementation illustrates that the LOBPCG algorithm can successfully and efficiently use parallel libraries.

The partitioning of vector data across the processors is determined by user input consisting of an initial array of parallel vectors. The same partitioning that is used for these vectors is then used to set up and run the eigensolver. The partitioning of the matrix A and the preconditioner (also determined by the user) must be compatible with the partitioning of these initial vectors. Preconditioning is implemented through calls to the Hypre preconditioned conjugate gradient method (PCG).
Specifically, the action x = T*b of the preconditioner T on a given vector b is performed by calling one of the Hypre iterative linear solvers for the linear algebraic system Ax = b. Thus, we do not attempt to use a shift-and-invert strategy, but instead simply take T to be a preconditioner for A. Since it is assumed in LOBPCG that the matrix A and the preconditioner T are both symmetric positive definite, we only considered the Hypre PCG iterative method for inner iterations. Specifically, LOBPCG has been tested with Hypre PCG solvers with the following preconditioning:

- AMG-PCG: algebraic multigrid;
- DS-PCG: diagonal scaling;
- ParaSails-PCG: an approximate inverse M of A is generated by attempting to minimize ||I - AM||_F;
- Schwarz-PCG: additive Schwarz;
- Euclid-PCG: incomplete LU.

4. Hypre LOBPCG Software Implementation Details

Hypre supports four conceptual interfaces: Struct, SStruct, FEM, and IJ. At present, LOBPCG is only implemented using the IJ interface. This algebraic interface supports applications with general sparse matrices. A block diagram of the high-level software modules is given in Figure 1. The test driver ij_es.c is a modified version of the Hypre driver ij.c and retains all the Hypre functionality and capabilities of ij.c. We anticipate that the drivers ij_es.c and ij.c will be merged in future Hypre releases. The ij_es.c driver is used to run the eigensolver software, with control through command-line parameters. Various test matrices can be generated internally, consisting of different types of Laplacians, or a Matrix Market format file may be read as input. The user can also choose the preconditioner. An initial set of vectors can be generated randomly, as an identity matrix of vectors, or a Matrix Market file may be input, with its columns used as the initial vectors.
Various other input parameters can be specified, such as the convergence tolerance, the maximum number of iterations, and the maximum number of inner iterations for the Hypre solver.