On the Performance and Energy Efficiency of Sparse Linear Algebra on GPUs
Original Article · The International Journal of High Performance Computing Applications 1–16 · © The Author(s) 2016 · DOI: 10.1177/1094342016672081

Hartwig Anzt¹, Stanimire Tomov¹ and Jack Dongarra¹,²,³

¹University of Tennessee, Knoxville, USA
²Oak Ridge National Laboratory, USA
³University of Manchester, UK

Corresponding author: Hartwig Anzt, University of Tennessee, 1122 Volunteer Boulevard, Suite 203, Claxton, Knoxville, TN 37996, USA. Email: [email protected]

Abstract

In this paper we unveil some performance and energy efficiency frontiers for sparse computations on GPU-based supercomputers. We compare the resource efficiency of different sparse matrix–vector products (SpMV) taken from libraries such as cuSPARSE and MAGMA for GPUs and Intel's MKL for multicore CPUs, and develop a GPU sparse matrix–matrix product (SpMM) implementation that handles the simultaneous multiplication of a sparse matrix with a set of vectors in block-wise fashion. While a typical sparse computation such as the SpMV reaches only a fraction of the peak of current GPUs, we show that the SpMM succeeds in exceeding the memory-bound limitations of the SpMV. We integrate this kernel into a GPU-accelerated Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) eigensolver. LOBPCG is chosen as a benchmark algorithm for this study as it combines an interesting mix of sparse and dense linear algebra operations that is typical for complex simulation applications, and allows for hardware-aware optimizations. In a detailed analysis we compare the performance and energy efficiency against a multi-threaded CPU counterpart. The reported performance and energy efficiency results are indicative of sparse computations on supercomputers.

Keywords: sparse eigensolver, LOBPCG, GPU supercomputer, energy efficiency, blocked sparse matrix–vector product

1 Introduction

Building an Exascale machine using the technology employed in today's fastest supercomputers would result in a power dissipation of about half a gigawatt (Dongarra et al., 2011). Providing suitable infrastructure poses a significant challenge, and with a rough cost of US$1 million per megawatt-year, the running cost of the facility would quickly exceed the acquisition cost. This is the motivation for replacing homogeneous CPU clusters with architectures that generate most of their performance with low-power processors or accelerators. Indeed, the success of using GPU accelerators to reduce energy consumption is evident in the fact that 17 of the 18 greenest systems are accelerated by GPUs, according to the November 2014 Green500 list,¹ which ranks supercomputers by their performance/Watt ratio on the HPL benchmark.² A drawback of this ranking is that HPL reports GFlop/s numbers that are usually orders of magnitude above the performance achieved in real-world applications; it is therefore insufficient for understanding how energy efficient the accelerated systems are when running scientific applications (Dongarra and Heroux, 2013).

In this paper we unveil some performance and energy efficiency frontiers for sparse computations on GPU-based supercomputers. We focus in particular on GPU kernel optimizations for the memory-bound sparse matrix–vector product (SpMV), which serves as a building block of many sparse solvers. Standard Krylov methods, for example, are typically bounded by the SpMV performance, which is only a few per cent of the peak of current GPUs, as quantified in Figure 1. One approach to improving Krylov methods is to break up the data dependency between the SpMV and the dot products, as in recent work on s-step and communication-avoiding Krylov methods (Hoemmen, 2010; Yamazaki et al., 2014). The improvement in these methods comes from grouping SpMVs into a (so-called) Matrix Powers Kernel (MPK), which allows reuse of the sparse matrix and vector reads during the computation of a Krylov subspace.

Figure 1. NVIDIA K40 GPU computing efficiency on compute-intensive (left: dense LU in double precision) versus memory-bound computation (right: SpMV in double precision for various data formats) on state-of-the-art hardware and software libraries. CPU runs use MKL (denoted MKL-CSR) with the numactl --interleave=all policy (see also MKL's benchmarks (Intel Corporation, 2014)).
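To make the memory-bound behavior quantified in Figure 1 concrete, the following is a minimal CUDA sketch of a CSR-based SpMV that assigns one thread to each matrix row. This is an illustration under simplifying assumptions, not the cuSPARSE or MAGMA kernel benchmarked above: production kernels add warp-level row processing, caching of the input vector, and alternative storage formats, and all names here are ours.

```cuda
#include <cuda_runtime.h>

// Minimal CSR SpMV sketch: y = A * x, one thread per matrix row.
// Illustrative only; library kernels (cuSPARSE, MAGMA) layer warp-level
// parallelism and format-specific optimizations on this basic pattern.
__global__ void spmv_csr(int num_rows,
                         const int    *row_ptr,  // row pointers, length num_rows+1
                         const int    *col_idx,  // column indices, length nnz
                         const double *vals,     // nonzero values, length nnz
                         const double *x,        // input vector
                         double       *y)        // output vector
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < num_rows) {
        double dot = 0.0;
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j) {
            // At least 12 bytes of matrix data (8-byte value + 4-byte index)
            // are read per 2 flops, so the kernel is limited by memory
            // bandwidth rather than by the GPU's arithmetic peak.
            dot += vals[j] * x[col_idx[j]];
        }
        y[row] = dot;
    }
}
```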
We consider in this paper a different approach that is based on building a block-Krylov subspace. Higher computational intensity is achieved by multiplying several vectors simultaneously in a sparse matrix–matrix product (SpMM) kernel. Communication is reduced in this case by accessing the matrix entries only once and reusing them for the multiplication with all vectors, which results in significant performance benefits. As the SpMM's communication volume is less than the MPK's, the SpMM performance can be useful in modeling/predicting the upper performance bounds of s-step and communication-avoiding Krylov methods that rely on MPKs.
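The reuse argument can be read directly off a kernel sketch. The following minimal CUDA version of a CSR SpMM assigns one thread to each row and processes a compile-time block of NV vectors stored in column-major order: every matrix value and column index is loaded once and then reused for all NV vectors, so the dominant matrix traffic is amortized over NV times as many flops as in the SpMV. This is a simplified model of the blocked product discussed in this paper, not the actual MAGMA implementation; the register-blocking strategy and all names are our assumptions.

```cuda
#include <cuda_runtime.h>

// Minimal CSR SpMM sketch: Y = A * X for a block of NV vectors, one thread
// per matrix row. X and Y are num_rows x NV, column-major. NV is a
// compile-time constant so the per-thread accumulators can live in registers.
template <int NV>
__global__ void spmm_csr(int num_rows,
                         const int    *row_ptr,
                         const int    *col_idx,
                         const double *vals,
                         const double *X,
                         double       *Y)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < num_rows) {
        double dot[NV] = {0.0};  // one partial result per vector
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j) {
            double a   = vals[j];     // matrix entry read once ...
            int    col = col_idx[j];
            for (int k = 0; k < NV; ++k)
                dot[k] += a * X[col + (size_t)k * num_rows];  // ... reused NV times
        }
        for (int k = 0; k < NV; ++k)
            Y[row + (size_t)k * num_rows] = dot[k];
    }
}

// Example launch for a block of 8 vectors, covering all rows:
//   int threads = 256;
//   int blocks  = (num_rows + threads - 1) / threads;
//   spmm_csr<8><<<blocks, threads>>>(num_rows, row_ptr, col_idx, vals, d_X, d_Y);
```

For NV = 1 the kernel reduces to the SpMV above; the benefit grows with the number of vectors until register pressure and the traffic for reading and writing the vector blocks themselves become the limiting factor.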
Furthermore, SpMMs are needed in numerical algorithms such as the Discontinuous Galerkin method, the high-order finite element method (FEM), cases where vector quantities are simulated, or specially designed block solvers such as the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method used for finding a set of the smallest/largest eigenstates of an SPD matrix (Knyazev, 2001). We chose the LOBPCG method as a benchmark algorithm as it provides an interesting mix of sparse and dense linear algebra operations that is typical for complex simulation applications. It also serves as a backbone for many scientific simulations, e.g. in quantum mechanics, where eigenstates and molecular orbitals are defined by eigenvectors, or in principal component analysis (PCA). Nonlinear Conjugate Gradient (CG) methods have been successfully used in predicting the electronic properties of large nanostructures (Tomov et al., 2006; Vömel et al., 2008), where the LOBPCG method in particular has shown higher performance and improved convergence (a reduced number of SpMV products compared with non-blocked CG methods). Applying this algorithm efficiently to multi-billion-size problems served as the backbone of two Gordon Bell Prize finalists who ran many-body simulations on the Japanese Earth Simulator (Yamada et al., 2005, 2006).

To put the GPU results in the context of traditional processors, we include results for multicore CPUs. We compare the GPU SpMV and SpMM implementations with MKL's counterparts, and the GPU-accelerated LOBPCG solver against the multithreaded CPU implementation available in BLOPEX,³ for which the PETSc and Hypre libraries provide an interface (Knyazev et al., 2007). Our GPU implementations are available through the open-source MAGMA library starting from MAGMA 1.5 (Innovative Computing Laboratory, UTK, 2014b). The target platforms are K20 and K40 GPUs, and the Piz Daint supercomputer located at the Swiss National Supercomputing Centre (CSCS).⁴

2 Related work

2.1 Energy efficiency

An overview of energy-efficient scientific computing on extreme-scale systems is provided in Donofrio et al. (2009), where the authors address both hardware and algorithmic efforts to reduce the energy cost of scientific applications. Jiménez et al. (2010) analyze the trend towards energy-efficient microprocessors. Kestor et al. (2013) break down the energy cost of data movement in a scientific application. Targeting different mainstream architectures, Aliaga et al. (2015) present an energy comparison study for the CG solver. Concerning more complex applications, Charles et al. (2015) present an efficiency comparison when running the COSMO-ART simulation model (Knote, 2011) on different CPU architectures. For an agroforestry application, Padoin et al. (2013) investigate performance and power consumption on a hybrid CPU + GPU architecture, revealing that a changing workload can drastically improve energy efficiency. Krueger et al. (2011) compare the energy efficiency of CPU and GPU implementations of a seismic modeling application against a general-purpose manycore chip design called "Green Wave" that is optimized for high-order wave equations. Based on an analysis of Intel Sandy Bridge processors, Wittmann et al. (2013) extrapolate the energy efficiency of a lattice-Boltzmann computational fluid dynamics (CFD) simulation to a petascale-class machine.

2.2 Blocked SpMV

As there exists a significant need for SpMM products, NVIDIA's cuSPARSE library provides this routine for the CSR format.⁵ Aside from a straightforward implementation assuming the set of vectors is stored in column-major order, the library also contains an optimized version taking the block of vectors as a row-major matrix that can be used in combination with a preprocessing […]

[…] A GPU implementation of LOBPCG is also available in the ABINIT material science package (Gonze et al., 2002). It benefits from utilizing the generic linear algebra routines available in the CUDA (NVIDIA, 2009) and MAGMA (Innovative Computing Laboratory, UTK, 2014a) libraries. More recently, NVIDIA announced that LOBPCG will be included in the GPU-accelerated Algebraic Multigrid Accelerator library AmgX.⁶ This work extends the results presented by Anzt et al. (2015) by providing a more comprehensive analysis of energy and performance, introducing models for estimating hardware-imposed optimization bounds, and enhancing the LOBPCG solver with […]