
clMAGMA: High Performance Dense Linear Algebra with OpenCL*

Chongxiao Cao (Innovative Computing Laboratory, University of Tennessee, Knoxville, TN), Jack Dongarra† (Innovative Computing Laboratory, University of Tennessee, Knoxville, TN), Peng Du‡ (Amazon.com, Seattle, WA), Mark Gates (Innovative Computing Laboratory, University of Tennessee, Knoxville, TN), Piotr Luszczek (Innovative Computing Laboratory, University of Tennessee, Knoxville, TN), Stanimire Tomov (Innovative Computing Laboratory, University of Tennessee, Knoxville, TN)

* This research was sponsored by the National Science Foundation through the Keeneland: National Institute for Experimental Computing grant (award #0910735), the Department of Energy, and AMD.
† Also affiliated with the Oak Ridge National Laboratory, TN, USA, and the University of Manchester, UK.
‡ Research completed while at the University of Tennessee, Knoxville.

ABSTRACT
This paper presents the design and implementation of several fundamental dense linear algebra (DLA) algorithms in OpenCL, in particular linear system solvers and eigenvalue problem solvers. Further, we give an overview of the clMAGMA library, an open source, high performance OpenCL library that incorporates the developments presented and, in general, provides to heterogeneous architectures the DLA functionality of the popular LAPACK library. The LAPACK compliance and the use of OpenCL simplify the use of clMAGMA in applications, while providing them with portably performant DLA. High performance is obtained through the use of high-performance OpenCL BLAS, hardware- and OpenCL-specific tuning, and a hybridization methodology in which we split each algorithm into computational tasks of various granularities. Execution of those tasks is properly scheduled over the heterogeneous hardware components by minimizing data movement and by mapping algorithmic requirements to the architectural strengths of the various components.

Categories and Subject Descriptors
G.4 [Mathematical Software]: Algorithm design and analysis, Efficiency, Parallel implementations, Portability; G.1.3 [Numerical Analysis]: Numerical Linear Algebra – linear systems, matrix inversion, eigenvalues and eigenvectors

1. INTRODUCTION
Solving linear systems of equations and eigenvalue problems is fundamental to scientific computing. The popular LAPACK library [5], and in particular its vendor-optimized implementations such as Intel's MKL [13] or AMD's ACML [3], have been the libraries of choice to provide these solvers for dense matrices on shared-memory systems. This paper considers a redesign of the LAPACK algorithms and their OpenCL implementation to add efficient support for heterogeneous systems of multicore processors with GPU accelerators and coprocessors. This is not the first time that DLA libraries have needed a redesign to be efficient on new architectures – notable examples being the move from LINPACK [10] to LAPACK [5] in the 1980s to make algorithms cache friendly, ScaLAPACK [8] in the 1990s to support distributed-memory systems, and now the PLASMA and MAGMA libraries [1], targeting efficiency on multicore and heterogeneous architectures, respectively.

The development of new high-performance numerical libraries is complex, as it must account for the extreme level of parallelism, the heterogeneity, and the wide variety of accelerators and coprocessors available in current architectures. Challenges range from new algorithmic designs to choices of programming models, languages, and frameworks that ease development, future maintenance, and portability. This paper addresses these issues while presenting our approach and the algorithmic designs used in the development of the clMAGMA [9] library.
To provide uniform portability across a variety of GPU accelerators and coprocessors (e.g., the Intel Xeon Phi), clMAGMA uses OpenCL [14]. OpenCL is an open standard for offloading computation to accelerators, coprocessors, and multicore/manycore processors; it is maintained by the Khronos Group with the backing of major hardware and software vendors, and it offers portability across hardware and operating systems. Although the use of OpenCL provides programming portability, cross-device performance portability is not guaranteed; we specifically address this in Section 2.

To deal with the extreme level of parallelism and heterogeneity in current architectures, clMAGMA uses a hybridization methodology, described in Section 3, where we split the algorithms of interest into computational tasks of various granularities and properly schedule the execution of those tasks over the heterogeneous hardware. Thus, we use a Directed Acyclic Graph (DAG) approach to parallelism and scheduling that has been developed and used successfully for dense linear algebra libraries such as PLASMA and MAGMA [1], as well as in general task-based approaches to parallelism, such as the StarPU [6] and SMPSs [7] runtime systems.

Besides the general cross-device considerations addressed in Section 2, obtaining high performance in OpenCL depends on a combination of algorithm- and hardware-specific optimizations, discussed in Section 4. The implication for software that must maintain its performance portability across hardware is the need to build into it algorithmic variations that are tunable, e.g., at installation time. This is the basis of autotuning, an example of these advanced optimization techniques. Even though good performance should not be expected from blindly running algorithms on a new platform, autotuning heuristics can help to improve performance on a given platform.

A performance study on AMD hardware is presented in Section 5. Besides verifying our approaches and confirming the appeal of OpenCL and accelerators for high-performance DLA, the results open up a number of future work opportunities, discussed in our conclusions.

2. CROSS-DEVICE CONSIDERATIONS
A recommended approach to developing a high-performance and easy-to-maintain DLA library is to express the algorithms of interest in terms of the BLAS standard. Performance portability is then obtained through the use of architecture-specific, highly tuned BLAS implementations (e.g., MKL from Intel or ACML from AMD). LAPACK and ScaLAPACK have demonstrated this over the years, and now we see it in the new MAGMA and PLASMA libraries. The clMAGMA library takes the same approach, and therefore its performance portability relies on the availability of portable OpenCL BLAS, discussed in Section 2.1. Specifics related to OpenCL and its implementations are also important for obtaining high performance and must be addressed while designing and tuning OpenCL algorithms. Well-designed microbenchmarks, shown in Section 2.2, can be used to obtain the OpenCL specifics that are key to achieving high performance.
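As a concrete illustration of expressing a DLA algorithm in terms of BLAS, the sketch below implements a right-looking blocked Cholesky factorization (the algorithm underlying LAPACK's dpotrf) using the standard CBLAS and LAPACKE interfaces: a small diagonal block is factored as the panel, while the bulk of the floating-point work goes into the dtrsm and dsyrk updates, which a tuned BLAS executes efficiently. This is a generic example rather than clMAGMA code; the function name blocked_cholesky, the block size nb = 128, and the test matrix built in main are choices made only for this illustration.

#include <stdio.h>
#include <stdlib.h>
#include <cblas.h>
#include <lapacke.h>

/* Right-looking blocked Cholesky (lower triangular), column-major storage.
   A is an n x n symmetric positive definite matrix with leading dimension lda.
   Returns 0 on success, or the LAPACK info code reported by a failing panel. */
static int blocked_cholesky(int n, double *A, int lda, int nb)
{
    for (int j = 0; j < n; j += nb) {
        int jb = (n - j < nb) ? (n - j) : nb;

        /* Panel: factor the jb x jb diagonal block (small, LAPACK call). */
        int info = LAPACKE_dpotrf(LAPACK_COL_MAJOR, 'L', jb, &A[j + j*lda], lda);
        if (info != 0) return info;

        if (j + jb < n) {
            int m = n - j - jb;

            /* Panel solve: L21 = A21 * L11^{-T}  (BLAS dtrsm). */
            cblas_dtrsm(CblasColMajor, CblasRight, CblasLower,
                        CblasTrans, CblasNonUnit, m, jb,
                        1.0, &A[j + j*lda], lda, &A[(j+jb) + j*lda], lda);

            /* Trailing update: A22 -= L21 * L21^T  (BLAS dsyrk, most flops). */
            cblas_dsyrk(CblasColMajor, CblasLower, CblasNoTrans, m, jb,
                        -1.0, &A[(j+jb) + j*lda], lda,
                         1.0, &A[(j+jb) + (j+jb)*lda], lda);
        }
    }
    return 0;
}

int main(void)
{
    int n = 1000, lda = n, nb = 128;
    double *A = malloc((size_t)lda * n * sizeof(double));

    /* Simple symmetric positive definite test matrix:
       small random off-diagonal entries, strongly dominant diagonal. */
    for (int j = 0; j < n; ++j) {
        for (int i = j; i < n; ++i) {
            double v = 0.5 * rand() / RAND_MAX;
            A[i + j*lda] = v;   /* lower triangle (the part referenced above) */
            A[j + i*lda] = v;   /* mirror for clarity */
        }
        A[j + j*lda] = n;
    }

    int info = blocked_cholesky(n, A, lda, nb);
    printf("blocked Cholesky %s (info = %d)\n",
           info == 0 ? "succeeded" : "failed", info);
    free(A);
    return 0;
}

The hybrid algorithms in clMAGMA keep this overall structure, with the small panel factorizations assigned to the CPU and the large BLAS updates dispatched to the accelerator through OpenCL BLAS, which is why a portable, well-tuned OpenCL BLAS is the key building block discussed next.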
2.1 Portable OpenCL BLAS
The Automatically Tuned Linear Algebra Software (ATLAS) library [19] is a BLAS implementation for CPUs. ATLAS …

Currently, the most complete OpenCL BLAS implementation is AMD's clAmdBlas, provided through AMD's Accelerated Parallel Processing Math Libraries (APPML) [2]. It can be used on architectures other than AMD's, but its tuning, and therefore its highest efficiency, is on AMD hardware. The potential of OpenCL to express BLAS algorithms while obtaining high performance, compared with languages offering lower-level access to the hardware, is evident in clAmdBlas. Other implementations, e.g., from Nakasato et al. [16, 15], confirm this by obtaining impressively fast matrix-matrix multiplication (GEMM). In particular, the highest performance that we are aware of has been demonstrated by Matsumoto et al. [15]: their OpenCL DGEMM reaches up to 848 Gflop/s and their SGEMM up to 2,646 Gflop/s, which is 90% and 70% of the double and single precision peak, respectively, of AMD's Tahiti GPU (Radeon HD 7970).

In previous work, we evaluated OpenCL as a programming tool for performance-portable BLAS [11]. Triangular solvers (TRSM) and GEMMs were developed in OpenCL, tuned for a specific device, and compared. The conclusion was that the OpenCL environment setup overhead is large and should be minimized, e.g., by preprocessing or by localizing it in library initialization routines. More importantly, the performance results presented confirmed the conclusion above: OpenCL is expressive enough for developing high-performance BLAS, so long as architectural specifics are taken into account in the algorithm design.

Autotuning mechanisms are already provided in clAmdBlas through a tuning tool that the user can run to produce OpenCL BLAS optimized for the architecture of interest. Thus, since performance portability of the OpenCL BLAS can be obtained, organizing higher-level libraries like clMAGMA in terms of OpenCL BLAS can ensure their performance portability as well.

2.2 Microbenchmarks
We developed a number of microbenchmarks to help us gain a better understanding of OpenCL and to guide our algorithm design and tuning. We describe two benchmarks that can be key for performance: kernel launch overhead and CPU-GPU data transfer. To add context to the measurements reported, we include comparisons with corresponding CUDA measurements.

2.2.1 Kernel launch overhead
The average time to asynchronously invoke an OpenCL 1.2 AMD-APP (1016.4) kernel on an AMD Tahiti GPU (Radeon HD 7900 Series) is 1.0–1.5 µs. This was measured by asynchronously launching an empty kernel a large number of times and synchronizing once at the end. The overhead increases to 120 µs when synchronizing …
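To make the measurement procedure concrete, the following self-contained host program follows the approach described above: it enqueues an empty kernel many times asynchronously and synchronizes only once at the end to obtain the average asynchronous launch cost, and then, as one plausible way to measure the synchronized case, repeats the loop with a clFinish after every launch. This is a sketch of the methodology rather than the benchmark used for the numbers above; the iteration counts, the work size, the use of the first available GPU device, and the omission of error checking are simplifications for the example.

#include <stdio.h>
#include <time.h>
#include <CL/cl.h>

/* Empty kernel: any time attributed to it is pure launch overhead. */
static const char *source = "__kernel void empty(void) { }";

static double elapsed_sec(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) * 1e-9;
}

int main(void)
{
    enum { ITERS = 100000 };
    size_t global = 64;                 /* one small work-group is enough */
    struct timespec t0, t1;

    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    cl_program program = clCreateProgramWithSource(ctx, 1, &source, NULL, NULL);
    clBuildProgram(program, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(program, "empty", NULL);

    /* Warm up: the first launch includes one-time costs. */
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
    clFinish(queue);

    /* Asynchronous launches, a single synchronization at the end. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++)
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
    clFinish(queue);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("asynchronous launch: %.2f us/kernel\n",
           1e6 * elapsed_sec(t0, t1) / ITERS);

    /* Synchronizing after every launch. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS / 100; i++) {
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
        clFinish(queue);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("synchronized launch: %.2f us/kernel\n",
           1e6 * elapsed_sec(t0, t1) / (ITERS / 100));

    clReleaseKernel(kernel);
    clReleaseProgram(program);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}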