Multithreaded Dense Linear Algebra on Asymmetric Multi-Core Processors

Total Pages: 16

File Type: PDF, Size: 1020 KB

UNIVERSITAT JAUME I DE CASTELLÓ
Escola de Doctorat de la Universitat Jaume I
Programa de Doctorat en Informàtica (Doctoral Programme in Computer Science)

Multithreaded Dense Linear Algebra on Asymmetric Multi-core Processors
Ph.D. Thesis, Castelló de la Plana, November 2017
Presented by: Sandra Catalán Pallarés
Supervised by: Enrique S. Quintana Ortí and Rafael Rodríguez Sánchez

Dissertation presented by Sandra Catalán Pallarés to qualify for the degree of doctor from Universitat Jaume I. Signed in Castelló de la Plana, December 2017, by Sandra Catalán Pallarés, Enrique S. Quintana Ortí and Rafael Rodríguez Sánchez.

Institutional acknowledgements: this thesis has been funded by project FP7 318793 "EXA2GREEN" of the European Commission, by a predoctoral grant from Universitat Jaume I, and by an FPU predoctoral grant from the Ministry of Science and Education.

Contents

1 Introduction
  1.1 Motivation
  1.2 Contributions
  1.3 Organization
2 Background
  2.1 Basic Linear Algebra Subprograms (BLAS)
    2.1.1 Levels of BLAS
    2.1.2 The BLAS library election
  2.2 Linear Algebra PACKage (LAPACK)
  2.3 Target architectures
    2.3.1 Power consumption
    2.3.2 Asymmetric multi-core processors
      2.3.2.1 ARM big.LITTLE
      2.3.2.2 Odroid
      2.3.2.3 Juno
    2.3.3 Intel
  2.4 Measuring power consumption
    2.4.1 PMLib and big.LITTLE platforms
  2.5 Summary
3 Basic Linear Algebra Subprograms (BLAS)
  3.1 BLAS-like Library Instantiation Software (BLIS)
  3.2 BLIS-3
    3.2.1 General matrix-matrix multiplication (gemm)
    3.2.2 Generalization to BLAS-3
    3.2.3 Triangular system solve (trsm)
  3.3 BLIS-3 on ARM big.LITTLE
  3.4 Optimizing symmetric BLIS gemm on big.LITTLE
    3.4.1 Cache optimization for the big and LITTLE cores
    3.4.2 Multi-threaded evaluation of BLIS gemm on the big and LITTLE clusters
    3.4.3 Asymmetric BLIS gemm on big.LITTLE
      3.4.3.1 Static-asymmetric scheduling (sas)
      3.4.3.2 Cache-aware static-asymmetric scheduling (ca-sas)
      3.4.3.3 Cache-aware dynamic-asymmetric scheduling (ca-das)
  3.5 Asymmetric BLIS-3 evaluation
    3.5.1 Square operands in gemm
    3.5.2 gemm with rectangular operands
    3.5.3 Other BLIS-3 kernels with rectangular operands
    3.5.4 BLIS-3 in 64-bit AMPs
  3.6 BLIS-2
    3.6.1 General matrix-vector multiplication (gemv)
    3.6.2 Symmetric matrix-vector multiplication (symv)
  3.7 BLIS-2 on big.LITTLE
    3.7.1 Level-2 BLAS micro-kernels for ARMv7 architectures
    3.7.2 Asymmetry-aware symv and gemv for ARM big.LITTLE AMPs
    3.7.3 Cache optimization for the big and LITTLE cores
  3.8 Evaluation of asymmetric BLIS-2
    3.8.1 symv and gemv with square operands
    3.8.2 gemv with rectangular operands
  3.9 Summary
4 Factorizations
  4.1 Cholesky factorization
  4.2 LU
  4.3 Look-ahead in the LU factorization
    4.3.1 Basic algorithms and Block-Data Parallelism (BDP)
    4.3.2 Static look-ahead and nested Task-Parallelism + Block-Data Parallelism (TP+BDP)
    4.3.3 Advocating for malleable thread-level linear algebra libraries
      4.3.3.1 Worker sharing (WS): panel factorization less expensive than update
      4.3.3.2 Early termination: panel factorization more expensive than update
      4.3.3.3 Relation to adaptive look-ahead via a runtime
    4.3.4 Experimental evaluation
      4.3.4.1 Optimal block size
      4.3.4.2 Performance comparison of the variants with static look-ahead
      4.3.4.3 Performance comparison with OmpSs
      4.3.4.4 Multi-socket performance comparison with OmpSs
  4.4 Malleable LU on big.LITTLE
    4.4.1 Mapping of threads for the variant with static look-ahead
      4.4.1.1 Thread mapping of gemm
      4.4.1.2 Configuration of trsm
      4.4.1.3 The relevance of the partial pivoting
    4.4.2 Dynamic look-ahead with OmpSs
    4.4.3 Global comparison
  4.5 Summary
5 Reduction to Condensed Forms
  5.1 General structure of the reduction to condensed forms
  5.2 General optimization of the TSOR routines
    5.2.1 Selection of the core configuration
  5.3 Reduction to tridiagonal form (sytrd)
    5.3.1 The impact of the algorithmic block size in sytrd
    5.3.2 Performance of the optimized sytrd
  5.4 Reduction to bidiagonal form (gebrd)
    5.4.1 The impact of the algorithmic block size in gebrd
    5.4.2 Performance of the optimized gebrd
  5.5 Reduction to upper Hessenberg form (gehrd)
    5.5.1 The impact of the algorithmic block size in gehrd
    5.5.2 Performance of the gehrd routine
  5.6 Modeling the performance of the TSOR routines
  5.7 Summary
6 Conclusions
  6.1 Conclusions and main contributions
    6.1.1 BLAS-3 kernels
    6.1.2 BLAS-2 kernels
    6.1.3 TSOR routines on asymmetric multicore processors (AMPs) and models
    6.1.4 LU and Cholesky factorization on big.LITTLE
    6.1.5 Thread-level malleability
  6.2 Related publications
    6.2.1 Directly related publications
      6.2.1.1 Chapter 3. Basic Linear Algebra Subprograms (BLAS)
      6.2.1.2 Chapter 4. Factorizations
      6.2.1.3 Chapter 5. Reductions
    6.2.2 Indirectly related publications
    6.2.3 Other publications
  6.3 Open research lines
7 Conclusiones (conclusions, in Spanish)
  7.1 Conclusions and main contributions
    7.1.1 BLAS-3 kernels
    7.1.2 BLAS-2 kernels
    7.1.3 TSOR routines on AMPs and models
    7.1.4 LU and Cholesky factorizations on big.LITTLE
    7.1.5 Thread-level malleability
  7.2 Related publications
    7.2.1 Directly related publications
      7.2.1.1 Chapter 3. Basic Linear Algebra Subprograms (BLAS)
      7.2.1.2 Chapter 4. Factorizations
      7.2.1.3 Chapter 5. Reductions
    7.2.2 Indirectly related publications
    7.2.3 Other publications
  7.3 Open research lines

List of Figures

2.1 Exynos 5422 block diagram
2.2 ARM Juno SoC block diagram
2.3 Intel Xeon block diagram
2.4 PMLib diagram
3.1 High performance implementation of gemm in BLIS
3.2 Data movement involved in the BLIS implementation of gemm
3.3 Standard partitioning of the iteration space and assignment to threads/cores for a multi-threaded BLIS implementation
3.4 Performance and energy efficiency of the reference BLIS gemm
3.5 BLIS optimal cache configuration parameters mc and kc for the ARM Cortex-A15 and Cortex-A7 in the Samsung Exynos 5422 SoC
3.6 Performance and energy efficiency of the BLIS gemm using exclusively one type of core, for a varying number of threads
3.7 Power consumption of the BLIS gemm for a matrix size of 4,096 using exclusively one type of core, for a varying number
Recommended publications
  • ARM HPC Ecosystem
ARM HPC Ecosystem. Darren Cepulis, HPC Segment Manager, ARM Business Segment Group. HPC Forum, Santa Fe, NM, 19th April 2017. © ARM 2017. ARM collaboration for Exascale programs: in the United States, ARM is currently a participant in two Department of Energy funded pre-Exascale projects, Data Movement Dominates and Fast Forward 2. In Japan, Fujitsu and RIKEN announced that the Post-K system targeted at Exascale will be based on ARMv8 with the new Scalable Vector Extensions. In the European Union, through FP7 and Horizon 2020, ARM has been involved in several funded pre-Exascale projects, including the Mont Blanc program, which deployed one of the first ARM prototype HPC systems. In China, James Lin, vice director for the Center of HPC at Shanghai Jiao Tong University, claims China will build three pre-Exascale prototypes to select the architecture for their Exascale system; the three prototypes are based on AMD, Sunway TaihuLight, and ARMv8. ARM HPC deployments are starting in 2H2017, with two recent announcements about ARM in HPC in Europe, plus Japan Exascale slides from Fujitsu at ISC'16. Foundational software ecosystem for HPC (a mix of commercial and open-source components): Linux OSs (RedHat, SUSE, CentOS, Ubuntu, ...); compilers (ARM, GNU, LLVM, ...); libraries (ARM, OpenBLAS, BLIS, ATLAS, FFTW, ...); parallelism (OpenMP, OpenMPI, MVAPICH2, ...); debugging (Allinea, RW Totalview, GDB, ...); analysis (ARM, Allinea, HPCToolkit, TAU, ...); job schedulers (LSF, PBS Pro, SLURM, ...); cluster management (Bright, CMU, Warewulf, ...). OpenHPC, now on ARM, defines a predictable baseline: it is a community effort to provide a common, verified set of open source packages for HPC deployments. ARM's participation by functional area (supported packages/components): Base OS: RHEL/CentOS 7.1, SLES 12; Administrative tools: Conman, Ganglia, Lmod, LosF, ORCM, Nagios, pdsh, prun; Provisioning: Warewulf; Resource Mgmt.
  • BLIS: A Framework for Rapidly Instantiating BLAS Functionality
BLIS: A Framework for Rapidly Instantiating BLAS Functionality. FIELD G. VAN ZEE and ROBERT A. VAN DE GEIJN, The University of Texas at Austin. The BLAS-like Library Instantiation Software (BLIS) framework is a new infrastructure for rapidly instantiating Basic Linear Algebra Subprograms (BLAS) functionality. Its fundamental innovation is that virtually all computation within level-2 (matrix-vector) and level-3 (matrix-matrix) BLAS operations can be expressed and optimized in terms of very simple kernels. While others have had similar insights, BLIS reduces the necessary kernels to what we believe is the simplest set that still supports the high performance that the computational science community demands. Higher-level framework code is generalized and implemented in ISO C99 so that it can be reused and/or re-parameterized for different operations (and different architectures) with little to no modification. Inserting high-performance kernels into the framework facilitates the immediate optimization of any BLAS-like operations which are cast in terms of these kernels, and thus the framework acts as a productivity multiplier. Users of BLAS-dependent applications are given a choice of using the traditional Fortran-77 BLAS interface, a generalized C interface, or any other higher-level interface that builds upon this latter API. Preliminary performance of level-2 and level-3 operations is observed to be competitive with two mature open source libraries (OpenBLAS and ATLAS) as well as an established commercial product (Intel MKL). Categories and Subject Descriptors: G.4 [Mathematical Software]: Efficiency. General Terms: Algorithms, Performance. Additional Key Words and Phrases: linear algebra, libraries, high-performance, matrix, BLAS. ACM Reference Format: ACM Trans.
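To make the interface choice mentioned above concrete, the sketch below calls dgemm through the traditional Fortran-77 BLAS interface that BLIS (like any BLAS implementation) can export. It is a generic BLAS call rather than BLIS-specific code, and it assumes the usual trailing-underscore Fortran name mangling and column-major storage.

```c
/* A minimal sketch of calling gemm through the Fortran-77 BLAS interface.
 * Column-major storage; all scalars are passed by reference. */
#include <stdio.h>

/* Fortran-77 BLAS prototype (trailing underscore is the common convention). */
extern void dgemm_(const char *transa, const char *transb,
                   const int *m, const int *n, const int *k,
                   const double *alpha, const double *a, const int *lda,
                   const double *b, const int *ldb,
                   const double *beta, double *c, const int *ldc);

int main(void) {
    int m = 2, n = 2, k = 2;
    double alpha = 1.0, beta = 0.0;
    double A[] = {1, 3, 2, 4};   /* 2x2, column-major: [[1,2],[3,4]] */
    double B[] = {5, 7, 6, 8};   /* [[5,6],[7,8]] */
    double C[4] = {0};

    /* C := alpha * A * B + beta * C */
    dgemm_("N", "N", &m, &n, &k, &alpha, A, &m, B, &k, &beta, C, &m);

    printf("C = [%g %g; %g %g]\n", C[0], C[2], C[1], C[3]);  /* 19 22; 43 50 */
    return 0;
}
```

Linking the same program against BLIS, OpenBLAS, ATLAS or MKL only changes the library on the link line, which is what makes performance comparisons like the one reported in the abstract straightforward.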
  • The BLAS API of BLASFEO: Optimizing Performance for Small Matrices
The BLAS API of BLASFEO: optimizing performance for small matrices. Gianluca Frison, Tommaso Sartor, Andrea Zanelli, Moritz Diehl. University of Freiburg, Department of Microsystems Engineering (IMTEK) and Department of Mathematics; email: [email protected]. February 5, 2020. This research was supported by the German Federal Ministry for Economic Affairs and Energy (BMWi) via eco4wind (0324125B) and DyConPV (0324166B), and by DFG via Research Unit FOR 2401. Abstract: BLASFEO is a dense linear algebra library providing high-performance implementations of BLAS- and LAPACK-like routines for use in embedded optimization and other applications targeting relatively small matrices. BLASFEO defines an API which uses a packed matrix format as its native format. This format is analogous to the internal memory buffers of optimized BLAS, but it is exposed to the user and it removes the packing cost from the routine call. For matrices fitting in cache, BLASFEO outperforms optimized BLAS implementations, both open-source and proprietary. This paper investigates the addition of a standard BLAS API to the BLASFEO framework, and proposes an implementation switching between two or more algorithms optimized for different matrix sizes. Thanks to the modular assembly framework in BLASFEO, tailored linear algebra kernels with mixed column- and panel-major arguments are easily developed. This BLAS API has lower performance than the BLASFEO API, but it nonetheless outperforms optimized BLAS and especially LAPACK libraries for matrices fitting in cache. Therefore, it can boost a wide range of applications where standard BLAS and LAPACK libraries are employed and the matrix size is moderate. In particular, this paper investigates the benefits in scientific programming languages such as Octave, SciPy and Julia.
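The size-based switching described above can be pictured with a small, purely illustrative sketch. This is not BLASFEO source code: the threshold, the function names and the naive loops are invented so that the example stays self-contained and runnable.

```c
/* Hypothetical sketch of a BLAS-style dgemm entry point that switches
 * between two internal algorithms depending on operand size. Both paths
 * here use the same naive triple loop only to keep the example complete;
 * in a real library they would use different blocking/packing strategies. */
#include <stdio.h>

enum { SMALL_MATRIX_THRESHOLD = 64 };  /* invented cut-over point */

/* Naive column-major C := alpha*A*B + beta*C, standing in for a path
 * tuned for matrices that fit in cache. */
static void dgemm_small_path(int m, int n, int k, double alpha,
                             const double *A, int lda, const double *B,
                             int ldb, double beta, double *C, int ldc) {
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < m; ++i) {
            double acc = 0.0;
            for (int p = 0; p < k; ++p)
                acc += A[i + p * lda] * B[p + j * ldb];
            C[i + j * ldc] = alpha * acc + beta * C[i + j * ldc];
        }
}

/* Stand-in for a path that would first pack operands into a blocked
 * format; in this sketch it simply reuses the naive loop. */
static void dgemm_packed_path(int m, int n, int k, double alpha,
                              const double *A, int lda, const double *B,
                              int ldb, double beta, double *C, int ldc) {
    dgemm_small_path(m, n, k, alpha, A, lda, B, ldb, beta, C, ldc);
}

/* Size-based dispatch: the idea described in the abstract. */
void my_dgemm(int m, int n, int k, double alpha, const double *A, int lda,
              const double *B, int ldb, double beta, double *C, int ldc) {
    if (m <= SMALL_MATRIX_THRESHOLD && n <= SMALL_MATRIX_THRESHOLD &&
        k <= SMALL_MATRIX_THRESHOLD)
        dgemm_small_path(m, n, k, alpha, A, lda, B, ldb, beta, C, ldc);
    else
        dgemm_packed_path(m, n, k, alpha, A, lda, B, ldb, beta, C, ldc);
}

int main(void) {
    double A[] = {1, 3, 2, 4}, B[] = {5, 7, 6, 8}, C[4] = {0};
    my_dgemm(2, 2, 2, 1.0, A, 2, B, 2, 0.0, C, 2);
    printf("C = [%g %g; %g %g]\n", C[0], C[2], C[1], C[3]);  /* 19 22; 43 50 */
    return 0;
}
```

A real implementation would point the two branches at kernels with genuinely different blocking and packing strategies, and would pick the cut-over point from benchmarks on the target core.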
  • Accelerating the “Motifs” in Machine Learning on Modern Processors
Accelerating the "Motifs" in Machine Learning on Modern Processors. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical and Computer Engineering. Jiyuan Zhang, B.S., Electrical Engineering, Harbin Institute of Technology. Carnegie Mellon University, Pittsburgh, PA, December 2020. © Jiyuan Zhang, 2020. All rights reserved. Acknowledgments: First and foremost, I would like to thank my advisor, Franz. Thank you for being a supportive and understanding advisor. You encouraged me to pursue projects I enjoyed and guided me to be the researcher that I am today. When I was stuck on a difficult problem, you were always there to offer help and stand side by side with me to figure out solutions. No matter what the difficulties were, from research to personal life issues, you could always put yourself in our shoes and provide us your unreserved support. Your encouragement and sense of humor have been a great help to guide me through the difficult times. You are the key enabler for me to accomplish the PhD journey, as well as making my PhD journey less of a struggle and more joyful. Next, I would like to thank the members of my thesis committee: Dr. Michael Garland, Dr. Phil Gibbons, and Dr. Tze Meng Low. Thank you for taking the time to be my thesis committee. Your questions and suggestions have greatly helped shape this work, and your insights have helped me understand the limitations that I would have overlooked in the original draft. I am grateful for the advice and suggestions I have received from you to improve this dissertation.
  • Introduction to Arm for Network Stack Developers
Introduction to Arm for network stack developers. Pavel Shamis (Pasha), Principal Research Engineer. MVAPICH User Group 2017, Columbus, OH. © 2017 Arm Limited. Outline: Arm overview; HPC software stack; porting on Arm; evaluation. Arm overview: Arm is the world's leading semiconductor intellectual property supplier. We license to over 350 partners, are present in 95% of smart phones, 80% of digital cameras, 35% of all electronic devices, and a total of 60 billion Arm cores have been shipped since 1990. Our CPU business model: license technology to partners, who use it to create their own system-on-chip (SoC) products. We may license an instruction set architecture (ISA), such as "ARMv8-A", or a specific implementation, such as "Cortex-A72", and our IP extends beyond the CPU. Partners who license an ISA can create their own implementation, as long as it passes the compliance tests. A partnership business model, i.e. a business model that shares success: everyone in the value chain benefits, and it gives long-term sustainability. Arm licenses technology to the semiconductor IP partner for a licence fee; partners develop chips and pay ongoing royalties; OEMs build and sell products to customers. Design once and reuse is fundamental: the cost is spread among many partners, the technology is reused across multiple applications, and a market is created for the ecosystem to target (re-use is also fundamental to the ecosystem). The upfront license fee covers the development cost; ongoing royalties are typically based on a percentage of chip price
  • A Systematic Approach for Obtaining Performance on Matrix-Like Operations
A Systematic Approach for Obtaining Performance on Matrix-Like Operations. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical and Computer Engineering. Richard Michael Veras, B.S., Computer Science & Mathematics, The University of Texas at Austin; M.S., Electrical and Computer Engineering, Carnegie Mellon University. Carnegie Mellon University, Pittsburgh, PA, August 2017. Copyright © 2017 Richard Michael Veras. All rights reserved. Acknowledgments: This work would not have been possible without the encouragement and support of those around me. To the members of my thesis committee, Dr. Richard Vuduc, Dr. Keshav Pingali, Dr. James Hoe and my advisor Dr. Franz Franchetti: thank you for taking the time to be part of this process. Your questions and suggestions helped shape this work. In addition, I am grateful for the advice I have received from the four of you throughout my PhD. Franz, thank you for being a supportive and understanding advisor. Your excitement is contagious and is easily the best part of our meetings, especially during the times when I was stuck on a difficult problem. You encouraged me to pursue projects I enjoyed and guided me to be the researcher that I am today. I am incredibly grateful for your understanding and flexibility when it came to my long distance relationship and working remotely. To the Spiral group past and present and A-Level, you are an amazing group and community to be a part of. You are all wonderful friends, and you made Pittsburgh feel like a home. Thom and Tze Meng, whether we were talking shop or sci-fi over beers, our discussions have really shaped the way I think and approach ideas, and that will stick with me for a long time.
  • Generating Families of Practical Fast Matrix Multiplication Algorithms (arXiv:1611.01120v1 [cs.MS], 3 Nov 2016)
Generating Families of Practical Fast Matrix Multiplication Algorithms. FLAME Working Note #82. Jianyu Huang, Leslie Rice, Devin A. Matthews, Robert A. van de Geijn. Department of Computer Science and Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX 78712. November 3, 2016. Abstract: Matrix multiplication (GEMM) is a core operation to numerous scientific applications. Traditional implementations of Strassen-like fast matrix multiplication (FMM) algorithms often do not perform well except for very large matrix sizes, due to the increased cost of memory movement, which is particularly noticeable for non-square matrices. Such implementations also require considerable workspace and modifications to the standard BLAS interface. We propose a code generator framework to automatically implement a large family of FMM algorithms suitable for multiplications of arbitrary matrix sizes and shapes. By representing an FMM algorithm with a triple of matrices ⟦U, V, W⟧ that capture the linear combinations of submatrices that are formed, we can use the Kronecker product to define a multi-level representation of Strassen-like algorithms. Incorporating the matrix additions that must be performed for Strassen-like algorithms into the inherent packing and micro-kernel operations inside GEMM avoids extra workspace and reduces the cost of memory movement. Adopting the same loop structures as high-performance GEMM implementations allows parallelization of all FMM algorithms with simple but efficient data parallelism without the overhead of task parallelism. We present a simple performance model for general FMM algorithms and compare actual performance of 20+ FMM algorithms to modeled predictions.
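For readers unfamiliar with the ⟦U, V, W⟧ notation, the following is a brief restatement of the representation in the form commonly used in the FMM literature; the index ranges correspond to a 2x2 blocking of each operand. This is background notation, not a result of the paper.

```latex
% An FMM algorithm with R products over a 2x2 blocking of A, B, C is the
% triple [[U, V, W]], where U, V, W are 4 x R coefficient matrices:
\[
  M_r \;=\; \Bigl(\textstyle\sum_{i=0}^{3} u_{ir}\,A_i\Bigr)
            \Bigl(\textstyle\sum_{j=0}^{3} v_{jr}\,B_j\Bigr),
  \qquad
  C_k \;=\; C_k + \textstyle\sum_{r=1}^{R} w_{kr}\,M_r .
\]
% A two-level Strassen-like algorithm is then described by the Kronecker
% products of the coefficient matrices (needs \usepackage{stmaryrd} for
% \llbracket / \rrbracket):
\[
  \llbracket\, U \otimes U,\; V \otimes V,\; W \otimes W \,\rrbracket .
\]
```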
  • Implementing Strassen-Like Fast Matrix Multiplication Algorithms with BLIS
Implementing Strassen-like Fast Matrix Multiplication Algorithms with BLIS. Jianyu Huang, Leslie Rice; joint work with Tyler M. Smith, Greg M. Henry, Robert A. van de Geijn. BLIS Retreat 2016. Strassen, from 30,000 feet: Volker Strassen (born in 1936, aged 80 at the time of the talk) published the original Strassen paper in 1969 (Strassen, Volker. "Gaussian elimination is not optimal." Numerische Mathematik 13, no. 4 (1969): 354-356). For a 2x2 blocking, direct computation takes 8 block multiplications and 8 additions, whereas one-level Strassen takes 7 multiplications and 22 additions:

M0 := (A00 + A11)(B00 + B11)
M1 := (A10 + A11) B00
M2 := A00 (B01 - B11)
M3 := A11 (B10 - B00)
M4 := (A00 + A01) B11
M5 := (A10 - A00)(B00 + B01)
M6 := (A01 - A11)(B10 + B11)

C00 += M0 + M3 - M4 + M6
C01 += M2 + M4
C10 += M1 + M3
C11 += M0 - M1 + M2 + M5

Multi-level Strassen's algorithm (in theory): one-level Strassen gives a 14.3% speedup (8 multiplications become 7); two-level Strassen gives a 30.6% speedup (64 multiplications become 49); d-level Strassen reduces the 8^d block multiplications to 7^d, for an asymptotic speedup of n^3 / n^(log2 7) ≈ n^3 / n^2.807.
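The d-level counts quoted above follow from the standard cost recurrence for Strassen-like algorithms (a textbook derivation, added here for context rather than taken from the slides):

```latex
% One level of Strassen trades 8 half-size block multiplications for 7,
% at the cost of O(n^2) extra block additions, giving the recurrence
\[
  T(n) \;=\; 7\,T\!\left(\tfrac{n}{2}\right) + \Theta(n^2)
        \;=\; \Theta\!\left(n^{\log_2 7}\right)
        \;\approx\; \Theta\!\left(n^{2.807}\right),
\]
% so applying d levels replaces the 8^d block multiplications of the
% classical algorithm with 7^d, at the price of additional additions.
```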
  • BLAS and LAPACK Runtime Switching
BLAS and LAPACK runtime switching. Mo Zhou, [email protected]. April 9, 2019. Abstract: Classical numerical linear algebra libraries, BLAS and LAPACK, play important roles in the scientific computing field. Various demands on these libraries pose a non-trivial challenge for system management and Linux distribution development. By leveraging Debian's update-alternatives mechanism, which enables the user to switch BLAS and LAPACK libraries smoothly and painlessly, the problems could be properly and decently addressed. This project aims at introducing the mechanism into Gentoo's eselect framework to manage BLAS and LAPACK, providing equivalent or better functionality than Debian's update-alternatives.

1 Rationale. BLAS (Basic Linear Algebra Subroutines)[1] and LAPACK (Linear Algebra PACKage)[2] are important mathematical APIs/ABIs/libraries for performance-critical programs that manipulate dense and contiguous numerical arrays. BLAS provides low-level and frequently used linear algebra routines for vector and matrix operations; LAPACK provides higher-level functionality based on a complex call graph over BLAS. Users want to benchmark different implementations efficiently, and painlessly switch the underlying implementation for different programs under different scenarios.

1.1 Solutions of Other Distros. Archlinux only provides Netlib and OpenBLAS, and the two conflict with each other. This won't satisfy all users' demands, because the OpenMP version of OpenBLAS is not a good choice for pthread-based programs. Fedora's solution is to provide every possible version and make them co-installable by assigning different SONAMEs. This leads to confusion and chaos if many alternative libraries co-exist on the system, e.g. libopenblas, libopenblasp, libopenblaso. The BSD (Ports) family forces packages to use a specific BLAS implementation on a per-package basis, which clearly doesn't satisfy the diverse user demand.

1.2 Gentoo's Solution. Currently Gentoo's solution is based on eselect/pkg-config.
  • Implementing Strassen's Algorithm with BLIS
Implementing Strassen's Algorithm with BLIS. FLAME Working Note #79. Jianyu Huang, Tyler M. Smith, Greg M. Henry, Robert A. van de Geijn. April 16, 2016. Abstract: We dispel with "street wisdom" regarding the practical implementation of Strassen's algorithm for matrix-matrix multiplication (DGEMM). Conventional wisdom: it is only practical for very large matrices. Our implementation is practical for small matrices. Conventional wisdom: the matrices being multiplied should be relatively square. Our implementation is practical for rank-k updates, where k is relatively small (a shape of importance for libraries like LAPACK). Conventional wisdom: it inherently requires substantial workspace. Our implementation requires no workspace beyond buffers already incorporated into conventional high-performance DGEMM implementations. Conventional wisdom: a Strassen DGEMM interface must pass in workspace. Our implementation requires no such workspace and can be plug-compatible with the standard DGEMM interface. Conventional wisdom: it is hard to demonstrate speedup on multi-core architectures. Our implementation demonstrates speedup over conventional DGEMM even on an Intel Xeon Phi coprocessor utilizing 240 threads. We show how a distributed memory matrix-matrix multiplication also benefits from these advances. 1 Introduction. Strassen's algorithm (Strassen) [1] for matrix-matrix multiplication (gemm) has fascinated theoreticians and practitioners alike since it was first published, in 1969. That paper demonstrated that multiplication of n x n matrices can be achieved in less than the O(n^3) arithmetic operations required by a conventional formulation. It has led to many variants that improve upon this result [2, 3, 4, 5] as well as practical implementations [6, 7, 8, 9].
  • Profiling and Inspecting Linear Algebra Applications Using FlexiBLAS
Profiling and Inspecting Linear Algebra Applications using FlexiBLAS. Martin Köhler, joint work with Jens Saak, Christian Himpe, and Jörn Papenbroock. March 21, 2018, 89th GAMM Annual Meeting. What is BLAS? "The BLAS (Basic Linear Algebra Subprograms) are routines that provide standard building blocks for performing basic vector and matrix operations. Because the BLAS are efficient, portable, and widely available, they are commonly used in the development of high quality linear algebra software, LAPACK for example." (From: http://www.netlib.org/blas/faq.html, "What and where are the BLAS?") BLAS is used in many software packages: MATLAB, GNU Octave, NumPy, SciPy, Julia, FEM packages like FEniCS and DEAL.II, and libraries like PETSc, Trilinos, Eigen, SuiteSparse, and others. FlexiBLAS: a lightweight wrapper library around
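The wrapper-library idea the slides introduce can be sketched in a few lines of C. The following is purely illustrative and is not FlexiBLAS source code; the environment variable name and the fallback backend path are invented. The wrapper exports the BLAS symbol itself, loads a backend chosen at run time, and forwards each call, which is also the natural place to hang profiling counters.

```c
/* Illustrative sketch of a lightweight BLAS wrapper: export dgemm_ and
 * forward it to a backend library selected at run time. */
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

typedef void (*dgemm_f77_t)(const char *, const char *, const int *,
                            const int *, const int *, const double *,
                            const double *, const int *, const double *,
                            const int *, const double *, double *,
                            const int *);

static dgemm_f77_t real_dgemm = NULL;

/* Load the backend once, e.g. from an environment variable. */
static void load_backend(void) {
    const char *lib = getenv("MY_BLAS_BACKEND");   /* invented variable name */
    if (!lib) lib = "libopenblas.so";              /* fallback guess */
    void *handle = dlopen(lib, RTLD_NOW | RTLD_LOCAL);
    if (!handle) { fprintf(stderr, "dlopen: %s\n", dlerror()); exit(1); }
    real_dgemm = (dgemm_f77_t) dlsym(handle, "dgemm_");
    if (!real_dgemm) { fprintf(stderr, "dlsym: %s\n", dlerror()); exit(1); }
}

/* The wrapper's exported dgemm_: count the call, then forward it. */
void dgemm_(const char *transa, const char *transb, const int *m,
            const int *n, const int *k, const double *alpha,
            const double *a, const int *lda, const double *b,
            const int *ldb, const double *beta, double *c, const int *ldc) {
    static long calls = 0;
    if (!real_dgemm) load_backend();
    ++calls;                                       /* trivial profiling hook */
    real_dgemm(transa, transb, m, n, k, alpha, a, lda, b, ldb, beta, c, ldc);
}
```

Such a wrapper is built as a shared library and linked (or preloaded) in place of the BLAS; FlexiBLAS layers configuration files, backend selection and profiling hooks on top of essentially this mechanism.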
  • The BLIS Framework: Experiments in Portability
The BLIS Framework: Experiments in Portability. FIELD G. VAN ZEE, TYLER SMITH, BRYAN MARKER, TZE MENG LOW, ROBERT A. VAN DE GEIJN, The University of Texas at Austin; FRANCISCO D. IGUAL, Univ. Complutense de Madrid; MIKHAIL SMELYANSKIY, Intel Corporation; XIANYI ZHANG, Chinese Academy of Sciences; MICHAEL KISTLER, VERNON AUSTEL, JOHN A. GUNNELS, IBM Corporation; LEE KILLOUGH, Cray Inc. BLIS is a new software framework for instantiating high-performance BLAS-like dense linear algebra libraries. We demonstrate how BLIS acts as a productivity multiplier by using it to implement the level-3 BLAS on a variety of current architectures. The systems for which we demonstrate the framework include state-of-the-art general-purpose, low-power, and many-core architectures. We show how, with very little effort, the BLIS framework yields sequential and parallel implementations that are competitive with the performance of ATLAS, OpenBLAS (an effort to maintain and extend the GotoBLAS), and commercial vendor implementations such as AMD's ACML, IBM's ESSL, and Intel's MKL libraries. While most of this paper focuses on single core implementation, we also provide compelling results that suggest the framework's leverage extends to the multithreaded domain. Categories and Subject Descriptors: G.4 [Mathematical Software]: Efficiency. General Terms: Algorithms, Performance. Authors' addresses: Field G. Van Zee and Robert A. van de Geijn, Department of Computer Science and Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX 78712. Tyler Smith, Bryan Marker, and Tze Meng Low, Department of Computer Science, The University of Texas at Austin, Austin, TX 78712.
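The portability argument rests on the fact that, per architecture, essentially only a micro-kernel has to be written. As background (not code from the paper), a plain-C reference micro-kernel has roughly the following shape; MR and NR are example register-block sizes, and the alpha/beta scaling of the real BLIS interface is omitted for brevity.

```c
/* A minimal, unoptimized sketch of the kind of micro-kernel BLIS builds
 * level-3 BLAS on: it accumulates an MR x NR block of C from packed
 * micro-panels of A (k columns of length MR) and B (k rows of length NR)
 * as k rank-1 updates. Real BLIS micro-kernels are architecture-specific
 * SIMD or assembly code. */
#define MR 4
#define NR 4

void dgemm_ukernel_ref(int k,
                       const double *a,  /* packed: k micro-columns, length MR */
                       const double *b,  /* packed: k micro-rows, length NR    */
                       double *c, int rs_c, int cs_c)
{
    double ab[MR * NR] = {0};

    /* k rank-1 updates: ab += a(:,p) * b(p,:) */
    for (int p = 0; p < k; ++p)
        for (int j = 0; j < NR; ++j)
            for (int i = 0; i < MR; ++i)
                ab[i + j * MR] += a[p * MR + i] * b[p * NR + j];

    /* Accumulate into C, which may have general row/column strides. */
    for (int j = 0; j < NR; ++j)
        for (int i = 0; i < MR; ++i)
            c[i * rs_c + j * cs_c] += ab[i + j * MR];
}
```

Porting to a new core then amounts to rewriting this small routine with that core's SIMD instructions and choosing MR, NR and the cache block sizes, which is why the paper can report competitive performance across so many architectures with little effort.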