Strassen's Algorithm Reloaded

Jianyu Huang*, Tyler M. Smith*†, Greg M. Henry‡, Robert A. van de Geijn*†
*Department of Computer Science and †Institute for Computational Engineering and Sciences,
The University of Texas at Austin, Austin, TX 78712
Email: {jianyu,tms,rvdg}@cs.utexas.edu
‡Intel Corporation, Hillsboro, OR 97124
Email: [email protected]

Abstract—We dispel the "street wisdom" regarding the practical implementation of Strassen's algorithm for matrix-matrix multiplication (DGEMM). Conventional wisdom: it is only practical for very large matrices. Our implementation is practical for small matrices. Conventional wisdom: the matrices being multiplied should be relatively square. Our implementation is practical for rank-k updates, where k is relatively small (a shape of importance for libraries like LAPACK). Conventional wisdom: it inherently requires substantial workspace. Our implementation requires no workspace beyond buffers already incorporated into conventional high-performance DGEMM implementations. Conventional wisdom: a Strassen DGEMM interface must pass in workspace. Our implementation requires no such workspace and can be plug-compatible with the standard DGEMM interface. Conventional wisdom: it is hard to demonstrate speedup on multi-core architectures. Our implementation demonstrates speedup over conventional DGEMM even on an Intel® Xeon Phi™ coprocessor¹ utilizing 240 threads. We show how a distributed memory matrix-matrix multiplication also benefits from these advances.

Index Terms—Strassen, numerical algorithm, performance model, matrix multiplication, linear algebra library, BLAS.

¹Intel, Xeon, and Intel Xeon Phi are trademarks of Intel Corporation in the U.S. and/or other countries.

I. INTRODUCTION

Strassen's algorithm (STRASSEN) [1] for matrix-matrix multiplication (DGEMM) has fascinated theoreticians and practitioners alike since it was first published in 1969. That paper demonstrated that multiplication of n × n matrices can be achieved in fewer than the O(n³) arithmetic operations required by a conventional formulation. It has led to many variants that improve upon this result [2], [3], [4], [5], as well as practical implementations [6], [7], [8], [9]. By incorporating only a few levels of recursion, the method can yield a shorter execution time than the best conventional algorithm with a modest degradation in numerical stability [10], [11], [12].

From 30,000 feet, the algorithm can be described as shifting computation with submatrices from multiplications to additions, reducing the O(n³) term at the expense of adding O(n²) complexity. For current architectures, of greater consequence are the additional memory movements incurred when the algorithm is implemented in terms of a conventional DGEMM provided by a high-performance implementation through the Basic Linear Algebra Subprograms (BLAS) [13] interface. A secondary concern has been the extra workspace that is required. This simultaneously limits the size of problem that can be computed and prevents an implementation from being plug-compatible with the standard calling sequence supported by the BLAS.

An important recent advance in the high-performance implementation of DGEMM is the BLAS-like Library Instantiation Software (BLIS framework) [14], a careful refactoring of the best-known approach to implementing conventional DGEMM, introduced by Goto [15]. Of importance to the present paper are the building blocks that BLIS exposes, minor modifications of which support a new approach to implementing STRASSEN. This approach changes data movement between memory layers and can thus mitigate the negative impact of the additional lower-order terms incurred by STRASSEN. These building blocks have similarly been exploited to improve upon the performance of, for example, the computation of the K-Nearest Neighbor [16] and Tensor Contraction [17], [18] problems. The result is a family of STRASSEN implementations, members of which attain superior performance depending on the sizes of the matrices.

The resulting family improves upon prior implementations of STRASSEN in a number of surprising ways:
• It can outperform classical DGEMM even for small square matrices.
• It can achieve high performance for rank-k updates (DGEMM with a small "inner matrix size"), a case of DGEMM frequently encountered in the implementation of libraries like LAPACK [19].
• It need not require additional workspace.
• It can directly incorporate the multi-threading of traditional DGEMM implementations.
• It can be plug-compatible with the standard DGEMM interface supported by the BLAS.
• It can be incorporated into practical distributed memory implementations of DGEMM.

Most of these advances run counter to conventional wisdom and are backed up by theoretical analysis and practical implementation.

II. STANDARD MATRIX-MATRIX MULTIPLICATION

We start by discussing the naive computation of matrix-matrix multiplication (DGEMM), how it is supported as a library routine by the Basic Linear Algebra Subprograms (BLAS) [13], how modern implementations block for caches, and how that implementation supports multi-threaded parallelization.

A. Computing C = αAB + C

Consider C = αAB + C, where C, A, and B are m × n, m × k, and k × n matrices, respectively, and α is a scalar. If the (i, j) entries of C, A, and B are denoted by c_{i,j}, a_{i,j}, and b_{i,j}, respectively, then computing C = αAB + C is achieved by

\[ c_{i,j} = \alpha \sum_{p=0}^{k-1} a_{i,p}\, b_{p,j} + c_{i,j}, \]

which requires 2mnk floating point operations (flops).
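This triple loop maps directly to code. The following is a minimal C sketch of the naive computation, assuming column-major storage with leading dimensions (the convention used by the BLAS); the routine name and the absence of error checking are ours, for illustration only.

/* Naive computation of C = alpha*A*B + C for column-major matrices:
 * A is m x k with leading dimension lda, B is k x n (ldb), C is m x n (ldc).
 * Performs 2*m*n*k flops with no blocking; for exposition only. */
void gemm_naive(int m, int n, int k, double alpha,
                const double *A, int lda, const double *B, int ldb,
                double *C, int ldc)
{
    for (int j = 0; j < n; j++)
        for (int i = 0; i < m; i++) {
            double cij = 0.0;
            for (int p = 0; p < k; p++)   /* sum over the k dimension */
                cij += A[i + p * lda] * B[p + j * ldb];
            C[i + j * ldc] += alpha * cij;
        }
}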
B. Level-3 BLAS matrix-matrix multiplication

(General) matrix-matrix multiplication (GEMM) is supported in the level-3 BLAS [13] interface as

DGEMM( transa, transb, m, n, k, alpha, A, lda, B, ldb, beta, C, ldc ),

where we focus on double precision arithmetic and data. This call supports

\[ C = \alpha A B + \beta C, \qquad C = \alpha A^T B + \beta C, \]
\[ C = \alpha A B^T + \beta C, \qquad \text{and} \qquad C = \alpha A^T B^T + \beta C, \]

depending on the choice of transa and transb. In our discussion we can assume β = 1, since C can always first be multiplied by that scalar as a preprocessing step, which requires only O(n²) flops. Also, by internally allowing both a row stride and a column stride for A, B, and C (as the BLIS framework does), transposition can easily be supported by swapping these strides. It suffices then to consider C = αAB + C.

C. Computing with submatrices

Important to our discussion is that we partition the matrices and stage the matrix multiplication as computations with submatrices. For example, let us assume that m, n, and k are all even and partition

\[ C = \begin{pmatrix} C_{00} & C_{01} \\ C_{10} & C_{11} \end{pmatrix}, \quad
   A = \begin{pmatrix} A_{00} & A_{01} \\ A_{10} & A_{11} \end{pmatrix}, \quad
   B = \begin{pmatrix} B_{00} & B_{01} \\ B_{10} & B_{11} \end{pmatrix}, \]

where C_{ij} is m/2 × n/2, A_{ij} is m/2 × k/2, and B_{ij} is k/2 × n/2. Then

\[ \begin{aligned}
   C_{00} &= \alpha(A_{00}B_{00} + A_{01}B_{10}) + C_{00}, \\
   C_{01} &= \alpha(A_{00}B_{01} + A_{01}B_{11}) + C_{01}, \\
   C_{10} &= \alpha(A_{10}B_{00} + A_{11}B_{10}) + C_{10}, \\
   C_{11} &= \alpha(A_{10}B_{01} + A_{11}B_{11}) + C_{11}
   \end{aligned} \]

computes C = αAB + C via eight multiplications and eight additions with submatrices, still requiring approximately 2mnk flops.

D. The GotoBLAS algorithm for DGEMM

Figure 1 (left) illustrates the way the GOTOBLAS [21] (predecessor of OpenBLAS [22]) approach structures the blocking for three layers of cache (L1, L2, and L3) when computing C = AB + C, as implemented in BLIS. For details we suggest the reader consult the papers on the GOTOBLAS DGEMM [15] and BLIS [14]. In that figure, the indicated block sizes mC, nC, and kC are chosen so that submatrices fit in the various caches, while mR and nR relate to the size of the contributions to C that fit in registers. For details on how these are chosen, see [14], [20].

Importantly,
• The row panels Bp that fit in the L3 cache² are packed into contiguous memory, yielding B̃p.
• Blocks Ai that fit in the L2 cache are packed into buffer Ãi.

It is in part this packing that we are going to exploit as we implement one or more levels of STRASSEN.

²If an architecture does not have an L3 cache, this panel is still packed to make the data contiguous and to reduce the number of TLB entries used.

E. Multi-threaded implementation

BLIS exposes all the illustrated loops, requiring only the micro-kernel to be optimized for a given architecture. In contrast, in the GOTOBLAS implementation the micro-kernel and the first two loops around it form an inner-kernel that is implemented as a unit. As a result, the BLIS implementation exposes five loops (two more than the GOTOBLAS implementation) that can be parallelized, as discussed in [23]. In this work, we mimic the insights from that paper. (The sketch below illustrates the five loops and the two packing steps.)
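Since Figure 1 is not reproduced in this excerpt, the following C sketch illustrates the loop structure and the two packing steps just described. It is a simplified model, not BLIS code: the block sizes are illustrative placeholders (real implementations tune them per architecture), every dimension is assumed to divide evenly by the corresponding block size, the micro-kernel is plain C rather than tuned assembly, and the function names are ours.

#include <stdlib.h>

/* Simplified sketch of the GotoBLAS/BLIS blocking for C = A*B + C,
 * column-major.  Assumes m % MC == 0, n % NC == 0, k % KC == 0. */
enum { MC = 96, NC = 256, KC = 128, MR = 4, NR = 4 };

/* Micro-kernel: an MR x NR block of C += (packed A panel)*(packed B panel). */
static void microkernel(int kc, const double *a, const double *b,
                        double *C, int ldc)
{
    for (int p = 0; p < kc; p++)
        for (int j = 0; j < NR; j++)
            for (int i = 0; i < MR; i++)
                C[i + j * ldc] += a[i + p * MR] * b[j + p * NR];
}

/* Pack an mc x kc block of A into row micro-panels of height MR (Atilde). */
static void pack_a(int mc, int kc, const double *A, int lda, double *At)
{
    for (int i0 = 0; i0 < mc; i0 += MR)
        for (int p = 0; p < kc; p++)
            for (int i = 0; i < MR; i++)
                *At++ = A[(i0 + i) + p * lda];
}

/* Pack a kc x nc panel of B into column micro-panels of width NR (Btilde). */
static void pack_b(int kc, int nc, const double *B, int ldb, double *Bt)
{
    for (int j0 = 0; j0 < nc; j0 += NR)
        for (int p = 0; p < kc; p++)
            for (int j = 0; j < NR; j++)
                *Bt++ = B[p + (j0 + j) * ldb];
}

void gemm_blocked(int m, int n, int k,
                  const double *A, int lda, const double *B, int ldb,
                  double *C, int ldc)
{
    double *At = malloc(sizeof(double) * MC * KC);  /* Atilde: L2-resident */
    double *Bt = malloc(sizeof(double) * KC * NC);  /* Btilde: L3-resident */
    for (int jc = 0; jc < n; jc += NC)              /* 5th loop: over n */
        for (int pc = 0; pc < k; pc += KC) {        /* 4th loop: over k */
            pack_b(KC, NC, &B[pc + jc * ldb], ldb, Bt);
            for (int ic = 0; ic < m; ic += MC) {    /* 3rd loop: over m */
                pack_a(MC, KC, &A[ic + pc * lda], lda, At);
                for (int jr = 0; jr < NC; jr += NR)      /* 2nd loop */
                    for (int ir = 0; ir < MC; ir += MR)  /* 1st loop */
                        microkernel(KC,
                                    &At[ir * KC],   /* micro-panel of Atilde */
                                    &Bt[jr * KC],   /* micro-panel of Btilde */
                                    &C[(ic + ir) + (jc + jr) * ldc], ldc);
            }
        }
    free(At);
    free(Bt);
}

Note how B̃p is packed once per (jc, pc) iteration and reused across the third loop, while Ãi is packed once per ic iteration and reused across the micro-kernel calls; it is these two packing steps that the paper later modifies to implement STRASSEN.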
III. STRASSEN'S ALGORITHM

In this section, we present the basic idea and practical considerations of STRASSEN, decomposing it into a combination of general operations that can be adapted to the high-performance implementation of a traditional DGEMM.

A. The basic idea

It can be verified that the operations in Figure 2 also compute C = αAB + C, requiring only seven multiplications with submatrices. The computational cost is, approximately, reduced from 2mnk flops to (7/8)·2mnk flops, at the expense of a lower-order number of extra additions. Figure 2 describes what we will call one-level STRASSEN. (The seven products are spelled out in the sketch at the end of this section.)

B. Classic Strassen's algorithm

Each of the matrix multiplications that computes an intermediate result Mk can itself be computed with another level of Strassen's algorithm. This can then be repeated recursively. If originally m = n = k = 2^d, where d is an integer, then the cost becomes

\[ \left(\frac{7}{8}\right)^{\log_2(n)} 2n^3 = n^{\log_2(7/8)}\, 2n^3 = 2n^{2.807} \text{ flops}. \]

In this discussion, we ignored the increase in the total number of extra additions. (A worked expansion of this bound follows the sketch at the end of this section.)

C. Practical considerations

A high-performance implementation of a traditional matrix-matrix multiplication requires careful attention to details related to data movements between memory layers, scheduling of operations, and implementations at a very low level (often in assembly code). Practical implementations recursively perform a few levels of STRASSEN until the matrices become small enough that a traditional high-performance DGEMM is faster.
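Figure 2 itself is not reproduced in this excerpt. The sketch below therefore spells out the standard seven Strassen products and their accumulation into the quadrants of C, which is what one-level STRASSEN computes; we assume the operations tabulated in the paper's Figure 2 are these classical ones. The sketch uses explicit temporaries S, T, and M for clarity; avoiding exactly such temporaries, by folding the submatrix additions into the packing routines and micro-kernel of a BLIS-like DGEMM, is the paper's contribution. The naive mat_mul stands in for the high-performance DGEMM a practical implementation would call for each product. Column-major storage and even m, n, k are assumed.

#include <stdlib.h>

/* T = X + sgn*Y (mm x nn) */
static void mat_add(int mm, int nn, const double *X, int ldx, double sgn,
                    const double *Y, int ldy, double *T, int ldt)
{
    for (int j = 0; j < nn; j++)
        for (int i = 0; i < mm; i++)
            T[i + j * ldt] = X[i + j * ldx] + sgn * Y[i + j * ldy];
}

/* C += s*M (mm x nn) */
static void mat_acc(int mm, int nn, double s, const double *M, int ldm,
                    double *C, int ldc)
{
    for (int j = 0; j < nn; j++)
        for (int i = 0; i < mm; i++)
            C[i + j * ldc] += s * M[i + j * ldm];
}

/* T = X*Y, X is mm x kk, Y is kk x nn (naive; placeholder for DGEMM) */
static void mat_mul(int mm, int nn, int kk, const double *X, int ldx,
                    const double *Y, int ldy, double *T, int ldt)
{
    for (int j = 0; j < nn; j++)
        for (int i = 0; i < mm; i++) {
            double s = 0.0;
            for (int p = 0; p < kk; p++)
                s += X[i + p * ldx] * Y[p + j * ldy];
            T[i + j * ldt] = s;
        }
}

/* One-level Strassen for C = alpha*A*B + C (m, n, k even). */
void strassen_1level(int m, int n, int k, double alpha,
                     const double *A, int lda, const double *B, int ldb,
                     double *C, int ldc)
{
    int mh = m / 2, nh = n / 2, kh = k / 2;
    const double *A00 = A,      *A01 = A + kh * lda;
    const double *A10 = A + mh, *A11 = A + mh + kh * lda;
    const double *B00 = B,      *B01 = B + nh * ldb;
    const double *B10 = B + kh, *B11 = B + kh + nh * ldb;
    double *C00 = C,      *C01 = C + nh * ldc;
    double *C10 = C + mh, *C11 = C + mh + nh * ldc;
    double *S = malloc(sizeof(double) * mh * kh);  /* A-side temporary  */
    double *T = malloc(sizeof(double) * kh * nh);  /* B-side temporary  */
    double *M = malloc(sizeof(double) * mh * nh);  /* product temporary */

    /* M0 = (A00+A11)(B00+B11);  C00 += alpha*M0;  C11 += alpha*M0 */
    mat_add(mh, kh, A00, lda, +1.0, A11, lda, S, mh);
    mat_add(kh, nh, B00, ldb, +1.0, B11, ldb, T, kh);
    mat_mul(mh, nh, kh, S, mh, T, kh, M, mh);
    mat_acc(mh, nh, alpha, M, mh, C00, ldc);
    mat_acc(mh, nh, alpha, M, mh, C11, ldc);

    /* M1 = (A10+A11)*B00;  C10 += alpha*M1;  C11 -= alpha*M1 */
    mat_add(mh, kh, A10, lda, +1.0, A11, lda, S, mh);
    mat_mul(mh, nh, kh, S, mh, B00, ldb, M, mh);
    mat_acc(mh, nh, alpha, M, mh, C10, ldc);
    mat_acc(mh, nh, -alpha, M, mh, C11, ldc);

    /* M2 = A00*(B01-B11);  C01 += alpha*M2;  C11 += alpha*M2 */
    mat_add(kh, nh, B01, ldb, -1.0, B11, ldb, T, kh);
    mat_mul(mh, nh, kh, A00, lda, T, kh, M, mh);
    mat_acc(mh, nh, alpha, M, mh, C01, ldc);
    mat_acc(mh, nh, alpha, M, mh, C11, ldc);

    /* M3 = A11*(B10-B00);  C00 += alpha*M3;  C10 += alpha*M3 */
    mat_add(kh, nh, B10, ldb, -1.0, B00, ldb, T, kh);
    mat_mul(mh, nh, kh, A11, lda, T, kh, M, mh);
    mat_acc(mh, nh, alpha, M, mh, C00, ldc);
    mat_acc(mh, nh, alpha, M, mh, C10, ldc);

    /* M4 = (A00+A01)*B11;  C01 += alpha*M4;  C00 -= alpha*M4 */
    mat_add(mh, kh, A00, lda, +1.0, A01, lda, S, mh);
    mat_mul(mh, nh, kh, S, mh, B11, ldb, M, mh);
    mat_acc(mh, nh, alpha, M, mh, C01, ldc);
    mat_acc(mh, nh, -alpha, M, mh, C00, ldc);

    /* M5 = (A10-A00)(B00+B01);  C11 += alpha*M5 */
    mat_add(mh, kh, A10, lda, -1.0, A00, lda, S, mh);
    mat_add(kh, nh, B00, ldb, +1.0, B01, ldb, T, kh);
    mat_mul(mh, nh, kh, S, mh, T, kh, M, mh);
    mat_acc(mh, nh, alpha, M, mh, C11, ldc);

    /* M6 = (A01-A11)(B10+B11);  C00 += alpha*M6 */
    mat_add(mh, kh, A01, lda, -1.0, A11, lda, S, mh);
    mat_add(kh, nh, B10, ldb, +1.0, B11, ldb, T, kh);
    mat_mul(mh, nh, kh, S, mh, T, kh, M, mh);
    mat_acc(mh, nh, alpha, M, mh, C00, ldc);

    free(S); free(T); free(M);
}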

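As a complement to Section III-B, here is a worked expansion of the flop bound, under the stated assumption m = n = k = 2^d and ignoring the extra additions:

% After \ell levels of recursion, the 8 half-size products of the
% classical blocked algorithm have been traded for 7, so there are
% 7^\ell multiplications of size n/2^\ell, costing
%   7^\ell \cdot 2\,(n/2^\ell)^3 = (7/8)^\ell \, 2n^3 \text{ flops}.
% Recursing all the way down (\ell = \log_2 n) and using the identity
% a^{\log_2 n} = n^{\log_2 a} with a = 7/8:
\[
  \left(\tfrac{7}{8}\right)^{\log_2 n} 2n^3
  = n^{\log_2(7/8)} \, 2n^3
  = n^{\log_2 7 - 3} \, 2n^3
  = 2\, n^{\log_2 7}
  \approx 2\, n^{2.807}\ \text{flops}.
\]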