Object Oriented Model of Generalized Matrix Multiplication

Proceedings of the Federated Conference on Computer Science and Information Systems, pp. 439–442. ISBN 978-83-60810-22-4. © 2011 IEEE.

Developing an OO Model for Generalized Matrix Multiplication: Preliminary Considerations

Maria Ganzha, Marcin Paprzycki
Systems Research Laboratory
Polish Academy of Sciences
01-447 Warszawa, Poland
Email: [email protected]

Stanislav G. Sedukhin
Distributed Parallel Processing Laboratory
The University of Aizu
Aizuwakamatsu City, Fukushima 965-8580, Japan
Email: [email protected]

Abstract—Recent changes in computational sciences force a reevaluation of the role of dense matrix multiplication. Among others, this has resulted in a proposal to consider generalized matrix multiplication, based on the theory of algebraic semirings. The aim of this note is to outline an initial object oriented model of the generalized matrix-multiply-add operation.

I. INTRODUCTION

Dense matrix multiplication appears in many computational problems. Its arithmetic complexity (O(n³)) and inherent data dependencies pose a challenge for reducing its run-time complexity. There exist three basic approaches to decreasing the execution time of dense matrix multiplication.

(1) Reducing the number of (time-consuming) scalar multiplications while increasing the number of (much faster) additions; see the discussion and references in [1]. These approaches have very good theoretical arithmetical complexity, and worked well when implemented on computers with a single processor and main memory. However, due to complex data access patterns, they became difficult to implement efficiently on computers with hierarchical memory. Furthermore, recursive matrix multiplication requires extra memory; e.g., Cray's implementation of Strassen's algorithm required extra space of 2.34·N² for matrices of size N × N.

(2) Parallelization of matrix multiplication, which is based on one of four classes of schedules ([2]): (i) Broadcast-Compute-Shift; (ii) All-Shift-Compute (or Systolic); (iii) Broadcast-Compute-Roll; and (iv) Compute-Roll-All (or Orbital). The latter is characterized by regularity and locality of data movement, maximal data reuse without data replication, the recurrent ability to involve all matrix data in the computation at once (retina I/O), etc.

(3) Combinations of these approaches, where the irregularity of data movement is exaggerated by the complexity of the underlying hardware. Interestingly, work on recursive (and recursive-parallel) matrix multiplication seems to be subsiding, as the last paper known to us comes from 2006 [3].

Note that in sparse matrix algebra the main goal used to be saving memory, achieved via indexing structures storing information about non-zero elements (resulting in complex data access patterns [4]). Nowadays, however, the basic element becomes a dense block, while the regularity of data access compensates for the multiplications by zero [5].

Generalized matrix multiplication appears in the Algebraic Path Problem (APP), examples of which include: finding the most reliable path, finding the critical path, finding the maximum capacity path, etc. Here, generalized matrix multiplication is based on the algebraic theory of semirings (see [6] and the references collected there). Note that standard linear algebra (with its matrix multiplication) is an example of an algebraic (matrix) semiring. Application of algebraic semirings to "unify through generalization" a large class of computational problems should be viewed in the context of recent changes in CPU architectures: (1) the popularity of fused multiply-add (FMA) units, which take three scalar operands and produce the result of c ← a · b + c in a single clock cycle; (2) the increase in the number of cores per processor (e.g., the recent announcement of 10-core processors from Intel); and (3) the success of GPU processors (e.g., the Fermi architecture from Nvidia and the Cypress architecture from AMD) that combine multiple FMA units (e.g., the Fermi architecture delivers, in a single cycle, 512 single-precision or 256 double-precision FMA results).

Finally, the work reported in [4] illustrates an important aspect of highly optimized codes that deal with complex matrix structures. While the code generator is approximately 6,000 lines long, the generated code is more than 100,000 lines. Therefore, when thinking about fast matrix multiplication, one also needs to consider the programming cost required to develop, and later update, codes based on complex data structures and movements.

II. ALGEBRAIC SEMIRINGS IN SCIENTIFIC CALCULATIONS

Since the 1970s, a number of problems have been combined into the Algebraic Path Problem (APP; see [7]). The APP includes problems from linear algebra, graph theory, optimization, etc., while their solution draws from the theory of semirings. A closed semiring (S, ⊕, ⊗, ∗, 0̄, 1̄) is an algebraic structure defined for a set S, with two binary operations: addition ⊕ : S × S → S and multiplication ⊗ : S × S → S, a unary operation called closure ∗ : S → S, and two constants 0̄ and 1̄ in S. Here, we are particularly interested in the case when the elements of the set S are matrices. Thus, following [7], we introduce a matrix semiring (S^{n×n}, ⊕, ⊗, ⋆, Ō, Ī) as a set of n × n matrices S^{n×n} over a closed scalar semiring (S, ⊕, ⊗, ∗, 0̄, 1̄) with two binary operations, matrix addition ⊕ : S^{n×n} × S^{n×n} → S^{n×n} and matrix multiplication ⊗ : S^{n×n} × S^{n×n} → S^{n×n}, a unary operation called the closure of a matrix ⋆ : S^{n×n} → S^{n×n}, the zero n × n matrix Ō, all of whose elements equal 0̄, and the n × n identity matrix Ī, whose main diagonal elements equal 1̄ while all others equal 0̄. Here, matrix addition and multiplication are defined as usual in linear algebra. Note that special-case matrices that are non-square, symmetric, structural, etc., while not usually considered in the theory of semirings, are also handled by the above definition.

The existing blocked algorithms for solving the APP are rich in generalized block MMA operations, which are their most compute-intensive parts [8]. In the generalized block MMA, addition and multiplication originate from any semiring (possibly different from the standard numerical algebra). In Figure 1 we present the relation between the scalar multiply-add operation (ω) and the corresponding MMA kernel (α), for different semirings, for three sample APP applications; here, N is the size of the matrix block (see also [8]).

1) Matrix Inversion Problem:
   α: a(i,j) = a(i,j) + Σ_{k=0}^{N−1} a(i,k) × a(k,j);
   ω: c = a × b + c.
2) All-Pairs Shortest Paths Problem:
   α: a(i,j) = min(a(i,j), min_{k=0}^{N−1} [a(i,k) + a(k,j)]);
   ω: c = min(c, a + b).
3) Minimum Spanning Tree Problem:
   α: a(i,j) = min(a(i,j), min_{k=0}^{N−1} [max(a(i,k), a(k,j))]);
   ω: c = min(c, max(a, b)).

Fig. 1. Matrix and scalar semiring operations for sample APP problems

Overall, the generalized MMA is one of the key operations for the APP problems, including MMA-based numerical linear algebra algorithms, which include block formulations of linear algebraic problems (as conceptualized in the level 3 BLAS operations [9], and applied in the LAPACK library [10]).

III. MATRIX OPERATIONS AND COMPUTER HARDWARE

In the 1970s it was realized that many matrix algorithms consist of similar building blocks (e.g., a vector update, or a dot product). As a result, Cray computers provided optimized vector operations: y ← y + αx, while IBM supercomputers featured optimized dot products. This also resulted in the creation of libraries of routines for scientific computing (e.g., …).

Obviously, the FMAs speed up (∼2×) the solution of scientific, engineering, and multimedia algorithms based on linear algebra (matrix) transforms [13]. On the other hand, the lack of hardware support penalizes APP solvers from other semirings. For instance, in [8] the authors showed that the "penalty" for the lack of a generalized FMA unit in the Cell/B.E. processor may be up to 400%. Obviously, this can be seen from the "positive side": having a hardware unit fully supporting the operations listed in Figure 1 would speed up the solution of APP problems by up to 4 times. Interestingly, we have just found that the AMD Cypress GPU processor supports the (min, max) operation through a single call, with 2 clock cycles per result. Therefore, the Minimum Spanning Tree problem (see Figure 1) could be solved substantially more efficiently than previously realized. Furthermore, this could mean that the AMD hardware has built-in elements corresponding to −∞ and ∞. This, in turn, could constitute an important step towards hardware support of the generalized scalar FMA operations needed to realize many APP kernels (see also [8]).

In the 1990s, three designs for parallel computers were tried: (1) array processors, (2) shared memory parallel computers, and (3) distributed memory parallel computers. After a number of "trials and errors," combined with progress in miniaturization, we witness the first two approaches being joined within a processor, and such processors combined into large machines. Specifically, while vendors like IBM, Intel, and AMD develop multi-core processors with a slowly increasing number of cores, Nvidia and AMD develop array processors on a chip. However, all these designs lead to processors consisting of thousands of FMA units.

IV. PROPOSED GENERALIZED MULTIPLY-AND-ADD

Based on these considerations, in [14] we have defined a generic generalized matrix-multiply-add operation (MMA),

C ← MMA[⊗, ⊕](A, B, C) : C ← A ⊗ B ⊕ C,

where the [⊗, ⊕] operations originate from different matrix semirings. Note that, as in the scalar FMAs, pure generalized matrix addition and multiplication can be implemented by making the n × n matrix A (or B) equal to Ī for addition, or the matrix C equal to Ō for multiplication (see Section II).

Obviously, the generalized MMA resembles the GEMM operation from the level 3 BLAS. Therefore, in [14] it was …
