The libflame Library for Dense Matrix Computations


SOFTWARE ENGINEERING

Field G. Van Zee, Ernie Chan, and Robert A. van de Geijn, The University of Texas at Austin
Enrique S. Quintana-Ortí and Gregorio Quintana-Ortí, Universidad Jaime I de Castellón

Researchers from the Formal Linear Algebra Method Environment (Flame) project have developed new methodologies for analyzing, designing, and implementing linear algebra libraries. These solutions, which have culminated in the libflame library, seem to solve many of the programmability problems that have arisen with the advent of multicore and many-core architectures.

   How do we convince people that in programming simplicity and clarity—in short: what mathematicians call "elegance"—are not a dispensable luxury, but a crucial matter that decides between success and failure?
   —Edsger W. Dijkstra

Over the past decade, the University of Texas at Austin and Universidad Jaime I de Castellón have collaborated on the Formal Linear Algebra Method Environment (Flame) project, developing a unique methodology, notation, set of tools, and APIs for deriving and representing linear algebra libraries. To better promote the Flame project's characteristic techniques, we've implemented a functional library—libflame—that demonstrates findings and insights from our 10 years of research.

The primary purpose of libflame is to give the scientific and numerical computing communities a modern, high-performance dense linear algebra library that is extensible, easy to use, and available under an open source license. We've published two books, numerous papers, and even more working notes over the last decade documenting the challenges and motivations that led to the libflame library's APIs and implementations (see www.cs.utexas.edu/users/flame/publications). Seasoned users in scientific and numerical computing circles will quickly recognize libflame's target functionality set. In short, our goal with libflame is to provide not only a framework for developing dense linear algebra solutions, but also a ready-made library that is, by almost any metric, easier to use and offers competitive (and in many cases superior) real-world performance when compared to the more traditional Basic Linear Algebra Subprograms (BLAS)[1] and Linear Algebra Package (Lapack)[2] libraries.

Here, we briefly introduce both the library itself and its underlying philosophy. Using performance results from different architectures, we show how easily it can be retargeted to "hostile" environments. Our hope is that the combination of libflame's functionality and performance will lead the scientific computing community to investigate our other publications.

What Makes libflame Different?

Adopting libflame makes sense for numerous reasons. In addition to expected attractions—such as a detailed user manual that we routinely update[3] and cross-platform support for both GNU/Linux and Microsoft Windows—libflame offers users and software developers several key advantages.

A Solution Grounded in CS Fundamentals

Flame advocates a new approach to developing linear algebra libraries. It starts with a more stylized notation for expressing loop-based linear algebra algorithms.[4]

   Algorithm: A := CHOL_L_BLK_VAR2( A )

   Partition  A -> ( ATL  ATR )
                   ( ABL  ABR )        where ATL is 0 x 0
   while m( ATL ) < m( A ) do
      Determine block size b
      Repartition
         ( ATL  ATR )      ( A00  A01  A02 )
         ( ABL  ABR )  ->  ( A10  A11  A12 )    where A11 is b x b
                           ( A20  A21  A22 )

         A11 := A11 - A10 * A10^T       (SYRK)
         A21 := A21 - A20 * A10^T       (GEMM)
         A11 := CHOL( A11 )             (CHOL)
         A21 := A21 * A11^(-T)          (TRSM)

      Continue with
         ( ATL  ATR )      ( A00  A01  A02 )
         ( ABL  ABR )  <-  ( A10  A11  A12 )
                           ( A20  A21  A22 )
   endwhile

Figure 1. Blocked Cholesky factorization (variant 2) expressed as a Flame algorithm. Subproblems annotated as SYRK, GEMM, and TRSM correspond to level-three Basic Linear Algebra Subprograms (BLAS) operations; CHOL is the recursive Cholesky factorization subproblem.
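To see where these four updates come from, equate blocks in A = L * L^T under the partitioning of Figure 1. The short derivation below is not from the article but is the standard one that the Flame methodology formalizes; it assumes the loop invariant that A10 and A20 already hold the computed factors L10 and L20 when an iteration begins.

   % Equate the (1,1) and (2,1) blocks of A = L L^T, with L the lower
   % triangular Cholesky factor partitioned conformally with Figure 1.
   \begin{align*}
     A_{11} = L_{10} L_{10}^T + L_{11} L_{11}^T
       \;\Longrightarrow\; L_{11} = \mathrm{Chol}\!\left( A_{11} - L_{10} L_{10}^T \right)
       \quad &\text{(SYRK, then CHOL)} \\
     A_{21} = L_{20} L_{10}^T + L_{21} L_{11}^T
       \;\Longrightarrow\; L_{21} = \left( A_{21} - L_{20} L_{10}^T \right) L_{11}^{-T}
       \quad &\text{(GEMM, then TRSM)}
   \end{align*}

Because A10 and A20 contain L10 and L20 on entry, these two identities yield exactly the SYRK, GEMM, CHOL, and TRSM steps in the loop body.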
As Figure 1 shows, the notation closely resembles the way pictures naturally illustrate matrix algorithms. The notation also facilitates rigorous formal algorithm derivations, which guarantees that the resulting algorithms are correct.[5] Moreover, it yields an algorithm family, so that developers can choose the best option for their situations (based on problem size or architecture, for example).

Object-Based Abstractions and API

The BLAS, Lapack, and Scalable Lapack[6] projects place backward compatibility as a high priority, which hinders adoption of modern software engineering principles such as object abstraction. We built libflame around opaque structures that hide matrices' implementation details (such as data layout), and libflame exports object-based programming interfaces to operate upon these structures. Likewise, Flame algorithms are expressed (and coded) in terms of smaller operations on the matrix operands' subpartitions. This abstraction facilitates programming without array or loop indices, which lets users avoid painful index-related programming errors altogether.

Figure 2 compares the coding styles of libflame and Lapack, highlighting the inherent elegance of Flame code and its striking resemblance to the corresponding Flame algorithm in Figure 1. This similarity is quite intentional, preserving the clarity of the original algorithm as it would be illustrated on a whiteboard or in a publication.
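At the application level, the object-based interface reduces a factorization to a handful of calls. The driver below is our sketch rather than code from the article: FLA_Chol_l_blk_var2 appears in Figure 2, while the names and signatures of the initialization and object-management routines (FLA_Init, FLA_Obj_create, FLA_Obj_free, FLA_Finalize) are assumptions to be checked against the libflame user manual.[3]

   #include "FLAME.h"   /* assumed main libflame header */

   int main( void )
   {
      FLA_Obj A;
      int     n = 1000, nb_alg = 128;

      FLA_Init();                                  /* initialize the library */

      /* Create an n-by-n double-precision matrix object. The FLA_Obj is
         opaque; data layout details stay hidden inside the library. */
      FLA_Obj_create( FLA_DOUBLE, n, n, 0, 0, &A );

      /* ... fill A with a symmetric positive definite matrix ... */

      /* Factor A = L * L^T in place with the blocked variant of Figure 2. */
      FLA_Chol_l_blk_var2( A, nb_alg );

      FLA_Obj_free( &A );                          /* release the object */
      FLA_Finalize();
      return 0;
   }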
Dense Linear Algebra Framework

Like Lapack, libflame provides ready-made implementations of common linear algebra operations. libflame's implementations mirror many of those in BLAS and Lapack. However, libflame differs from Lapack in two important ways. First, as mentioned, it provides algorithm families for each operation, so developers can choose the one that best suits their needs. Second, it provides a framework for building complete custom linear algebra codes. This makes it a more useful environment because it lets users quickly choose and/or prototype a linear algebra solution to fit the application's needs.

Educational Value

In addition to introducing students to formal algorithm derivation, educators have successfully used Flame to teach linear algebra algorithms in a classroom setting. Also, Flame's API affords clean abstractions, which makes it ideally suited for teaching high-performance linear algebra courses at the undergraduate and graduate level.

Historically, instructors have used the BLAS/Lapack coding style in these pedagogical settings. However, we believe that this coding style obscures the algorithms; students often get bogged down debugging frustrating errors caused by indexing directly into arrays that represent the matrices. Using Flame greatly reduces the line of students who need help with coding during office hours.

High Performance

In our publications and performance graphs, we do our best to dispel the myth that user- and programmer-friendly linear algebra codes can't yield high performance. As we show later, our Flame implementations of operations such as Cholesky factorization and triangular matrix inversion often outperform Lapack's corresponding implementations.[7]

Currently, libflame relies on only a core set of highly optimized, unblocked routines to perform the small subproblems found in linear algebra algorithm implementations. However, as we've recently demonstrated, we can automatically ...

libflame:

   FLA_Error FLA_Chol_l_blk_var2( FLA_Obj A, int nb_alg )
   {
      FLA_Obj ATL, ATR,   A00, A01, A02,
              ABL, ABR,   A10, A11, A12,
                          A20, A21, A22;
      int b, value;

      FLA_Part_2x2( A,   &ATL, &ATR,
                         &ABL, &ABR,   0, 0, FLA_TL );

      while ( FLA_Obj_length( ATL ) < FLA_Obj_length( A ) )
      {
         b = min( FLA_Obj_length( ABR ), nb_alg );

         FLA_Repart_2x2_to_3x3(
            ATL, /**/ ATR,      &A00, /**/ &A01, &A02,
         /* ************* */ /* ********************** */
                                &A10, /**/ &A11, &A12,
            ABL, /**/ ABR,      &A20, /**/ &A21, &A22,
            b, b, FLA_BR );
         /* ----------------------------------------------------- */

         FLA_Syrk( FLA_LOWER_TRIANGULAR, FLA_NO_TRANSPOSE,
                   FLA_MINUS_ONE, A10, FLA_ONE, A11 );

         FLA_Gemm( FLA_NO_TRANSPOSE, FLA_TRANSPOSE,
                   FLA_MINUS_ONE, A20, A10, FLA_ONE, A21 );

         value = FLA_Chol_unb_external( FLA_LOWER_TRIANGULAR, A11 );
         if ( value != FLA_SUCCESS )
            return ( FLA_Obj_length( A00 ) + value );

         FLA_Trsm( FLA_RIGHT, FLA_LOWER_TRIANGULAR,
                   FLA_TRANSPOSE, FLA_NONUNIT_DIAG,
                   FLA_ONE, A11, A21 );

         /* ----------------------------------------------------- */
         FLA_Cont_with_3x3_to_2x2(
            &ATL, /**/ &ATR,     A00, A01, /**/ A02,
                                 A10, A11, /**/ A12,
         /* *************** */ /* ******************* */
            &ABL, /**/ &ABR,     A20, A21, /**/ A22,
            FLA_TL );
      }

      return FLA_SUCCESS;
   }

Lapack:

         SUBROUTINE DPOTRF( UPLO, N, A, LDA, INFO )
         CHARACTER          UPLO
         INTEGER            INFO, LDA, N
         DOUBLE PRECISION   A( LDA, * )
         DOUBLE PRECISION   ONE
         PARAMETER          ( ONE = 1.0D+0 )
         LOGICAL            UPPER
         INTEGER            J, JB, NB
         LOGICAL            LSAME
         INTEGER            ILAENV
         EXTERNAL           LSAME, ILAENV
         EXTERNAL           DGEMM, DPOTF2, DSYRK, DTRSM, XERBLA
         INTRINSIC          MAX, MIN

         INFO = 0
         UPPER = LSAME( UPLO, 'U' )
         IF( .NOT.UPPER .AND. .NOT.LSAME( UPLO, 'L' ) ) THEN
            INFO = -1
         ELSE IF( N.LT.0 ) THEN
            INFO = -2
         ELSE IF( LDA.LT.MAX( 1, N ) ) THEN
            INFO = -4
         END IF
         IF( INFO.NE.0 ) THEN
            CALL XERBLA( 'DPOTRF', -INFO )
            RETURN
         END IF

         IF( N.EQ.0 )
        $   RETURN

         NB = ILAENV( 1, 'DPOTRF', UPLO, N, -1, -1, -1 )
         IF( NB.LE.1 .OR. NB.GE.N ) THEN
            CALL DPOTF2( UPLO, N, A, LDA, INFO )
         ELSE
            IF( UPPER ) THEN
   *           Upper triangular case omitted for purpose of fair comparison.
            ELSE
               DO 20 J = 1, N, NB
                  JB = MIN( NB, N-J+1 )
                  CALL DSYRK( 'Lower', 'No transpose', JB, J-1, -ONE,
        $                     A( J, 1 ), LDA, ONE, A( J, J ), LDA )
                  CALL DPOTF2( 'Lower', JB, A( J, J ), LDA, INFO )
   *              ...

Figure 2. Blocked Cholesky factorization (variant 2) coded with the libflame API (top) and the corresponding Lapack routine DPOTRF (bottom; the listing is truncated in the source).
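The contrast with index-based coding becomes concrete if one writes the same variant directly against the BLAS. The following sketch is ours, not the article's: it uses the standard CBLAS interface, assumes column-major storage, and leaves the unblocked diagonal-block factorization as a hypothetical helper chol_unb. Every subscripted block of Figure 1 becomes pointer-and-stride arithmetic of exactly the kind libflame's abstractions eliminate.

   #include <cblas.h>

   /* Hypothetical unblocked Cholesky of an m-by-m lower triangular block. */
   extern void chol_unb( int m, double *A, int lda );

   /* Index-based blocked Cholesky (lower triangular, variant 2),
      for contrast with the libflame code in Figure 2. */
   void chol_l_blk_var2( int n, double *A, int lda, int nb )
   {
      for ( int j = 0; j < n; j += nb )
      {
         int jb = ( nb < n - j ) ? nb : n - j;

         /* A11 := A11 - A10 * A10^T                    (SYRK) */
         cblas_dsyrk( CblasColMajor, CblasLower, CblasNoTrans,
                      jb, j, -1.0, &A[ j ], lda,
                      1.0, &A[ j + j*lda ], lda );

         /* A21 := A21 - A20 * A10^T                    (GEMM) */
         cblas_dgemm( CblasColMajor, CblasNoTrans, CblasTrans,
                      n-j-jb, jb, j, -1.0, &A[ j+jb ], lda,
                      &A[ j ], lda, 1.0, &A[ (j+jb) + j*lda ], lda );

         /* A11 := CHOL( A11 )                          (CHOL) */
         chol_unb( jb, &A[ j + j*lda ], lda );

         /* A21 := A21 * A11^(-T)                       (TRSM) */
         cblas_dtrsm( CblasColMajor, CblasRight, CblasLower,
                      CblasTrans, CblasNonUnit, n-j-jb, jb,
                      1.0, &A[ j + j*lda ], lda,
                      &A[ (j+jb) + j*lda ], lda );
      }
   }

Getting any one of these base pointers or dimensions wrong produces exactly the index-related bugs described under Educational Value; in the libflame version, the Repartition and Continue statements carry that bookkeeping.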