Evaluation of Open-Source Linear Algebra Libraries Targeting ARM and RISC-V Architectures

Total Pages: 16

File Type: PDF, Size: 1020 KB

Evaluation of Open-Source Linear Algebra Libraries targeting ARM and RISC-V Architectures

Christian Fibich, Stefan Tauner, Peter Rössler, Martin Horauer
Dept. of Electronic Engineering, University of Applied Sciences Technikum Wien
Höchstädtpl. 6, 1200 Vienna, Austria
{fibich, tauner, roessler, horauer}@technikum-wien.at

Proceedings of the Federated Conference on Computer Science and Information Systems, ACSIS, Vol. 21, pp. 663–672. DOI: 10.15439/2020F145, ISSN 2300-5963.

Abstract—Basic Linear Algebra Subprograms (BLAS) has emerged as a de-facto standard interface for libraries providing linear algebra functionality. The advent of powerful devices for Internet of Things (IoT) nodes enables the reuse of existing BLAS implementations in these systems. This calls for a discerning evaluation of the properties of these libraries on embedded processors. This work benchmarks and discusses the performance and memory consumption of a wide range of unmodified open-source BLAS libraries. In comparison to related (but partly outdated) publications, this evaluation covers the largest set of open-source BLAS libraries, considers memory consumption as well, and distinctively focuses on Linux-capable embedded platforms (an ARM-based SoC that contains an SIMD accelerator and one of the first commercial embedded systems based on the emerging RISC-V architecture). Results show that, especially for matrix operations and larger problem sizes, optimized BLAS implementations allow for significant performance gains when compared to pure C implementations. Furthermore, the ARM platform outperforms the RISC-V incarnation in our selection of tests.

Index Terms—Embedded Systems, Basic Linear Algebra Subprograms, BLAS, Benchmarks, ARM, RISC-V

I. INTRODUCTION

Edge computing – processing sensor data as close to their origin as possible – is a paradigm to de-centralize processing and storage in order to improve the scalability of IoT applications. More potent embedded CPUs allow more complex processing tasks, for example signal and image processing operations as well as inference (and even training) of neural networks. Vector or matrix operations are essential building blocks of the digital algorithms that are at the core of these applications. This raises the question whether it is viable to re-use and adapt proven mathematical software libraries written for server and desktop computers for embedded target platforms.

Netlib, a repository for open-source mathematical software run by AT&T, Bell Labs, the University of Tennessee, and Oak Ridge National Laboratory, both maintains the BLAS specification document [1] and provides a reference implementation of this specification¹. BLAS essentially describes a programming interface for three levels of algorithms: level 1 contains vector-vector operations, level 2 defines vector-matrix operations, and level 3 specifies matrix-matrix operations, all for single- and double-precision real numbers as well as for single- and double-precision complex numbers. The reference implementation itself does not contain optimized code – for example, matrix multiplication is implemented using a simple iterative algorithm with three nested loops – it is, however, widely used as a performance baseline by implementers of optimized BLAS libraries. For example, implementations taking advantage of Single Instruction Multiple Data (SIMD) hardware extensions provided by modern CPUs, as well as of GPUs via interfaces such as CUDA and OpenCL, are common.
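To make that baseline concrete, the triple-loop scheme can be sketched in C as follows. This is a minimal illustration of the unoptimized algorithm, not code from the Netlib reference implementation (which is written in Fortran); the row-major, square-matrix layout is an assumption made for brevity.

```c
#include <stddef.h>

/* Unoptimized C = A * B for row-major n x n matrices: the simple
 * three-nested-loop algorithm that optimized BLAS libraries are
 * benchmarked against. */
static void naive_dgemm(size_t n, const double *A,
                        const double *B, double *C)
{
    for (size_t i = 0; i < n; i++) {
        for (size_t j = 0; j < n; j++) {
            double sum = 0.0;
            for (size_t k = 0; k < n; k++)
                sum += A[i * n + k] * B[k * n + j];
            C[i * n + j] = sum;
        }
    }
}
```

Optimized libraries replace this loop nest with blocked, vectorized kernels tuned to the cache hierarchy and the SIMD units mentioned above.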
Since many modern embedded applications rely on algorithms that mandate complex computations, an evaluation of whether some BLAS implementations can be re-used under the resource constraints imposed by typical embedded platforms seems in demand. Unlike the usual target systems for BLAS libraries, this work evaluates them on comparably small embedded systems that display significantly divergent characteristics due to architectural differences. For example, modern x86 CPUs can retire over 10 instructions per cycle per core under ideal circumstances while running at a multi-GHz clock rate with dozens of MB of caches. This is in stark contrast to embedded CPUs, which are often still non-speculative in-order architectures with sub-GHz frequencies. In particular, in this paper we evaluate different implementations of the BLAS specification targeting the ARM Cortex-A9 and RISC-V RV64GCSU architectures, respectively. Both share some similarities such as having RISC architectures, comparably low clock frequencies, and other attributes often found in embedded systems. Both Instruction Set Architectures (ISAs) support extensions, leading to a wide variety of implementations. We chose two representative examples for our tests that are described in more detail in Section IV.

Furthermore, it was the intention of the authors to present a more comprehensive overview and evaluation of existing open-source BLAS libraries when compared to related work that is discussed in Section II. In Section III, the scope of this work – the evaluated libraries as well as the benchmark applications and CPU architectures that constitute the basis of these evaluations – is described. Section IV provides details on the specific build and run-time environments as well as the employed metrics and measurement methodologies. Finally, the results of the evaluations are discussed in Section V before the paper concludes.

This work has been supported by the Austrian Federal Ministry for Digital and Economic Affairs (BM:DW) and the National Foundation for Research, Technology and Development as related to the Josef Ressel Center "Innovative Platforms for Electronic-Based Systems" (INES) managed by the Christian Doppler Society (CDG). We thank Kapsch TrafficCom AG for their support of this research.

¹ https://www.netlib.org/blas/, last visited on 2020-06-27

II. RELATED WORK

Due to its widespread use, libraries implementing the BLAS specification have been compared in the past. For example, [2] presents a wrapper for multiple BLAS library implementations facilitating their interchangeable use, along with an evaluation by way of several benchmarks.
Similar evaluations are provided by the BLIS framework presented in [3], targeting the performance of several BLAS level 2 and 3 algorithms and comparing them with an optimized ISO C implementation. The code generator LGen generates computational kernels for some BLAS operations with fixed-sized operands that are optimized for different target architectures and thus improve over generic implementations that employ varying operand sizes, see [4]. This approach was refined in [5], where the targets have been shifted to some high-end embedded processors.

Finally, BLASFEO (see also Section III-B) is a BLAS implementation that is intended to enable executing optimization algorithms on embedded targets. It is especially optimized for widely used embedded application processors (e.g., ARM Cortex-A7, -A9, -A15) as well as modern Intel and AMD architectures (e.g., Intel Haswell, Sandy Bridge). In [6], the authors compare their implementation to multiple other BLAS libraries. BLASFEO provides its own API that is better suited for small input sizes than standard BLAS, as it reduces the overhead of aligning the matrices for internal processing. However, the standard BLAS API has been implemented coequally to the native API and evaluated on Intel and ARM microarchitectures [7].

Unlike the publications discussed above, this work examines BLAS libraries on comparably small embedded systems. Preliminary results of this evaluation were published in [8]. This work widens the scope of the analysis both by inclusion of the RISC-V architecture and the very recent BLASFEO linear algebra library. It is, to the best of our knowledge, the first work that evaluates the performance of general-purpose linear algebra libraries on the RISC-V architecture. Furthermore, it extends previous work in particular targeting ARM Cortex-A9 and RISC-V. The focus on embedded systems ruled out Intel's Math Kernel Library (MKL) implementation that specifically targets x86 architectures. Instead, the performance of ARM's Ne10 library was included. Although this is not an implementation of the BLAS specification, it has an overlapping scope. The treatment of memory consumption of the libraries and the inclusion of RISC-V sets this work apart from past publications.

In addition to academic literature, several authors of the libraries evaluated in this work have published their own benchmark results. ATLAS provides timing data for the DGEMM (double precision generic matrix multiplication) BLAS function (dimensions between 1000 and 2000) in various library versions and CPU architectures (e.g., Intel, MIPS, SPARC)². The results, however, are provided only for up to ATLAS version 3.9.5, which is about 8 years old. The Eigen project provides benchmark results of various level-2 and level-3 BLAS functions, as well as more complex functions such as lower-upper factorization of matrices³. The results date from 2011 and are compared to GotoBLAS (an OpenBLAS predecessor), Intel MKL, and ATLAS 3.8.3. BLIS provides the most recent results, dating from 2020. Results are provided for server- and workstation-class CPUs (i.e., 10s of cores; ARM ThunderX2, Intel SkylakeX, Intel Haswell, AMD EPYC). BLAS level 3 functions are evaluated against Eigen, OpenBLAS, BLASFEO, and MKL. Different sets of results are available for large⁴ and small matrices⁵.

TABLE I: An overview of BLAS Benchmarks
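Since DGEMM recurs throughout the benchmark results cited above, a concrete call through the standard C interface (CBLAS) may be helpful. The sketch below uses the portable cblas_dgemm entry point exposed by interchangeable implementations such as OpenBLAS, ATLAS, and BLIS's compatibility layer; the row-major layout and the dimension choices are illustrative.

```c
#include <cblas.h>

/* C = alpha*A*B + beta*C for an M x K matrix A, a K x N matrix B and
 * an M x N matrix C, all row-major. */
void gemm_example(int M, int N, int K,
                  const double *A, const double *B, double *C)
{
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                M, N, K,
                1.0,   /* alpha */
                A, K,  /* lda = K: row-major, not transposed */
                B, N,  /* ldb = N */
                0.0,   /* beta */
                C, N); /* ldc = N */
}
```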
Recommended publications
  • ARM HPC Ecosystem
    ARM HPC Ecosystem. Darren Cepulis, HPC Segment Manager, ARM Business Segment Group. HPC Forum, Santa Fe, NM, 19th April 2017. © ARM 2017.

    ARM Collaboration for Exascale Programs. United States: ARM is currently a participant in two Department of Energy funded pre-Exascale projects: Data Movement Dominates and Fast Forward 2. Japan: Fujitsu and RIKEN announced that the Post-K system targeted at Exascale will be based on ARMv8 with new Scalable Vector Extensions. European Union: through FP7 and Horizon 2020, ARM has been involved in several funded pre-Exascale projects, including the Mont Blanc program, which deployed one of the first ARM prototype HPC systems. China: James Lin, vice director for the Center of HPC at Shanghai Jiao Tong University, claims China will build three pre-Exascale prototypes to select the architecture for their Exascale system; the three prototypes are based on AMD, Sunway TaihuLight, and ARMv8.

    ARM HPC deployments starting in 2H2017: two recent announcements about ARM in HPC in Europe. Japan Exascale slides from Fujitsu at ISC'16.

    Foundational SW ecosystem for HPC (commercial and open-source): Linux OSs – RedHat, SUSE, CentOS, Ubuntu, …; compilers – ARM, GNU, LLVM, …; libraries – ARM, OpenBLAS, BLIS, ATLAS, FFTW, …; parallelism – OpenMP, OpenMPI, MVAPICH2, …; debugging – Allinea, RW Totalview, GDB, …; analysis – ARM, Allinea, HPCToolkit, TAU, …; job schedulers – LSF, PBS Pro, SLURM, …; cluster mgmt – Bright, CMU, warewulf, …

    OpenHPC defines a predictable baseline – now on ARM. It is a community effort to provide a common, verified set of open-source packages for HPC deployments. Base OS: RHEL/CentOS 7.1, SLES 12. Administrative tools: Conman, Ganglia, Lmod, LosF, ORCM, Nagios, pdsh, prun. Provisioning: Warewulf. Resource Mgmt.
  • BLIS: A Modern Alternative to the BLAS
    BLIS: A Modern Alternative to the BLAS. FIELD G. VAN ZEE and ROBERT A. VAN DE GEIJN, The University of Texas at Austin. We propose the portable BLAS-like Interface Software (BLIS) framework which addresses a number of shortcomings in both the original BLAS interface and present-day BLAS implementations. The framework allows developers to rapidly instantiate high-performance BLAS-like libraries on existing and new architectures with relatively little effort. The key to this achievement is the observation that virtually all computation within level-2 and level-3 BLAS operations may be expressed in terms of very simple kernels. Higher-level framework code is generalized so that it can be reused and/or re-parameterized for different operations (as well as different architectures) with little to no modification. Inserting high-performance kernels into the framework facilitates the immediate optimization of any and all BLAS-like operations which are cast in terms of these kernels, and thus the framework acts as a productivity multiplier. Users of BLAS-dependent applications are supported through a straightforward compatibility layer, though calling sequences must be updated for those who wish to access new functionality. Experimental performance of level-2 and level-3 operations is observed to be competitive with two mature open source libraries (OpenBLAS and ATLAS) as well as an established commercial product (Intel MKL). Categories and Subject Descriptors: G.4 [Mathematical Software]: Efficiency. General Terms: Algorithms, Performance. Additional Key Words and Phrases: linear algebra, libraries, high-performance, matrix, BLAS. ACM Reference Format: ACM Trans. Math. Softw. 0, 0, Article 0 (0000), 31 pages.
  • Restructuring the Tridiagonal and Bidiagonal QR Algorithms for Performance
    Restructuring the Tridiagonal and Bidiagonal QR Algorithms for Performance. FIELD G. VAN ZEE and ROBERT A. VAN DE GEIJN, The University of Texas at Austin; GREGORIO QUINTANA-ORTÍ, Universitat Jaume I. We show how both the tridiagonal and bidiagonal QR algorithms can be restructured so that they become rich in operations that can achieve near-peak performance on a modern processor. The key is a novel, cache-friendly algorithm for applying multiple sets of Givens rotations to the eigenvector/singular vector matrix. This algorithm is then implemented with optimizations that: (1) leverage vector instruction units to increase floating-point throughput, and (2) fuse multiple rotations to decrease the total number of memory operations. We demonstrate the merits of these new QR algorithms for computing the Hermitian eigenvalue decomposition (EVD) and singular value decomposition (SVD) of dense matrices when all eigenvectors/singular vectors are computed. The approach yields vastly improved performance relative to traditional QR algorithms for these problems and is competitive with two commonly used alternatives – Cuppen's Divide-and-Conquer algorithm and the method of Multiple Relatively Robust Representations – while inheriting the more modest O(n) workspace requirements of the original QR algorithms. Since the computations performed by the restructured algorithms remain essentially identical to those performed by the original methods, robust numerical properties are preserved. Categories and Subject Descriptors: G.4 [Mathematical Software]: Efficiency. General Terms: Algorithms, Performance. Additional Key Words and Phrases: Eigenvalues, singular values, tridiagonal, bidiagonal, EVD, SVD, QR algorithm, Givens rotations, linear algebra, libraries, high performance. ACM Reference Format: Van Zee, F.
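For context on the primitive this entry restructures: a Givens rotation zeroes one entry of a two-vector and is applied across whole rows of the eigenvector/singular-vector matrix. The C sketch below is illustrative only (the function names are ours, and the rotation construction is simplified relative to robust routines such as LAPACK's dlartg).

```c
#include <math.h>
#include <stddef.h>

/* Compute c and s so that c*a + s*b = r and -s*a + c*b = 0. */
static void givens(double a, double b, double *c, double *s)
{
    double r = hypot(a, b);
    if (r == 0.0) { *c = 1.0; *s = 0.0; return; }
    *c = a / r;
    *s = b / r;
}

/* Apply one rotation to the row pair (x, y) of length n. The
 * restructured QR algorithms gain performance by fusing many such
 * applications so the rows stream through cache once rather than
 * once per rotation. */
static void apply_givens(size_t n, double *x, double *y,
                         double c, double s)
{
    for (size_t i = 0; i < n; i++) {
        double xi = x[i], yi = y[i];
        x[i] = c * xi + s * yi;
        y[i] = -s * xi + c * yi;
    }
}
```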
  • BLIS: A Framework for Rapidly Instantiating BLAS Functionality
    BLIS: A Framework for Rapidly Instantiating BLAS Functionality. FIELD G. VAN ZEE and ROBERT A. VAN DE GEIJN, The University of Texas at Austin. The BLAS-like Library Instantiation Software (BLIS) framework is a new infrastructure for rapidly instantiating Basic Linear Algebra Subprograms (BLAS) functionality. Its fundamental innovation is that virtually all computation within level-2 (matrix-vector) and level-3 (matrix-matrix) BLAS operations can be expressed and optimized in terms of very simple kernels. While others have had similar insights, BLIS reduces the necessary kernels to what we believe is the simplest set that still supports the high performance that the computational science community demands. Higher-level framework code is generalized and implemented in ISO C99 so that it can be reused and/or re-parameterized for different operations (and different architectures) with little to no modification. Inserting high-performance kernels into the framework facilitates the immediate optimization of any BLAS-like operations which are cast in terms of these kernels, and thus the framework acts as a productivity multiplier. Users of BLAS-dependent applications are given a choice of using the traditional Fortran-77 BLAS interface, a generalized C interface, or any other higher level interface that builds upon this latter API. Preliminary performance of level-2 and level-3 operations is observed to be competitive with two mature open source libraries (OpenBLAS and ATLAS) as well as an established commercial product (Intel MKL). Categories and Subject Descriptors: G.4 [Mathematical Software]: Efficiency. General Terms: Algorithms, Performance. Additional Key Words and Phrases: linear algebra, libraries, high-performance, matrix, BLAS. ACM Reference Format: ACM Trans.
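To make the "generalized C interface" mentioned in the abstract tangible, here is a hedged sketch of a double-precision GEMM call through BLIS's typed API, assuming column-major operands (row stride 1); the argument order follows the BLIS typed-API documentation as we understand it, with alpha and beta passed by pointer and explicit row/column strides.

```c
#include <blis.h>

/* C := beta*C + alpha*A*B via the BLIS typed API. BLIS takes explicit
 * row and column strides, so column-major storage means row stride 1
 * and column stride equal to the number of rows. */
void blis_gemm_example(dim_t m, dim_t n, dim_t k,
                       double *A, double *B, double *C)
{
    double alpha = 1.0, beta = 1.0;
    bli_dgemm(BLIS_NO_TRANSPOSE, BLIS_NO_TRANSPOSE,
              m, n, k,
              &alpha,
              A, 1, m,  /* A is m x k, column-major */
              B, 1, k,  /* B is k x n */
              &beta,
              C, 1, m); /* C is m x n */
}
```

The explicit strides are what let the same interface address row-major, column-major, and general strided matrices without separate entry points.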
  • DD2358 – Introduction to HPC Linear Algebra Libraries & BLAS
    DD2358 – Introduction to HPC: Linear Algebra Libraries & BLAS. Stefano Markidis, KTH Royal Institute of Technology. After this lecture, you will be able to: understand the importance of numerical libraries in HPC; list a series of key numerical libraries including BLAS; describe which kind of operations BLAS supports; experiment with OpenBLAS and perform a matrix-matrix multiply using BLAS.

    Numerical libraries are the foundation for application development. While these applications are used in a wide variety of very different disciplines, their underlying computational algorithms are very similar to one another. Application developers do not have to waste time redeveloping supercomputing software that has already been developed elsewhere. Libraries targeting numerical linear algebra operations are the most common, given the ubiquity of linear algebra in scientific computing algorithms.

    Libraries are tuned for performance. Numerical libraries have been highly tuned for performance, often for more than a decade, which makes it difficult for the application developer to match a library's performance using a homemade equivalent. Because they are relatively easy to use and their performance is highly tuned across a wide range of HPC platforms, the use of scientific computing libraries as software dependencies in computational science applications has become widespread.

    HPC community standards. Apart from acting as a repository for software reuse, libraries serve the important role of providing a
  • The BLAS API of BLASFEO: Optimizing Performance for Small Matrices
    The BLAS API of BLASFEO: optimizing performance for small matrices. Gianluca Frison¹, Tommaso Sartor¹, Andrea Zanelli¹, Moritz Diehl¹,², University of Freiburg; ¹Department of Microsystems Engineering (IMTEK), ²Department of Mathematics. email: [email protected]. February 5, 2020. This research was supported by the German Federal Ministry for Economic Affairs and Energy (BMWi) via eco4wind (0324125B) and DyConPV (0324166B), and by DFG via Research Unit FOR 2401.

    Abstract: BLASFEO is a dense linear algebra library providing high-performance implementations of BLAS- and LAPACK-like routines for use in embedded optimization and other applications targeting relatively small matrices. BLASFEO defines an API which uses a packed matrix format as its native format. This format is analogous to the internal memory buffers of optimized BLAS, but it is exposed to the user and it removes the packing cost from the routine call. For matrices fitting in cache, BLASFEO outperforms optimized BLAS implementations, both open-source and proprietary. This paper investigates the addition of a standard BLAS API to the BLASFEO framework, and proposes an implementation switching between two or more algorithms optimized for different matrix sizes. Thanks to the modular assembly framework in BLASFEO, tailored linear algebra kernels with mixed column- and panel-major arguments are easily developed. This BLAS API has lower performance than the BLASFEO API, but it nonetheless outperforms optimized BLAS and especially LAPACK libraries for matrices fitting in cache. Therefore, it can boost a wide range of applications where standard BLAS and LAPACK libraries are employed and the matrix size is moderate. In particular, this paper investigates the benefits in scientific programming languages such as Octave, SciPy and Julia.
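The packed-format workflow the abstract describes can be pictured with the following hedged sketch of BLASFEO's native (blasfeo_dmat) API. The header and function names follow the BLASFEO documentation as we understand it and should be checked against the installed version; the point is that packing happens once, outside the compute call.

```c
#include <blasfeo.h>  /* umbrella header in recent releases (assumption) */

/* Sketch: pack column-major A, B once into BLASFEO's native panel-major
 * format, run GEMM on the packed operands, unpack the result. Repeated
 * calls on the packed matrices avoid the per-call packing cost that a
 * standard BLAS routine would pay internally. */
void blasfeo_gemm_example(int m, int n, int k,
                          double *A, double *B, double *C)
{
    struct blasfeo_dmat sA, sB, sC;
    blasfeo_allocate_dmat(m, k, &sA);
    blasfeo_allocate_dmat(k, n, &sB);
    blasfeo_allocate_dmat(m, n, &sC);

    /* one-time packing from column-major into the native format */
    blasfeo_pack_dmat(m, k, A, m, &sA, 0, 0);
    blasfeo_pack_dmat(k, n, B, k, &sB, 0, 0);

    /* D = beta*C + alpha*A*B on packed operands (D aliases C, beta = 0) */
    blasfeo_dgemm_nn(m, n, k, 1.0, &sA, 0, 0, &sB, 0, 0,
                     0.0, &sC, 0, 0, &sC, 0, 0);

    blasfeo_unpack_dmat(m, n, &sC, 0, 0, C, m);

    blasfeo_free_dmat(&sA);
    blasfeo_free_dmat(&sB);
    blasfeo_free_dmat(&sC);
}
```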
  • Accelerating the “Motifs” in Machine Learning on Modern Processors
    Accelerating the "Motifs" in Machine Learning on Modern Processors. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical and Computer Engineering. Jiyuan Zhang, B.S., Electrical Engineering, Harbin Institute of Technology. Carnegie Mellon University, Pittsburgh, PA, December 2020. © Jiyuan Zhang, 2020. All Rights Reserved.

    Acknowledgments: First and foremost, I would like to thank my advisor, Franz. Thank you for being a supportive and understanding advisor. You encouraged me to pursue projects I enjoyed and guided me to be the researcher that I am today. When I was stuck on a difficult problem, you are always there to offer help and stand side by side with me to figure out solutions. No matter what the difficulties are, from research to personal life issues, you can always put yourself in our shoes and provide us your unreserved support. Your encouragement and sense of humor have been a great help to guide me through the difficult times. You are the key enabler for me to accomplish the PhD journey, as well as making my PhD journey less suffering and more joyful. Next, I would like to thank the members of my thesis committee – Dr. Michael Garland, Dr. Phil Gibbons, and Dr. Tze Meng Low. Thank you for taking the time to be my thesis committee. Your questions and suggestions have greatly helped shape this work, and your insights have helped me understand the limitations that I would have overlooked in the original draft. I am grateful for the advice and suggestions I have received from you to improve this dissertation.
  • Introduction to Arm for Network Stack Developers
    Introduction to Arm for network stack developers. Pavel Shamis/Pasha, Principal Research Engineer. MVAPICH User Group 2017, Columbus, OH. © 2017 Arm Limited. Outline: Arm overview; HPC software stack; porting on Arm; evaluation.

    Arm overview. Arm is the world's leading semiconductor intellectual property supplier. We license to over 350 partners, are present in 95% of smart phones, 80% of digital cameras, 35% of all electronic devices, and a total of 60 billion Arm cores have been shipped since 1990. Our CPU business model: license technology to partners, who use it to create their own system-on-chip (SoC) products. We may license an instruction set architecture (ISA) such as "ARMv8-A" or a specific implementation, such as "Cortex-A72" – and our IP extends beyond the CPU. Partners who license an ISA can create their own implementation, as long as it passes the compliance tests.

    A partnership business model – a business model that shares success: everyone in the value chain benefits, with long-term sustainability. Arm licenses technology to partners (semiconductor companies and IP providers) for an upfront license fee that covers the development cost; partners develop chips, and ongoing royalties are typically based on a percentage of the chip price. Design once and reuse is fundamental: it spreads the cost amongst many partners, lets technology be reused across multiple applications, and creates a market for the ecosystem to target – re-use is also fundamental to the ecosystem.
  • Algorithms and Optimization Techniques for High-Performance Matrix-Matrix Multiplications of Very Small Matrices
    Algorithms and Optimization Techniques for High-Performance Matrix-Matrix Multiplications of Very Small Matrices. I. Masliah, A. Abdelfattah, A. Haidar, S. Tomov, M. Baboulin, J. Falcou, J. Dongarra – Innovative Computing Laboratory, University of Tennessee, Knoxville, TN, USA; University of Paris-Sud, France; Inria Bordeaux, France; University of Manchester, Manchester, UK.

    Abstract: Expressing scientific computations in terms of BLAS, and in particular the general dense matrix-matrix multiplication (GEMM), is of fundamental importance for obtaining high performance portability across architectures. However, GEMMs for small matrices of sizes smaller than 32 are not sufficiently optimized in existing libraries. We consider the computation of many small GEMMs and its performance portability for a wide range of computer architectures, including Intel CPUs, ARM, IBM, Intel Xeon Phi, and GPUs. These computations often occur in applications like big data analytics, machine learning, high-order finite element methods (FEM), and others. The GEMMs are grouped together in a single batched routine. For these cases, we present algorithms and their optimization techniques that are specialized for the matrix sizes and architectures of interest. We derive a performance model and show that the new developments can be tuned to obtain performance that is within 90% of the optimal for any of the architectures of interest. For example, on a V100 GPU for square matrices of size 32, we achieve an execution rate of about 1,600 gigaFLOP/s in double-precision arithmetic, which is 95% of the theoretically derived peak for this computation on a V100 GPU. We also show that these results outperform currently available state-of-the-art implementations such as vendor-tuned math libraries, including Intel MKL and NVIDIA CUBLAS, as well as open-source libraries like OpenBLAS and Eigen. (Preprint submitted to Journal of Parallel Computing, September 25, 2018.)
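The batching idea – many independent small GEMMs grouped behind one routine call – can be pictured with the naive sketch below, which simply loops over the batch; the optimized batched routines the abstract compares against instead specialize kernels for the fixed small size and parallelize across the batch. The contiguous layout and the function name are illustrative assumptions.

```c
#include <cblas.h>
#include <stddef.h>

/* Naive "batched" GEMM: C[b] = A[b]*B[b] for a batch of small n x n
 * row-major matrices stored back to back. A reference point only:
 * tuned batched implementations avoid this serial loop. */
static void dgemm_batch_naive(int batch, int n, const double *A,
                              const double *B, double *C)
{
    size_t stride = (size_t)n * n;
    for (int b = 0; b < batch; b++)
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0,
                    A + b * stride, n,
                    B + b * stride, n,
                    0.0,
                    C + b * stride, n);
}
```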
  • BLAS-On-Flash: an Alternative for Training Large ML Models?
    BLAS-on-flash: an alternative for training large ML models? Suhas Jayaram Subramanya (Microsoft Research India), Srajan Garg (IIT Bombay), Harsha Vardhan Simhadri (Microsoft Research India). [email protected], [email protected], [email protected].

    Abstract: Many ML training tasks admit learning algorithms that can be composed with linear algebra. On large datasets, the working set of these algorithms overflows the memory. For such scenarios, we propose a library that supports BLAS and sparseBLAS subroutines on large matrices resident on inexpensive non-volatile memory. We demonstrate that such libraries can achieve near in-memory performance and be used for fast implementations of complex algorithms such as eigen-solvers. We believe that this approach could be a cost-effective alternative to expensive big-data compute systems.

    1 Introduction. Data analysis pipelines, such as those that arise in scientific computing as well as ranking and relevance, rely on learning from datasets that are 100s of GB to a few TB in size. Linear algebra kernels offer plenty of locality, so much so that the bandwidth required for supporting multiprocessor systems can be provided by a PCIe or SATA bus [3, 39]. Further, recent developments in the hardware and software eco-system position non-volatile memory as an inexpensive alternative to DRAM [2, 11, 14, 33]. Hardware technology and interfaces for non-volatile memories have increasingly lower end-to-end latency (a few µs) [18] and higher bandwidth: from 4-8 GT/s in PCIe3.0 to 16 GT/s in PCIe4.0 [31] and 32 GT/s in PCIe5.0. Hardware manufacturers are packaging non-volatile memory with processing units, e.g. the Radeon PRO SSG [1]. These observations point to a cost-effective solution for scaling linear algebra based algorithms to large datasets in many scenarios: use inexpensive PCIe-connected SSDs to store large matrices corresponding to the data and the model, and exploit the locality of linear algebra to develop a library of routines that can operate on these matrices with a limited
  • A Systematic Approach for Obtaining Performance on Matrix-Like Operations Submitted in Partial Fulfillment of the Requirements
    A Systematic Approach for Obtaining Performance on Matrix-Like Operations. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical and Computer Engineering. Richard Michael Veras, B.S., Computer Science & Mathematics, The University of Texas at Austin; M.S., Electrical and Computer Engineering, Carnegie Mellon University. Carnegie Mellon University, Pittsburgh, PA, August 2017. Copyright © 2017 Richard Michael Veras. All rights reserved.

    Acknowledgments: This work would not have been possible without the encouragement and support of those around me. To the members of my thesis committee – Dr. Richard Vuduc, Dr. Keshav Pingali, Dr. James Hoe and my advisor Dr. Franz Franchetti – thank you for taking the time to be part of this process. Your questions and suggestions helped shape this work. In addition, I am grateful for the advice I have received from the four of you throughout my PhD. Franz, thank you for being a supportive and understanding advisor. Your excitement is contagious and is easily the best part of our meetings, especially during the times when I was stuck on a difficult problem. You encouraged me to pursue projects I enjoyed and guided me to be the researcher that I am today. I am incredibly grateful for your understanding and flexibility when it came to my long distance relationship and working remotely. To the Spiral group past and present and A-Level, you are an amazing group and community to be a part of. You are all wonderful friends, and you made Pittsburgh feel like a home. Thom and Tze Meng, whether we were talking shop or sci-fi over beers, our discussions have really shaped the way I think and approach ideas, and that will stick with me for a long time.
  • FlexiBLAS – a Flexible BLAS Library with Runtime Exchangeable Backends
    FlexiBLAS – a flexible BLAS library with runtime exchangeable backends. Martin Köhler, Jens Saak. December 20, 2013.

    Abstract: The BLAS library is one of the central libraries for the implementation of numerical algorithms. It serves as the basis for many other numerical libraries like LAPACK, PLASMA or MAGMA (to mention only the most obvious). Thus a fast BLAS implementation is the key ingredient for efficient applications in this area. However, for debugging or benchmarking purposes it is often necessary to replace the underlying BLAS implementation of an application, e.g. to disable threading or to include debugging symbols. In this paper we present a novel framework that allows one to exchange the BLAS implementation at run-time via an environment variable. Our concept neither requires relinkage nor recompilation of the application. Numerical experiments show that there is no notable overhead introduced by this new approach. For only a very little overhead, the framework naturally extends to a minimal profiling setup that allows one to count the number of calls to the BLAS routines used and measure the time spent therein.

    Contents: 1 Introduction; 2 Implementation Details; 3 Usage; 4 Numerical Test; 5 Conclusions; 6 Acknowledgment. Figure 1: Shared library dependencies of an example application (the application links libblas.so, liblapack.so and libumfpack.so, which in turn depend on libblas.so).

    1 Introduction. The BLAS library [2,3,6] is one of the most important basic software libraries in scientific computing. The Netlib reference implementation defines the standard Fortran interface and its general behavior. Besides this, optimized variants for nearly every computer architecture exist.
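The run-time exchange mechanism this entry describes can be illustrated with a small C program linked against FlexiBLAS. The dgemm_ prototype below is the standard Fortran BLAS symbol; the environment-variable usage in the comment follows the paper's concept, and the backend names shown are illustrative.

```c
#include <stdio.h>

/* Standard Fortran BLAS symbol; at run time FlexiBLAS forwards this
 * call to whichever backend library it has loaded. */
extern void dgemm_(const char *transa, const char *transb,
                   const int *m, const int *n, const int *k,
                   const double *alpha, const double *a, const int *lda,
                   const double *b, const int *ldb,
                   const double *beta, double *c, const int *ldc);

int main(void)
{
    /* 2 x 2 column-major matrices: A = [1 3; 2 4], B = [5 7; 6 8] */
    double A[4] = {1, 2, 3, 4}, B[4] = {5, 6, 7, 8}, C[4] = {0};
    int n = 2;
    double one = 1.0, zero = 0.0;

    dgemm_("N", "N", &n, &n, &n, &one, A, &n, B, &n, &zero, C, &n);
    printf("C(1,1) = %g\n", C[0]); /* 1*5 + 3*6 = 23 */

    /* Built against FlexiBLAS, the backend executing the call above is
     * chosen per run via an environment variable, e.g. (illustrative):
     *   FLEXIBLAS=NETLIB   ./a.out
     *   FLEXIBLAS=OpenBLAS ./a.out
     * with no relinking or recompilation. */
    return 0;
}
```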