Restructuring the Tridiagonal and Bidiagonal QR Algorithms for Performance


FIELD G. VAN ZEE and ROBERT A. VAN DE GEIJN, The University of Texas at Austin
GREGORIO QUINTANA-ORTÍ, Universitat Jaume I

We show how both the tridiagonal and bidiagonal QR algorithms can be restructured so that they become rich in operations that can achieve near-peak performance on a modern processor. The key is a novel, cache-friendly algorithm for applying multiple sets of Givens rotations to the eigenvector/singular vector matrix. This algorithm is then implemented with optimizations that: (1) leverage vector instruction units to increase floating-point throughput, and (2) fuse multiple rotations to decrease the total number of memory operations. We demonstrate the merits of these new QR algorithms for computing the Hermitian eigenvalue decomposition (EVD) and singular value decomposition (SVD) of dense matrices when all eigenvectors/singular vectors are computed. The approach yields vastly improved performance relative to traditional QR algorithms for these problems and is competitive with two commonly used alternatives—Cuppen’s Divide-and-Conquer algorithm and the method of Multiple Relatively Robust Representations—while inheriting the more modest O(n) workspace requirements of the original QR algorithms. Since the computations performed by the restructured algorithms remain essentially identical to those performed by the original methods, robust numerical properties are preserved.

Categories and Subject Descriptors: G.4 [Mathematical Software]: Efficiency

General Terms: Algorithms, Performance

Additional Key Words and Phrases: Eigenvalues, singular values, tridiagonal, bidiagonal, EVD, SVD, QR algorithm, Givens rotations, linear algebra, libraries, high performance

ACM Reference Format: Van Zee, F. G., van de Geijn, R. A., and Quintana-Ortí, G. 2014. Restructuring the tridiagonal and bidiagonal QR algorithms for performance. ACM Trans. Math. Softw. 40, 3, Article 18 (April 2014), 34 pages. DOI: http://dx.doi.org/10.1145/2535371

1. INTRODUCTION

The tridiagonal (and/or bidiagonal) QR algorithm is taught in a typical graduate-level numerical linear algebra course, and despite being among the most accurate¹ methods for performing eigenvalue and singular value decompositions (EVD and SVD, respectively), it is not used much in practice because its performance is not competitive [Dhillon and Parlett 2003; Golub and Loan 1996; Stewart 2001; Watkins 1982].

¹Notable algorithms which exceed the accuracy of the QR algorithm include the dqds algorithm (a variant of the QR algorithm) [Fernando and Parlett 1994; Parlett and Marques 1999] and the Jacobi-SVD algorithm by Drmač and Veselić [2008a, 2008b].

This research was partially sponsored by the UT Austin-Portugal Colab program, a grant from Microsoft, and grants from the National Science Foundation (awards OCI-0850750, CCF-0917167, and OCI-1148125). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).

Authors’ addresses: F. G. Van Zee (corresponding author) and R. A. van de Geijn, Institute for Computational Engineering and Sciences, Department of Computer Science, The University of Texas at Austin, Austin, TX 78712; email: fi[email protected]; G. Quintana-Ortí, Departamento de Ingeniería y Ciencia de Computadores, Universitat Jaume I, Campus Riu Sec, 12.071, Castellón, Spain.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

© 2014 ACM 0098-3500/2014/04-ART18 $15.00. DOI: http://dx.doi.org/10.1145/2535371

The reason for this is twofold. First, classic QR algorithm implementations, such as those in LAPACK, cast most of their computation (the application of Givens rotations) in terms of a routine that is absent from the BLAS, and thus is typically not available in optimized form. Second, even if such an optimized implementation existed, it would not matter, because the QR algorithm is currently structured to apply Givens rotations via repeated instances of O(n²) computation on O(n²) data, effectively making it rich in level-2 BLAS-like operations, which inherently cannot achieve high performance because there is little opportunity for reuse of cached data.

Many in the numerical linear algebra community have long speculated that the QR algorithm’s performance could be improved by saving up many sets of Givens rotations before applying them to the matrix in which eigenvectors or singular vectors are being accumulated. In this article we show that, when computing all eigenvectors of a dense Hermitian matrix or all singular vectors of a dense general matrix, dramatic improvements in performance can indeed be achieved. This work makes a number of contributions to this subject.

— It describes how the traditional QR algorithm can be restructured so that computation is cast in terms of an operation that applies many sets of Givens rotations to the matrix in which the eigen-/singular vectors are accumulated. This restructuring preserves a key feature of the original QR algorithm, namely that the approach requires only linear (O(n)) workspace. An optional optimization to the restructured algorithm that requires O(n²) workspace is also discussed and tested.

— It proposes an algorithm for applying many sets of Givens rotations that, in theory, exhibits greatly improved reuse of data in the cache. It then shows that an implementation of this algorithm can achieve near-peak performance by: (1) utilizing vector instruction units to increase floating-point operation throughput, and (2) fusing multiple rotations so that data can be reused in-register, which decreases costly memory operations. (A minimal sketch of the underlying rotation-application kernel appears after this list.)

— It exposes and leverages the fact that the lower computational costs of both the method of Multiple Relatively Robust Representations (MRRR) [Dhillon and Parlett 2004; Dhillon et al. 2006] and Cuppen’s Divide-and-Conquer method (D&C) [Cuppen 1980] are partially offset by an O(n³) difference in cost between the former methods’ back-transformations and the corresponding step in the QR algorithm.

— It demonstrates performance of EVD via the QR algorithm that is competitive with that of D&C- and MRRR-based EVD, and QR-based SVD performance that is comparable to D&C-based SVD.

— It makes the resulting implementations available as part of the open-source libflame library.
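To make the rotation-application operation concrete, the following is a minimal NumPy sketch, not the paper's optimized implementation: it applies one set of n−1 Givens rotations to the columns of an accumulated eigenvector matrix Q. The function name `apply_givens_set` and the arrays `cs` and `sn` (cosines and sines produced by one QR-algorithm sweep) are illustrative names, not identifiers from the paper or from LAPACK.

```python
import numpy as np

def apply_givens_set(Q, cs, sn):
    """Apply one set of Givens rotations to the columns of Q.

    Rotation k acts on columns k and k+1 with cosine cs[k] and sine sn[k]
    (one common sign convention; conventions vary).  Each rotation reads
    and writes two full columns, so one set performs O(m*n) flops on
    O(m*n) data -- the level-2-like behavior the restructured algorithm
    avoids by saving up many sets and applying them together.
    """
    m, n = Q.shape
    for k in range(n - 1):
        c, s = cs[k], sn[k]
        left = Q[:, k].copy()      # copy so the update uses the old column
        right = Q[:, k + 1]
        Q[:, k] = c * left + s * right
        Q[:, k + 1] = -s * left + c * right
    return Q
```

Applying many saved sets with this naive loop would simply repeat the memory traffic; the cache-friendly algorithm described in the paper instead applies multiple sets of rotations to blocks of Q so that each block is reused while it resides in cache, with several rotations fused so that intermediate values stay in registers.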
The article primarily focuses on the complex case; the mathematics then trivially simplify to the real case.

We consider these results significant in part because we place a premium on simplicity and reliability. The restructured QR algorithm presented in this work is simple, gives performance that is almost as good as that of more intricate algorithms (such as D&C and MRRR), and does so using only O(n) workspace, and without the worry of what might happen to numerical accuracy in pathological cases such as tightly clustered eigen-/singular values.

It should be emphasized that the improved performance we report is not so pronounced that the QR algorithm becomes competitive with D&C and MRRR when performing standalone tridiagonal EVD or bidiagonal SVD (that is, when the input matrix is already reduced to condensed form). Rather, we show that, in the context of the dense decompositions, which include two other stages of O(n³) computation, the restructured QR algorithm provides enough speedup over the more traditional method to facilitate competitive overall performance.

2. COMPUTING THE SPECTRAL DECOMPOSITION OF A HERMITIAN MATRIX

Given a Hermitian matrix A ∈ C^{n×n}, its eigenvalue decomposition (EVD) is given by A = Q D Q^H, where Q ∈ C^{n×n} is unitary (Q^H Q = I) and D ∈ R^{n×n} is diagonal. The eigenvalues of matrix A can then be found on the diagonal of D while the corresponding eigenvectors are the columns of Q. The standard approach to computing the EVD proceeds in three steps [Stewart 2001]: reduce matrix A to real tridiagonal form T via unitary similarity transformations, A = Q_A T Q_A^H; compute the EVD of T, T = Q_T D Q_T^H; and back-transform the eigenvectors of T, Q = Q_A Q_T, so that A = Q D Q^H. Let us discuss these in more detail. Note that we will use the general term “workspace” to refer to any significant space needed beyond the n × n space that holds the input matrix A and the n-length space that holds the output eigenvalues (i.e., the diagonal of D).

2.1. Reduction to Real Tridiagonal Form

The reduction to tridiagonal form proceeds as the computation and application of a sequence of Householder transformations. When the transformations are defined as reflectors [Golub and Loan 1996; Stewart 2001; Van Zee et al. 2012], the tridiagonal reduction takes the form H_{n−2} ··· H_1 H_0 A H_0 H_1 ··· H_{n−2} = Q_A^H A Q_A = T, a real-valued, tridiagonal matrix.²

The cost of the reduction to tridiagonal form is (4/3)n³ floating-point operations (flops) if A is real and 4 × (4/3)n³ flops if it is complex valued. About half of these computations are in symmetric (or Hermitian) matrix-vector multiplications (a level-2 BLAS operation [Dongarra et al. 1988]), which are inherently slow since they perform O(n²) computations on O(n²) data.
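As a concrete illustration of the three-step approach (reduce to tridiagonal form, solve the tridiagonal EVD, back-transform), the following is a minimal NumPy/SciPy sketch for a real symmetric matrix, not the paper's implementation. It uses SciPy's general Hessenberg reduction as a stand-in for a dedicated symmetric tridiagonalization such as LAPACK's dsytrd; for symmetric input the Hessenberg form is tridiagonal (up to roundoff).

```python
import numpy as np
from scipy.linalg import hessenberg, eigh_tridiagonal

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B + B.T                          # real symmetric test matrix

# Step 1: reduce A to tridiagonal form, A = Q_A T Q_A^T.
T, QA = hessenberg(A, calc_q=True)   # tridiagonal since A is symmetric
d = np.diag(T)                       # main diagonal of T
e = np.diag(T, -1)                   # subdiagonal of T

# Step 2: EVD of the tridiagonal matrix, T = Q_T D Q_T^T.
w, QT = eigh_tridiagonal(d, e)

# Step 3: back-transform the eigenvectors, Q = Q_A Q_T, so A = Q D Q^T.
Q = QA @ QT
assert np.allclose(Q @ np.diag(w) @ Q.T, A)   # A = Q D Q^T
assert np.allclose(Q.T @ Q, np.eye(6))        # Q is orthogonal
```

In the complex Hermitian case, a Hermitian tridiagonalization (LAPACK's zhetrd) would produce a real tridiagonal T directly; the sketch restricts itself to the real case so that the general Hessenberg stand-in remains valid.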