A Streamlined Linear Algebra Library for Python


PyArmadillo: a streamlined linear algebra library for Python

Jason Rumengan †‡, Terry Yue Zhuo †§, Conrad Sanderson †*
† Data61/CSIRO, Australia; ‡ Queensland University of Technology, Australia; § University of New South Wales, Australia; * Griffith University, Australia

arXiv:2104.11120v2 [cs.MS] 28 Apr 2021

Abstract

While the Python language has extension frameworks such as NumPy and SciPy for providing functionality involving numerical arrays and scientific computing, these frameworks can be cumbersome to use in contrast to the well-established Matlab language. The issues include: a focus on operations involving multi-dimensional arrays rather than matrices, nested organisation of functions which increases code verbosity (leading to reduced user productivity), and syntax that significantly differs from Matlab. To address these shortcomings, we propose PyArmadillo, a streamlined linear algebra library for Python, with an emphasis on ease of use. PyArmadillo aims to provide a high-level syntax and functionality deliberately similar to Matlab/Octave, allowing mathematical operations to be expressed in a familiar and natural manner. Objects for matrices and cubes are provided, as well as over 200 associated functions for manipulating data stored in the objects. Integer, floating point and complex numbers are supported. Various matrix factorisations are provided through integration with LAPACK, or one of its high performance drop-in replacements such as Intel MKL or OpenBLAS. PyArmadillo is open-source software, distributed under the Apache 2.0 license; it can be obtained at https://pyarma.sourceforge.io or via the Python Package Index in precompiled form.

Background

Matlab is often considered to be the de facto standard for experimentation and prototyping involving numerical linear algebra, particularly in educational settings [9]. This stems from its ease of use, allowing mathematical expressions to be written in a concise and natural manner, especially in comparison to directly using low-level libraries such as LAPACK [2]. However, Matlab is proprietary and has license-based restrictions on use. This in turn may impose significant financial costs, depending on the maximum number of allowed users, the application area (e.g. industry, academia), and/or proprietary add-on toolboxes. Furthermore, due to its focus on prototyping and linear algebra, Matlab is less suitable for the development of general programs and/or integration into larger systems.

Octave, an open-source analogue of Matlab, ameliorates the licensing costs and restrictions associated with Matlab. However, Octave still shares the remaining downsides and may not be fully compatible with Matlab; its associated add-on toolboxes tend to be less developed than their proprietary counterparts.

Recently the Python language has gained traction for scientific computing through add-on frameworks such as NumPy [5] and SciPy [13]. In contrast to Matlab, Python is suitable for general programming and is available under a permissive open-source license. However, for scientists and engineers looking to transition from Matlab to Python, the programming interface provided by the NumPy and SciPy frameworks tends to be verbose and cumbersome to use in comparison to Matlab.
From a linear algebra point of view, these frameworks have issues which include: (i) a focus on operations involving multi-dimensional arrays rather than matrices, (ii) nested organisation of functions which increases code verbosity (which in turn can lead to reduced user productivity), and (iii) syntax that notably differs from Matlab.

With the aim of facilitating the transition from Matlab to the Python ecosystem, we have devised and implemented the PyArmadillo library, which addresses the above issues. Specifically, PyArmadillo primarily focuses on linear algebra (operations on matrices), has all functions accessible in one flat structure, and provides an API that closely follows Matlab wherever possible.

Functionality

PyArmadillo provides matrix objects for several distinct element types: integers, single- and double-precision floating point numbers, as well as complex numbers. In addition to matrices, PyArmadillo also has support for cubes (3-dimensional arrays), where each cube can be treated as an ordered set of matrices. PyArmadillo matrices and cubes are convertible to/from NumPy arrays, allowing users to tap into the wider Python data science ecosystem, including plotting tools such as Matplotlib [6].

Over 200 functions are provided for manipulating data stored in the objects, covering the following areas: fundamental arithmetic operations, contiguous and non-contiguous submatrix views, diagonal views, element-wise functions, scalar/vector/matrix valued functions of matrices, statistics, signal processing, storage of matrices in files, matrix decompositions/factorisations, matrix inverses, and equation solvers.

An overview of the available functionality in PyArmadillo (as of version 0.500) is given in Tables 1 to 8. Table 1 briefly describes the member functions and variables of the main matrix class; Table 2 lists the main subset of overloaded Python operators; Table 3 outlines matrix decompositions and equation solvers; Table 4 overviews functions for generating vectors with various sequences; Table 5 lists the main forms of general functions of matrices; Table 6 lists various element-wise functions of matrices; Table 7 summarises the set of functions and classes focused on statistics; Table 8 lists functions specific to signal and image processing. Lastly, Table 9 provides examples of Matlab/Octave syntax and conceptually corresponding PyArmadillo syntax. See the online documentation at https://pyarma.sourceforge.io/docs.html for more details. A brief usage sketch follows below.
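As a brief illustration of the flat function structure and NumPy interoperability described above, the sketch below creates matrices, solves a linear system, and constructs a PyArmadillo matrix from a NumPy array. This is a minimal sketch under stated assumptions: solve(A, b) is taken to mirror the Matlab-style solver provided by the underlying Armadillo library, and the pa.mat(numpy_array) conversion form is assumed rather than confirmed; see the online documentation for the exact interfaces.

    # Minimal PyArmadillo sketch; pa.solve(A, b) and the pa.mat(numpy_array)
    # conversion are assumptions based on the functionality described above.
    import numpy as np
    import pyarma as pa

    A = pa.randu(4, 4)      # 4x4 matrix with uniform random values in [0,1]
    b = pa.ones(4, 1)       # 4x1 column vector of ones
    x = pa.solve(A, b)      # assumed Matlab-style solver for A*x = b (via LAPACK)
    x.print()

    N = np.arange(6.0).reshape(2, 3)
    M = pa.mat(N)           # assumed: construct a PyArmadillo matrix from a NumPy array
    P = M * M.t()           # 2x2 result; * is matrix multiplication, as in Matlab
    P.print()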
Figure 1 shows a short example program that demonstrates the streamlined nature of PyArmadillo syntax and draws comparison with NumPy syntax. To generate a matrix with all elements set to one, PyArmadillo simply requires ones(m, n), where m and n indicate the number of rows and columns, respectively. In contrast, NumPy requires the use of double brackets in ones((m, n)), which is non-intuitive and notably differs from Matlab. To generate a matrix filled with values from a uniform random distribution, PyArmadillo has the randu(m, n) function. In comparison, NumPy uses np.random.rand(m, n), which shows the nested organisation of functions, and reveals an inconsistency in the specification of the matrix size: use of single brackets versus double brackets. To multiply two matrices, PyArmadillo uses the standard * operator, as per Matlab. In contrast, NumPy uses the non-intuitive @ character for matrix multiplication, and uses the * operator for element-wise multiplication, which in turn demonstrates a focus on arrays rather than matrices.

    # PyArmadillo example          # corresponding NumPy code
    import pyarma as pa            import numpy as np
    A = pa.ones(4, 5)              A = np.ones((4, 5))
    B = pa.randu(4, 5)             B = np.random.rand(4, 5)
    C = A * B.t()                  C = A @ B.T
    C.print()                      print(C)

Figure 1: A short example program using PyArmadillo, and corresponding NumPy implementation.

Implementation

PyArmadillo relies on Armadillo [10, 11] for the underlying C++ implementation of matrix objects and associated functions, as well as on pybind11 [8] for interfacing C++ and Python. Due to its expressiveness and relatively straightforward use, pybind11 was selected over other interfacing approaches such as Boost.Python [1] and manually writing C++ extensions for Python. Armadillo was selected due to its API having close similarity to Matlab, as well as being battle-tested in the context of RcppArmadillo, a widely used bridge between the R language and C++ [4].

In turn, Armadillo interfaces with low-level routines in BLAS and LAPACK [2], where BLAS is used for matrix multiplication, and LAPACK is used for various matrix decompositions/factorisations and equation solvers. As the low-level routines in BLAS and LAPACK are considered a de facto standard for numerical linear algebra, it is possible to use high performance drop-in replacements such as Intel MKL [7] and OpenBLAS [14].

PyArmadillo is open-source software available under the permissive Apache License 2.0 [3], making it useful in both open-source and proprietary (closed-source) contexts [12].

Table 1: Subset of member functions and variables of the mat class, the main matrix object in PyArmadillo.

Function/Variable             Description
.n_rows                       number of rows (read only)
.n_cols                       number of columns (read only)
.n_elem                       total number of elements (read only)
[i]                           access the i-th element, assuming a column-by-column layout
[r, c]                        access the element at row r and column c
.in_range(i)                  test whether the i-th element can be accessed
.in_range(r, c)               test whether the element at row r and column c can be accessed
.reset()                      set the number of elements to zero
.copy_size(A)                 set the size to be the same as matrix A
.set_size(n_rows, n_cols)     change size to specified dimensions, without preserving data (fast)
.reshape(n_rows, n_cols)      change size to specified dimensions, with elements copied column-wise (slow)
.resize(n_rows, n_cols)       change size to specified dimensions, while preserving elements and their layout (slow)
.ones(n_rows, n_cols)         set all elements to one, optionally first resizing to specified dimensions
.zeros(n_rows, n_cols)        as above, but set all elements to zero
.randu(n_rows, n_cols)        as above, but set elements to uniformly distributed random values in [0,1] interval
.randn(n_rows, n_cols)        as above, but use a Gaussian/normal distribution with µ = 0 and σ = 1
.fill(k)                      set all elements to be equal to k
.for_each(lambda_fn)          apply the given lambda function to each element
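To make Table 1 concrete, the following short sketch exercises several of the listed members. It is a sketch only: the pa.mat(3, 4) constructor is assumed to mirror Armadillo's mat(n_rows, n_cols), and element assignment through [r, c] is assumed to work analogously to the element access listed in the table.

    # Sketch exercising members from Table 1; the mat(3, 4) constructor and
    # element assignment via [r, c] are assumptions, the rest follows the table.
    import pyarma as pa

    A = pa.mat(3, 4)                        # assumed: construct a 3x4 matrix
    A.fill(7.0)                             # set all elements to 7
    print(A.n_rows, A.n_cols, A.n_elem)     # 3 4 12

    A[0, 2] = 1.5                           # assumed: assign element at row 0, column 2
    print(A.in_range(2, 3))                 # True: valid indices for a 3x4 matrix
    print(A.in_range(3, 0))                 # False: row 3 is out of range

    A.set_size(2, 2)                        # resize without preserving data
    A.zeros(5, 5)                           # resize to 5x5 and set all elements to zero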