AUGEM: Automatically Generate High Performance Dense Linear Algebra Kernels on x86 CPUs
Pages: 16 · File type: PDF · Size: 1020 KB
Recommended publications
- Linpack Evaluation on a Supercomputer with Heterogeneous Accelerators
Toshio Endo, Akira Nukada, Satoshi Matsuoka, and Naoya Maruyama — Graduate School of Information Science and Engineering / Global Scientific Information and Computing Center, Tokyo Institute of Technology (Matsuoka also National Institute of Informatics), Tokyo, Japan.

Abstract: We report Linpack benchmark results on the TSUBAME supercomputer, a large-scale heterogeneous system equipped with NVIDIA Tesla GPUs and ClearSpeed SIMD accelerators. Using all of its 10,480 Opteron cores, 640 Xeon cores, 648 ClearSpeed accelerators, and 624 NVIDIA Tesla GPUs, we achieved 87.01 TFlops, the third-best result for a heterogeneous system in the world. This paper describes the careful tuning and load-balancing method required to achieve this performance. On the other hand, since the peak speed is 163 TFlops, the efficiency is 53%, which is lower than that of other systems. Unlike Roadrunner and similar machines, TSUBAME includes two types of accelerators, the result of incremental upgrades — a common situation in commodity CPU clusters, which may accumulate processors of different speeds over time. The paper presents the Linpack implementation and evaluation results on this configuration.
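The load-balancing problem this paper addresses — dividing work among devices with very different speeds — can be illustrated with a small sketch. The proportional-split heuristic and the device names and speeds below are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of static load balancing across heterogeneous devices:
# block columns are assigned in proportion to each device's measured speed.
# Device names and GFlop/s figures are illustrative, not from the paper.

def balance_blocks(num_blocks, speeds):
    """Split num_blocks among devices proportionally to their speeds,
    handing the remainder to the devices with the largest fractional share."""
    total = sum(speeds.values())
    shares = {d: num_blocks * s / total for d, s in speeds.items()}
    counts = {d: int(share) for d, share in shares.items()}
    leftover = num_blocks - sum(counts.values())
    # Distribute remaining blocks by largest fractional part.
    for d in sorted(shares, key=lambda d: shares[d] - counts[d], reverse=True)[:leftover]:
        counts[d] += 1
    return counts

if __name__ == "__main__":
    speeds = {"opteron_cores": 10.0, "tesla_gpu": 80.0, "clearspeed": 30.0}  # illustrative
    print(balance_blocks(128, speeds))  # {'opteron_cores': 11, 'tesla_gpu': 85, 'clearspeed': 32}
```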
- Elementary Functions: Towards Automatically Generated, Efficient, and Vectorizable Implementations
Hugues de Lassus Saint-Geniès. Elementary functions: towards automatically generated, efficient, and vectorizable implementations. PhD thesis, Université de Perpignan Via Domitia, 2018. NNT: 2018PERP0010. HAL Id: tel-01841424, https://tel.archives-ouvertes.fr/tel-01841424 (submitted 17 Jul 2018).
Prepared within doctoral school 305 – Énergie et Environnement and the research unit DALI – LIRMM – CNRS UMR 5506, supervised by David Defour and co-supervised by Guillaume Revy. Jury: Florent de Dinechin (INSA Lyon, reviewer), Fabienne Jézéquel (Université Paris 2, reviewer), Marc Daumas (UPVD), Lionel Lacassagne (Université Paris 6), Daniel Menard (INSA Rennes), and Éric Petit (Intel).
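The kind of implementation this thesis targets can be suggested with a minimal sketch: a branch-free Horner evaluation of a polynomial approximation to exp, which NumPy maps onto vector fused multiply-adds. The degree-7 Taylor coefficients are a stand-in; a generator of the kind the thesis describes would emit minimax coefficients with certified error bounds.

```python
import math
import numpy as np

# Degree-7 Taylor coefficients of exp(x) around 0, highest degree first.
# A real generator would emit minimax coefficients instead (assumption: this
# simple approximation is only meant to show the vectorizable structure).
COEFFS = [1.0 / math.factorial(k) for k in range(7, -1, -1)]

def exp_poly(x):
    """Branch-free Horner evaluation; NumPy broadcasts it across vectors,
    so each loop iteration maps onto one SIMD fused multiply-add."""
    acc = np.full_like(x, COEFFS[0])
    for c in COEFFS[1:]:
        acc = acc * x + c        # one FMA per coefficient
    return acc

x = np.linspace(-0.5, 0.5, 1001)
print(np.max(np.abs(exp_poly(x) - np.exp(x))))   # worst-case error on the interval, ~1e-7
```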
- Andre Heidekrueger
Slide deck (Hot Chips, August 2010): "AMD Heterogeneous Computing — x86 in Development: AMD's new CPU and accelerator designs as building blocks for heterogeneous computing with GPU accelerators and the latest x86 platform innovations."

Server industry trends cited in the deck: China has 265M online gamers; seismic datasets typically exceed a terabyte; the performance of the fastest supercomputer grew 500x in the last decade; the top eight systems on the Green500 list use accelerators, and accelerator-based servers on that list are 3x as energy-efficient as those without; 800 images are uploaded to Facebook every second. Top500.org performance projections ask whether the current trajectory can reach 1 EFlops: it might, but too late for major government programs leading to 2018, and system power in a traditional x86 architecture would be unmanageable.

The deck frames three eras of processor performance: the single-core era (enabled by Moore's law, voltage scaling, and micro-architecture; constrained by power and complexity), the multi-core era (enabled by Moore's law, the desire for throughput, and 20 years of SMP architecture; constrained by power, parallel software availability, and scalability), and the heterogeneous-systems era (enabled by Moore's law, abundant data parallelism, and power-efficient GPUs; temporarily constrained by programming models and communication overheads). It closes with the Fusion architecture and the benefits of heterogeneous computing: the x86 CPU owns the software world, while the GPU is optimized for modern throughput workloads.
- Hierarchical Roofline Analysis for GPUs: Accelerating Performance Optimization for the NERSC-9 Perlmutter System
Charlene Yang and Thorsten Kurth (National Energy Research Scientific Computing Center) and Samuel Williams (Computational Research Division), Lawrence Berkeley National Laboratory, Berkeley, CA, USA.

Abstract: The Roofline performance model provides an intuitive and insightful approach to identifying performance bottlenecks and guiding performance optimization. In preparation for the next-generation supercomputer Perlmutter at NERSC, this paper presents a methodology to construct a hierarchical Roofline on NVIDIA GPUs and extends it to support reduced precision and Tensor Cores. The hierarchical Roofline incorporates L1, L2, device-memory, and system-memory bandwidths into a single figure, offering more profound insight into performance analysis than the traditional DRAM-only Roofline. The methodology is applied to three proxy applications — GPP from BerkeleyGW, HPGMG from AMReX, and conv2d from TensorFlow — demonstrating its ability to expose various performance bottlenecks on NVIDIA GPUs and motivate code optimizations.

Attainable performance (GFLOP/s) is bound by

    GFLOP/s ≤ min( Peak GFLOP/s, Peak GB/s × Arithmetic Intensity )    (1)

which produces the traditional Roofline formulation when plotted on a log-log plot. The Roofline model was previously expanded to support the full memory hierarchy [2], [3] by adding bandwidth "ceilings"; similarly, ceilings beneath the Roofline can represent bottlenecks arising from lack of vectorization or failure to exploit fused multiply-add (FMA) instructions.
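Equation (1) is easy to put into code. The sketch below evaluates the Roofline bound for a few arithmetic intensities; the peak numbers are illustrative placeholders, not measured Perlmutter or GPU figures. A hierarchical Roofline simply repeats this bound once per memory level (L1, L2, device memory, system memory), each with its own bandwidth.

```python
import numpy as np

def roofline_bound(arithmetic_intensity, peak_gflops, peak_gbps):
    """Attainable GFLOP/s per Eq. (1): min(compute roof, bandwidth roof)."""
    return np.minimum(peak_gflops, peak_gbps * arithmetic_intensity)

# Illustrative ceilings (assumed, not measured): an accelerator with
# 7,000 GFLOP/s FP64 peak and 900 GB/s device-memory bandwidth.
PEAK_GFLOPS, PEAK_GBPS = 7000.0, 900.0

for ai in [0.25, 1.0, 4.0, 16.0, 64.0]:    # FLOPs per byte moved
    bound = roofline_bound(ai, PEAK_GFLOPS, PEAK_GBPS)
    regime = "memory-bound" if bound < PEAK_GFLOPS else "compute-bound"
    print(f"AI={ai:6.2f} FLOP/byte -> {bound:8.1f} GFLOP/s ({regime})")
```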
- SuperMatrix: A Multithreaded Runtime Scheduling System for Algorithms-by-Blocks
Ernie Chan, Field G. Van Zee, and Robert van de Geijn (Department of Computer Sciences, The University of Texas at Austin), Paolo Bientinesi (Department of Computer Science, Duke University), Enrique S. Quintana-Ortí and Gregorio Quintana-Ortí (Departamento de Ingeniería y Ciencia de Computadores, Universidad Jaume I, Castellón, Spain).

Abstract: This paper describes SuperMatrix, a runtime system that parallelizes matrix operations for SMP and/or multi-core architectures. The system demonstrates how code described at a high level of abstraction can achieve high performance on such architectures while completely hiding the parallelism from the library programmer. The key insight entails viewing matrices hierarchically, consisting of blocks that serve as units of data, where operations over those blocks are treated as units of computation. The implementation transparently enqueues the required operations, internally tracking dependencies, and then executes the operations out of order — mirroring the out-of-order execution adopted by many computer microarchitectures [32]. For dense linear algebra, researchers in the early days of distributed-memory computing recognized that "compute-ahead" techniques could improve parallelism, but the coding complexity proved too great for wide acceptance; such optimizations are still absent from packages like ScaLAPACK [12] and PLAPACK [34], although there has recently been a flurry of interest in reviving compute-ahead [1, 25, 31].
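The core mechanism — enqueue block operations, infer dependencies from which blocks each task reads and writes, then execute out of order — can be sketched in a few lines. This toy scheduler is not the SuperMatrix API; the task names echo the blocked Cholesky factorization often used to illustrate algorithms-by-blocks, and it runs ready tasks sequentially where a real runtime would dispatch them to worker threads.

```python
# Toy sketch of dependency-tracked execution over blocks (assumption: this
# simplified model tracks only flow/output dependencies via the last writer).

class BlockScheduler:
    def __init__(self):
        self.tasks = []          # list of (name, fn, deps)
        self.last_writer = {}    # block id -> index of last task writing it

    def enqueue(self, name, fn, reads, writes):
        deps = set()
        for blk in set(reads) | set(writes):
            if blk in self.last_writer:      # must wait for the block's last writer
                deps.add(self.last_writer[blk])
        idx = len(self.tasks)
        for blk in writes:
            self.last_writer[blk] = idx
        self.tasks.append((name, fn, deps))
        return idx

    def run(self):
        done, pending = set(), list(range(len(self.tasks)))
        while pending:
            ready = [i for i in pending if self.tasks[i][2] <= done]
            for i in ready:                  # any ready task may fire, in any order
                name, fn, _ = self.tasks[i]
                print("executing", name)
                fn()
                done.add(i)
                pending.remove(i)

sched = BlockScheduler()
sched.enqueue("CHOL(A00)", lambda: None, reads=[],      writes=["A00"])
sched.enqueue("TRSM(A10)", lambda: None, reads=["A00"], writes=["A10"])
sched.enqueue("SYRK(A11)", lambda: None, reads=["A10"], writes=["A11"])
sched.enqueue("CHOL(A11)", lambda: None, reads=[],      writes=["A11"])
sched.run()
```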
- Benchmark of C++ Libraries for Sparse Matrix Computation
Georg Holzmann, http://grh.mur.at, August 2007.

This report presents benchmarks of C++ scientific-computing libraries for small and medium-size sparse matrices. The benchmarks are specialized to a neural-network-like algorithm the author had to implement, but the initialization time of sparse matrices and a matrix-vector multiplication were also measured, which may be of general interest. (Warning from the author: at the time of writing, the report did not consider the Eigen library, http://eigen.tuxfamily.org; see http://grh.mur.at/blog/matrix-library-benchmark-follow for a follow-up.)

Contents: 1 Introduction; 2 Benchmarks (2.1 Initialization, 2.2 Matrix-Vector Multiplication, 2.3 Neural Network like Operation); 3 Results; 4 Conclusion; A Libraries and Flags; B Implementations.

From the introduction: quite a lot of open-source libraries exist for scientific computing, which makes it hard to choose between them; ideally algorithms should be implemented in an object-oriented, reusable way, but without loss of performance. BLAS (Basic Linear Algebra Subprograms [1]) and LAPACK (Linear Algebra Package [3]) are the standard building blocks for efficient scientific software, with highly optimized BLAS implementations available for all relevant hardware architectures, traditionally expressed as Fortran routines for scalar, vector, and matrix operations (called Level 1, 2, and 3 BLAS).
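The matrix-vector multiplication benchmarked in Section 2.2 reduces, for the common compressed-sparse-row layout, to one short loop per row. The sketch below is a generic CSR kernel for reference, not code from the report (whose implementations are in C++).

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x with A in CSR form: data holds nonzeros row by row,
    indices their column numbers, indptr the start of each row in data."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        start, end = indptr[i], indptr[i + 1]
        y[i] = np.dot(data[start:end], x[indices[start:end]])
    return y

# A small 3x3 example:  [[4, 0, 9],
#                        [0, 7, 0],
#                        [5, 0, 3]]
data    = np.array([4.0, 9.0, 7.0, 5.0, 3.0])
indices = np.array([0, 2, 1, 0, 2])
indptr  = np.array([0, 2, 3, 5])
x       = np.array([1.0, 2.0, 3.0])
print(csr_matvec(data, indices, indptr, x))   # [31. 14. 14.]
```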
- Using Machine Learning to Improve Dense and Sparse Matrix Multiplication Kernels
Brandon Micheal Groth. Using machine learning to improve dense and sparse matrix multiplication kernels. PhD dissertation, Iowa State University, 2019. Graduate Theses and Dissertations, 17688. https://lib.dr.iastate.edu/etd/17688.
Major: Applied Mathematics. Program of study committee: Glenn R. Luecke (major professor), James Rossmanith, Zhijun Wu, Jin Tian, and Kris De Brabanter. Subject areas: applied mathematics and computer sciences.
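The dissertation's theme — using a learned model to choose among matrix-multiplication kernel variants — can be illustrated with a deliberately synthetic sketch. Everything below is invented for the demo: the training labels follow a made-up rule (small problems favor a naive kernel) and reflect neither the dissertation's data nor its models.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic illustration only: learn to pick the faster GEMM variant from the
# problem shape. The "best kernel" labels come from an invented rule.
rng = np.random.default_rng(0)
sizes = rng.integers(16, 2048, size=(500, 3))          # (M, N, K) samples
best = np.where(sizes.prod(axis=1) < 64**3, 0, 1)      # 0 = naive, 1 = blocked

model = DecisionTreeClassifier(max_depth=4).fit(sizes, best)
print(model.predict([[32, 32, 32], [1024, 1024, 1024]]))  # expected: [0 1]
```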
- Theoretical Peak FLOPS per Instruction Set on Modern Intel CPUs
Romain Dolbeau (Bull – Center for Excellence in Parallel Programming). Revision 1.41, 2016/10/04.

Abstract: It used to be that evaluating the theoretical peak performance of a CPU in FLOPS (floating-point operations per second) was merely a matter of multiplying the frequency by the number of floating-point instructions per cycle. Today, however, CPUs have features such as vectorization, fused multiply-add, hyper-threading, and "turbo" mode. This paper works out the theoretical peak for recent full-featured Intel CPUs, taking into account not only the simple absolute peak but also the relevant instruction sets and encodings and the frequency-scaling behavior of current Intel CPUs.

High-performance computing thrives on fast computation and high memory bandwidth, but the very first number used to evaluate a system, before any code or benchmark is run, is its theoretical peak: how many floating-point operations it can theoretically execute in a given time. Equation (1) is the naive frequency-times-instructions product; Equation (2) gives the more realistic view, refining the peak into a product of factors — floating-point operations per instruction, instructions issued per cycle (a micro-architecture property), frequency, and core and thread counts — explained in general in Section II and then for specific Intel CPUs: a simple case from the Nehalem/Westmere era in Section III and the full complexity of the Haswell family in Section IV. A companion paper, "Theoretical Peak FLOPS per instruction set on less conventional hardware" [1], covers other computing devices.
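The refined model reads naturally as a product of factors. The sketch below computes a double-precision peak under assumed Haswell-era parameters (AVX2: 4 doubles per 256-bit vector, an FMA counting as 2 flops, 2 FMA ports per core); the specific numbers are illustrative and happen to match the 18-core, 2.3 GHz part in the MARCONI entry further down.

```python
# Peak FLOPS in the spirit of the paper's refined model: per-core FLOPs/cycle
# decomposed into SIMD width x FMA x issue ports, scaled by cores and frequency.
# Parameter values are assumptions for a Haswell/Broadwell-class core.

def peak_gflops(cores, ghz, simd_lanes, fma, fp_ports):
    flops_per_cycle_per_core = simd_lanes * (2 if fma else 1) * fp_ports
    return cores * ghz * flops_per_cycle_per_core

# 18 cores, 2.3 GHz, AVX2 (4 doubles), FMA, 2 FMA ports:
print(peak_gflops(cores=18, ghz=2.3, simd_lanes=4, fma=True, fp_ports=2))
# -> 662.4 DP GFLOP/s per socket (x2 sockets ~ 1,325 GFLOP/s per node)
```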
- Anatomy of High-Performance Matrix Multiplication
Kazushige Goto and Robert A. van de Geijn (The University of Texas at Austin).

Abstract: We present the basic principles which underlie the high-performance implementation of the matrix-matrix multiplication that is part of the widely used GotoBLAS library. Design decisions are justified by successively refining a model of architectures with multilevel memories. A simple but effective algorithm for executing this operation results. Implementations on a broad selection of architectures are shown to achieve near-peak performance.

Categories and subject descriptors: G.4 [Mathematical Software]: Efficiency. General terms: algorithms; performance. Additional key words and phrases: linear algebra, matrix multiplication, basic linear algebra subprograms.

Introduction: implementing matrix multiplication so that near-optimal performance is attained requires a thorough understanding of how the operation must be layered at the macro level, in combination with careful engineering of high-performance kernels at the micro level. The paper primarily addresses the macro issues — how to exploit a high-performance "inner kernel" — more so than the micro issues of designing and engineering that kernel. A layered approach reported in [Gunnels et al. 2001] was shown to optimally amortize the required movement of data between two adjacent memory layers of an architecture with a complex multi-level memory. Like other work in the area [Agarwal et al. 1994; Whaley et al. 2001], it casts computation in terms of an inner kernel that computes C := ÃB + C for some mc × kc matrix Ã that is stored contiguously in a packed format and fits in cache memory.
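The layering the paper describes — pack an mc × kc block Ã so it stays cache-resident, then sweep panels of B through it with the inner kernel — can be sketched shape-for-shape in a few lines. Here NumPy's `@` stands in for the hand-tuned assembly inner kernel, and the block sizes are illustrative, not tuned.

```python
import numpy as np

def goto_gemm(A, B, C, mc=64, kc=64):
    """Shape-only sketch of the GotoBLAS layering: C += A @ B with a packed
    mc x kc block of A reused across the full width of B."""
    m, k = A.shape
    for p in range(0, k, kc):                  # loop over kc-wide panels of A and B
        Bp = B[p:p + kc, :]                    # B panel, streamed from memory
        for i in range(0, m, mc):
            # Pack the A block contiguously, as the paper prescribes.
            A_packed = np.ascontiguousarray(A[i:i + mc, p:p + kc])
            C[i:i + mc, :] += A_packed @ Bp    # inner kernel: C := A~ B + C
    return C

rng = np.random.default_rng(1)
A, B = rng.standard_normal((200, 150)), rng.standard_normal((150, 120))
C = np.zeros((200, 120))
print(np.allclose(goto_gemm(A, B, C), A @ B))   # True
```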
- DD2358 – Introduction to HPC: Linear Algebra Libraries & BLAS
Stefano Markidis, KTH Royal Institute of Technology (lecture slides, 2021-02-22).

Learning outcomes: understand the importance of numerical libraries in HPC; list a series of key numerical libraries, including BLAS; describe which kinds of operations BLAS supports; experiment with OpenBLAS and perform a matrix-matrix multiply using BLAS.

Numerical libraries are the foundation for application development: while HPC applications serve very different disciplines, their underlying computational algorithms are very similar to one another, so application developers need not waste time redeveloping supercomputing software that has already been developed elsewhere. Libraries targeting numerical linear algebra operations are the most common, given the ubiquity of linear algebra in scientific computing algorithms. These libraries have been highly tuned for performance, often for more than a decade, making it difficult for an application developer to match their performance with a homemade equivalent; because they are easy to use and deliver tuned performance across a wide range of HPC platforms, scientific computing libraries have become widespread software dependencies in computational science applications. Apart from acting as a repository for software reuse, libraries serve the important role of providing HPC community standards.
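The lecture's hands-on goal — a matrix-matrix multiply through BLAS — looks like this from Python, where SciPy exposes the dgemm of whatever BLAS it was built against (OpenBLAS on many installs). A minimal sketch, not the course's lab material.

```python
import numpy as np
from scipy.linalg import blas

# Double-precision GEMM through the underlying BLAS: C = alpha * A @ B.
n = 512
A = np.asfortranarray(np.random.rand(n, n))   # BLAS expects column-major storage
B = np.asfortranarray(np.random.rand(n, n))

C = blas.dgemm(alpha=1.0, a=A, b=B)
print(np.allclose(C, A @ B))                  # True
```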
- Level-3 BLAS on Myriad Multi-Core Media-Processor SoC
Tomasz Szydzik (Institute for Applied Microelectronics, University of Las Palmas de Gran Canaria), Marius Farcas and Valeriu Ohan (Codecart), and David Moloney (Movidius Ltd.).

BLAS-like Library Instantiation Subprograms (BLIS): the Basic Linear Algebra Subprograms (BLAS) are a set of low-level subroutines that perform common linear algebra operations, and BLIS is a software framework for instantiating high-performance BLAS-like dense linear algebra libraries. BLIS [1] was chosen over GotoBLAS, ATLAS, etc. due to its portable micro-kernel architecture and active user base. BLIS features ISO C99 code with a flexible BSD license, support for BLAS API calling conventions, competitive performance [2], multi-core friendliness, and Level 0 through Level 3 subroutines.

The Myriad media-processor SoCs: the Myriad architecture prioritizes power-efficient operation and area efficiency. To guarantee sustained high performance and minimize power, the proprietary SHAVE (Streaming Hybrid Architecture Vector Engine) processor was developed. Data and instructions reside in a Connection Matrix (CMX) memory block shared by all SHAVE processors, and data moves between peripherals, processors, and memory via a bank of software-controlled DMA engines. The poster lists hardware details such as 128/256 MB of stacked-die LPDDR2/3, 128-bit AXI and AHB buses, 128 kB or 256 kB two-way L2 caches for the SHAVEs, and 1–2 MB of CMX SRAM.
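BLIS's portability rests on its micro-kernel: everything above it reduces to an mr × nr tile of C updated by kc rank-1 updates over packed slivers of A and B, so porting to a new core such as SHAVE largely means rewriting this one loop in the target vector ISA. The sketch below shows the reference shape in Python; the mr, nr, and kc values are illustrative.

```python
import numpy as np

def micro_kernel(a_sliver, b_sliver, c_tile):
    """c_tile (mr x nr) += a_sliver (mr x kc) @ b_sliver (kc x nr),
    expressed as kc rank-1 updates, the way a SIMD kernel schedules it."""
    kc = a_sliver.shape[1]
    for p in range(kc):
        c_tile += np.outer(a_sliver[:, p], b_sliver[p, :])  # one rank-1 update
    return c_tile

mr, nr, kc = 4, 4, 64       # illustrative register-block sizes
a = np.random.rand(mr, kc)  # packed sliver of A
b = np.random.rand(kc, nr)  # packed sliver of B
c = np.zeros((mr, nr))
print(np.allclose(micro_kernel(a, b, c), a @ b))   # True
```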
- Introduction to Intel Scalable Architectures
Fabio Affinito (SCAI – Cineca).

Two kinds of solutions are currently available on the market: IBM + NVIDIA (Coral-like) systems, in which each node pairs a POWER CPU with 4, 6, or 8 NVIDIA Tesla GPUs connected via NVIDIA NVLink, and Intel-based systems built on Xeon and Xeon Phi. Intel continues its Xeon server-processor line alongside the Xeon Phi many-core chips; Xeon Phi is no longer a co-processor but a self-standing CPU with a very high core count, and such systems are integrated by several vendors (Cray, HP, Lenovo, E4, ...) in many different configurations.

MARCONI: FERMI, the IBM BlueGene/Q deployed at Cineca, ended its life cycle in 2016, and Cineca needed a new HPC machine that could increase computational power, respect the agreements with PRACE, and satisfy the needs of the Italian computing community. MARCONI uses the Lenovo NeXtScale architecture with nx360 M5 nodes supporting Intel Haswell and Broadwell and able to host both Mellanox EDR InfiniBand and Intel Omni-Path networks; twelve nodes form a chassis, and six chassis a rack. Each compute node has 2 × Intel Xeon E5-2697 v4 (Broadwell, 18 cores, 2.3 GHz), 8 × 16 GB DDR4-2400 DIMMs (128 GB total), one 129 GB SATA MLC S3500 Enterprise Value SSD, and one 100 Gb/s Omni-Path link; the node peak is 2 × 18 cores × 2.3 GHz × 16 flops/cycle ≈ 1,325 GFlops. There are 24 racks in total: 21 compute racks, one rack of service nodes, and two racks of core switches. (The slides continue with network and storage details.)