Optimization of GPU-Based Sparse Matrix Multiplication for Large Sparse Networks

Optimization of GPU-based Sparse Matrix Multiplication for Large Sparse Networks
2020 IEEE 36th International Conference on Data Engineering (ICDE). DOI: 10.1109/ICDE48307.2020.00085

Jeongmyung Lee, Seokwon Kang, Yongseung Yu, Yong-Yeon Jo, Sang-Wook Kim, Yongjun Park
Department of Computer Science, Hanyang University, Seoul, Korea
{jeongmyung, kswon0202, dydtmd1991, jyy0430, wook, yongjunpark}@hanyang.ac.kr

Abstract—Sparse matrix multiplication (spGEMM) is widely used to analyze sparse network data and to extract important information based on the matrix representation. As it contains a high degree of data parallelism, many efficient implementations using data-parallel programming platforms such as CUDA and OpenCL have been introduced on graphics processing units (GPUs). Several well-known spGEMM techniques, such as cuSPARSE and CUSP, often do not utilize the GPU resources fully, owing to the load imbalance between threads in the expansion process and the high memory contention in the merge process. Furthermore, even though several outer-product-based spGEMM techniques have been proposed to solve the load-balancing problem in expansion, they still do not utilize the GPU resources fully, because severe computation-load variations exist among the thread blocks. To solve these challenges, this paper proposes a new optimization pass called Block Reorganizer, which balances the total computations of each computing unit on the target GPU, based on the outer-product-based expansion process, and reduces the memory pressure during the merge process. For expansion, it first identifies the actual computation amount of each block and then performs two thread block transformation processes based on their characteristics: 1) B-Splitting, which transforms a heavy-computation block into multiple small blocks, and 2) B-Gathering, which aggregates multiple small-computation blocks into a larger block. While merging, it improves the overall performance by performing B-Limiting to limit the number of blocks on each computing unit. Experimental results show that it improves the total kernel execution performance by 1.43x on average, compared to row-product-based spGEMM, on NVIDIA Titan Xp GPUs with real-world datasets.

Index Terms—Sparse matrix multiplication; sparse network; GPU; linear algebra

I. INTRODUCTION

Matrix multiplication is one of the core kernels in various data-mining applications, such as social network services (SNSs) and graph analytics, and is used to extract key information. With the rapid growth of sparse networks, the extraction of valuable information required for various operations, such as ranking [1], similarity computation [2], [3], and recommendation [4], [5], has become a critical challenge. Weighted graphs are typically used to model such network data and are represented in matrix form, where each element contains an edge weight between two nodes. Matrix multiplication based on the adjacency matrix format is widely used to extract useful information from the original data.

Because matrix multiplication is a data-parallel operation, graphics processing units (GPUs) are considered the most appropriate accelerators for its speed-up, as they provide high computational throughput using single-instruction, multiple-thread (SIMT) programming models such as CUDA [6] and OpenCL [7]. A GPU generally consists of a set of Streaming Multiprocessors (SMs). OpenCL/CUDA programs are executed on GPUs by allocating Thread Blocks (TBs), or Cooperative Thread Arrays (CTAs)¹, which are groups of threads, to each SM in parallel.

¹ In this work, we use the terms thread block and CTA interchangeably.

The main challenge is developing an efficient matrix multiplication technique that considers the data-specific characteristics of sparsity and power-law degree distribution [8]. Typical sparse networks contain a much smaller number of edges with non-zero values than the number of all possible edges between nodes, and therefore most of the elements in a sparse matrix have a value of zero. To reduce the memory waste caused by sparsity, matrices are typically represented in a sparse format [9]. Sparse networks also commonly have power-law distributions [8], where a very small number of hub nodes have extremely large numbers of connections and most other nodes have very small numbers of connections. Owing to this power law, the distribution of non-zero elements is often highly skewed, and the resulting matrices for sparse networks generally contain a few rows with large numbers of non-zero elements while a large number of rows have only a few non-zero elements.

There have been several previous studies on implementing efficient sparse matrix multiplication (spGEMM) for two sparse matrices on GPUs, including cuSPARSE [10] and CUSP [11]. These techniques generally consist of row-product-based intermediate data expansion and parallel data merge processes. Despite their promising performance, GPU resources are still not fully utilized. First, the row-product-based expansion process often leads to poor load balancing among threads owing to the irregular distributions of target sparse networks. Second, excessive memory accesses during the parallel merge process frequently lead to worse performance than expected because of significant memory contention. Although several improved row-product-based techniques, such as bhSPARSE [12], have recently been introduced, experimental results have shown that they still suffer from the poor thread-level load balancing of the row-product-based scheme and from high overhead during the merge process when multiplying highly irregular matrices.

To overcome these limitations, several new spGEMM approaches have been introduced that adopt the outer-product (column-row product) scheme [13], [14]. Outer-product-based expansion is expected to deliver higher performance than row-product-based expansion because the computational loads of all threads in a TB are identical. However, the outer product is not yet an ideal solution. First, the outer-product algorithm creates another load-imbalance problem among SMs because of the high block-level workload variance. In the outer-product scheme, each TB is formed from a column and a row of the input matrices. Therefore, the resulting TBs consist of several computation-heavy TBs (overloaded blocks), drawn from the few columns and rows with huge numbers of non-zero elements, and a massive number of computation-light TBs (underloaded blocks) with large numbers of zero elements. As a result, the SMs that execute overloaded blocks can become a performance bottleneck while all other SMs are idle. Second, the outer-product scheme is mainly effective for expansion, and the merge performance remains the same or may even become worse, because it produces intermediate results in matrix form during expansion, whereas the row product produces intermediate results in single-row form [15]. Therefore, full matrix-wise accumulation may be slower than row-wise accumulation owing to the additional column address indexing.

To address these limitations, we propose a novel outer-product-based spGEMM optimization pass referred to as the Block Reorganizer. It first identifies the computation amount of each block and categorizes the blocks as overloaded, normal, or underloaded, based on their computational loads. It then performs two different optimizations in the expansion process: Block Splitting for overloaded blocks and Block Gathering for underloaded blocks. Block Splitting is the process of dividing an overloaded block into multiple small blocks for better load balancing. For underloaded blocks, the Block Reorganizer performs the Block Gathering process by creating a combined block from multiple underloaded blocks to increase intra-SM computation unit utilization and improve latency hiding efficiency via …

2) Block Gathering: it merges several underloaded blocks into a combined block for better SM resource utilization and latency-hiding effectiveness.
3) Block Limiting: it prevents the blocks from executing with other blocks on an SM, to minimize resource contention.
• An extensive evaluation of the effectiveness of the Block Reorganizer framework using synthetic and real-world datasets on multiple target GPUs.

Fig. 1: (a) A GPU architecture overview and (b) an effect of shared memory requirement per thread block on thread block allocation.

II. BACKGROUND

A. GPU Architectures and SIMT Programming Model

GPUs are accelerators that provide high throughput by maximizing data parallelism using an SIMT programming model such as CUDA [6] or OpenCL [7], which enables multiple independent threads to execute the same instructions concurrently. In such programming languages, a thread is the basic unit of execution, and several threads are grouped into TBs or CTAs. A TB is the main scheduling unit for execution on GPUs, and the threads within a TB are affected by barrier operations for synchronization. For NVIDIA GPUs in particular, a number of threads (typically 32) are also grouped into another scheduling unit called a warp. In NVIDIA GPUs, the threads in a warp are executed in lock-step, similar to SIMD …
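To make the block-level imbalance discussed above concrete, the kernel below sketches the outer-product expansion step in CUDA. It is an illustrative sketch only, not the paper's Block Reorganizer code: it assumes A is stored in CSC and B in CSR, and that out_offset holds an exclusive prefix sum of nnz_A(k) * nnz_B(k) so each block knows where to write its partial products; a separate merge pass must still accumulate duplicate (row, col) entries.

```cuda
// Illustrative sketch only: thread block k expands the outer product of
// column k of A (CSC) with row k of B (CSR) into (row, col, val) triples.
// Assumed inputs (not from the paper): A_colptr/A_rowidx/A_vals,
// B_rowptr/B_colidx/B_vals, and out_offset[k] = exclusive prefix sum of
// nnz_A(k) * nnz_B(k).
struct Triple { int row, col; float val; };

__global__ void outer_product_expand(const int*   A_colptr, const int* A_rowidx,
                                     const float* A_vals,
                                     const int*   B_rowptr, const int* B_colidx,
                                     const float* B_vals,
                                     const long long* out_offset,
                                     Triple* out)
{
    int k = blockIdx.x;                              // one block per column/row pair k
    int a_beg = A_colptr[k], a_len = A_colptr[k + 1] - a_beg;
    int b_beg = B_rowptr[k], b_len = B_rowptr[k + 1] - b_beg;
    long long base = out_offset[k];

    // Every thread strides over the same a_len * b_len products, so the work
    // per thread inside a block is balanced; the imbalance is across blocks.
    for (long long t = threadIdx.x; t < (long long)a_len * b_len; t += blockDim.x) {
        int ai = a_beg + (int)(t / b_len);
        int bi = b_beg + (int)(t % b_len);
        out[base + t] = Triple{ A_rowidx[ai], B_colidx[bi], A_vals[ai] * B_vals[bi] };
    }
}
```

Because every thread in block k covers the same a_len * b_len products, intra-block work is uniform; a hub column/row pair simply makes the whole block overloaded, which is exactly the case B-Splitting targets, while near-empty pairs produce the underloaded blocks that B-Gathering merges.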
Recommended publications
  • Speeding Up SpMV for Power-Law Graph Analytics by Enhancing Locality & Vectorization
    Speeding Up SpMV for Power-Law Graph Analytics by Enhancing Locality & Vectorization. Serif Yesil (Dept. of Computer Science, University of Illinois at Urbana-Champaign), Azin Heidarshenas (Dept. of Computer Science, University of Illinois at Urbana-Champaign), Adam Morrison (Blavatnik School of Computer Science, Tel Aviv University), Josep Torrellas (Dept. of Computer Science, University of Illinois at Urbana-Champaign).
    Abstract—Graph analytics applications often target large-scale web and social networks, which are typically power-law graphs. Graph algorithms can often be recast as generalized Sparse Matrix-Vector multiplication (SpMV) operations, making SpMV optimization important for graph analytics. However, executing SpMV on large-scale power-law graphs results in highly irregular memory access patterns with poor cache utilization. Worse, we find that existing SpMV locality and vectorization optimizations are largely ineffective on modern out-of-order (OOO) processors—they are not faster (or only marginally so) than the standard Compressed Sparse Row (CSR) SpMV implementation. To improve performance for power-law graphs on modern OOO processors, we propose Locality-Aware Vectorization (LAV).
    … data-dependent behavior of some accesses makes them hard to predict and optimize for. As a result, SpMV on large power-law graphs becomes memory bound. To address this challenge, previous work has focused on increasing SpMV's Memory-Level Parallelism (MLP) using vectorization [9], [10] and/or on improving memory access locality by rearranging the order of computation. The main techniques for improving locality are binning [11], [12], which translates indirect memory accesses into efficient sequential accesses, and cache blocking [13], which processes the ma…
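The "standard Compressed Sparse Row (CSR) SpMV implementation" that the abstract above uses as its baseline is usually written as a single loop over rows; a minimal sketch (the baseline only, not the proposed LAV scheme) looks like this:

```cpp
// Minimal sketch of the standard CSR SpMV baseline (y = A * x).
// row_ptr has n_rows + 1 entries; col_idx and vals have nnz entries.
void spmv_csr(int n_rows, const int* row_ptr, const int* col_idx,
              const double* vals, const double* x, double* y)
{
    for (int i = 0; i < n_rows; ++i) {
        double sum = 0.0;
        for (int j = row_ptr[i]; j < row_ptr[i + 1]; ++j)
            sum += vals[j] * x[col_idx[j]];   // x[col_idx[j]] is the irregular,
                                              // cache-unfriendly access on
                                              // power-law graphs
        y[i] = sum;
    }
}
```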
  • Sparse-Matrix Representation of Spiking Neural P Systems for GPUs
    Sparse-matrix Representation of Spiking Neural P Systems for GPUs. Miguel Á. Martínez-del-Amor¹, David Orellana-Martín¹, Francis G.C. Cabarle², Mario J. Pérez-Jiménez¹, Henry N. Adorna². ¹Research Group on Natural Computing, Department of Computer Science and Artificial Intelligence, Universidad de Sevilla, Avda. Reina Mercedes s/n, 41012 Sevilla, Spain. ²Algorithms and Complexity Laboratory, Department of Computer Science, University of the Philippines Diliman, Diliman 1101 Quezon City, Philippines.
    Summary. Current parallel simulation algorithms for Spiking Neural P (SNP) systems are based on a matrix representation. This helps to harness the inherent parallelism in algebraic operations, such as vector-matrix multiplication. Although it has been convenient for the first parallel simulators running on Graphics Processing Units (GPUs), such as CuSNP, there are some bottlenecks to cope with. For example, the matrix representation of SNP systems with a low-connectivity-degree graph leads to sparse matrices, i.e. matrices containing more zeros than actual values. Having to deal with sparse matrices downgrades the performance of the simulators because of wasted memory and time. However, sparse matrices are a well-known problem in parallel computing with GPUs, and several solutions and algorithms are available in the literature. In this paper, we briefly analyse some of these ideas and apply them to represent some variants of SNP systems. We also conclude which variant better suits a sparse-matrix representation. Keywords: Spiking Neural P systems, Simulation Algorithm, Sparse Matrix Representation, GPU computing, CUDA.
    1 Introduction. Spiking Neural P (SNP) systems [9] are a type of P systems [16] composed of a directed graph inspired by how neurons are interconnected by axons and synapses
  • A Framework for Efficient Execution of Matrix Computations
    A Framework for Efficient Execution of Matrix Computations. Doctoral Thesis, May 2006. José Ramón Herrero. Advisor: Prof. Juan J. Navarro. UNIVERSITAT POLITÈCNICA DE CATALUNYA, Departament d'Arquitectura de Computadors. To Joan and Albert, my children. To Eugènia, my wife. To Ramón and Gloria, my parents. "Stillicidi casus lapidem cavat", Lucretius (c. 99 B.C.-c. 55 B.C.), De Rerum Natura ("Continual dropping wears away a stone." Titus Lucretius Carus, On the Nature of Things).
    Abstract. Matrix computations lie at the heart of most scientific computational tasks. The solution of linear systems of equations is a very frequent operation in many fields in science, engineering, surveying, physics and others. Other matrix operations occur frequently in many other fields such as pattern recognition and classification, or multimedia applications. Therefore, it is important to perform matrix operations efficiently. The work in this thesis focuses on the efficient execution on commodity processors of matrix operations which arise frequently in different fields. We study some important operations which appear in the solution of real world problems: some sparse and dense linear algebra codes and a classification algorithm. In particular, we focus our attention on the efficient execution of the following operations: sparse Cholesky factorization; dense matrix multiplication; dense Cholesky factorization; and Nearest Neighbor Classification. A lot of research has been conducted on the efficient parallelization of numerical algorithms. However, the efficiency of a parallel algorithm depends ultimately on the performance obtained from the computations performed on each node. The work presented in this thesis focuses on the sequential execution on a single processor.
  • Automatic Selection of Sparse Matrix Representation on GPUs
    Automatic Selection of Sparse Matrix Representation on GPUs. Naser Sedaghati, Te Mu, Louis-Noël Pouchet, Srinivasan Parthasarathy, P. Sadayappan. The Ohio State University, Columbus, OH, USA. {sedaghat,mut,pouchet,srini,saday}@cse.ohio-state.edu
    ABSTRACT. Sparse matrix-vector multiplication (SpMV) is a core kernel in numerous applications, ranging from physics simulation and large-scale solvers to data analytics. Many GPU implementations of SpMV have been proposed, targeting several sparse representations and aiming at maximizing overall performance. No single sparse matrix representation is uniformly superior, and the best performing representation varies for sparse matrices with different sparsity patterns. In this paper, we study the inter-relation between GPU architecture, sparse matrix representation and the sparse dataset. We perform extensive characterization of pertinent sparsity features of around 700 sparse matrices, and their SpMV performance with a number of sparse representations implemented in the NVIDIA CUSP and cuSPARSE …
    … applications in terms of a dataset-algorithm-platform triangle. This is a complex relationship that is not yet well characterized even for much-studied core kernels like SpMV. In this paper, we make a first attempt at understanding the dataset-algorithm dependences for SpMV on GPUs. The optimization of SpMV for GPUs has been a much-researched topic over the last few years, including auto-tuning approaches to deliver very high performance [22, 11] by tuning the block size to the sparsity features of the computation. For CPUs, the compressed sparse row (CSR) – or the dual compressed sparse column – is the dominant representation, effective across domains from which the sparse matrices arise. In contrast, for GPUs no single representation has been found to be effective across a range of sparse matrices.
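To make the representation trade-off discussed above concrete, here is one small matrix written out in two of the formats such studies compare, CSR and ELLPACK (ELL). The layout below is an illustrative toy example, not data from the paper:

```cpp
// Toy illustration: the same 4x4 matrix in CSR and ELL.  CSR stores exactly
// nnz = 7 entries; ELL pads every row to the longest row (3 entries), trading
// 5 wasted slots for a regular layout that maps well onto GPU threads.
//
//     | 5 0 0 1 |
// A = | 0 2 0 0 |
//     | 0 3 4 6 |
//     | 0 0 0 7 |

// CSR: row_ptr[i]..row_ptr[i+1] indexes the entries of row i.
const int    csr_row_ptr[] = {0, 2, 3, 6, 7};
const int    csr_col_idx[] = {0, 3, 1, 1, 2, 3, 3};
const double csr_vals[]    = {5, 1, 2, 3, 4, 6, 7};

// ELL with width K = 3: each row padded to K slots (-1 / 0.0 marks padding).
// GPU implementations usually store these arrays column-major so threads
// working on consecutive rows make coalesced accesses.
const int    ell_col_idx[4][3] = {{0, 3, -1}, {1, -1, -1}, {1, 2, 3}, {3, -1, -1}};
const double ell_vals[4][3]    = {{5, 1, 0},  {2, 0, 0},   {3, 4, 6}, {7, 0, 0}};
```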
  • Tridiagonalization of an Arbitrary Square Matrix (William Lee Waltmann, Iowa State University)
    Iowa State University Capstones, Theses and Dissertations: Retrospective Theses and Dissertations, 1964. Tridiagonalization of an arbitrary square matrix. William Lee Waltmann, Iowa State University. Follow this and additional works at: https://lib.dr.iastate.edu/rtd. Part of the Mathematics Commons. Recommended Citation: Waltmann, William Lee, "Tridiagonalization of an arbitrary square matrix" (1964). Retrospective Theses and Dissertations. 3829. https://lib.dr.iastate.edu/rtd/3829. This Dissertation is brought to you for free and open access by the Iowa State University Capstones, Theses and Dissertations at Iowa State University Digital Repository. It has been accepted for inclusion in Retrospective Theses and Dissertations by an authorized administrator of Iowa State University Digital Repository. For more information, please contact [email protected].
    This dissertation has been microfilmed exactly as received (65-3811). WALTMANN, William Lee, 1934-, TRIDIAGONALIZATION OF AN ARBITRARY SQUARE MATRIX. Iowa State University of Science and Technology, Ph.D., 1964, Mathematics. University Microfilms, Inc., Ann Arbor, Michigan.
    TRIDIAGONALIZATION OF AN ARBITRARY SQUARE MATRIX, by William Lee Waltmann. A Dissertation Submitted to the Graduate Faculty in Partial Fulfillment of The Requirements for the Degree of DOCTOR OF PHILOSOPHY. Major Subject: Mathematics. Approved: Signature was redacted for privacy. In Charge. Signature was redacted for privacy. Head of Major Department. Signature was redacted for privacy. Dean of Graduate College. Iowa State University of Science and Technology, Ames, Iowa, 1964.
    TABLE OF CONTENTS: I. INTRODUCTION 1; II. KNOWN METHODS FOR THE MATRIX EIGENPROBLEM 4; III. KNOWN METHODS FOR TRIDIAGONALIZATION 11; IV. T-ALGORITHM FOR TRIDIAGONALIZATION 26; V. OBSERVATIONS RELATED TO TRIDIAGONALIZATION 53; VI.
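For orientation (this is the standard definition, not text from the thesis): tridiagonalizing an arbitrary square matrix A means finding a similarity transform that confines the non-zeros to the three central diagonals, so the eigenproblem for A reduces to one for a tridiagonal matrix.

```latex
% Standard definition, for orientation only: find a nonsingular S with
%   T = S^{-1} A S  tridiagonal.
% T is similar to A, so it has the same eigenvalues as A.
T = S^{-1} A S, \qquad t_{ij} = 0 \quad \text{whenever } |i - j| > 1 .
```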
  • Representations of Finite Groups
    REPRESENTATIONS OF FINITE GROUPS. SANG HOON KIM. Abstract. This paper provides the definition of a representation of a finite group and ways to study it with several concepts and remarkable theorems, such as irreducible representations, characters, and Maschke's Theorem. Contents: 1. Introduction 1; 2. Group representations 1; 3. Important theorems regarding G-invariant Hermitian forms and unitary representations 3; 4. Irreducible representations and Maschke's Theorem 7; 5. Characters 8; 6. Permutation representations 12; 7. Regular representations 13; 8. Schur's Lemma and proof of Orthogonality relations 14; Acknowledgments 18; References 18.
    1. Introduction. Representation theory studies linear operators that are given by group elements acting on a vector space. Thus, representation theory is very useful in that it makes it possible to delve into geometric symmetries effectively using coordinates, which is not possible if only abstract group elements are considered. This paper assumes that readers have prior knowledge of linear algebra, including familiarity with Hermitian forms and the Spectral Theorem. Moreover, we deal only with finite groups.
    2. Group representations. We will start with a simple example of a representation of the group T of rotations of a tetrahedron. Suppose that the group T acts on a three-dimensional vector space V. Choose a basis (v1, v2, v3) so that each element of the basis passes through the midpoints A, B, C of three edges as follows. [Figure 1. Tetrahedron on a 3-dimensional space.] Let x ∈ T be the counter-clockwise rotation around the vertex D by 2π/3 and y1, y2, y3 ∈ T be the counter-clockwise rotations by π around the points A, B, C, respectively.
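With the basis (v1, v2, v3) chosen along the three edge-midpoint axes as described above, these rotations have a simple matrix form. The matrices below follow the usual orientation conventions and are an illustrative assumption rather than a quotation from the paper (a different choice of rotation direction reverses the cycle or moves the signs):

```latex
% Assumed basis: v_1, v_2, v_3 along the axes through the edge midpoints A, B, C.
% The order-3 rotation x about vertex D permutes the axes cyclically; the
% order-2 rotation y_1 about the axis through A fixes v_1 and negates the others.
\rho(x)   = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix},
\qquad
\rho(y_1) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}.
```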
  • A Matrix-Based Approach to Global Locality Optimization
    Appears in Journal of Parallel and Distributed Computing, September 1999. A Matrix-Based Approach to Global Locality Optimization. Mahmut Kandemir, Alok Choudhary, J. Ramanujam, Prith Banerjee.
    Abstract. Global locality optimization is a technique for improving the cache performance of a sequence of loop nests through a combination of loop and data layout transformations. Pure loop transformations are restricted by data dependences and may not be very successful in optimizing imperfectly nested loops and explicitly parallelized programs. Although pure data transformations are not constrained by data dependences, the impact of a data transformation on an array might be program-wide; that is, it can affect all the references to that array in all the loop nests. Therefore, in this paper we argue for an integrated approach that employs both loop and data transformations. The method enjoys the advantages of most of the previous techniques for enhancing locality and is efficient. In our approach, the loop nests in a program are processed one by one, and the data layout constraints obtained from one nest are propagated for optimizing the remaining loop nests. We show a simple and effective matrix-based framework to implement this process. The search space that we consider for possible loop transformations can be represented by general non-singular linear transformation matrices, and the data layouts that we consider are those that can be expressed using hyperplanes. Experiments with several floating-point programs on an SGI Origin 2000 distributed-shared-memory machine demonstrate the efficacy of our approach. Keywords: data reuse, locality, memory hierarchy, parallelism, loop transformations, array restructuring, data transformations.
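The "general non-singular linear transformation matrices" mentioned in the abstract can be illustrated with the smallest example, loop interchange; the notation below is the standard one and is not quoted from the paper:

```latex
% Loop interchange written as a non-singular (in fact unimodular)
% iteration-space transformation: the new iteration vector is T times the old.
T = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
\qquad
\begin{pmatrix} i' \\ j' \end{pmatrix}
  = T \begin{pmatrix} i \\ j \end{pmatrix}
  = \begin{pmatrix} j \\ i \end{pmatrix}.
```

A data-layout transformation can likewise be expressed as a mapping of array index vectors, which is what lets loop and data transformations be handled in one matrix framework of the kind the paper proposes.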
  • Data-Parallel Language for Correct and Efficient Sparse Matrix Codes
    Data-Parallel Language for Correct and Efficient Sparse Matrix Codes. Gilad Arnold. Electrical Engineering and Computer Sciences, University of California at Berkeley. Technical Report No. UCB/EECS-2011-142. http://www.eecs.berkeley.edu/Pubs/TechRpts/2011/EECS-2011-142.html. December 16, 2011. Copyright © 2011, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.
    Data-Parallel Language for Correct and Efficient Sparse Matrix Codes, by Gilad Arnold. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Rastislav Bodík, Chair; Professor Koushik Sen; Professor Leo A. Harrington. Fall 2011. Copyright 2011 by Gilad Arnold.
    Abstract. Data-Parallel Language for Correct and Efficient Sparse Matrix Codes, by Gilad Arnold, Doctor of Philosophy in Computer Science, University of California, Berkeley; Professor Rastislav Bodík, Chair. Sparse matrix formats encode very large numerical matrices with relatively few nonzeros. They are typically implemented using imperative languages, with emphasis on low-level optimization. Such implementations are far removed from the conceptual organization of the sparse format, which obscures the representation invariants.
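The "representation invariants" that the abstract says low-level implementations tend to obscure are easy to state for the most common format, CSR. The helper below is an illustrative sketch in C++ (not code from the dissertation, which defines its own data-parallel language):

```cpp
#include <cstddef>

// Illustrative sketch: the usual well-formedness invariants of a CSR matrix.
// These are the properties that get buried when a format is hand-optimized
// in a low-level imperative style.
bool csr_well_formed(int n_rows, int n_cols, std::size_t nnz,
                     const int* row_ptr, const int* col_idx)
{
    if (row_ptr[0] != 0 || (std::size_t)row_ptr[n_rows] != nnz)
        return false;                          // row_ptr brackets all entries
    for (int i = 0; i < n_rows; ++i)
        if (row_ptr[i] > row_ptr[i + 1])
            return false;                      // row_ptr is non-decreasing
    for (std::size_t k = 0; k < nnz; ++k)
        if (col_idx[k] < 0 || col_idx[k] >= n_cols)
            return false;                      // column indices are in range
    return true;                               // (sorted columns within a row
                                               // are required by some kernels,
                                               // not by the format itself)
}
```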
  • Scientific Software Libraries for Scalable Architectures
    Scientific Software Libraries for Scalable Architectures. The Harvard community has made this article openly available. Please share how this access benefits you. Your story matters. Citation: Johnsson, S. Lennart and Kapil K. Mathur. 1994. Scientific Software Libraries for Scalable Architectures. Harvard Computer Science Group Technical Report TR-19-94. Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:25811003. Terms of Use: This article was downloaded from Harvard University's DASH repository, and is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA.
    Scientific Software Libraries for Scalable Architectures. S. Lennart Johnsson, Kapil K. Mathur. TR-19-94, August 1994. Parallel Computing Research Group, Center for Research in Computing Technology, Harvard University, Cambridge, Massachusetts. To appear in Parallel Scientific Computing, Springer-Verlag.
    Scientific Software Libraries for Scalable Architectures. S. Lennart Johnsson (Thinking Machines Corp. and Harvard University), Kapil K. Mathur (Thinking Machines Corp.). Abstract. Massively parallel processors introduce new demands on software systems with respect to performance, scalability, robustness and portability. The increased complexity of the memory systems and the increased range of problem sizes for which a given piece of software is used poses serious challenges to software developers. The Connection Machine Scientific Software Library, CMSSL, uses several novel techniques to meet these challenges
  • Complex Conjugation of Group Representations by Inner Automorphisms
    Complex Conjugation of Group Representations by Inner Automorphisms. Ture Damhus, Chemistry Department I, H. C. Ørsted Institute, Universitetsparken 5, DK-2100 Copenhagen Ø, Denmark. Submitted by Richard A. Brualdi.
    ABSTRACT. For finite-dimensional unitary irreducible group representations, theorems are established giving conditions under which the transition from a representation to its complex conjugate may be accomplished by an inner automorphism of the group. The central arguments are of a purely matrix-theoretical nature. Since the investigation naturally falls into two cases according to the Frobenius-Schur classification of irreducible representations, this classification is briefly discussed.
    INTRODUCTION. This paper is concerned with finite-dimensional linear group representations over the field of complex numbers. Given such a representation of a group G, we ask whether it is possible to find an inner automorphism of G carrying the representation into its complex conjugate. It turns out that necessary and sufficient conditions are easily formulated and may be established using only elementary linear algebra. This is done in Sec. 2. Since the discussion almost immediately splits up into two cases according to the Frobenius-Schur classification, we give in Sec. 1 a review of this classification. The present investigation was prompted by certain representation-theoretic problems in the so-called Wigner-Racah algebra of groups, which is of great importance in theoretical chemistry and physics. For these applications we refer the reader to [1] and [16] and references therein.
  • Introduction to Programming with Arrays Using ELI
    Introduction to Programming with Arrays using ELI, by Wai-Mee Ching. July, Nov., Dec. 2014. Copyright © 2014.
    Contents:
    1. Array, List and Primitive Operations in ELI ... 3
    1.1 Computers and Programming Languages ... 3
    1.2 ELI System and its Data Types ... 6
    1.3 Shape of Data, Reshape and Data Conversion ... 10
    1.4 Mathematical Computations ... 13
    1.5 Comparisons, Selection, Membership and Index of ... 17
    1.6 Array Indexing, Indexed Assignment and taking Sections ... 21
    1.7 Array Transformations ... 26
    1.8 Operators and Derived Functions ... 31
    1.9 Lists and Operations on Lists ...
  • Sparse Matrix-Vector Multiplication on GPGPUs
    Sparse Matrix-Vector Multiplication on GPGPUs. Salvatore Filippone (Cranfield University), Valeria Cardellini, Davide Barbieri, Alessandro Fanfarillo (Università degli Studi di Roma "Tor Vergata").
    Abstract. The multiplication of a sparse matrix by a dense vector (SpMV) is a centerpiece of scientific computing applications: it is the essential kernel for the solution of sparse linear systems and sparse eigenvalue problems by iterative methods. The efficient implementation of the sparse matrix-vector multiplication is therefore crucial and has been the subject of an immense amount of research, with interest renewed with every major new trend in high performance computing architectures. The introduction of General Purpose Graphics Processing Units (GPGPUs) is no exception, and many articles have been devoted to this problem. With this paper we provide a review of the techniques for implementing the SpMV kernel on GPGPUs that have appeared in the literature of the last few years. We discuss the issues and tradeoffs that have been encountered by the various researchers, and a list of solutions, organized in categories according to common features. We also provide a performance comparison across different GPGPU models and on a set of test matrices coming from various application domains.
    Categories and subject descriptors: D.2.11 [Software Engineering]: Software Architectures, Data abstraction; G.1.3 [Numerical Analysis]: Numerical Linear Algebra, Sparse, structured, and very large systems (direct and iterative methods); G.4 [Mathematical Software]: Algorithm
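Among the CSR kernels this survey covers, the main design choice is how threads are mapped to rows. The warp-per-row ("CSR-vector") variant below is a minimal sketch under standard assumptions (one warp per row, blockDim.x a multiple of 32); it is not code from the paper:

```cuda
// Minimal sketch of the "CSR-vector" SpMV kernel: one 32-thread warp per row,
// followed by a warp-wide shuffle reduction of the per-lane partial sums.
__global__ void spmv_csr_vector(int n_rows, const int* row_ptr,
                                const int* col_idx, const float* vals,
                                const float* x, float* y)
{
    int lane = threadIdx.x & 31;
    int row  = (blockIdx.x * blockDim.x + threadIdx.x) >> 5;  // global warp id
    if (row >= n_rows) return;   // whole warp exits together (blockDim % 32 == 0)

    float sum = 0.0f;
    for (int j = row_ptr[row] + lane; j < row_ptr[row + 1]; j += 32)
        sum += vals[j] * x[col_idx[j]];        // accesses are coalesced per warp

    for (int off = 16; off > 0; off >>= 1)     // warp-wide reduction
        sum += __shfl_down_sync(0xffffffff, sum, off);

    if (lane == 0) y[row] = sum;
}
```

Launched with, for example, 128-thread blocks and (n_rows * 32 + 127) / 128 blocks, each warp covers one row; the scalar one-thread-per-row variant wins on short rows, which is one of the trade-offs such surveys catalogue.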