clSPARSE: A Vendor-Optimized Open-Source Sparse BLAS Library

Total Pages: 16

File Type: PDF, Size: 1020 KB

Joseph L. Greathouse†, Kent Knox†, Jakub Poła‡§, Kiran Varaganti†, Mayank Daga†
†Advanced Micro Devices, Inc.  ‡University of Wrocław  §Vratis Ltd.
{Joseph.Greathouse, Kent.Knox, Kiran.Varaganti, Mayank.Daga}@amd.com, [email protected]

ABSTRACT

Sparse linear algebra is a cornerstone of modern computational science. These algorithms ignore the zero-valued entries found in many domains in order to work on much larger problems at much faster rates than dense algorithms. Nonetheless, optimizing these algorithms is not straightforward. Highly optimized algorithms for multiplying a sparse matrix by a dense vector, for instance, are the subject of a vast corpus of research and can be hundreds of times longer than naïve implementations. Optimized sparse linear algebra libraries are thus needed so that users can build applications without enormous effort.

Hardware vendors release proprietary libraries that are highly optimized for their devices, but they limit interoperability and promote vendor lock-in. Open libraries often work across multiple devices and can quickly take advantage of new innovations, but they may not reach peak performance. The goal of this work is to provide a sparse linear algebra library that offers both of these advantages.

We thus describe clSPARSE, a permissively licensed open-source sparse linear algebra library that offers state-of-the-art optimized algorithms implemented in OpenCL™. We test clSPARSE on GPUs from AMD and Nvidia and show performance benefits over both the proprietary cuSPARSE library and the open-source ViennaCL library.

CCS Concepts

• Mathematics of computing → Computations on matrices; Mathematical software performance; • Computing methodologies → Linear algebra algorithms; Graphics processors; • Software and its engineering → Software libraries and repositories;

Keywords

clSPARSE; OpenCL; Sparse Linear Algebra; GPGPU

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
IWOCL '16, April 19-21, 2016, Vienna, Austria
© 2016 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-4338-1/16/04.
DOI: http://dx.doi.org/10.1145/2909437.2909442

1. INTRODUCTION

Modern high-performance computing systems, including those used for physics [1] and engineering [5], rely on sparse data containing many zero values. Sparse data is best serviced by algorithms that store and rely only on the non-zero values. Such algorithms enable faster solutions by skipping many needless calculations and allow larger problems to fit into memory. Towards this end, the Basic Linear Algebra Subprograms (BLAS) specification dedicates a chapter to sparse linear algebra [3].

The description of these sparse algorithms is straightforward. While a simple serial implementation of sparse matrix-dense vector multiplication (SpMV) can be described in six lines of pseudocode [6], highly optimized implementations require a great deal more effort. Langr and Tvrdík cite over fifty SpMV implementations from the recent literature, each ranging from hundreds to thousands of lines of code [8]. Su and Keutzer found that the best algorithm depended both on the input and on the target hardware [12].
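For reference, the kind of short serial routine the introduction alludes to looks like the following C sketch of CSR-format SpMV; the array names (row_ptr, col_idx, vals) are illustrative and are not taken from the paper or from clSPARSE.

```c
#include <stddef.h>

/* Minimal serial SpMV for a matrix stored in CSR format: y = A * x.
 * row_ptr has num_rows+1 entries; col_idx and vals each have nnz entries. */
void spmv_csr_serial(size_t num_rows, const size_t *row_ptr,
                     const size_t *col_idx, const double *vals,
                     const double *x, double *y)
{
    for (size_t row = 0; row < num_rows; ++row) {
        double sum = 0.0;
        for (size_t j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            sum += vals[j] * x[col_idx[j]];   /* one multiply-add per non-zero */
        y[row] = sum;
    }
}
```

The gap between this handful of lines and a high-performance GPU implementation (load balancing across rows of very different lengths, coalesced memory access, reduction strategies) is what motivates the dedicated libraries discussed next.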
Good sparse BLAS implementations must thus understand not just the data structures and algorithms but also the underlying hardware. Towards this end, hardware vendors offer optimized sparse BLAS libraries, such as Nvidia's cuSPARSE. These libraries are proprietary and offer limited or no support for hardware from other vendors. Their closed-source nature prevents even minor code or data structure modifications that would result in faster or more maintainable code.

In addition, the speed of advancement for these libraries can be limited because the community has no control over their code. Few (if any) of the algorithms described by Langr and Tvrdík made their way into closed-source libraries, and researchers do not know what optimizations or algorithms the vendors use. While proprietary, vendor-optimized libraries offer good performance for a small selection of hardware, they limit innovation and promote vendor lock-in.

Existing open-source libraries, such as ViennaCL [10], help alleviate these issues. ViennaCL offers support for devices from multiple vendors, demonstrating the benefits of open-source software in preventing vendor lock-in [11]. As an example of the benefits of open-source innovation, ViennaCL added support for the new CSR-Adaptive algorithm within days of its publication [4], while additionally adding the new SELL-C-σ algorithm [7].

Nonetheless, as a library that must sometimes trade performance for portability concerns, ViennaCL can sometimes leave performance on the table [2]. There are potential performance benefits to vendor-optimized libraries, including the ability to preemptively add optimizations for hardware that is not yet released.

In summary, existing sparse BLAS libraries offer either limited closed-source support with high performance or broad open-source support that is not necessarily completely optimized. This work sets out to solve these problems. We describe and test clSPARSE, a vendor-optimized, permissively licensed, open-source sparse BLAS library built using OpenCL™. clSPARSE was built in a collaboration between Advanced Micro Devices, Inc. and Vratis Ltd., and it is available at: http://github.com/clMathLibraries/clSPARSE/

2. CLSPARSE

As part of AMD's GPUOpen initiative to bring strong open-source library support to GPUs, we recently released clSPARSE, an OpenCL™ sparse BLAS library optimized for GPUs. Like the clBLAS, clFFT, and clRNG libraries, clSPARSE is permissively licensed, works on GPUs from multiple vendors, and includes numerous performance optimizations, both from engineers within AMD and elsewhere. clSPARSE is built to work on both Linux® and Microsoft Windows® operating systems.

clSPARSE started as a collection of OpenCL routines developed by AMD and Vratis Ltd. to perform format conversions, level 1 vector operations, general SpMV, and solvers such as conjugate gradient and biconjugate gradient. Further development led to the integration of AMD's CSR-Adaptive algorithm for SpMV [4, 2], a CSR-Adaptive-based sparse matrix-dense matrix multiplication algorithm, and a high-performance sparse matrix-sparse matrix multiplication developed by the University of Copenhagen [9].

clSPARSE has numerous benefits over proprietary libraries such as Nvidia's cuSPARSE. First and foremost, it works across devices from multiple vendors. As we will demonstrate in Section 3, clSPARSE achieves high performance on GPUs from both AMD and Nvidia, and the algorithms are portable to other OpenCL devices. In addition, the algorithms in clSPARSE, due to their open-source nature, are visible to users and easy to change. Even if users do not directly link against clSPARSE, these implementations can be directly used and modified by application developers. Such problem-specific kernel modifications have proven beneficial for techniques such as kernel fusion [11].
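Kernel fusion of the sort cited above can be illustrated with a small OpenCL C sketch (not taken from clSPARSE): a simple one-work-item-per-row CSR SpMV kernel is fused with the residual update r = b - A*x that an iterative solver would otherwise issue as a separate kernel.

```c
// OpenCL C kernel: fused r = b - A*x for a CSR matrix (illustrative sketch).
// Folding the subtraction into the SpMV avoids launching a second kernel and
// re-reading the freshly written SpMV result from global memory.
__kernel void csr_spmv_residual_fused(const uint num_rows,
                                      __global const uint *row_ptr,
                                      __global const uint *col_idx,
                                      __global const float *vals,
                                      __global const float *x,
                                      __global const float *b,
                                      __global float *r)
{
    uint row = get_global_id(0);
    if (row >= num_rows)
        return;

    float sum = 0.0f;
    for (uint j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
        sum += vals[j] * x[col_idx[j]];   // one multiply-add per non-zero

    r[row] = b[row] - sum;                // fused residual update
}
```

A production kernel such as CSR-Adaptive distributes rows across work-groups far more carefully; the point here is only that having the source available lets an application fold its own epilogue into the library's kernel.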
Compared to other open-source libraries, such as ViennaCL, clSPARSE offers significant performance benefits. For example, while ViennaCL uses an early version of the CSR-Adaptive algorithm for SpMV, the newer version used in clSPARSE performs better across a wider range of matrices.

3. PERFORMANCE COMPARISON

This section shows performance comparisons of our library against both a popular proprietary library, cuSPARSE, and a popular open-source library, ViennaCL. Our test platforms are described in Table 1. We test ViennaCL and clSPARSE on an AMD Radeon™ Fury X GPU, currently AMD's top-of-the-line consumer device. To show that clSPARSE works well on GPUs from multiple vendors, we also test an Nvidia GeForce GTX TITAN X, currently Nvidia's top-of-the-line consumer device. Due to space constraints, we only test this GPU on clSPARSE and cuSPARSE.

Table 1: Test Platforms

          AMD                               Nvidia
GPU       Radeon™ Fury X                    GeForce GTX TITAN X
CPU       Intel Core i5-4690K               Intel Core i7-5960X
DRAM      16 GB dual-channel DDR3-2133      64 GB quad-channel DDR4-2133
OS        Ubuntu 14.04.4 LTS                Ubuntu 14.04.4 LTS
Driver    fglrx 15.302                      352.63
SDK       AMD APP SDK 3.0                   CUDA 7.5
Library   ViennaCL v1.7.1, clSPARSE v0.11   cuSPARSE v7.5

The inputs used in our tests are shown in Tables 2 and 3. Some of the matrices include explicit zero values from the input files. We measure runtime by taking a timestamp before calling the library's API, waiting for the work to finish, and then taking another timestamp. We execute each kernel 1000 times on each input and take an average of the runtime. GFLOPs are calculated using the widely accepted metric of estimating 2 FLOPs for every output value.

Figure 1 shows the performance benefit of the clSPARSE SpMV compared to the cuSPARSE csrmv() algorithm. This data shows that clSPARSE on the AMD GPU yields an average performance increase of 5.4× over cuSPARSE running on the Nvidia GPU. While this demonstrates the benefits of AMD-optimized software on AMD hardware, it is also interesting to note that clSPARSE results in a performance increase of 4.5× on the Nvidia GPU when compared against the proprietary Nvidia library. This shows that clSPARSE is beneficial across vendors.

Figure 2 illustrates the performance of the SpMV algorithm in clSPARSE versus the ViennaCL SpMV algorithm on the AMD GPU. It is worth noting that ViennaCL uses an earlier version of the CSR-Adaptive algorithm used in clSPARSE. However, clSPARSE includes both new algorithmic changes [2] and numerous minor performance tuning changes. Through these changes, clSPARSE is able to achieve 2.5× higher performance than ViennaCL.
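The timing methodology described in this section (timestamp, wait for completion, 1000 repetitions, average, then a FLOP-based rate) can be sketched in host-side C with standard OpenCL calls. The clFinish-based synchronization and the flop_count parameter below are illustrative choices, not details taken from the paper's harness.

```c
#include <CL/cl.h>
#include <time.h>

/* Average the runtime of an already-configured kernel over `iters` launches
 * and convert it to GFLOP/s given a caller-supplied FLOP count per launch. */
double time_kernel_gflops(cl_command_queue queue, cl_kernel kernel,
                          size_t global_size, int iters, double flop_count)
{
    struct timespec t0, t1;
    clFinish(queue);                       /* drain any pending work first */
    clock_gettime(CLOCK_MONOTONIC, &t0);   /* timestamp before the calls */

    for (int i = 0; i < iters; ++i)
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size,
                               NULL, 0, NULL, NULL);

    clFinish(queue);                       /* wait for the work to finish */
    clock_gettime(CLOCK_MONOTONIC, &t1);   /* timestamp after completion */

    double seconds = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    double avg = seconds / iters;          /* average runtime per launch */
    return flop_count / avg / 1e9;         /* GFLOP/s */
}
```

For SpMV, flop_count would follow the paper's stated convention of two floating-point operations per output value produced.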
Recommended publications
  • Heterogeneous Computing for Advanced Driver Assistance Systems
  • AMD APP SDK V2.8.1
  • AMD APP SDK V2.9.1 Getting Started
  • AMD APP SDK V3.0 Beta
  • AMD APP SDK Developer Release Notes
  • Press Release
  • AMD APP SDK 3.0 Samples Release Notes Document
  • Regression Modelling of Power Consumption for Heterogeneous Processors
  • Matrox Imaging Library (MIL) 9.0
  • Design and Evaluation of Register Allocation on Gpus
  • Parallel for Loops on Heterogeneous Resources
  • Cross Layer Reliability Estimation for Digital Systems