Introduction to GPGPU for AI
Total Pages: 16. File Type: PDF, Size: 1020 KB
Recommended publications
Lecture 7 CUDA
Dr. Wilson Rivera, ICOM 6025: High Performance Computing, Electrical and Computer Engineering Department, University of Puerto Rico. Outline: GPU vs. CPU; the CUDA execution model; CUDA types; CUDA programming; CUDA timers. CUDA (Compute Unified Device Architecture) is a data-parallel programming interface to GPUs, designed and developed by NVIDIA, and requires an NVIDIA GPU (GeForce, Tesla, Quadro). GPU and CPU, the differences: the GPU devotes more transistors to computation instead of caching or flow control, and its threads are extremely lightweight, with very little creation overhead, making it suitable for data-intensive computation with a high ratio of arithmetic to memory operations. Grids and blocks: a kernel is executed as a grid of thread blocks, and all threads in the grid share the data memory space. A thread block is a batch of threads that can cooperate with each other by synchronizing their execution (for hazard-free shared-memory accesses) and by efficiently sharing data through a low-latency shared memory; two threads from two different blocks cannot cooperate (unless through slow global memory). Threads and blocks have IDs.
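To make the grid/block/thread model concrete, here is a minimal CUDA sketch (ours, not the lecture's; kernel name and sizes are illustrative) in which each block cooperates through shared memory behind a barrier:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each block cooperatively reverses its own 256-element tile: threads stage
// data in low-latency shared memory, synchronize for hazard-free access,
// then write out using their block and thread IDs.
__global__ void reverseTiles(const float* in, float* out, int n) {
    __shared__ float tile[256];
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global index from IDs
    if (i < n) tile[threadIdx.x] = in[i];
    __syncthreads();                                // barrier within the block
    if (i < n) out[i] = tile[blockDim.x - 1 - threadIdx.x];
}

int main() {
    const int n = 1024;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = (float)i;
    reverseTiles<<<n / 256, 256>>>(in, out, n);     // grid of 4 blocks
    cudaDeviceSynchronize();
    printf("out[0] = %.0f\n", out[0]);              // 255, end of first tile
    cudaFree(in);
    cudaFree(out);
}
```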
ATI Radeon™ HD 4870 Computation Highlights
AMD: Entering the Golden Age of Heterogeneous Computing. Michael Mantor, Senior GPU Compute Architect / Fellow, AMD Graphics Product Group. The four pillars of massively parallel compute offload: performance (Moore's Law, 2x in under 18 months, against the frequency/power/complexity wall), power (parallelism as the opportunity for growth), price, and programming models. The GPU is the first successful massively parallel commodity architecture with a programming model that managed to tame thousands of parallel threads in hardware to perform useful work efficiently. A quick recap of performance, power, and price: the ATI Radeon™ HD 4850 delivered 4x performance/W and performance/mm² in a year, following the ATI Radeon™ X1800 XT, X1900 XTX, X1950 PRO, HD 2900 XT, and HD 3850. GigaFLOPS per watt is maximum theoretical performance divided by maximum board power; GigaFLOPS per $ is maximum theoretical performance divided by price as reported on www.buy.com as of 9/24/08. ATI Radeon™ HD 4850 (designed to perform in a single slot): SP compute power 1.0 TFLOPS; DP compute power 200 GFLOPS; core clock 625 MHz; 800 stream processors; 512 MB GDDR3; max board power 110 W; memory bandwidth 64 GB/s. ATI Radeon™ HD 4870 (first graphics card with GDDR5): SP compute power 1.2 TFLOPS; DP compute power 240 GFLOPS; core clock 750 MHz; 800 stream processors; 512 MB GDDR5 at 3.6 Gbps; max board power 160 W; memory bandwidth 115.2 GB/s. ATI Radeon™ HD 4870 X2: an incredible balance of performance, power, and price.
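The quoted single-precision figures follow from stream processors x clock x 2 flops per multiply-add, the usual accounting assumption; this short host-side check (ours) reproduces them and the GFLOPS-per-watt ratios:

```cpp
#include <cstdio>

// Sanity check of the quoted figures, assuming the usual accounting of one
// multiply-add (2 flops) per stream processor per cycle; power figures are
// the boards' quoted maxima.
int main() {
    const double sps = 800.0;                  // stream processors
    const double f4850 = sps * 0.625e9 * 2.0;  // 625 MHz -> 1.0 TFLOPS
    const double f4870 = sps * 0.750e9 * 2.0;  // 750 MHz -> 1.2 TFLOPS
    printf("HD 4850: %.1f TFLOPS, %.1f GFLOPS/W\n", f4850 / 1e12, f4850 / 1e9 / 110.0);
    printf("HD 4870: %.1f TFLOPS, %.1f GFLOPS/W\n", f4870 / 1e12, f4870 / 1e9 / 160.0);
}
```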
AMD Accelerated Parallel Processing OpenCL Programming Guide
AMD Accelerated Parallel Processing OpenCL Programming Guide, November 2013, rev. 2.7. © 2013 Advanced Micro Devices, Inc. All rights reserved.
AMD OpenCL User Guide
AMD Accelerated Parallel Processing OpenCL User Guide, December 2014, rev. 1.0. © 2014 Advanced Micro Devices, Inc. All rights reserved.
Novel Methodologies for Predictable CPU-To-GPU Command Offloading
Roberto Cavicchioli, Nicola Capodieci, Marco Solieri, and Marko Bertogna, Università di Modena e Reggio Emilia, Italy. Abstract: There is an increasing industrial and academic interest towards a more predictable characterization of real-time tasks on high-performance heterogeneous embedded platforms, where a host system offloads parallel workloads to an integrated accelerator, such as General Purpose Graphic Processing Units (GP-GPUs). In this paper, we analyze an important aspect that has not yet been considered in the real-time literature, and that may significantly affect real-time performance if not properly treated: the time spent by the CPU for submitting GP-GPU operations. We show that the impact of CPU-to-GPU kernel submissions may indeed be relevant for typical real-time workloads, and that it should be properly factored in when deriving an integrated schedulability analysis for the considered platforms. This is the case when an application is composed of many small and consecutive GPU compute/copy operations. While existing techniques mitigate this issue by batching kernel calls into a reduced number of persistent kernel invocations, in this work we present and evaluate three other approaches that are made possible by recently released versions of the NVIDIA CUDA GP-GPU API, and by Vulkan, a novel open standard GPU API that allows an improved control of GPU command submissions.
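One CUDA mechanism in the spirit of the submission-cost problem the abstract describes is CUDA Graphs, which lets many small kernel launches be captured once and replayed with a single CPU-side submission. The sketch below is illustrative only, uses the stock CUDA runtime API, and is not the paper's code:

```cuda
#include <cuda_runtime.h>

__global__ void tinyKernel(float* x) { x[threadIdx.x] += 1.0f; }

// Hedged sketch: many small kernels are recorded once via stream capture,
// then the whole batch is replayed with a single CPU-side submission.
int main() {
    float* d;
    cudaMalloc(&d, 256 * sizeof(float));
    cudaMemset(d, 0, 256 * sizeof(float));
    cudaStream_t s;
    cudaStreamCreate(&s);

    cudaGraph_t graph;
    cudaStreamBeginCapture(s, cudaStreamCaptureModeGlobal);
    for (int i = 0; i < 100; ++i)            // 100 small, consecutive kernels
        tinyKernel<<<1, 256, 0, s>>>(d);     // recorded, not executed, here
    cudaStreamEndCapture(s, &graph);

    cudaGraphExec_t exec;
    cudaGraphInstantiate(&exec, graph, nullptr, nullptr, 0);
    cudaGraphLaunch(exec, s);                // one submission replays all 100
    cudaStreamSynchronize(s);

    cudaGraphExecDestroy(exec);
    cudaGraphDestroy(graph);
    cudaStreamDestroy(s);
    cudaFree(d);
}
```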
Comparison of Technologies for General-Purpose Computing on Graphics Processing Units
Torbjörn Sörman. Master of Science thesis in Information Coding, Department of Electrical Engineering, Linköping University, 2016 (LiTH-ISY-EX–16/4923–SE). Supervisors: Robert Forchheimer (ISY, Linköping University) and Åsa Detterfelt (MindRoad AB); examiner: Ingemar Ragnemalm (ISY, Linköping University). Abstract: The computational capacity of graphics cards for general-purpose computing has progressed fast over the last decade. A major reason is computationally heavy computer games, where the standards for performance and high-quality graphics constantly rise. Another reason is better-suited technologies for programming graphics cards. Combined, the product is devices with high raw performance and the means to access that performance. This thesis investigates some of the current technologies for general-purpose computing on graphics processing units. Technologies are compared primarily by benchmarking performance and secondarily by factors concerning programming and implementation. The choice of technology can have a large impact on performance: the benchmark application found the difference in execution time between the fastest technology, CUDA, and the slowest, OpenCL, to be a factor of two. It also found that the older technologies, OpenGL and DirectX, are competitive with CUDA and OpenCL in terms of resulting raw performance.
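Benchmarks like this depend on careful GPU timing; a common CUDA approach uses device-side events, as in this hedged sketch (the saxpy kernel and sizes are ours, not the thesis's harness):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(float a, const float* x, float* y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// Timing with CUDA events: timestamps are taken on the device, so the
// measurement excludes host-side noise around the launch.
int main() {
    const int n = 1 << 22;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));
    cudaMemset(y, 0, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    saxpy<<<(n + 255) / 256, 256>>>(2.0f, x, y, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);  // milliseconds between the events
    printf("saxpy over %d elements: %.3f ms\n", n, ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(x);
    cudaFree(y);
}
```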
Adaptive GPU Tessellation with Compute Shaders Jad Khoury, Jonathan Dupuy, and Christophe Riccio
1.1 Introduction. GPU rasterizers are most efficient when primitives project into more than a few pixels. Below this limit, the Z-buffer starts aliasing, and shading rate decreases dramatically [Riccio 12]; this makes the rendering of geometrically-complex scenes challenging, as any moderately distant polygon will project to sub-pixel size. In order to minimize such sub-pixel projections, a simple solution consists in procedurally refining coarse meshes as they get closer to the camera. In this chapter, we are interested in deriving such a procedural refinement technique for arbitrary polygon meshes. Traditionally, mesh refinement has been computed on the CPU via recursive algorithms such as quadtrees [Duchaineau et al. 97, Strugar 09] or subdivision surfaces [Stam 98, Cashman 12]. Unfortunately, CPU-based refinement is now fundamentally bottlenecked by the massive CPU-GPU streaming of geometric data it requires for high resolution rendering. In order to avoid these data transfers, extensive work has been dedicated to implement and/or emulate these recursive algorithms directly on the GPU by leveraging tessellation shaders (see, e.g., [Niessner et al. 12, Cashman 12, Mistal 13]). While tessellation shaders provide a flexible, hardware-accelerated mechanism for mesh refinement, they remain limited in two respects. First, they only allow up to log2(64) = 6 levels of subdivision. Second, their performance drops along with subdivision depth [AMD 13]. In the following sections, we introduce a GPU-based refinement scheme that is free from the limitations incurred by tessellation shaders.
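The core of any distance-based refinement scheme is choosing a subdivision level per polygon. The toy function below is our illustration under stated assumptions, not the chapter's actual algorithm: each level halves the target edge length, hence the log2, and compute shaders lift the 6-level cap:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Hedged illustration: pick a subdivision level so refined edges stay near
// one projected pixel. All names and the screen-space factor are ours.
int targetSubdivisionLevel(float edgeLen, float distance,
                           float pixelsPerUnitAtUnitDistance, int maxLevel) {
    float projectedPixels = edgeLen * pixelsPerUnitAtUnitDistance / distance;
    int level = (int)std::floor(std::log2(std::max(projectedPixels, 1.0f)));
    return std::min(level, maxLevel);  // more levels than tessellation shaders allow
}

int main() {
    for (float d = 1.0f; d <= 64.0f; d *= 4.0f)
        printf("distance %5.1f -> subdivision level %d\n",
               d, targetSubdivisionLevel(1.0f, d, 1024.0f, 10));
}
```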
A Qualitative Comparison Study Between Common GPGPU Frameworks
Adam Söderström, Department of Computer and Information Science, Linköping University. A dissertation submitted for the degree of M.Sc. in Media Technology and Engineering, June 2018, carried out at MindRoad AB with acknowledgements to Åsa Detterfelt, Ingemar Ragnemalm, and August Ernstsson. Abstract: The development of graphics processing units has improved significantly in performance over the last decade, while the devices have at the same time become cheaper. This has opened up a new type of usage, in which the massive parallelism available in modern GPUs is used for more general-purpose computing, also known as GPGPU. Frameworks have been developed just for this purpose, and some of the most popular are CUDA, OpenCL, and DirectX Compute Shaders (also known as DirectCompute). The choice of framework may depend on factors such as features, portability, and framework complexity. This thesis evaluates these concepts, while also comparing the speedup of a parallel implementation of the N-body problem with Barnes-Hut optimization against a sequential implementation.
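For context, the direct-sum kernel below (our sketch, not the thesis code) is the O(n²) computation that Barnes-Hut accelerates to O(n log n) by letting each sufficiently distant octree cell act as a single pseudo-body:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hedged sketch: brute-force gravitational acceleration. p[j].w holds the
// mass (with G folded in); soft2 is a softening term that avoids r = 0.
__global__ void accel(const float4* p, float3* a, int n, float soft2) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float3 ai = make_float3(0.0f, 0.0f, 0.0f);
    for (int j = 0; j < n; ++j) {                    // Barnes-Hut prunes this loop
        float dx = p[j].x - p[i].x;
        float dy = p[j].y - p[i].y;
        float dz = p[j].z - p[i].z;
        float r2 = dx * dx + dy * dy + dz * dz + soft2;
        float inv = rsqrtf(r2);
        float s = p[j].w * inv * inv * inv;          // m_j / r^3
        ai.x += s * dx; ai.y += s * dy; ai.z += s * dz;
    }
    a[i] = ai;
}

int main() {
    const int n = 1024;
    float4* p; float3* a;
    cudaMallocManaged(&p, n * sizeof(float4));
    cudaMallocManaged(&a, n * sizeof(float3));
    for (int i = 0; i < n; ++i)
        p[i] = make_float4((float)(i % 32), (float)((i / 32) % 32), 0.0f, 1.0f);
    accel<<<(n + 255) / 256, 256>>>(p, a, n, 1e-4f);
    cudaDeviceSynchronize();
    printf("a[0] = (%g, %g, %g)\n", a[0].x, a[0].y, a[0].z);
    cudaFree(p); cudaFree(a);
}
```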
GPGPU Processing in CUDA Architecture
Advanced Computing: An International Journal (ACIJ), Vol. 3, No. 1, January 2012. Jayshree Ghorpade, Madhura Kulkarni, and Amit Bawaskar (Department of Computer Engineering, MITCOE, Pune University, India) and Jitendra Parande (SunGard Global Technologies, India). Abstract: The future of computation is the graphical processing unit, i.e. the GPU. Given the promise that graphics cards have shown in the field of image processing and accelerated rendering of 3D scenes, and the computational capability these GPUs possess, they are developing into great parallel computing units. It is quite simple to program a graphics processor to perform general parallel tasks, and after understanding the various architectural aspects of the graphics processor, it can be used to perform other taxing tasks as well. This paper shows how CUDA, NVIDIA's parallel computing architecture, can fully utilize the tremendous power of these GPUs to enable dramatic increases in computing performance. The paper describes CUDA and its architecture, compares CUDA C/C++ with other parallel programming languages like OpenCL and DirectCompute, lists common myths about CUDA, and argues that the future looks promising for CUDA. Keywords: GPU, GPGPU, thread, block, grid, GFLOPS, CUDA, OpenCL, DirectCompute, data parallelism, ALU. 1. Introduction: GPU computation has provided a huge edge over the CPU with respect to computation speed.
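A minimal example of the thread/block/grid data parallelism the paper describes (our sketch, not taken from the paper):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per element; threads are grouped into blocks, and blocks are
// grouped into a grid, which is CUDA's basic data-parallel decomposition.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);  // 4096 blocks x 256 threads
    cudaDeviceSynchronize();
    printf("c[0] = %.1f\n", c[0]);                 // 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
}
```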
Near Data Processing: Are We There Yet?
Maya Gokhale, Lawrence Livermore National Laboratory. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract No. DE-AC52-07NA27344 (LLNL-PRES-665279). Outline: why we need near-memory computing; niche applications; data reorganization engines; computing near storage; FPGAs for computing near memory. Why aren't we there yet? Processing near memory is attractive for high bandwidth, low latency, and massive parallelism, but 90s-era designs that closely integrated logic with the DRAM transistor were expensive, slow, and niche. Is 3D packaging the answer, or is it likewise expensive, slow, and niche? What are the technology and commercial incentives? The exascale power problem: data movement is a prime culprit. The cost of a double-precision flop is negligible compared to the cost of reading and writing memory; cache-unfriendly applications waste memory bandwidth and energy, and the number of nodes needed to solve a problem is inflated by poor memory-bandwidth utilization; a similar phenomenon occurs with disk (sources: Horst Simon, LBNL; Greg Astfalk, HP). The Micron Hybrid Memory Cube offers an opportunity for processing near memory: a fast logic layer with customized processors and through-silicon vias (TSVs) gives high-bandwidth access to memory, and the vault organization yields enormous bandwidth within the package, although best-case latency is higher than traditional DRAM due to the packetized abstract memory interface. (The slides close with HMC latency measurements: 128 B packets, 50/50 read/write mix, from link i to vault i and from link i to vault j, one link active.)
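A back-of-the-envelope roofline argument makes the data-movement point concrete; the numbers in this sketch are hypothetical, not from the talk:

```cpp
#include <cstdio>

// A stream-style triad a[i] = b[i] + s*c[i] moves three doubles (24 B) per
// 2 flops, so its attainable rate is capped by memory bandwidth, far below
// the machine's peak flops. All figures below are made-up examples.
int main() {
    const double peakFlops = 1e12;        // hypothetical 1 TFLOP/s node
    const double memBW     = 100e9;       // hypothetical 100 GB/s memory system
    const double intensity = 2.0 / 24.0;  // flops per byte for the triad
    const double attainable = memBW * intensity;
    printf("attainable %.1f GFLOP/s = %.1f%% of peak\n",
           attainable / 1e9, 100.0 * attainable / peakFlops);
}
```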
Integrated Framework for Heterogeneous Embedded Platforms Using OpenCL
Northeastern University, Graduate School of Engineering. Integrated Framework for Heterogeneous Embedded Platforms Using OpenCL, a thesis presented by Kulin Seth to the Department of Electrical and Computer Engineering in partial fulfillment of the requirements for the degree of Master of Science in Electrical and Computer Engineering, Boston, Massachusetts, March 2011. Thesis advisor: Prof. David Kaeli; committee: Prof. Gunar Schirner and Dr. Rafael Ubal. Abstract: The technology community is rapidly moving away from the age of computers and laptops and entering the emerging era of hand-held devices. With the rapid development of smartphones, tablets, and pads, there has been widespread adoption of graphics processing units (GPUs) in the embedded space. The hand-held market is now seeing an ever-increasing rate of development of computationally intensive applications, which require significant amounts of processing resources. To meet this challenge, GPUs can be used for general-purpose processing. We are moving towards a future where devices will be more connected and integrated, allowing applications to run on handheld devices while offloading computationally intensive tasks to other available compute units. There is a growing need for a general programming framework that can utilize heterogeneous processing units such as GPUs and DSPs on embedded platforms.
Virtual GPU Software User Guide
Virtual GPU Software User Guide, DU-06920-001 v13.0, Revision 02, August 2021. Table of contents (excerpt): Chapter 1, Introduction to NVIDIA vGPU Software, introduces the capabilities and features of NVIDIA vGPU software: 1.1 How NVIDIA vGPU Software Is Used (1.1.2 GPU Pass-Through; 1.1.3 Bare-Metal Deployment); 1.2 Primary Display Adapter Requirements for NVIDIA vGPU Software Deployments; 1.3 NVIDIA vGPU Software Features (1.3.1 GPU Instance Support on NVIDIA vGPU Software; 1.3.2 API Support on NVIDIA vGPU; 1.3.3 NVIDIA CUDA Toolkit and OpenCL Support on NVIDIA vGPU Software; 1.3.4 Additional vWS Features; 1.3.5 NVIDIA GPU Cloud (NGC) Containers Support on NVIDIA vGPU Software; 1.3.6 NVIDIA GPU Operator Support); 1.4 How this Guide Is Organized.