GPGPU 2015: High Performance Computing with CUDA
Tutorial contents for today [118 slides]
Department of Computer Science, University of Cape Town, April 20th-24th, 2015

Manuel Ujaldón
Associate Professor @ Univ. of Malaga (Spain)
Conjoint Senior Lecturer @ Univ. of Newcastle (Australia)
CUDA Fellow @ Nvidia

1. Introduction. [17 slides]
2. Architecture. [21]
   1. CUDA hardware model. [3]
   2. The first generation: Tesla (2007-2009). [3]
   3. The second generation: Fermi (2010-2011). [3]
   4. The third generation: Kepler (2012-2014). [6]
   5. The fourth generation: Maxwell (2015-?). [5]
   6. Summary by generation. [1]
3. Programming. [15]
4. Syntax. [16]
   1. Basic elements. [10]
   2. A couple of preliminary examples. [6]
5. Compilation and tools. [12]
6. Examples: VectorAdd, Stencil, MxM. [25]
7. Bibliography, resources and tools. [12]

Prerequisites for this tutorial

You (probably) need experience with C.
You do not need a parallel programming background (but it helps if you have one).
You do not need to know the GPU architecture: we will start with the basic pillars.
You do not need graphics experience. Those were the old times (shaders, Cg). With CUDA, no knowledge of vertices, pixels, textures, ... is required.

I. Introduction

The characters of this story: Welcome to the GPU world
[Figure: The CUDA family picture]

The impressive evolution of CUDA

                               Year 2008             Year 2015
CUDA-capable GPUs              100,000,000           600,000,000
                               (6,000 Teslas only)   (450,000 Tesla high-end GPUs)
CUDA downloads per year        150,000               3,000,000 (one every 9 seconds)
Supercomputers in top500.org   1 (77 TFLOPS)         75 (aggregate 54,000 TFLOPS)
University courses             60                    840
Academic papers                4,000                 60,000

[Figure: Worldwide distribution of CUDA university courses]

Summary of GPU evolution

2001: First many-cores (vertex and pixel processors).
2003: Those processors become programmable (with Cg).
2006: Vertex and pixel processors unify.
2007: CUDA emerges.
2008: Double precision floating-point arithmetic.
2010: Operands are IEEE-normalized and memory is ECC.
2012: Wider support for irregular computing.
2014: The CPU-GPU memory space is unified.
Still pending: reliability in clusters and connection to disk.

The 3 features which have made the GPU such a unique processor

Simplified: The control required for one thread is amortized by 31 more (a warp).
Scalability: Makes use of the huge data volume handled by applications to define a sustainable parallelization model.
Productivity: Endowed with efficient mechanisms to switch immediately to another thread whenever the one being executed suffers from stalls.
CUDA essential keywords: warp, SIMD, latency hiding, free context switch.

What is CUDA?

"Compute Unified Device Architecture": a platform designed jointly at the software and hardware levels to exploit the GPU's computational power in general-purpose applications, at three levels:
Software: It allows programming the GPU with minimal but powerful SIMD extensions, to enable heterogeneous programming and attain an efficient and scalable execution.
Firmware: It offers a driver oriented to GPGPU programming, compatible with the one used for rendering, plus straightforward APIs to manage devices, memory, etc.
Hardware: It exposes GPU parallelism for general-purpose computing via a number of multiprocessors endowed with cores and a memory hierarchy.

CUDA C at a glance

Essentially, it is the C language with minimal extensions: the programmer writes the program for a single thread, and the code is automatically instantiated over hundreds of threads.
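To make that single-thread view concrete, here is a minimal sketch (the kernel, its name and the launch sizes are ours for illustration, not taken from the slides): the body is written for one thread, which derives the index of the element it owns from its built-in coordinates, and the launch configuration instantiates that body over the whole grid.

    // Written for ONE thread: the <<<blocks, threads>>> launch below
    // instantiates this body over many threads in parallel.
    __global__ void scale(float *v, float factor, int n)
    {
        int i = threadIdx.x + blockIdx.x * blockDim.x; // this thread's element
        if (i < n)                // guard: the grid may overshoot the data
            v[i] *= factor;
    }

    // Host-side launch, e.g. for n elements in blocks of 256 threads:
    //     scale<<<(n + 255) / 256, 256>>>(d_v, 2.0f, n);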
CUDA defines:
An architectural model: many processing cores grouped in multiprocessors that share a SIMD control unit.
A programming model: based on massive data parallelism and fine-grained parallelism, and scalable: the code executes on a different number of cores without recompiling it.
A memory management model: more explicit to the programmer, where caches are no longer transparent.

Goals:
Build code which scales to hundreds of cores in a simple way, allowing us to declare thousands of threads.
Allow heterogeneous computing (between CPUs and GPUs).

Heterogeneous Computing (1/4)

Terminology:
Host: The CPU and the memory on the motherboard [DDR3 as of 2013].
Device: The graphics card [GPU + video memory]:
  GPU: Nvidia GeForce/Tesla.
  Video memory: GDDR5 as of 2015.

[Figure: Host = CPU (cores and caches, 50 GB/s.) plus 3-channel DDR3 system memory (192 bits = 24 bytes @ 1.333 GHz, 32 GB/s.); Device = GPU plus GDDR5 video memory (384 bits @ 3 GHz, 144 GB/s.); host and device connected by PCI-e 3.0 at 8 GB/s.]

Heterogeneous Computing (2/4)

CUDA executes a program on a device (the GPU), which is seen as a co-processor for the host (the CPU).
CUDA can be seen as a library of functions which contains 3 types of components:
Host: Control and access to the devices.
Device: Specific functions for the devices.
All: Vector data types and a set of routines supported on both sides.

Heterogeneous Computing (3/4)

DEVICE CODE: a parallel function (kernel) written in CUDA.

    #include <iostream>
    #include <algorithm>
    using namespace std;

    #define N 1024
    #define RADIUS 3
    #define BLOCK_SIZE 16

    __global__ void stencil_1d(int *in, int *out)
    {
        __shared__ int temp[BLOCK_SIZE + 2 * RADIUS];
        int gindex = threadIdx.x + blockIdx.x * blockDim.x;
        int lindex = threadIdx.x + RADIUS;

        // Read input elements into shared memory
        temp[lindex] = in[gindex];
        if (threadIdx.x < RADIUS) {
            temp[lindex - RADIUS] = in[gindex - RADIUS];
            temp[lindex + BLOCK_SIZE] = in[gindex + BLOCK_SIZE];
        }

        // Synchronize (ensure all the data is available)
        __syncthreads();

        // Apply the stencil
        int result = 0;
        for (int offset = -RADIUS; offset <= RADIUS; offset++)
            result += temp[lindex + offset];

        // Store the result
        out[gindex] = result;
    }

Heterogeneous Computing (4/4)

HOST CODE: serial code, then the parallel kernel launch, then serial code again.

    void fill_ints(int *x, int n) { fill_n(x, n, 1); }

    int main(void)
    {
        int *in, *out;        // host copies
        int *d_in, *d_out;    // device copies
        int size = (N + 2 * RADIUS) * sizeof(int);

        // Alloc space for host copies and setup values
        in  = (int *)malloc(size); fill_ints(in,  N + 2 * RADIUS);
        out = (int *)malloc(size); fill_ints(out, N + 2 * RADIUS);

        // Alloc space for device copies
        cudaMalloc((void **)&d_in, size);
        cudaMalloc((void **)&d_out, size);

        // Copy to device
        cudaMemcpy(d_in, in, size, cudaMemcpyHostToDevice);
        cudaMemcpy(d_out, out, size, cudaMemcpyHostToDevice);

        // Launch stencil_1d() kernel on GPU
        stencil_1d<<<N/BLOCK_SIZE, BLOCK_SIZE>>>(d_in + RADIUS, d_out + RADIUS);

        // Copy result back to host
        cudaMemcpy(out, d_out, size, cudaMemcpyDeviceToHost);

        // Cleanup
        free(in); free(out);
        cudaFree(d_in); cudaFree(d_out);
        return 0;
    }

The CUDA code may amount to less than 5% of the program source, yet account for more than 50% of the execution time if it were left on the CPU.
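The stencil listing above omits error handling to keep the slides readable. Every CUDA runtime call actually returns a cudaError_t, so a common host-side pattern is a checking wrapper; the CHECK macro below is a hypothetical helper of ours, not part of CUDA or of the tutorial:

    #include <cstdio>
    #include <cstdlib>

    // Hypothetical CHECK macro: inspects the cudaError_t returned by
    // every CUDA runtime call and aborts with a readable message.
    #define CHECK(call)                                                   \
        do {                                                              \
            cudaError_t err_ = (call);                                    \
            if (err_ != cudaSuccess) {                                    \
                fprintf(stderr, "CUDA error '%s' at %s:%d\n",             \
                        cudaGetErrorString(err_), __FILE__, __LINE__);    \
                exit(EXIT_FAILURE);                                       \
            }                                                             \
        } while (0)

    // Usage, e.g.: CHECK(cudaMalloc((void **)&d_in, size));
    //              CHECK(cudaMemcpy(d_in, in, size, cudaMemcpyHostToDevice));

Kernel launches themselves return nothing, so their failures are picked up afterwards with CHECK(cudaGetLastError()).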
Simple Processing Flow (1/3, 2/3, 3/3)

Across the PCI bus:
1. Copy input data from CPU memory to GPU memory.
2. Load the GPU program and execute it, caching data on chip for performance.
3. Transfer the results from GPU memory back to CPU memory.

The classic example

    int main(void)
    {
        printf("Hello World!\n");
        return 0;
    }

Output:
    $ nvcc hello.cu
    $ a.out
    Hello World!
    $

Standard C that runs on the host.
The NVIDIA compiler (nvcc) can be used to compile programs with no device code.

Hello World! with device code (1/2)

    __global__ void mykernel(void)
    {
    }

    int main(void)
    {
        mykernel<<<1,1>>>();
        printf("Hello World!\n");
        return 0;
    }

Two new syntactic elements:
The CUDA C keyword __global__ indicates a function that runs on the device and is called from host code.
mykernel<<<1,1>>> is a CUDA kernel launch from the host code.
nvcc separates the source code into host and device components:
  Device functions (like mykernel()) are processed by the NVIDIA compiler.
  Host functions (like main()) are processed by the host compiler (gcc for Unix, cl.exe for Windows).

Hello World! with device code (2/2)

Triple angle brackets mark a call from host code to device code, also called a "kernel launch".
The parameters <<<1,1>>> describe the CUDA parallelism (blocks and threads).
mykernel() does nothing this time, but that's all that is required to execute a function on the GPU!

Output:
    $ nvcc hello.cu
    $ a.out
    Hello World!
    $

"... and if software people want good machines, they must learn more about hardware to influence that way hardware designers ..."
David A. Patterson & John Hennessy
Computer Organization and Design
McGraw-Hill (1995), Chapter 9, page 569

II. Architecture

[Figure: Overview of CUDA hardware generations, plotting GFLOPS in double precision per watt consumed (2008-2016): Tesla (CUDA, FP64), Fermi, Kepler (Dynamic Parallelism), Maxwell (Unified memory, DX12), Pascal (3D Memory, NVLink).]

II.1. CUDA hardware model

The CUDA hardware model: SIMD processors structured, a tale of hardware scalability

A GPU consists of N multiprocessors (or SMs), each containing M cores (or stream processors) driven by a shared SIMD control unit.
Massive parallelism: applied to thousands of threads, sharing data at different levels.
Heterogeneous computing: data intensive, fine-grain parallelism.

GPU           G80        GT200      GF100      GK110
              (Tesla)    (Tesla)    (Fermi)    (Kepler)
Time period   2006-07    2008-09    2010-11    2012-13   ...

Memory hierarchy

Each multiprocessor has:
A register file.
Shared memory.
A constant cache and a texture cache, both read-only.
Global memory is the actual video memory (GDDR5): three times faster than the DDR3 used by the CPU, but...
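The hardware parameters named above (number of SMs, shared memory, global memory) can be read from a concrete card through the runtime's device query interface. A minimal sketch using standard cudaGetDeviceProperties fields (the program itself is ours, not part of the tutorial):

    #include <cstdio>

    int main(void)
    {
        int count = 0;
        cudaGetDeviceCount(&count);            // how many CUDA devices?
        for (int d = 0; d < count; d++) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, d); // filled in by the runtime
            printf("Device %d: %s (compute capability %d.%d)\n",
                   d, prop.name, prop.major, prop.minor);
            printf("  Multiprocessors (SMs):   %d\n", prop.multiProcessorCount);
            printf("  Shared memory per block: %zu bytes\n", prop.sharedMemPerBlock);
            printf("  Global (video) memory:   %zu bytes\n", prop.totalGlobalMem);
        }
        return 0;
    }

Compiled with nvcc, this prints the N (multiprocessors) and the memory sizes of the hardware model for each installed GPU.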
