GPGPUs: Overview
Pedagogically Precursor Concepts
University of Illinois at Urbana-Champaign
© 2018 L. V. Kale, University of Illinois Urbana-Champaign

Precursor Concepts: Pedagogical
• Architectural elements that, at least pedagogically, are precursors to understanding GPGPUs:
  • SIMD and vector units (we have already seen those)
  • Large-scale "hyperthreading" for latency tolerance
  • Scratchpad memories
  • High-bandwidth memory

Tera MTA, Sun UltraSPARC T1 (Niagara)
• Tera Computer Company, with Burton Smith as a co-founder (1987)
  • Precursor: the HEP processor (Denelcor Inc.), 1982
  • Its first machine was the MTA: MTA-1, MTA-2, MTA-3
• Basic idea:
  • Support a huge number of hardware threads (128), each with its own registers
  • No cache!
  • Switch among threads on every cycle, thus tolerating DRAM latency
  • These threads could be running different processes
• Such processors are called "barrel processors" in the literature
  • But they always switched to the "next" thread, so your turn comes 127 cycles later, always
• Was especially good for highly irregular accesses

Scratchpad Memory
• Caches are complex and can have an unpredictable impact on performance
• Scratchpad memories are made from SRAM on chip
  • A separate part of the address space
  • As fast as or faster than caches, because there is no tag matching or associative search
  • Need explicit instructions to bring data into the scratchpad from memory
  • Load/store instructions exist between registers and the scratchpad
• Example: the IBM/Toshiba/Sony Cell processor used in the PS3
  • 1 PPE and 8 SPE cores; each SPE core has a 256 KiB scratchpad
  • DMA is the mechanism for moving data between the scratchpad and external DRAM
• (A code sketch of the scratchpad idea appears after the next slide)

High-Bandwidth Memory
• As you increase the compute capacity of processor chips, the bandwidth to DRAM comes under pressure
• Many past improvements (e.g., Intel's Nehalem) were the result of improving bandwidth, including the integration of memory-related hardware into the processor chip
• Recall: bandwidth is a matter of resources (here with some inherent limits on the number of pins per chip)
• GPUs have historically used higher-bandwidth DRAM configurations
  • GDDR3 SDRAM, GDDR4, GDDR5, GDDR5X (10-ish Gbit/s per pin)
• Recent advance: high-bandwidth memory via 3D stacking
  • Multiple DRAM dies are stacked vertically and connected by through-silicon vias (TSVs)
  • Used by NVIDIA; related technologies include MCDRAM (Intel) and the Hybrid Memory Cube (Micron)
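The scratchpad idea resurfaces on GPUs as the per-SM "shared memory" described later in this deck. As a bridge between the two, here is a minimal sketch (my addition, not from the original slides) of explicit staging: a CUDA kernel copies a tile of a vector into the on-chip scratchpad, synchronizes, and then computes out of the fast memory. The kernel name smooth and the tile size TILE are illustrative choices, and n is assumed to be a multiple of TILE.

    #define TILE 256   // threads per block; chosen for illustration

    __global__ void smooth(const float* in, float* out, int n) {
        __shared__ float buf[TILE + 2];                 // on-chip scratchpad ("shared memory"), with halo
        int g = blockIdx.x * blockDim.x + threadIdx.x;  // global element index
        int l = threadIdx.x + 1;                        // local index, offset by 1 for the halo

        // Explicit "load into scratchpad" step: each thread stages one element,
        // and the two edge threads also fetch the halo elements
        buf[l] = in[g];
        if (threadIdx.x == 0)        buf[0]        = (g > 0)     ? in[g - 1] : 0.0f;
        if (threadIdx.x == TILE - 1) buf[TILE + 1] = (g + 1 < n) ? in[g + 1] : 0.0f;
        __syncthreads();   // wait until the whole tile is in the scratchpad

        // Compute out of the scratchpad rather than DRAM
        out[g] = (buf[l - 1] + buf[l] + buf[l + 1]) / 3.0f;
    }

    // Launch (n a multiple of TILE): smooth<<<n / TILE, TILE>>>(in, out, n);

Each element of in is read up to three times by the kernel, but only once from DRAM; the other two reads hit the scratchpad. That is exactly the trade the Cell's SPEs made with explicit DMA.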
GPGPUs: General-Purpose Graphics Processing Units
University of Illinois at Urbana-Champaign
© 2018 L. V. Kale, University of Illinois Urbana-Champaign

GPUs and General Purposing of GPUs: I
• Graphics Processing Unit (GPU): drives the displays connected to a computer
  • Wikipedia: "[…] designed to manipulate […] memory to accelerate creation of images in a frame buffer"
  • Original purpose: high-speed rendering, i.e., video games, etc.
• Due to the need for speed in usages like video games, there was pressure to increase their speed and capabilities
  • Especially in the 1990s, with the rise of 3D graphics
  • Helped along by APIs: OpenGL, DirectX, Direct3D
• NVIDIA's GeForce 3 had enough hardware support to do programmable shading

GPUs and General Purposing of GPUs: II
• GPUs were dedicated units with a specialized function, but
  • They were getting faster and faster
  • They were getting more programmable in order to support graphics capabilities
• Many individuals and researchers in high-performance computing started noticing that these devices could be used for computations
  • At least for a few, but commonly needed, patterns, such as data-parallel loops!

GPUs and General Purposing of GPUs: III
• Graphics Processing Unit (GPU)
  • Original purpose: high-speed rendering, i.e., video games, etc.
  • Optimized for being good at math
  • Result: high memory bandwidth and many "cores"
• Brook streaming language from Stanford; the paper by Ian Buck et al. is worth a read: "In this paper, we present Brook for GPUs, a system for general-purpose computation on programmable graphics hardware. Brook extends C to include simple data-parallel constructs, enabling the use of the GPU as a streaming coprocessor."
  • The idea of specialized kernels running on specialized devices
• Vendors: NVIDIA and AMD (and Intel's integrated graphics)
• Programming: CUDA, OpenCL, and OpenMP

Schematic of a GPGPU
[Figure: a GPGPU chip contains several streaming multiprocessors (SMs); each SM is like a vector core, with ALUs, registers, a scratchpad memory (AKA shared memory), and a cache for constant memory. Threads run in warps. Fast DRAM on the device holds global memory and constant memory.]

CPU vs. GPGPU Comparison
• CPU (Host)
  • Latency optimized
  • 10s of cores
  • 10s of threads
  • Memory: cache hierarchy
  • Performance via fast memory / large caches
• GPU (Device)
  • Throughput optimized
  • 1000s of "cores" (CUDA cores)
  • 10,000s of threads
  • Memory (on board): simplified cache, no CPU-like consistency support
  • Performance via massive (regular) parallelism

SIMT
• Single Instruction, Multiple Threads
• Both a programming model and an execution model
[Figure: on a multicore CPU, each ALU is paired with its own control unit; under SIMT, a single control unit drives many ALUs.]

GPGPU: Programming with CUDA
GPGPU Software Architecture
© 2018 L. V. Kale, University of Illinois Urbana-Champaign

CUDA
• We will present a very simple, over-simplified overview
• Explicit resource-aware programming
• What you specify:
  • Data transfers
  • Data-parallel kernel(s), expressed in the form of threads
• Each thread does the action specified by the kernel
• The total set of threads is grouped into teams called "blocks"
• Kernel calls specify the number of blocks and the number of threads per block

Programming Model Overview
• The host (serial) launches device functions, which run in parallel on the device; control can return to the host asynchronously
• Execution thus alternates: serial host phase, parallel device phase, serial host phase, ...
• Memory: device memory, or "unified" memory
• Overlap: it is possible to overlap the data transfer of one kernel with the computation of another

Simple CUDA Program
A plain C starting point:

    #include <stdio.h>

    void hello() {
        printf("Hello, world!\n");
    }

    int main() {
        hello();
    }

    $ gcc hello.c
    $ ./a.out
    Hello, world!

Simple CUDA Program
The CUDA version: mark the function as a kernel with __global__ and launch it with the <<<blocks, threads>>> syntax:

    #include <stdio.h>

    __global__ void hello() {
        printf("Hello, world!\n");
    }

    int main() {
        hello<<<1, 1>>>();
    }

    $ nvcc hello.cu
    $ ./a.out
    Hello, world!
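One practical detail the slides elide: a kernel launch is asynchronous, so a real program should synchronize before main returns, or the kernel's printf output may never appear. A minimal sketch (my addition, not from the slides):

    #include <stdio.h>

    __global__ void hello() {
        printf("Hello, world!\n");
    }

    int main() {
        hello<<<1, 1>>>();        // launch returns to the host immediately
        cudaDeviceSynchronize();  // wait for the kernel to finish before exiting
    }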
Blocks
• The basic parallel unit
• Threads in a block can assume access to a common shared-memory region (the scratchpad)
• Analogous to processes
• Blocks are grouped into a grid
• Asynchronous

    int main() {
        hello<<<128, 1>>>();
    }

    $ ./a.out
    Hello, world!
    Hello, world!
    ...
    Hello, world!

Threads
• A subdivision of a block (sharing its memory)
• Analogous to OpenMP threads
• Grouped into warps (shared execution)
• The level at which synchronization and communication happen

    int main() {
        hello<<<1, 128>>>();
    }

    $ ./a.out
    Hello, world!
    Hello, world!
    ...
    Hello, world!

Warps
• Groupings of threads
• All threads of a warp execute the same instruction (SIMT)
  • One miss, all miss
  • Thread divergence leads to no-ops
• Analogous to vector instructions
• The scheduling unit

Combining Blocks, Warps, and Threads

    KernelFunc<<<3, 6>>>(...);   // number of blocks = 3, number of threads per block = 6

• For this picture, assume a warp has 3 threads (in reality it is almost always 32; it is a device-dependent parameter)
• Block dimension = 6, so each block's threads split into two 3-thread warps:

    Block 1: thread index 0..5, global index  0..5
    Block 2: thread index 0..5, global index  6..11
    Block 3: thread index 0..5, global index 12..17

• If you specify a block size that is not a multiple of the warp size, the system will leave some CUDA cores in a warp idle

Illustrative Example

    __global__ void vecAdd(int* A, int* B, int* C) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        C[i] = A[i] + B[i];
    }

• blockIdx.x is my block's serial number, blockDim.x is the number of threads per block, and threadIdx.x is my thread's id within its block

    int main() {
        // Unified memory allocation (elided; a complete sketch appears at the end of this deck)
        vecAdd<<<VEC_SZ/512, 512>>>(A, B, C);   // VEC_SZ/512 blocks, 512 threads per block
    }

Using OpenMP for GPU Programming
A Simpler Way of Using GPGPUs
University of Illinois at Urbana-Champaign

Background: OpenMP Support for GPGPUs
• Traditional solution: use specialized languages (CUDA/OpenCL)
  • Need to rewrite lots of code
  • Target only a subset of device types (e.g., CUDA code cannot run on AMD GPUs, and OpenCL is slow on NVIDIA)
• OpenMP 4.0+ has support for offloading computation to accelerators
  • Lots of overlap with the (earlier) OpenACC standard
  • OpenMP is already widely used for multicore parallelization, and mixing OpenACC and OpenMP is difficult
  • Can target different types of devices (NVIDIA GPUs, AMD GPUs, Xeon Phi, ...)
• The OpenMP standard only describes the interface, not the implementation
  • Each compiler needs to implement support for the different devices

General Overview: ZAXPY in OpenMP, Multicore

    double x[N], y[N], z[N], a;
    // calculate z[i] = a*x[i] + y[i]
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        z[i] = a*x[i] + y[i];

General Overview: ZAXPY in OpenMP, Offloading

    double x[N], y[N], z[N], a;
    // calculate z = a*x + y
    #pragma omp target
    {
        for (int i = 0; i < N; i++)
            z[i] = a*x[i] + y[i];
    }

• The compiler generates code for the GPU; the runtime runs the code on the device if possible and copies data from/to the GPU
• The code is unmodified except for the pragma
• The data (x, y, z, a) is implicitly copied
• The calculation is done on the device if one is available; it runs on the CPU otherwise

OpenMP Offloading: Distributing Work
• By default, code in a target region runs sequentially on the accelerator; a worksharing pragma spreads the iterations across the device's threads:

    double x[N], y[N], z[N], a;
    // calculate z = a*x + y
    #pragma omp target
    {
        #pragma omp teams distribute parallel for
        for (int i = 0; i < N; i++)
            z[i] = a*x[i] + y[i];
    }
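For completeness, the CUDA vecAdd slide above elides the unified-memory allocation. Here is a minimal compilable sketch of that example; the use of cudaMallocManaged and the particular VEC_SZ are my assumptions, not part of the original slides:

    #include <stdio.h>

    #define VEC_SZ 4096   // assumed size; must be a multiple of 512 for this launch

    __global__ void vecAdd(int* A, int* B, int* C) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        C[i] = A[i] + B[i];
    }

    int main() {
        int *A, *B, *C;
        // Unified memory: one pointer usable from both host and device
        cudaMallocManaged(&A, VEC_SZ * sizeof(int));
        cudaMallocManaged(&B, VEC_SZ * sizeof(int));
        cudaMallocManaged(&C, VEC_SZ * sizeof(int));
        for (int i = 0; i < VEC_SZ; i++) { A[i] = i; B[i] = 2 * i; }

        vecAdd<<<VEC_SZ / 512, 512>>>(A, B, C);  // VEC_SZ/512 blocks of 512 threads
        cudaDeviceSynchronize();                 // wait before the host reads C

        printf("C[42] = %d\n", C[42]);           // expect 126
        cudaFree(A); cudaFree(B); cudaFree(C);
        return 0;
    }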
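Likewise, the offloaded ZAXPY can be wrapped into a complete program. This is a minimal sketch under stated assumptions: the size N and initialization values are mine, and the build command in the trailing comment is one example invocation (offload-capable compilers and their flags vary):

    #include <stdio.h>

    #define N 4096

    int main() {
        static double x[N], y[N], z[N];
        double a = 2.0;
        for (int i = 0; i < N; i++) { x[i] = i; y[i] = 1.0; }

        // calculate z = a*x + y on the device, if one is available;
        // x, y, z, and a are implicitly copied, as the slides note
        #pragma omp target
        {
            #pragma omp teams distribute parallel for
            for (int i = 0; i < N; i++)
                z[i] = a*x[i] + y[i];
        }

        printf("z[10] = %f\n", z[10]);   // expect 21.0
        return 0;
    }

    // Example build (flags vary by compiler):
    //   clang -fopenmp -fopenmp-targets=nvptx64 zaxpy.c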
