Multicore Processors and GPUs: Programming Models and Compiler Optimizations

J. “Ram” Ramanujam, Louisiana State University
P. “Saday” Sadayappan, The Ohio State University
http://www.ece.lsu.edu/jxr/pact12tut/

Tutorial Outline
• Technology Trends => Multi-core Ubiquity
• Parallel Programming Models
• Introduction to Compiler Optimizations
• Optimizations for General-Purpose Multicore Architectures
• Compiler Optimizations for GPUs

The Good Old Days for Software (Source: J. Birnbaum)
• Single-processor performance experienced dramatic improvements from clock-rate increases and architectural improvements (pipelining, instruction-level parallelism)
• Applications experienced automatic performance improvement

Power Density Trends
[Figure: power density in W/cm² for the i386, i486, Pentium, Pentium Pro, Pentium II, Pentium III, and Pentium 4, rising past the "hot plate" level toward "nuclear reactor" and "rocket nozzle" levels. Power doubles every 4 years; the 5-year projection was 200 W total, 125 W/cm². P=VI: 75 W @ 1.5 V = 50 A!]
Source: “New Microarchitecture Challenges in the Coming Generations of CMOS Process Technologies”, Fred Pollack, Intel Corp., Micro32 conference keynote, 1999. Courtesy Avi Mendelson, Intel.

Processor Clock Frequency Trends (Source: Dezső Sima)
• Processor clock rates have increased over 100x in two decades, but have now flattened out

Single-Processor Performance Trends
[Figure: performance relative to the VAX-11/780 on a log scale, 1978-2016: growth of about 25%/year, then 52%/year, then flattening. Source: Hennessy and Patterson, Computer Architecture: A Quantitative Approach, 4th Ed., 2006]
• Single-processor performance improved by about 50% per year for almost two decades, but has now flattened out: limited further upside from clock or ILP

Moore’s Law: Still Going Strong (Source: Saman Amarasinghe)
[Figure: transistor counts rising steadily from about 10,000 (8086) through the 286, 386, 486, Pentium, P2, P3, P4, Itanium, and Itanium 2 (approaching 1,000,000,000), overlaid on the flattening single-processor performance curve]
• But transistor density continues to rise unabated
• Multiple cores are now the best option for sustained performance growth

The Only Option: Many Cores
• Chip density continues to increase ~2x every 2 years
  – Clock speed does not
  – The number of processor cores may double instead
• There is little or no hidden parallelism (ILP) left to be found
• Parallelism must be exposed to and managed by software
Source: Intel, Microsoft (Sutter) and Stanford (Olukotun, Hammond)

GPU vs. CPU Performance (Source: NVIDIA)
[Figure: peak GPU performance pulling away from peak CPU performance over successive hardware generations]
• Energy difference: 200 pJ/instruction (GPU) vs. 2000 pJ/instruction (CPU)
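As a rough illustration of what those per-instruction energies imply (a back-of-the-envelope calculation, not from the original slides; the 100 W budget is an assumed figure), the sustainable instruction rate under a fixed power budget is the budget divided by the energy per instruction:

  rate = power budget / energy per instruction
  CPU: 100 W / 2000 pJ per instruction = 5 × 10^10 instructions/s
  GPU: 100 W /  200 pJ per instruction = 5 × 10^11 instructions/s

At equal power, the lower per-instruction energy translates directly into an order of magnitude more throughput, which is why energy efficiency rather than clock speed dominates the comparison.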
CPU vs. GPU (Source: NVIDIA CUDA Programming Guide)
• The GPU’s higher performance and energy efficiency come from a different allocation of chip area
  – High degree of SIMD parallelism, simple in-order cores, less control/synchronization logic, lower cache/scratchpad capacity
• But SIMD parallelism is not well suited to all algorithms
  – Dynamic search, sparse computations, quad-tree/oct-tree data structures, graph algorithms, etc.
• Future systems will be heterogeneous

Intel’s Vision of Architecture Evolution
[Figure: evolution from a dual core, to a multi-core array (CMP with ~10 scalar cores, symmetric multithreading), to a many-core array (CMP with 10s-100s of low-power scalar cores, capable of TFLOPS+, full system-on-chip) for servers, workstations, and embedded use; large scalar cores provide high single-thread performance, while many small cores serve highly threaded workloads]
Presentation: Paul Petersen, Sr. Principal Engineer, Intel. “Micro2015: Evolving Processor Architecture”, Intel® Developer Forum, March 2005.

Tutorial Outline
• Technology Trends => Multi-core Ubiquity
• Parallel Programming Models
• Introduction to Compiler Optimizations
• Optimizations for General-Purpose Multicore Architectures
• Compiler Optimizations for GPUs

Multi-Threaded Programming Models
• A parallel program is comprised of multiple concurrent threads of computation
• Work is partitioned amongst the threads
• Data communication between the threads happens via shared memory or messages
  – Shared memory: more convenient than explicit messages, but danger of race conditions
  – Message passing: more tedious than shared memory, but lower likelihood of races
  – Examples: OpenMP, MPI, CUDA

OpenMP
• Open standard for Multi-Processing
• Standard, portable API for shared-memory multiprocessors
• Fork-join model of parallelism: the program alternates between sequential execution by a “master” thread and parallel regions executed by many threads
• Primarily directive (pragma) based specification of parallelism
[Figure: a master thread repeatedly forking into teams of threads for parallel regions and joining back]

OpenMP: Pi Program

  #include <omp.h>

  static long num_steps = 100000;
  #define N_THRDS 2

  int main() {
      int i;
      double pi, step, partial_sum[N_THRDS];
      step = 1.0 / num_steps;
      omp_set_num_threads(N_THRDS);
      #pragma omp parallel
      {
          double x, s;
          int j, id;
          id = omp_get_thread_num();
          /* each thread sums every N_THRDS-th rectangle of the midpoint rule */
          for (j = id, s = 0.0; j < num_steps; j += N_THRDS) {
              x = (j + 0.5) * step;
              s += 4.0 / (1.0 + x * x);
          }
          partial_sum[id] = s;
      }
      for (i = 0, pi = 0.0; i < N_THRDS; i++)
          pi += partial_sum[i] * step;
      return 0;
  }

OpenMP: Pi Program (version 2)

  #include <omp.h>

  static long num_steps = 100000;
  #define N_THRDS 2

  int main() {
      int i;
      double pi, step, partial_sum[N_THRDS];
      step = 1.0 / num_steps;
      omp_set_num_threads(N_THRDS);
      #pragma omp parallel
      {
          double s, x;
          int id = omp_get_thread_num();
          s = 0.0;
          /* the work-sharing directive divides the iterations among the threads */
          #pragma omp for
          for (i = 0; i < num_steps; i++) {
              x = (i + 0.5) * step;
              s += 4.0 / (1.0 + x * x);
          }
          partial_sum[id] = s;
      }
      for (i = 0, pi = 0.0; i < N_THRDS; i++)
          pi += partial_sum[i] * step;
      return 0;
  }
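Both versions accumulate per-thread partial sums into partial_sum[] by hand and combine them serially at the end. For comparison, a sketch not in the original slides: OpenMP’s reduction clause expresses the same computation more compactly, giving each thread a private copy of sum and combining the copies automatically when the loop ends.

  #include <omp.h>
  #include <stdio.h>

  static long num_steps = 100000;

  int main() {
      double step = 1.0 / num_steps;
      double sum = 0.0;
      /* reduction(+:sum): private per-thread copies of sum, combined with + at the end */
      #pragma omp parallel for reduction(+:sum)
      for (long i = 0; i < num_steps; i++) {
          double x = (i + 0.5) * step;
          sum += 4.0 / (1.0 + x * x);
      }
      printf("pi is approximately %.10f\n", sum * step);
      return 0;
  }

This variant also sidesteps the per-thread array: the runtime, not the programmer, decides how the partial results are stored and merged.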
NVIDIA GPU Architecture
[Figure: the device contains N multiprocessors; each multiprocessor has M processors, an instruction unit, on-chip shared memory, and per-processor registers; the host CPU and its host memory sit alongside the device memory]

The CUDA Model (Source: Cliff Woolley, NVIDIA)
[Figure: overview of the CUDA host/device execution model]

CUDA Model - 2
[Figure: the (device) grid contains thread blocks (Block (0, 0), Block (1, 0)), each with its own shared memory; within a block, Thread (0, 0) and Thread (1, 0) each have private registers; all threads access global memory, which the host also reads and writes]

Threads
• Threads are organized into a three-level hierarchy
• Each kernel creates a single grid that consists of many thread blocks, each containing many threads
• Threads in the same thread block can cooperate with each other
  – share data through the on-chip shared memory
  – synchronize their execution using the __syncthreads primitive
• Threads are otherwise independent; there is no direct support for synchronization between threads in different thread blocks
• Threads and blocks have IDs: this allows a thread to decide what data to compute on and simplifies memory addressing for multidimensional data
  – The block ID can be one- or two-dimensional (blockIdx.x, blockIdx.y)
  – The thread ID can be 1-, 2-, or 3-dimensional (threadIdx.x, threadIdx.y, threadIdx.z)
• Threads within a block are organized into warps of 32 threads
• Each warp executes in SIMD fashion, issuing in one to four cycles depending on the number of cores per SM (Streaming Multiprocessor)

Threads and Memory Spaces
Each thread can access:
• per-thread registers
• per-block shared memory
• per-grid global memory
[Figure: the grid/block/thread diagram from “CUDA Model - 2”, annotated with these three memory spaces]

CUDA Extensions to C
• API along with minimal extensions to C
• Declaration specifiers
  – __device__, __global__, __shared__, __local__, __constant__
• Special variables
  – blockIdx, threadIdx
• Intrinsics
  – __syncthreads
• Runtime API
  – cudaMalloc(…), cudaMemcpy(…)
• Kernel execution

CUDA Function Declarations

                                     Executed on the:   Only callable from the:
  __device__ float DeviceFunc()      device             device
  __global__ void  KernelFunc()      device             host
  __host__   float HostFunc()        host               host

• __global__ defines a kernel function
  – Must return void
• __device__ and __host__ can be used together
• __device__ functions cannot have their address taken

CUDA Variable Type Qualifiers

                                           Memory    Scope     Lifetime
  __device__ __local__  int LocalVar;      local     thread    thread
  __device__ __shared__ int SharedVar;     shared    block     block
  __device__            int GlobalVar;     global    grid      application

• __device__ is optional when used with __local__ or __shared__
• Automatic variables without any qualifier reside in a register, except arrays, which reside in local memory
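To tie the qualifiers, IDs, and runtime calls together, here is a minimal end-to-end sketch (not from the original slides): a __global__ vector-add kernel launched over a one-dimensional grid, with the host staging data via cudaMalloc and cudaMemcpy. The names (vecAdd, h_a, d_a, and so on) and the block size of 256 are illustrative choices.

  #include <stdio.h>
  #include <stdlib.h>
  #include <cuda_runtime.h>

  /* kernel: each thread adds one element; kernels must return void */
  __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;  /* global thread index */
      if (i < n)                                      /* guard the partial last block */
          c[i] = a[i] + b[i];
  }

  int main() {
      const int n = 1 << 20;
      size_t bytes = n * sizeof(float);

      float *h_a = (float *)malloc(bytes);
      float *h_b = (float *)malloc(bytes);
      float *h_c = (float *)malloc(bytes);
      for (int i = 0; i < n; i++) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

      float *d_a, *d_b, *d_c;
      cudaMalloc((void **)&d_a, bytes);
      cudaMalloc((void **)&d_b, bytes);
      cudaMalloc((void **)&d_c, bytes);
      cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
      cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

      int threadsPerBlock = 256;  /* 8 warps of 32 threads per block */
      int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
      vecAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

      cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
      printf("c[0] = %f\n", h_c[0]);  /* expect 3.0 */

      cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
      free(h_a); free(h_b); free(h_c);
      return 0;
  }

Note how the kernel never synchronizes across blocks: each block's threads work on disjoint elements, which is exactly the independence the "Threads" slide describes.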
