
Flynn's Taxonomy

The University of Adelaide, School of Computer Science, 12 November 2013

Computer Architecture: A Quantitative Approach, Fifth Edition

Chapter 4: Data-Level Parallelism in Vector, SIMD, and GPU Architectures

Copyright © 2012, Elsevier Inc. All rights reserved.

Classes of Computers: Flynn's Taxonomy

- SISD: Single instruction stream, single data stream
- SIMD: Single instruction stream, multiple data streams
  - Vector architectures
  - Multimedia extensions
  - Graphics processor units
- MIMD: Multiple instruction streams, multiple data streams
  - Tightly coupled MIMD
  - Loosely coupled MIMD
- MISD: Multiple instruction streams, single data stream
  - No commercial implementation

Which of these is what we normally do? Which of the others is easiest? Why? Where does threaded programming fit?


Introduction

- SIMD architectures can exploit significant data-level parallelism for:
  - matrix-oriented scientific computing
  - media-oriented image and sound processors
- SIMD is more energy-efficient than MIMD
  - Only needs to fetch one instruction per data operation
  - Makes SIMD attractive for personal mobile devices
- SIMD allows the programmer to continue to think sequentially

SIMD Parallelism

- Vector architectures
- SIMD extensions
- Graphics Processing Units (GPUs)
- For x86 processors:
  - Expect two additional cores per chip per year
  - SIMD width to double every four years
  - Potential speedup from SIMD to be twice that from MIMD!


Vector Architectures

- Basic idea:
  - Read sets of data elements into "vector registers"
  - Operate on those registers
  - Disperse the results back into memory
- Registers are controlled by the compiler
  - Used to hide memory latency
  - Leverage memory bandwidth

VMIPS

- Example architecture: VMIPS
- Loosely based on the Cray-1
- Vector registers
  - Each register holds a 64-element, 64 bits/element vector
  - Register file has 16 read ports and 8 write ports (why 2:1?)
- Vector functional units
  - Fully pipelined (why?)
  - Data and control hazards are detected (explain!)
- Vector load-store unit
  - Fully pipelined (why?)
  - One word per clock cycle after initial latency (how can that be?)
- Scalar registers (what is a scalar?)
  - 32 general-purpose registers
  - 32 floating-point registers


VMIPS Instructions

- ADDVV.D: add two vectors
- ADDVS.D: add vector to a scalar
- LV/SV: vector load and vector store from address
- Example: DAXPY (double precision a × X plus Y):

    L.D      F0,a       ; load scalar a
    LV       V1,Rx      ; load vector X
    MULVS.D  V2,V1,F0   ; vector-scalar multiply
    LV       V3,Ry      ; load vector Y
    ADDVV.D  V4,V2,V3   ; add
    SV       Ry,V4      ; store the result

- Requires 6 instructions vs. almost 600 for MIPS (why?)
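For reference, here is the scalar C loop the VMIPS sequence above replaces (a minimal sketch; the function name and signature are illustrative):

    /* DAXPY: Y = a*X + Y, computed one element at a time */
    void daxpy(int n, double a, const double *x, double *y) {
        for (int i = 0; i < n; i = i + 1)
            y[i] = a * x[i] + y[i];
    }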

Vector Execution Time

- Execution time depends on three factors:
  - Length of the operand vectors (explain)
  - Structural hazards
  - Data dependencies
- VMIPS functional units consume one element per clock cycle
  - Execution time is approximately the vector length (why approximately?)
- Convoy
  - Set of vector instructions that could potentially execute together


Chimes

- Sequences with read-after-write dependency hazards can be in the same convoy via chaining
- Chaining
  - Allows a vector operation to start as soon as the individual elements of its vector source operand become available (sound familiar?)
- Chime
  - Unit of time to execute one convoy
  - m convoys execute in m chimes
  - For a vector length of n, requires m × n clock cycles

Example

    LV       V1,Rx     ; load vector X
    MULVS.D  V2,V1,F0  ; vector-scalar multiply
    LV       V3,Ry     ; load vector Y
    ADDVV.D  V4,V2,V3  ; add two vectors
    SV       Ry,V4     ; store the sum

Convoys:
  1. LV, MULVS.D
  2. LV, ADDVV.D
  3. SV

3 chimes, 2 FP ops per result, so cycles per FLOP = 1.5. For 64-element vectors, this requires 64 × 3 = 192 clock cycles. Hazards: where are they and how are they handled?


Challenges

- Start-up time
  - Latency of the vector functional unit
  - Assume the same as the Cray-1:
    - Floating-point add: 6 clock cycles
    - Floating-point multiply: 7 clock cycles
    - Floating-point divide: 20 clock cycles
    - Vector load: 12 clock cycles
- Improvements:
  - > 1 element per clock cycle
  - Non-64-wide vectors
  - IF statements in vector code (how handled?)
  - Memory system optimizations to support vector processors (such as?)
  - Multiple-dimensional matrices
  - Sparse matrices (impact?)
  - Programming a vector computer

Multiple Lanes

- Element n of vector register A is "hardwired" to element n of vector register B
  - Allows for multiple hardware lanes


Vector Length Register

- Vector length not known at compile time?
- Use the Vector Length Register (VLR)
- Use strip mining for vectors over the maximum vector length (MVL):

    low = 0;
    VL = (n % MVL);                         /* find odd-size piece using modulo op % */
    for (j = 0; j <= (n/MVL); j = j+1) {    /* outer loop */
        for (i = low; i < (low+VL); i = i+1)  /* runs for length VL */
            Y[i] = a * X[i] + Y[i];         /* main operation */
        low = low + VL;                     /* start of next vector */
        VL = MVL;                           /* reset the length to maximum vector length */
    }
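To make the strip-mining arithmetic concrete (the numbers here are chosen for illustration): with n = 200 and MVL = 64, the first piece has VL = 200 % 64 = 8 elements, and the outer loop then runs three more times with VL = 64, covering 8 + 64 + 64 + 64 = 200 elements.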

Vector Mask Registers

- Consider:

    for (i = 0; i < 64; i = i+1)
        if (X[i] != 0)
            X[i] = X[i] - Y[i];

- Use a vector mask register to "disable" elements:

    LV       V1,Rx     ; load vector X into V1
    LV       V2,Ry     ; load vector Y
    L.D      F0,#0     ; load FP zero into F0
    SNEVS.D  V1,F0     ; sets VM(i) to 1 if V1(i) != F0
    SUBVV.D  V1,V1,V2  ; subtract under vector mask
    SV       Rx,V1     ; store the result in X

- GFLOPS rate decreases!


Memory Banks

- The memory system must be designed to support high bandwidth for vector loads and stores
- Spread accesses across multiple banks
  - Control bank addresses independently
  - Load or store non-sequential words (!!!)
  - Support multiple vector processors sharing the same memory
- Example (worked out below):
  - 32 processors, each generating 4 loads and 2 stores per cycle
  - Processor cycle time is 2.167 ns; SRAM cycle time is 15 ns
  - How many memory banks are needed?
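One way to work it out (this follows the textbook's solution): the processors generate 32 × (4 + 2) = 192 memory references per processor clock. Each SRAM bank stays busy for 15 / 2.167 ≈ 7 processor clocks per access, so keeping every reference flowing requires roughly 192 × 7 = 1344 memory banks.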


Non-sequentially stored data

- Load non-sequential words
- Store non-sequential words

Both load and store work with vector registers, so the execution units deal with full registers even if the corresponding memory is not contiguous.

The latency of dealing with non-sequential data in memory is hidden!

Consider matrix multiplication, where rows are contiguous in memory but columns are not: the execution units simply see full vector registers.

Huge performance impact! (why?)


Non-sequentially stored data: Support

Stride: the distance between successive words in memory. It is usually fixed for a given matrix operation, e.g. a multiply.

Implementation:

- Load non-sequential words
  - LVWS: Load Vector With Stride
- Store non-sequential words
  - SVWS: Store Vector With Stride

Stride

- Consider:

    for (i = 0; i < 100; i = i+1)
        for (j = 0; j < 100; j = j+1) {
            A[i][j] = 0.0;
            for (k = 0; k < 100; k = k+1)
                A[i][j] = A[i][j] + B[i][k] * D[k][j];
        }

- Must vectorize the multiplication of rows of B with columns of D
- Use non-unit stride
- Bank conflict (stall) occurs when the same bank is hit faster than the bank busy time:
  - #banks / LCM(stride, #banks) < bank busy time
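A worked instance (the numbers follow the textbook's example): with 8 memory banks, a bank busy time of 6 clocks, and a total memory latency of 12 clocks, a 64-element vector load with stride 1 completes in 12 + 64 = 76 clocks. With stride 32, every access maps to the same bank (32 mod 8 = 0), so each access after the first must wait out the busy time: 12 + 1 + 6 × 63 = 391 clocks, roughly five times worse.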


Gather-Scatter

- Sparse arrays: the indices are themselves arrays: A[M[i]]
- Support:
  - LVI: Load Vector Indexed ("gather")
  - SVI: Store Vector Indexed ("scatter")

Gather-scatter can be slow, but memory systems can be designed to speed it up significantly (expensive!)

Scatter-Gather

Consider sparse matrices (note the indices are arrays):

    for (i = 0; i < n; i = i+1)
        A[K[i]] = A[K[i]] + C[M[i]];

- Use index vectors:

    LV       Vk, Rk       ; load K
    LVI      Va, (Ra+Vk)  ; load A[K[]]
    LV       Vm, Rm       ; load M
    LVI      Vc, (Rc+Vm)  ; load C[M[]]
    ADDVV.D  Va, Va, Vc   ; add them
    SVI      (Ra+Vk), Va  ; store A[K[]]


Programmer-Compiler conversation

- The compiler can inform the programmer when code cannot be vectorized, so the programmer can adjust the code.
- The programmer can provide hints to the compiler that sections of code are independent, so they can be vectorized.

Programming Vector Architectures

- Compilers can provide feedback to programmers
- Programmers can provide hints to the compiler


SIMD Extensions

- Media applications operate on data types narrower than the native word size
  - Example: disconnect carry chains to "partition" a wide adder (explain; see the sketch after this list)
- Limitations, compared to vector instructions:
  - The number of data operands is encoded into the opcode
  - No sophisticated addressing modes (e.g. strided, scatter-gather)
  - No mask registers (but we can do masking)
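As a concrete illustration of lane-partitioned arithmetic, here is a minimal sketch using Intel's SSE2 intrinsics (a later extension than the MMX example on the next slide; the values are arbitrary):

    #include <emmintrin.h>  /* SSE2 intrinsics */
    #include <stdio.h>

    int main(void) {
        /* One 128-bit register holds sixteen 8-bit lanes; _mm_add_epi8 adds
           all sixteen at once, with no carry propagating between lanes. */
        __m128i a = _mm_set1_epi8(100);
        __m128i b = _mm_set1_epi8(27);
        __m128i c = _mm_add_epi8(a, b);
        unsigned char out[16];
        _mm_storeu_si128((__m128i *)out, c);
        printf("lane 0: %d\n", out[0]);  /* prints 127 */
        return 0;
    }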

SIMD Implementations

- Implementations:
  - Intel MMX (1996)
    - Eight 8-bit integer ops or four 16-bit integer ops
  - Streaming SIMD Extensions (SSE) (1999)
    - Eight 16-bit integer ops
    - Four 32-bit integer/FP ops or two 64-bit integer/FP ops
  - Advanced Vector Extensions (AVX) (2010)
    - Four 64-bit integer/FP ops
    - 256-bit registers
    - 3-operand SIMD instructions
    - Only works on the newest OSs (Snow Leopard, Windows 7)
- Operands must be consecutive and aligned memory locations


Example SIMD Code

- Example DXPY:

    L.D     F0,a        ; load scalar a
    MOV     F1,F0       ; copy a into F1 for SIMD MUL
    MOV     F2,F0       ; copy a into F2 for SIMD MUL
    MOV     F3,F0       ; copy a into F3 for SIMD MUL
    DADDIU  R4,Rx,#512  ; last address to load
    Loop:
    L.4D    F4,0[Rx]    ; load X[i], X[i+1], X[i+2], X[i+3]
    MUL.4D  F4,F4,F0    ; a×X[i], a×X[i+1], a×X[i+2], a×X[i+3]
    L.4D    F8,0[Ry]    ; load Y[i], Y[i+1], Y[i+2], Y[i+3]
    ADD.4D  F8,F8,F4    ; a×X[i]+Y[i], ..., a×X[i+3]+Y[i+3]
    S.4D    0[Ry],F8    ; store into Y[i], Y[i+1], Y[i+2], Y[i+3]
    DADDIU  Rx,Rx,#32   ; increment index to X
    DADDIU  Ry,Ry,#32   ; increment index to Y
    DSUBU   R20,R4,Rx   ; compute bound
    BNEZ    R20,Loop    ; check if done

Roofline Performance Model

- Basic idea:
  - Plot peak floating-point throughput as a function of arithmetic intensity
  - Ties together floating-point performance and memory performance for a target machine
- Arithmetic intensity
  - Floating-point operations per byte read (e.g. DAXPY performs 2 FLOPs per element while reading 16 bytes, for an intensity of 0.125)


Roofline

(Figure: roofline plot; note the log-log scale.)

- Low-arithmetic-intensity problems are bound by memory bandwidth.
  - The Stream benchmark measures delivered memory bandwidth.
  - For a given arithmetic intensity on the x-axis, find the Stream benchmark value.
- High-arithmetic-intensity problems are bound by the maximum GFLOPS/sec of the hardware.
- The roofline bends where the two meet.

Examples

Attainable GFLOPs/sec = min(Peak Memory BW × Arithmetic Intensity, Peak Floating-Point Performance)

"Ridge point": if it lies to the left, then almost all code reaches its maximum potential; if to the right, then only a few high-arithmetic-intensity codes do well.
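The bound is simple enough to state as code; a minimal sketch (the function and parameter names are illustrative):

    /* Roofline bound: peak_bw in GB/s, intensity in FLOPs/byte,
       peak_flops in GFLOPs/sec. */
    double attainable_gflops(double peak_bw, double intensity, double peak_flops) {
        double bw_bound = peak_bw * intensity;     /* memory-bound region */
        return bw_bound < peak_flops ? bw_bound    /* left of the ridge point */
                                     : peak_flops; /* compute-bound region */
    }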


Graphical Processing Units

- Given the hardware invested to do graphics well, how can we supplement it to improve the performance of a wider range of applications?
- Basic idea:
  - Heterogeneous execution model
    - The CPU is the host, the GPU is the device
  - Develop a C-like programming language for the GPU
  - Unify all forms of GPU parallelism as CUDA threads
  - Programming model is "Single Instruction Multiple Thread"

Threads and Blocks

- A thread is associated with each data element
- Threads are organized into blocks
- Blocks are organized into a grid
- GPU hardware handles thread management, not applications or the OS


NVIDIA GPU Architecture

- Similarities to vector machines:
  - Works well with data-level parallel problems
  - Scatter-gather transfers
  - Mask registers
  - Large register files
- Differences:
  - No scalar processor (nearby)
  - Uses multithreading to hide memory latency
  - Has many functional units, as opposed to a few deeply pipelined units like a vector processor

Example

- Multiply two vectors of length 8192 (see the kernel sketch below)
  - Code that works over all elements is the grid
  - Thread blocks break this down into manageable sizes
    - 512 threads per block
  - SIMD instruction executes 32 elements at a time
  - Thus grid size = 16 blocks
  - A block is analogous to a strip-mined vector loop with a vector length of 32
  - A block is assigned to a multithreaded SIMD processor by the thread block scheduler
  - Current-generation GPUs (Fermi) have 7 to 15 multithreaded SIMD processors
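In CUDA source form, the example might look like this minimal sketch (the kernel name is illustrative; the launch geometry matches the slide: 8192 elements, 512 threads per block, 16 blocks):

    // Each CUDA thread multiplies one pair of elements.
    __global__ void vecmul(double *x, double *y, double *out) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global element index
        out[i] = x[i] * y[i];
    }

    // Host launch: a grid of 16 blocks × 512 threads = 8192 threads.
    // vecmul<<<16, 512>>>(x, y, out);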


Terminology

- Threads of SIMD instructions
  - Each has its own PC
  - The thread scheduler uses a scoreboard to dispatch (what is a "scoreboard"?)
  - No data dependencies between threads! (why?)
  - Keeps track of up to 48 threads of SIMD instructions
    - Hides memory latency (how?)
- The thread block scheduler schedules blocks to SIMD processors
- Within each SIMD processor:
  - 32 SIMD lanes
  - Wide and shallow compared to vector processors


Example

- An NVIDIA GPU has 32,768 registers
  - Divided into lanes
  - Each SIMD thread is limited to 64 registers
  - A SIMD thread has up to:
    - 64 vector registers of 32 32-bit elements
    - 32 vector registers of 32 64-bit elements
  - Fermi has 16 physical SIMD lanes, each containing 2048 registers

NVIDIA Instruction Set Arch.

- The ISA is an abstraction of the hardware instruction set
  - "Parallel Thread Execution (PTX)"
  - Uses virtual registers
  - Translation to machine code is performed in software
- Example (one DAXPY iteration, Y[i] = a*X[i] + Y[i]):

    shl.s32        R8, blockIdx, 9    ; Thread Block ID * Block size (512 or 2^9)
    add.s32        R8, R8, threadIdx  ; R8 = i = my CUDA thread ID
    ld.global.f64  RD0, [X+R8]        ; RD0 = X[i]
    ld.global.f64  RD2, [Y+R8]        ; RD2 = Y[i]
    mul.f64        RD0, RD0, RD4      ; Product in RD0 = RD0 * RD4 (scalar a)
    add.f64        RD0, RD0, RD2      ; Sum in RD0 = RD0 + RD2 (Y[i])
    st.global.f64  [Y+R8], RD0        ; Y[i] = sum (X[i]*a + Y[i])


Conditional Branching

- Like vector architectures, GPU branch hardware uses internal masks
- Also uses:
  - A branch synchronization stack
    - Entries consist of masks for each SIMD lane
    - i.e. which threads commit their results (all threads execute)
  - Instruction markers to manage when a branch diverges into multiple execution paths
    - Push on a divergent branch
  - ...and when paths converge
    - Act as barriers
    - Pop the stack
- Per-thread-lane 1-bit predicate register, specified by the programmer

Example

    if (X[i] != 0)
        X[i] = X[i] - Y[i];
    else
        X[i] = Z[i];

compiles to:

    ld.global.f64  RD0, [X+R8]       ; RD0 = X[i]
    setp.neq.s32   P1, RD0, #0       ; P1 is predicate register 1
    @!P1, bra      ELSE1, *Push      ; Push old mask, set new mask bits
                                     ; if P1 false, go to ELSE1
    ld.global.f64  RD2, [Y+R8]       ; RD2 = Y[i]
    sub.f64        RD0, RD0, RD2     ; Difference in RD0
    st.global.f64  [X+R8], RD0       ; X[i] = RD0
    @P1, bra       ENDIF1, *Comp     ; complement mask bits
                                     ; if P1 true, go to ENDIF1
    ELSE1: ld.global.f64 RD0, [Z+R8] ; RD0 = Z[i]
    st.global.f64  [X+R8], RD0       ; X[i] = RD0
    ENDIF1: <next instruction>, *Pop ; pop to restore old mask


NVIDIA GPU Memory Structures

- Each SIMD lane has a private section of off-chip DRAM
  - "Private memory"
  - Contains the stack frame, spilled registers, and private variables
- Each multithreaded SIMD processor also has local memory
  - Shared by the SIMD lanes / threads within a block
- Memory shared by the SIMD processors is GPU memory
  - The host can read and write GPU memory
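In CUDA terms, a minimal sketch (note the vocabulary shift: the "local memory" above is CUDA's __shared__ memory, "private memory" maps to registers and CUDA's per-thread local memory, and "GPU memory" is CUDA's global memory; the kernel name is illustrative):

    __global__ void memspaces(float *gmem) {  // gmem points into GPU (global) memory
        __shared__ float tile[256];           // per-block memory shared by the SIMD lanes
        float priv = gmem[threadIdx.x];       // priv lives in registers / private memory
        tile[threadIdx.x] = priv;
        __syncthreads();                      // barrier across the block's threads
        gmem[threadIdx.x] = tile[255 - threadIdx.x];
    }

    // Launch with 256 threads per block, e.g. memspaces<<<1, 256>>>(d_buf);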

Fermi Architecture Innovations

- Each SIMD processor has:
  - Two SIMD thread schedulers and two instruction dispatch units
  - 16 SIMD lanes (SIMD width = 32, chime = 2 cycles), 16 load-store units, 4 special function units
  - Thus, two threads of SIMD instructions are scheduled every two clock cycles
- Fast double precision
- Caches for GPU memory
- 64-bit addressing and a unified address space
- Error-correcting codes
- Faster context switching
- Faster atomic instructions


Fermi Multithreaded SIMD Proc.

(Figure: block diagram of a Fermi multithreaded SIMD processor.)

NVIDIA Kepler

The GPUs feature 192 CUDA cores per streaming multiprocessor, up from 32 in Fermi. Kepler also uses various methods to increase utilization, preventing wasted processor cycles. A technology named Hyper-Q allows GPUs to work on 32 processes at once, whereas Fermi could only handle one workload at a time. "Hyper-Q enables multiple CPU cores to launch work on a single GPU simultaneously, thereby dramatically increasing GPU utilization and slashing CPU idle times," NVIDIA says in a product data sheet. "This feature increases the total number of connections between the host and the Kepler GK110 GPU by allowing 32 simultaneous, hardware-managed connections, compared to the single connection available with Fermi."


NVIDIA Kepler

- SMX Streaming Multiprocessor -- The basic building block of every GPU, the SMX streaming multiprocessor was redesigned from the ground up for high performance and energy efficiency. It delivers up to three times more performance per watt than the Fermi streaming multiprocessor, making it possible to build a supercomputer that delivers one petaflop of computing performance in just 10 server racks. SMX's energy efficiency was achieved by increasing the number of CUDA architecture cores by four times, while reducing the clock speed of each core, power-gating parts of the GPU when idle, and maximizing the GPU area devoted to parallel-processing cores instead of control logic.

- Dynamic Parallelism -- This capability enables GPU threads to dynamically spawn new threads, allowing the GPU to adapt dynamically to the data. It greatly simplifies parallel programming, enabling GPU acceleration of a broader set of popular algorithms, such as adaptive mesh refinement, fast multipole methods, and multigrid methods.

- Hyper-Q -- This enables multiple CPU cores to simultaneously use the CUDA architecture cores on a single Kepler GPU. This dramatically increases GPU utilization, slashing CPU idle times and advancing programmability. Hyper-Q is ideal for cluster applications that use MPI.


2 of the top 10 systems on the June 2013 TOP500 list use NVIDIA GPUs:

http://www.top500.org/lists/2013/06/


Detecting and Enhancing Loop-Level Parallelism

- Focuses on determining whether data accesses in later iterations depend on data values produced in earlier iterations
  - Loop-carried dependence
- Example 1:

    for (i=999; i>=0; i=i-1)
        x[i] = x[i] + s;

- No loop-carried dependence: each iteration reads and writes only its own x[i]

Loop-Level Parallelism

- Example 2:

    for (i=0; i<100; i=i+1) {
        A[i+1] = A[i] + C[i];    /* S1 */
        B[i+1] = B[i] + A[i+1];  /* S2 */
    }

- S1 and S2 use values computed by S1 and S2, respectively, in the previous iteration (loop-carried)
- S2 also uses the value A[i+1] computed by S1 in the same iteration (not loop-carried)

Loop-Level Parallelism

- Example 3:

    for (i=0; i<100; i=i+1) {
        A[i] = A[i] + B[i];    /* S1 */
        B[i+1] = C[i] + D[i];  /* S2 */
    }

- S1 uses a value computed by S2 in the previous iteration, but the dependence is not circular, so the loop is parallel

- Transform to put S2 before the "next" S1:

    A[0] = A[0] + B[0];          /* prologue */
    for (i=0; i<99; i=i+1) {
        B[i+1] = C[i] + D[i];
        A[i+1] = A[i+1] + B[i+1];
    }
    B[100] = C[99] + D[99];      /* epilogue */

Loop-Level Parallelism

- Example 4:

    for (i=0; i<100; i=i+1) {
        A[i] = B[i] + C[i];
        D[i] = A[i] * E[i];
    }

  The dependence on A[i] is within the same iteration, not loop-carried, so the loop is parallel.

- Example 5:

    for (i=1; i<100; i=i+1) {
        Y[i] = Y[i-1] + Y[i];
    }

  Y[i-1] makes this a loop-carried dependence (a recurrence), so the loop is not parallel.

Finding dependencies

- Assume the indices are affine:
  - a × i + b (where i is the loop index)
- Assume:
  - A store to a × i + b, then
  - A load from c × i + d
  - i runs from m to n
- A dependence exists if:
  - There exist j, k such that m ≤ j ≤ n and m ≤ k ≤ n
  - The store to a × j + b and the load from c × k + d collide: a × j + b = c × k + d

Finding dependencies

- Generally cannot be determined at compile time
- Test for the absence of a dependence:
  - GCD test:
    - If a dependence exists, then GCD(c,a) must evenly divide (d-b)
- Example:

    for (i=0; i<100; i=i+1) {
        X[2*i+3] = X[2*i] * 5.0;
    }
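Working the example (this is the textbook's answer): here a = 2, b = 3, c = 2, d = 0, so GCD(c,a) = 2 and d - b = -3. Since 2 does not evenly divide 3, no dependence is possible.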

Finding dependencies

- Example 2:

    for (i=0; i<100; i=i+1) {
        Y[i] = X[i] / c;   /* S1 */
        X[i] = X[i] + c;   /* S2 */
        Z[i] = Y[i] + c;   /* S3 */
        Y[i] = c - Y[i];   /* S4 */
    }

- Watch for antidependences and output dependences (one way to remove them is sketched below)
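A renaming transformation along the lines of the textbook's treatment (T and X1 are fresh arrays introduced here for illustration):

    for (i=0; i<100; i=i+1) {
        T[i] = X[i] / c;    /* Y renamed to T to remove the output dependence */
        X1[i] = X[i] + c;   /* X renamed to X1 to remove the antidependence */
        Z[i] = T[i] + c;    /* reads the renamed T */
        Y[i] = c - T[i];    /* the only remaining write to Y */
    }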

Reductions

- Reduction operation:

    for (i=9999; i>=0; i=i-1)
        sum = sum + x[i] * y[i];

- Transform to:

    for (i=9999; i>=0; i=i-1)
        sum[i] = x[i] * y[i];
    for (i=9999; i>=0; i=i-1)
        finalsum = finalsum + sum[i];

- Do on p processors (each processor p runs its own copy; the p partial sums are then combined in a final pass):

    for (i=999; i>=0; i=i-1)
        finalsum[p] = finalsum[p] + sum[i+1000*p];

- Note: this assumes associativity!
