CS 152 Computer Architecture and Engineering

Lecture 13 - VLIW Machines and Statically Scheduled ILP

John Wawrzynek Electrical Engineering and Computer Sciences University of California at Berkeley

http://www.eecs.berkeley.edu/~johnw http://inst.eecs.berkeley.edu/~cs152

10/17/2016 CS152, Fall 2016

CS152 Administrivia

§ Quiz 3, Thursday Oct 21 – L10-L12, PS3, Lab 3 directed portion
§ Make sure to understand PS3!
§ Lab 3 due Friday Oct 28

Last time in Lecture 12

§ Unified physical register file machines remove data values from the ROB
  – All values are read and written only during execution
  – Only register tags are held in the ROB
  – Allocate resources (ROB slot, destination physical register, memory reorder queue location) during decode
  – Issue window can be separated from the ROB and made smaller than the ROB (allocate in decode, free after instruction completes)
  – Free resources on commit
§ Speculative store buffer holds store values before commit to allow load-store forwarding
§ Can execute later loads past earlier stores when addresses are known, or predicted to have no dependence

Superscalar Control Logic Scaling

[Figure: an issue group of width W checked against previously issued instructions over lifetime L]

§ Each issued instruction must somehow check against W*L instructions, i.e., growth in hardware ∝ W*(W*L)
§ For in-order machines, L is related to pipeline latencies and the check is done during issue (interlocks or scoreboard)
§ For out-of-order machines, L also includes time spent in instruction buffers (instruction window or ROB), and the check is done by broadcasting tags to waiting instructions at write back (completion)
§ As W increases, a larger instruction window is needed to find enough parallelism to keep the machine busy => greater L
=> Out-of-order control logic grows faster than W² (~W³)
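The growth rate can be made concrete with a back-of-the-envelope sketch. The function below just evaluates the W*(W*L) comparator count from the bullet above; the assumption that L must grow linearly with W (L = 8·W) is illustrative, not from the slide.

```c
/* Each of the W instructions issuing this cycle must be checked against
   the W*L instructions still in flight, so comparator count grows as
   W * (W * L). */
long dependence_checks(long w, long l) {
    return w * (w * l);
}
```

If the window grows with issue width (say L = 8·W), checks grow cubically: W=4 gives 512 comparisons, while W=8 gives 4096 — an 8× (2³) jump for a 2× wider machine.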

Out-of-Order Control Complexity: MIPS R10000

[Die photo with the control logic highlighted]

[ SGI/MIPS Technologies Inc., 1995 ]

Sequential ISA Bottleneck

[Figure: sequential source code (a = foo(b); for (i=0, i<…) is compiled by a sequential-ISA compiler into sequential machine code. The compiler must find independent operations and schedule them; the superscalar hardware must then re-check instruction dependencies and schedule execution at run time]

VLIW: Very Long Instruction Word

[Instruction format: Int Op 1 | Int Op 2 | Mem Op 1 | Mem Op 2 | FP Op 1 | FP Op 2]

Two Integer Units, Single Cycle Latency
Two Load/Store Units, Three Cycle Latency
Two Floating-Point Units, Four Cycle Latency

§ Multiple operations packed into one instruction
§ Each operation slot is for a fixed function
§ Constant operation latencies are specified
§ Architecture requires guarantee of:
  – Parallelism within an instruction => no cross-operation RAW check
  – No data use before data ready => no data interlocks
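As a data structure, one such instruction might be modeled as below; the 32-bit per-slot encoding and field names are illustrative, not any real ISA.

```c
#include <stdint.h>

/* One VLIW instruction with the slide's six fixed-function slots.
   The 32-bit encoding per slot is illustrative, not a real ISA. */
typedef struct {
    uint32_t int_op[2];  /* two integer units, single-cycle latency   */
    uint32_t mem_op[2];  /* two load/store units, three-cycle latency */
    uint32_t fp_op[2];   /* two FP units, four-cycle latency          */
} vliw_inst;

/* Unused slots must still be encoded, as explicit NOPs. */
#define VLIW_NOP 0u
```

Note that every slot occupies encoding space even when idle — the object-code-size problem discussed later in the lecture.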

Early VLIW Machines

§ FPS AP120B (1976)
  – scientific attached array processor
  – first commercial wide instruction machine
  – hand-coded vector math libraries using pipelining and loop unrolling
§ Multiflow Trace (1987)
  – commercialization of ideas from Fisher’s Yale group, including “trace scheduling”
  – available in configurations with 7, 14, or 28 operations/instruction
  – 28 operations packed into a 1024-bit instruction word
§ Cydrome Cydra-5 (1987)
  – 7 operations encoded in a 256-bit instruction word
  – rotating register file

VLIW Compiler Responsibilities

§ Schedule operations to maximize parallel execution

§ Guarantee intra-instruction parallelism

§ Schedule to avoid data hazards (no interlocks)
  – Typically separates dependent operations with explicit NOPs
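Given the fixed latencies on the earlier slide, the NOP spacing is pure arithmetic; a minimal sketch (the enum names are mine, the latency values are the slide's):

```c
/* Fixed operation latencies from the slide's machine. */
enum { INT_LAT = 1, MEM_LAT = 3, FP_LAT = 4 };

/* Number of NOP instructions the compiler must place between a producing
   operation and its first dependent consumer: the result is ready
   `latency` instructions later, and one of those slots is the producer
   itself.  Independent work may fill the gap instead of NOPs. */
int nops_needed(int producer_latency) {
    return producer_latency - 1;
}
```

So a load (3-cycle latency) needs two filler instructions before its use, and an FP op needs three — which is exactly why the naive loop schedule below wastes so many cycles.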

Loop Execution

for (i=0; i<N; i++) B[i] = A[i] + C;

[Schedule: the compiled loop body (fld, fadd, fsd, pointer increments, branch) occupies 8 cycles per iteration on the VLIW machine]

How many FP ops/cycle? 1 fadd / 8 cycles = 0.125

Loop Unrolling

for (i=0; i<N; i++) B[i] = A[i] + C;
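The unrolling transformation itself can be written as a C sketch, assuming (consistent with the assembly, where f0 holds the constant) that the loop computes B[i] = A[i] + C:

```c
/* 4-way unrolled form of: for (i = 0; i < n; i++) b[i] = a[i] + c;
   n is assumed to be a multiple of 4, as in the slide's schedule. */
void add_const_unrolled4(const double *a, double *b, double c, int n) {
    for (int i = 0; i < n; i += 4) {
        b[i]     = a[i]     + c;  /* fadd f5, f0, f1 */
        b[i + 1] = a[i + 1] + c;  /* fadd f6, f0, f2 */
        b[i + 2] = a[i + 2] + c;  /* fadd f7, f0, f3 */
        b[i + 3] = a[i + 3] + c;  /* fadd f8, f0, f4 */
    }
}
```

Unrolling amortizes the loop overhead (increments, branch) over four adds and gives the scheduler four independent dependence chains to interleave.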

Unroll 4 ways:

loop: fld f1, 0(x1)
      fld f2, 8(x1)
      fld f3, 16(x1)
      fld f4, 24(x1)
      add x1, 32
      fadd f5, f0, f1
      fadd f6, f0, f2
      fadd f7, f0, f3
      fadd f8, f0, f4
      fsd f5, 0(x2)
      fsd f6, 8(x2)
      fsd f7, 16(x2)
      fsd f8, 24(x2)
      add x2, 32
      bne x1, x3, loop

[Schedule: packed into the VLIW slots (Int1, Int2, M1, M2, FP+, FPx), the unrolled body occupies 11 cycles]

How many FLOPS/cycle? 4 fadds / 11 cycles = 0.36

Software Pipelining

Unroll 4 ways first, then overlap iterations: in the steady-state kernel, the flds for one unrolled iteration issue alongside the fadds of the previous iteration and the fsds of the one before that (the last store becomes fsd f8, -8(x2) because x2 has already been incremented).

[Schedule: a prolog fills the pipeline, a 4-cycle kernel (loop: … bne x1, x3, loop) runs in steady state, and an epilog drains the final stores]

How many FLOPS/cycle? 4 fadds / 4 cycles = 1

Software Pipelining vs. Loop Unrolling
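The software-pipelined schedule can be mimicked in C: a prolog issues the first load, the kernel overlaps the load for the next iteration with the add and store for the current one, and an epilog drains the pipeline. A minimal sketch with pipeline depth 2 rather than the slide's deeper schedule (again assuming the loop computes b[i] = a[i] + c):

```c
/* Hand software-pipelined b[i] = a[i] + c, pipeline depth 2 for clarity:
   the load for iteration i+1 issues alongside the add/store of iteration i.
   Prolog and epilog run once per loop, not once per iteration. */
void add_const_swp(const double *a, double *b, double c, int n) {
    if (n <= 0) return;
    double cur = a[0];              /* prolog: first load          */
    for (int i = 0; i < n - 1; i++) {
        double next = a[i + 1];     /* kernel: load for next trip  */
        b[i] = cur + c;             /* kernel: compute + store now */
        cur = next;
    }
    b[n - 1] = cur + c;             /* epilog: drain last element  */
}
```

The visible prolog/epilog code is the once-per-loop cost the next slide's comparison is about.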

[Figure: performance vs. time for one loop execution. The unrolled loop pays startup and wind-down overhead within every unrolled iteration; the software-pipelined loop ramps up once, runs at peak performance in steady state, and winds down once]

Software pipelining pays startup/wind-down costs only once per loop, not once per iteration

What if there are no loops?

§ Branches limit basic block size in control-flow intensive irregular code
§ Difficult to find ILP in individual basic blocks

Trace Scheduling [Fisher, Ellis]

§ Pick a string of basic blocks, a trace, that represents the most frequent branch path
§ Use profiling feedback or compiler heuristics to find common branch paths
§ Schedule the whole “trace” at once
§ Add fixup code to cope with branches jumping out of the trace

Problems with “Classic” VLIW

§ Object-code compatibility
  – have to recompile all code for every machine, even for two machines in the same generation
§ Object code size
  – instruction padding wastes instruction memory/cache
  – loop unrolling/software pipelining replicates code
§ Scheduling variable latency memory operations
  – caches and/or memory bank conflicts impose statically unpredictable variability
§ Knowing branch probabilities
  – optimal schedule varies with branch path
  – profiling requires a significant extra step in the build

VLIW Instruction Encoding

[Encoding diagram: Group 1 | Group 2 | Group 3]

§ Schemes to reduce effect of unused fields
  – Compressed format in memory, expand on I-cache refill
    • used in Multiflow Trace
    • introduces instruction addressing challenge
  – Mark parallel groups
    • used in TMS320C6x DSPs, Intel IA-64
  – Provide a single-op VLIW instruction
    • Cydra-5 UniOp instructions
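The "mark parallel groups" scheme can be modeled with one bit per operation: a set bit ends the current parallel-issue group (this is an abstraction of the TMS320C6x p-bit and IA-64 stop bits; the function and packet shape are illustrative).

```c
/* Count the parallel-issue groups in a fetch packet of n_ops operations,
   where stop_bit[i] == 1 marks the last operation of a group.  A toy
   model of group-marking encodings like the C6x p-bit or IA-64 stops. */
int count_groups(const unsigned char *stop_bit, int n_ops) {
    int groups = 0;
    for (int i = 0; i < n_ops; i++)
        if (stop_bit[i])
            groups++;
    return groups;
}
```

The encoding stores only real operations plus one bit each, instead of NOP-padded fixed-width words, so serial code no longer wastes five of six slots.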

Intel EPIC IA-64

§ EPIC is the style of architecture (cf. CISC, RISC)
  – Explicitly Parallel Instruction Computing (really just VLIW)
§ IA-64 is Intel’s chosen ISA (cf. x86, MIPS)
  – IA-64 = Intel Architecture 64-bit
  – An object-code-compatible VLIW
§ Merced was the first Itanium implementation (cf. 8086)
  – First customer shipment expected 1997 (actually 2001)
  – McKinley, second implementation, shipped in 2002
  – Recent version, Poulson: eight cores, 32nm, announced 2011

Eight Core Itanium “Poulson” [Intel 2011]

§ 8 cores
§ Cores are 2-way multithreaded
§ 6 instruction/cycle fetch
  – Two 128-bit bundles
§ Up to 12 insts/cycle execute
§ 1-cycle 16KB L1 I&D caches
§ 9-cycle 512KB L2 I-cache
§ 8-cycle 256KB L2 D-cache
§ 32 MB shared L3 cache
§ 544mm² in 32nm CMOS
§ Over 3 billion transistors

IA-64 Instruction Format

[Bundle layout: Instruction 2 | Instruction 1 | Instruction 0 | Template]

128-bit instruction bundle
§ Template bits describe grouping of these instructions with others in adjacent bundles
§ Each group contains instructions that can execute in parallel

bundle j-1 | bundle j | bundle j+1 | bundle j+2

group i-1 group i group i+1 group i+2
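The documented bundle layout — a 5-bit template followed by three 41-bit instruction slots in 128 bits — can be unpacked with shifts and masks. This sketch assumes the bundle arrives as two little-endian 64-bit halves; the struct and function names are mine.

```c
#include <stdint.h>

/* An IA-64 bundle is 128 bits: a 5-bit template (bits 0-4) followed by
   three 41-bit instruction slots. */
typedef struct { unsigned tmpl; uint64_t slot[3]; } bundle;

bundle decode_bundle(uint64_t lo, uint64_t hi) {
    const uint64_t mask41 = (1ull << 41) - 1;
    bundle b;
    b.tmpl    = (unsigned)(lo & 0x1f);              /* bits 0..4    */
    b.slot[0] = (lo >> 5) & mask41;                 /* bits 5..45   */
    b.slot[1] = ((lo >> 46) | (hi << 18)) & mask41; /* bits 46..86  */
    b.slot[2] = (hi >> 23) & mask41;                /* bits 87..127 */
    return b;
}
```

The template value selects which execution-unit types the three slots feed and where the stops (group boundaries) fall.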

IA-64 Registers

§ 128 General Purpose 64-bit Integer Registers
§ 128 General Purpose 64/80-bit Floating Point Registers
§ 64 1-bit Predicate Registers

§ GPRs “rotate” to reduce code size for software pipelined loops
  – Rotation is a simple form of register renaming, allowing one instruction to address different physical registers on each iteration
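A simplified model of rotation treats it as a mapping from logical to physical register numbers; the region size and base handling below abstract away IA-64's actual GR32-GR127 rotating region.

```c
/* Simplified model of register rotation: the rotating register base (RRB)
   is added to the logical register number, modulo the size of the rotating
   region.  The loop-branch instruction bumps RRB each iteration, so the
   same instruction text names a fresh physical register every time around,
   without replicating the loop body the way unrolling does. */
int rotate_map(int logical, int rrb, int region_size) {
    return (logical + rrb) % region_size;
}
```

With rotation, a software-pipelined kernel can be written once and still give each in-flight iteration its own registers — removing the code replication that plain software pipelining needs.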

IA-64 Predicated Execution

Problem: Mispredicted branches limit ILP
  – 20-30% of performance goes to branch mispredictions [Intel98]

Solution: Eliminate hard-to-predict branches with predicated execution
  – Almost all IA-64 instructions can be executed conditionally under predicate
  – Instruction becomes NOP if predicate register false
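In C terms, if-conversion replaces a data-dependent branch with straight-line code: both arms are evaluated and a predicate selects the result. A minimal sketch, where the predicated arithmetic stands in for IA-64's (p1)/(p2) qualifying predicates:

```c
/* Branchy version: control flow depends on the data. */
int max_branchy(int a, int b) {
    if (a > b)       /* mispredicts often if a, b are unpredictable */
        return a;
    return b;
}

/* If-converted version: one basic block, no branch to mispredict. */
int max_predicated(int a, int b) {
    int p = (a > b);              /* p1,p2 <- cmp(...)            */
    return p * a + (1 - p) * b;   /* predicate selects the result */
}
```

Compilers emit conditional-move or predicated instructions for this pattern; the cost is that both arms consume execution slots even though only one result is kept.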

Four basic blocks, with branches:

b0: Inst 1
    Inst 2
    br a==b, b2
b1: Inst 3        (else)
    Inst 4
    br b3
b2: Inst 5        (then)
    Inst 6
b3: Inst 7
    Inst 8

After predication, a single basic block:

    Inst 1
    Inst 2
    p1,p2 <- cmp(a==b)
    (p1) Inst 3 || (p2) Inst 5
    (p1) Inst 4 || (p2) Inst 6
    Inst 7
    Inst 8

§ Simplifies scheduling
  – Turns “control flow” into “data flow”
§ Less code is needed

IA-64 Data Speculation

Problem: Possible memory hazards limit code scheduling
Solution: Hardware to check pointer hazards

Original schedule (the load cannot move above the store, because the store might be to the same address):

    Inst 1
    Inst 2
    Store
    Load r1
    Use r1
    Inst 3

With a data-speculative load:

    Load.a r1     – adds the load address to the address check table
    Inst 1
    Inst 2
    Store         – invalidates any matching loads in the address check table
    Load.c        – check: if the load was invalidated (or missing), jump to fixup code
    Use r1
    Inst 3

Requires associative hardware in address check table
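A toy software model of the address check table (IA-64 calls it the ALAT) makes the protocol concrete; the table size and direct-indexed entry interface are illustrative, not the real hardware organization.

```c
#include <stdint.h>

/* Toy model of the address check table: ld.a records the load address;
   a later store to the same address invalidates the entry; ld.c checks
   whether the speculative value is still good. */
#define ALAT_SIZE 8
static uint64_t alat_addr[ALAT_SIZE];
static int      alat_valid[ALAT_SIZE];

void ld_a(int entry, uint64_t addr) {        /* advanced (speculative) load */
    alat_addr[entry] = addr;
    alat_valid[entry] = 1;
}

void store_to(uint64_t addr) {               /* store: associative search   */
    for (int i = 0; i < ALAT_SIZE; i++)
        if (alat_valid[i] && alat_addr[i] == addr)
            alat_valid[i] = 0;
}

int ld_c_ok(int entry) {                     /* 0 => branch to fixup code   */
    return alat_valid[entry];
}
```

The store's loop over all entries is exactly the associative search the slide's last bullet refers to: every store must compare its address against every live speculative load.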

Limits of Static Scheduling

§ Unpredictable branches
§ Variable memory latency (unpredictable cache misses)
§ Code size explosion
§ Compiler complexity

Despite several attempts, VLIW has failed in the general-purpose computing arena (so far).
  – More complex VLIW architectures are close to in-order superscalar in complexity, with no real advantage on large, complex apps.

VLIW has been successful in the embedded DSP market.
  – Simpler VLIWs, a more constrained environment, friendlier code.

Acknowledgements

§ These slides contain material developed and copyright by:
  – Arvind (MIT)
  – Krste Asanovic (MIT/UCB)
  – Joel Emer (Intel/MIT)
  – James Hoe (CMU)
  – John Kubiatowicz (UCB)
  – David Patterson (UCB)

§ MIT material derived from course 6.823 § UCB material derived from course CS252
