
361.1.4201, 381.1.0107 Pipelining I

Dr. Guy Tel-Zur

Based on slides by Prof. Onur Mutlu, Carnegie Mellon University, Spring 2015 (1/30/2015)

Agenda for Today & Next Few Lectures

 Single-cycle Microarchitectures

 Multi-cycle and Microprogrammed Microarchitectures

 Pipelining

 Issues in Pipelining: Control & Data Dependence Handling, State Maintenance and Recovery, …

 Out-of-Order Execution

 Issues in OoO Execution: Load-Store Handling, …

2 Recap of Last Lecture

 Multi-cycle and Microprogrammed Microarchitectures
 Benefits vs. Design Principles
 When to Generate Control Signals
 Microprogrammed Control: uInstruction, uSequencer
 LC-3b, MIPS: State Machine, Datapath, Control Structure

 Microprogramming benefits
 Power of abstraction (for the HW designer)
 Advantages of uProgrammed Control
 Update of Machine Behavior

3 Review: The Power of Abstraction

 The concept of a control store of microinstructions provides the hardware designer with a new abstraction: microprogramming

 The designer can translate any desired operation to a sequence of microinstructions
 All the designer needs to provide is:
 The sequence of microinstructions needed to implement the desired operation
 The ability for the control logic to correctly sequence through the microinstructions (micro-sequencer)
 Any additional datapath elements and control signals needed (no need if the operation can be “translated” into existing control signals)
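The sequencing idea above can be sketched in software. This is a minimal, hypothetical micro-sequencer: the control store maps a microinstruction address to a pair (control signals, next-address rule). The signal names, opcodes, and encoding here are illustrative only, not the LC-3b or MIPS microcode from the lecture.

```python
# Hypothetical control store: address -> (control signals, sequencing rule).
# All names are illustrative assumptions, not a real microcode encoding.
CONTROL_STORE = {
    0: ({"MemRead": 1, "IRWrite": 1}, "next"),      # fetch
    1: ({}, "dispatch"),                             # decode: jump on opcode
    2: ({"ALUSrcA": 1, "ALUOp": "add"}, "next"),    # e.g., address calculation
    3: ({"MemRead": 1}, "goto0"),                    # memory access, back to fetch
}
DISPATCH = {"lw": 2}  # opcode -> microcode entry point

def run(opcode, steps=4):
    """Step the micro-sequencer and record the microinstruction trace."""
    upc, trace = 0, []
    for _ in range(steps):
        signals, rule = CONTROL_STORE[upc]
        trace.append((upc, signals))
        if rule == "next":
            upc += 1
        elif rule == "dispatch":
            upc = DISPATCH[opcode]
        else:  # "goto0": return to fetch
            upc = 0
    return trace

print([addr for addr, _ in run("lw")])  # microinstruction sequence for lw: [0, 1, 2, 3]
```

The point of the sketch is the division of labor the slide describes: the store holds the microinstructions, and the sequencer only needs "next", "dispatch on opcode", and "go back to fetch" to walk through them.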

4 Review: Advantages of Microprogrammed Control

 Allows a very simple design to do powerful computation by controlling the datapath (using a sequencer)
 A high-level ISA is translated into a sequence of u-instructions
 Microcode (u-code) enables a minimal datapath to emulate an ISA
 Microinstructions can be thought of as a user-invisible ISA (u-ISA)

 Enables easy extensibility of the ISA  Can support a new instruction by changing the microcode  Can support complex instructions as a sequence of simple microinstructions

 Enables update of machine behavior  A buggy implementation of an instruction can be fixed by changing the microcode in the field

5 The National Library, 1975

On this computer, unlike common practice, the u-ISA was visible, and the microcode could even be changed dynamically while the system was running. No such transparency exists today. This is a useful capability, but it also has a downside: information-security risks.

Wrap Up Microprogrammed Control

 Horizontal vs. Vertical Microcode
 Nanocode vs. Microcode vs. Millicode
 Microprogrammed MIPS: An Example

7 Horizontal Microcode

 A single control store provides the control signals

[Figure: horizontal microcode organization for the MIPS design (P&H, Appendix D): the microcode storage drives the datapath control outputs (ALUSrcA, IorD, IRWrite, PCWrite, PCWriteCond, ...); an n-bit microprogram counter with adder and address-select logic, steered by 1-bit sequencing control and by inputs from the opcode field.]

[Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.] Control Store: 2^n x k bits (not including sequencing)

8 Vertical Microcode

 Two-level control store: the first level specifies abstract operations; a 1-bit signal means "do this RT (or combination of RTs)"

[Figure: the first-level microcode storage holds abstract register-transfer operations such as:
 "PC ← PC + 4"
 "PC ← ALUOut"
 "PC ← PC[31:28], IR[25:0], 2'b00"
 "IR ← MEM[PC]"
 "A ← RF[IR[25:21]]"
 "B ← RF[IR[20:16]]"
Its m-bit output indexes a second-level ROM whose k-bit output provides the actual control signals. Sequencing: microprogram counter, adder, address-select logic, inputs from the instruction register's opcode field. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

If done right (i.e., m << n), the two-level control store can be much smaller than a single-level horizontal store
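The saving from the two-level (vertical) organization can be made concrete with a back-of-the-envelope calculation. The parameter values below are illustrative assumptions, not taken from a real design:

```python
import math

# Illustrative parameters (assumptions): n address bits into the control
# store, k raw control signals, m distinct abstract (vertical) operations.
n, k, m = 6, 40, 16

horizontal_bits = (2 ** n) * k                 # one wide horizontal store
m_bits = math.ceil(math.log2(m))               # bits to encode an abstract op
vertical_bits = (2 ** n) * m_bits + m * k      # narrow store + small decode ROM

print(horizontal_bits, vertical_bits)          # 2560 896
```

With m = 16 abstract operations instead of k = 40 raw signals per entry, the first-level store shrinks by a factor of 10, and the added decode ROM is small, which is exactly the "m << n, save bits" argument of the slide.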

 Millicode: a level above traditional microcode
 ISA-level subroutines that can be called by the microcode to handle complicated operations and system functions
 E.g., Heller and Farrell, “Millicode in an IBM zSeries Processor,” IBM JR&D, May/Jul 2004 (optional reading)

Motivation:
 In both cases, we avoid complicating the main u-controller
 You can think of these as “microcode” at different levels of abstraction

10 Nanocode Concept Illustrated

a “coded” processor implementation

ROM processor datapath PC תכנון יותר מודולרי

We refer to this a “coded” FPU implementation as “nanocode” when a coded ROM arithmetic subsystem is embedded datapath in a coded system PC 11 Microcoded Multi-Cycle MIPS Design

 Any ISA can be implemented with a microprogrammed microarchitecture

 P&H, CO&D, Appendix D: Microprogrammed MIPS design

 We will not cover this in class
 We covered this in class
 However, you can do an extra credit assignment for Lab 2

Guy: As a reminder, required reading: https://booksite.elsevier.com/9780124077263/downloads/advance_contents_and_appendices/appendix_D.pdf

12 Microcoded Multi-Cycle MIPS Design

[Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

13 Control Logic for MIPS FSM

[Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

14 Microprogrammed Control for MIPS FSM

[Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

15 Multi-Cycle vs. Single-Cycle uArch

 Advantages

 Disadvantages

 You should be very familiar with this right now

16 Microprogrammed vs. Hardwired Control

 Advantages

 Disadvantages

 You should be very familiar with this right now

17 Can We Do Better?

 What limitations do you see with the multi-cycle design?

 Limited concurrency
 Some hardware resources are idle during different phases of the instruction processing cycle
 “Fetch” logic is idle when an instruction is being “decoded” or “executed”
 Most of the datapath is idle when a memory access is happening

18 Can We Use the Idle Hardware to Improve Concurrency?

 Goal: More concurrency  Higher instruction throughput (i.e., more “work” completed in one cycle)

 Idea: When an instruction is using some resources in its processing phase, process other instructions on idle resources not needed by that instruction
 E.g., when an instruction is being decoded, fetch the next instruction
 E.g., when an instruction is being executed, decode another instruction
 E.g., when an instruction is accessing data memory (ld/st), execute the next instruction
 E.g., when an instruction is writing its result into the register file, access data memory for the next instruction

19 Pipelining

20 Pipelining: Basic Idea

 More systematically: pipeline the execution of multiple instructions
 Analogy: “assembly line processing” of instructions (an analogous example: a production line)

 Idea:  Divide the instruction processing cycle into distinct “stages” of processing  Ensure there are enough hardware resources to process one instruction in each stage  Process a different instruction in each stage  Instructions consecutive in program order are processed in consecutive stages

 Benefit: Increases instruction processing throughput = IPC (= 1/CPI)
 Downside: Start thinking about this… (ideas???)

21 Example: Execution of Four Independent ADDs

 Multi-cycle: 4 cycles per instruction

F D E W
        F D E W
                F D E W
                        F D E W
(Time →)

 Pipelined: 4 cycles per 4 instructions (steady state)

F D E W
 F D E W
  F D E W
   F D E W
(Time →)

Is life always this beautiful?
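The two diagrams can be summarized by two tiny formulas, sketched below for an idealized 4-stage (F, D, E, W) machine with no stalls (an assumption; real pipelines stall, as the later slides show):

```python
# Total cycles to finish n independent instructions in a 4-stage machine.
def multicycle_cycles(n, stages=4):
    return n * stages                 # one instruction at a time

def pipelined_cycles(n, stages=4):
    return stages + (n - 1)           # fill the pipe, then 1 completion/cycle

print(multicycle_cycles(4), pipelined_cycles(4))        # 16 7
print(multicycle_cycles(1000), pipelined_cycles(1000))  # 4000 1003
```

For large n the pipelined count approaches one instruction per cycle, which is the "4 cycles per 4 instructions (steady state)" claim above.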

22 The Laundry Analogy

[Figure: four laundry loads A-D processed sequentially, 6 PM to 2 AM. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

 “place one dirty load of clothes in the washer”
 “when the washer is finished, place the wet load in the dryer”
 “when the dryer is finished, take out the dry load and fold”
 “when folding is finished, ask your roommate (??) to put the clothes away”

- steps to do a load are sequentially dependent
- no dependence between different loads
- different steps do not share resources

23 Pipelining Multiple Loads of Laundry

[Figure: loads A-D overlapped across the washer/dryer/fold/put-away steps. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

- 4 loads of laundry in parallel
- no additional resources
- throughput increased by 4
- latency per load is the same

24 Pipelining Multiple Loads of Laundry: In Practice

[Figure: the same overlapped loads, but the dryer step takes longer than the others. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

the slowest step decides throughput

25 Pipelining Multiple Loads of Laundry: In Practice

[Figure: the same loads with a second dryer added. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

throughput restored (2 loads per hour) using 2 dryers

26 An Ideal Pipeline (what is needed for ideal pipelining?)

 Goal: Increase throughput with little increase in cost (hardware cost, in case of instruction processing)

 Repetition of identical operations
 The same operation is repeated on a large number of different inputs (e.g., all laundry loads go through the same steps)
 Repetition of independent operations
 No dependencies between repeated operations
 Uniformly partitionable suboperations (uniform division into time slices)
 Processing can be evenly divided into uniform-latency suboperations (that do not share resources)
 Fitting examples: automobile assembly line, doing laundry
 What about the instruction processing “cycle”?

27 Ideal Pipelining

[Diagram:
 1 stage:  (F,D,E,M,W) in T ps → BW ≈ 1/T
 2 stages: (F,D,E) T/2 ps | (M,W) T/2 ps → BW ≈ 2/T
 3 stages: (F,D) T/3 ps | (E,M) T/3 ps | (W) T/3 ps → BW ≈ 3/T]

28 More Realistic Pipeline: Throughput

 Nonpipelined version with delay T (T = combinational logic delay, S = latch delay):
  BW = 1 / (T + S)

 k-stage pipelined version:
  BW_k-stage = 1 / (T/k + S)
  BW_max = 1 / (1 gate delay + S), with k as large as possible

 Latch delay reduces throughput (switching overhead between stages)

29 More Realistic Pipeline: Cost

 Nonpipelined version with combinational cost G (L = latch cost):
  Cost = G + L

 k-stage pipelined version:
  Cost_k-stage = G + L×k
  Latches increase hardware cost

Drawbacks: cost, energy
What is the ideal number of pipeline stages?

30 Pipelining Instruction Processing
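The throughput and cost formulas from the last two slides can be evaluated side by side. The values of T, S, G, and L below are assumed example numbers (picoseconds and gate equivalents), chosen only to show the trend of diminishing returns as k grows:

```python
# BW_k-stage = 1 / (T/k + S)   and   Cost_k-stage = G + L*k
T, S = 800.0, 25.0   # assumed combinational logic delay, latch delay (ps)
G, L = 10_000, 250   # assumed logic cost, per-stage latch cost (gates)

def bw(k):
    return 1.0 / (T / k + S)     # instructions per ps

def cost(k):
    return G + L * k             # hardware cost grows linearly with k

for k in (1, 2, 4, 8, 16):
    print(f"k={k:2d}  BW={bw(k):.5f}/ps  cost={cost(k)}")
```

Because of the latch delay S, deepening the pipeline from 1 to 16 stages buys much less than a 16x throughput gain, while cost rises linearly: this tension is what makes "the ideal number of pipeline stages" a real design question.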

31 Remember: The Instruction Processing Cycle

1. Instruction Fetch fetch (IF) 2. Instruction Decode decode and register Evaluate operand Address fetch (ID/RF) 3. Execute/Evaluate memory address (EX/AG)  Fetch Operands 4. Memory operand fetch (MEM)  5. Store/writebackExecute result (WB)  Store Result

AG=Address Generation 32 Remember the Single-Cycle Uarch

[Figure: the single-cycle MIPS datapath: PC, instruction memory, register file, ALU, and data memory, with control signals (RegDst, Branch, MemRead, MemtoReg, ALUOp, MemWrite, ALUSrc, RegWrite), jump/branch address logic (PCSrc1 = Jump, PCSrc2 = Br Taken), sign extension, and ALU control.]

Latency T, BW ≈ 1/T. Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

33 Dividing Into Stages

IF: Instruction fetch (200 ps)
ID: Instruction decode / register file read (100 ps)
EX: Execute / address calculation (200 ps)
MEM: Memory access (200 ps)
WB: Write back (100 ps)

[Figure: the single-cycle datapath partitioned into the five stages above; the jump/branch mux is ignored for now, and the RF write belongs to WB. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

Is this the correct partitioning? Why not 4 or 6 stages? Why not different boundaries?

34 Instruction Pipeline Throughput

Nonpipelined: each lw takes 800 ps (Instruction fetch, Reg, ALU, Data access, Reg), and the next lw starts only after the previous one finishes:

lw $1, 100($0)   starts at 0 ps
lw $2, 200($0)   starts at 800 ps
lw $3, 300($0)   starts at 1600 ps

Pipelined: the clock is 200 ps (the slowest stage), so a new lw starts every 200 ps:

lw $1, 100($0)   starts at 0 ps
lw $2, 200($0)   starts at 200 ps
lw $3, 300($0)   starts at 400 ps

[Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

5-stage speedup is 4, not 5 as predicted by the ideal model. Why?
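The answer falls out of the stage latencies on the "Dividing Into Stages" slide: the clock must match the slowest stage, so non-uniform stages waste time (internal fragmentation). A one-line check:

```python
# Stage latencies from the slide; the clock must fit the slowest stage.
stage_ps = {"IF": 200, "ID": 100, "EX": 200, "MEM": 200, "WB": 100}

nonpipelined = sum(stage_ps.values())   # 800 ps per instruction
clock = max(stage_ps.values())          # 200 ps pipelined cycle
print(nonpipelined / clock)             # 4.0, not len(stage_ps) = 5
```

With five perfectly uniform 160 ps stages the speedup would indeed be 5; the 100 ps ID and WB stages sit idle for half of every 200 ps cycle.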

35 Enabling Pipelined Processing: Pipeline Registers

[Figure: the five-stage datapath with pipeline registers IF/ID, ID/EX, EX/MEM, MEM/WB inserted between the stages. No resource is used by more than 1 stage! Each stage now takes T/k ps rather than T ps. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

36 Pipelined Operation Example

 All instruction classes must follow the same path and timing through the pipeline stages.
 Any performance impact?

[Figure: a single lw stepping through the pipeline, one stage per clock: Instruction fetch, Instruction decode, Execution, Memory, Write back. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

37 Pipelined Operation Example

[Figure: lw $10, 20($1) followed by sub $11, $2, $3 over clocks 1-6; in each clock the sub occupies the stage the lw has just vacated. Is life always this beautiful? Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

38 Illustrating Pipeline Operation: Operation View

         t0   t1   t2   t3   t4   t5  ...
Inst0    IF   ID   EX   MEM  WB
Inst1         IF   ID   EX   MEM  WB
Inst2              IF   ID   EX   MEM  WB
Inst3                   IF   ID   EX   MEM ...
Inst4                        IF   ID   EX  ...

steady state (full pipeline)

39 Illustrating Pipeline Operation: Resource View

       t0  t1  t2  t3  t4  t5  t6  t7  t8  t9  t10
IF     I0  I1  I2  I3  I4  I5  I6  I7  I8  I9  I10
ID         I0  I1  I2  I3  I4  I5  I6  I7  I8  I9
EX             I0  I1  I2  I3  I4  I5  I6  I7  I8
MEM                I0  I1  I2  I3  I4  I5  I6  I7
WB                     I0  I1  I2  I3  I4  I5  I6
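The resource-view table above has a simple closed form: in an ideal, stall-free pipeline, the instruction occupying stage s at cycle t is the one that entered IF s cycles earlier. A small sketch that regenerates the table:

```python
# Regenerate the resource view of an ideal, stall-free 5-stage pipeline.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def occupant(stage_idx, cycle):
    i = cycle - stage_idx               # instruction Ii enters IF at cycle i
    return f"I{i}" if i >= 0 else "--"

for s, name in enumerate(STAGES):
    row = "  ".join(occupant(s, t) for t in range(8))
    print(f"{name:>3}: {row}")
```

Once every stage holds an instruction (from t4 onward here), all five resources are busy every cycle, which is the steady state from the operation view.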

40 Control Points in a Pipeline

[Figure: the pipelined datapath annotated with its control points: PCSrc, Branch, RegWrite, MemWrite, MemRead, MemtoReg, ALUSrc, ALUOp, RegDst, and ALU control, spread across the IF/ID, ID/EX, EX/MEM, and MEM/WB stages. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

Identical set of control points as the single-cycle datapath!! 41 Control Signals in a Pipeline

 For a given instruction, the control signals are the same as in the single-cycle design, but they are required in different cycles, depending on the stage
 Option 1: Decode once, using the same logic as single-cycle, and buffer the signals until consumed; the control bits propagate down the pipeline registers (WB; M, WB; EX, M, WB)
 Option 2: Decode on demand: carry the relevant “instruction word/field” down the pipeline and decode locally within each stage, or in a previous stage
 Which one is better? Option 2 may reduce the cost of the latches (fewer control latches)

42 Pipelined Control Signals (the WB field propagates)

[Figure: the pipelined datapath with the control signals decoded once and carried through the ID/EX, EX/MEM, and MEM/WB pipeline registers as grouped EX / M / WB control fields. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

43 Remember: An Ideal Pipeline (the vision)

 Goal: Increase throughput with little increase in cost (hardware cost, in case of instruction processing); more computation without added cost

 Repetition of identical operations (repetition of fixed steps)
 The same operation is repeated on a large number of different inputs (e.g., all laundry loads go through the same steps)
 Repetition of independent operations (no dependence between the operations)
 No dependencies between repeated operations
 Uniformly partitionable suboperations (uniform latencies)
 Processing can be evenly divided into uniform-latency suboperations (that do not share resources)
 Fitting examples: automobile assembly line, doing laundry
 What about the instruction processing “cycle”?

44 Instruction Pipeline: Not An Ideal Pipeline (reality!)

 Identical operations ... NOT!
 Different instructions do not all need the same stages
 Forcing different instructions to go through the same pipe stages → external fragmentation (some pipe stages idle for some instructions)

 Uniform suboperations ... NOT!
 Different pipeline stages do not have the same latency
 Need to force each stage to be controlled by the same clock → internal fragmentation (some pipe stages are too fast but all take the same clock cycle time)

 Independent operations ... NOT!
 Instructions are not independent of each other
 Need to detect and resolve inter-instruction dependencies to ensure the pipeline provides correct results → pipeline stalls (pipeline is not always moving)

45 Issues in Pipeline Design

 Balancing work in pipeline stages: how many stages and what is done in each stage (we will not discuss this in our course)

 Keeping the pipeline correct, moving, and full in the presence of events that disrupt pipeline flow:
 - Handling dependences: data, control
 - Handling resource contention (for example, contention for memory)
 - Handling long-latency (multi-cycle) operations (what happens when an instruction takes hundreds of clock cycles?)
 - Handling exceptions, interrupts
 (we will deal with all of these in the coming lectures)

 Advanced: Improving pipeline throughput; minimizing stalls

46 Causes of Pipeline Stalls

 Keeping the pipeline correct, moving, and full in the presence of events that disrupt pipeline flow – Handling dependences • Data • Control למשל תחרות על הזכרון Handling resource contention – מה קורה כאשר Handling long-latency (multi-cycle) operations – הוראה לוקחת מאות מחזורי שעון בכל אלה נעסוק  Handling exceptions, interrupts בהרצאות הבאות  Advanced: Improving pipeline throughput – Minimizing stalls 46 Causes of Pipeline Stalls

 Stall: A condition when the pipeline stops moving

 Resource contention

 Dependences (between instructions)  Data  Control

 Long-latency (multi-cycle) operations

47 Dependences and Their Types

 Also called “dependency” or less desirably “hazard”

 Dependences dictate ordering requirements between instructions

 Two types  Data dependence  Control dependence

 Resource contention is sometimes called resource dependence
 However, this is not fundamental to (dictated by) program semantics, so we will treat it separately (it is tied more to the hardware and less to the program)

48 Handling Resource Contention

 Happens when instructions in two pipeline stages need the same resource

 Solution 1: Eliminate the cause of contention
 Duplicate the resource or increase its throughput (these solutions have a price: higher cost / larger chip area)
 E.g., use separate instruction and data memories (caches)
 E.g., use multiple ports for memory structures; high-bandwidth memories

 Solution 2: Detect the resource contention and stall one of the contending stages
 Which stage do you stall?
 Example: What if you had a single read and write port for the register file? Stall the writer or the reader? The reader.

49 Data Dependences

(situations to handle when working with a pipeline)

 Types of data dependences
 Flow dependence (true data dependence; read after write): the most severe of the three
 Output dependence (write after write)
 Anti dependence (write after read)

 Which ones cause stalls in a pipelined machine?
 For all of them, we need to ensure the semantics of the program is correct (pipelining departs from the von Neumann model)
 Flow dependences always need to be obeyed because they constitute true dependence on a value
 Anti and output dependences exist due to the limited number of architectural registers (register scarcity, not content)
 They are dependence on a name, not a value
 We will later see what we can do about them

50 Data Dependence Types

Flow dependence (Read-after-Write, RAW): the producer creates the content of r3, the consumer uses it
  r3 ← r1 op r2   (producer)
  r5 ← r3 op r4   (consumer)

Anti dependence (Write-after-Read, WAR): seemingly no problem here; with enough registers there would be no reason to write specifically to r1, so this is a problem of register name, not of value
  r3 ← r1 op r2
  r1 ← r4 op r5

Output dependence (Write-after-Write, WAW): likewise, the second write into r3 is arbitrary; it could just as well have gone to r1000
  r3 ← r1 op r2
  r5 ← r3 op r4
  r3 ← r6 op r7

51

[Figure: lw $10, 20($1) followed by sub $11, $2, $3 stepping through the pipeline again over clocks 1-6. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

What if the SUB were dependent on LW? What would happen if the sub wrote to r10, i.e., sub $10, $2, $3 instead of sub $11, $2, $3?

52 Data Dependence Handling

53 Readings for Next Few Lectures

 P&H Chapter 4.9-4.11

 Smith and Sohi, “The Microarchitecture of Superscalar Processors,” Proceedings of the IEEE, 1995
 More advanced pipelining
 Interrupt and exception handling
 Out-of-order and superscalar execution concepts
 Download address: ftp://ftp.cs.wisc.edu/sohi/papers/1995/ieee-proc.superscalar.pdf

James Smith, Gurindar S. Sohi

54 Abstract of the paper

Abstract: Superscalar processing is the latest in a long series of innovations aimed at producing ever-faster microprocessors. By exploiting instruction-level parallelism, superscalar processors are capable of executing more than one instruction in a clock cycle. This paper discusses the microarchitecture of superscalar processors. We begin with a discussion of the general problem solved by superscalar processors: converting an ostensibly sequential program into a more parallel one. The principles underlying this process, and the constraints that must be met, are discussed. The paper then provides a description of the specific implementation techniques used in the important phases of superscalar processing. The major phases include: i) instruction fetching and conditional branch processing, ii) the determination of data dependences involving register values, iii) the initiation, or issuing, of instructions for parallel execution, iv) the communication of data values through memory via loads and stores, and v) committing the process state in correct order so that precise interrupts can be supported. Examples of recent superscalar microprocessors, the MIPS R10000, the DEC 21164, and the AMD K5 are used to illustrate a variety of superscalar methods.

How to Handle Data Dependences

 Anti and output dependences are easier to handle
  Write to the destination in one stage and in program order

 Flow dependences are more interesting
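To make these dependence types concrete, here is a small illustrative sketch (register-name based; the function and all names are my own, not from the lecture) that classifies the dependence between two instructions in program order:

```python
# Illustrative sketch (not from the lecture): classify the dependence
# between two instructions, i1 before i2 in program order, by comparing
# their destination and source register names.
def classify(dst1, srcs1, dst2, srcs2):
    deps = []
    if dst1 in srcs2:
        deps.append("flow (RAW)")     # true dependence: i2 reads i1's result
    if dst2 in srcs1:
        deps.append("anti (WAR)")     # i2 overwrites a register i1 still reads
    if dst1 == dst2:
        deps.append("output (WAW)")   # both instructions write the same register
    return deps

# add r3, r1, r2  followed by  sub r4, r3, r1  -> flow dependence through r3
print(classify(dst1=3, srcs1={1, 2}, dst2=4, srcs2={3, 1}))
```

Anti and output dependences involve only register names, which is why renaming or in-order writeback handles them; only the flow (RAW) case involves an actual value that must be communicated.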

 Five fundamental ways of handling flow dependences
  Detect and wait until value is available in register file
  Detect and forward/bypass data to dependent instruction
  Detect and eliminate the dependence at the software level
   No need for the hardware to detect dependence
  Predict the needed value(s), execute "speculatively", and verify (value prediction)
  Do something else (fine-grained multithreading, e.g. GPUs)
   No need to detect

(Note: some of the problems can be solved in software, by the compiler; if the compiler cannot help, it inserts a no-op)

56 Last Value Predicted - making a prediction
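The "predict the needed value(s)" option - last value prediction, illustrated on the next slide - can be sketched as a PC-indexed table with a confidence counter. This is an illustrative sketch under assumed parameters (table organization, confidence threshold, and all names are mine, not the lecture's exact design):

```python
# Hypothetical last-value predictor sketch: a table indexed by the PC of
# a load stores the value it last returned plus a saturating confidence
# counter. A prediction is made only when confidence is saturated.
class LastValuePredictor:
    def __init__(self, max_confidence=3):
        self.table = {}                  # PC -> [last_value, confidence]
        self.max_confidence = max_confidence

    def predict(self, pc):
        """Return a predicted value if confidence is high enough, else None."""
        entry = self.table.get(pc)
        if entry is not None and entry[1] >= self.max_confidence:
            return entry[0]
        return None

    def update(self, pc, actual_value):
        """Train with the value the load actually returned; a wrong last
        value resets confidence to zero."""
        entry = self.table.setdefault(pc, [actual_value, 0])
        if entry[0] == actual_value:
            entry[1] = min(entry[1] + 1, self.max_confidence)
        else:
            entry[0] = actual_value
            entry[1] = 0

pred = LastValuePredictor()
for _ in range(4):
    pred.update(0x400, 42)      # the same load keeps returning 42
print(pred.predict(0x400))      # confident now -> predicts 42
```

Consumers that execute with a predicted value are speculative: the prediction must be verified against the actual loaded value, and the dependents squashed and re-executed on a mismatch.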

[Figure: last value predictor table, indexed by PC, storing the last loaded value and a confidence counter]

Interlocking

 Detection of dependence between instructions in a pipelined processor to guarantee correct execution

 Software based interlocking vs.  Hardware based interlocking

 MIPS acronym? (Microprocessor without Interlocked Pipeline Stages)

58 Credit: Wikipedia, https://en.wikipedia.org/wiki/Delay_slot

Load delay slot (in MIPS)

A load delay slot is an instruction which executes immediately after a load (of a register from memory) but does not see, and need not wait for, the result of the load. Load delay slots are very uncommon because load delays are highly unpredictable on modern hardware. A load may be satisfied from RAM or from a cache, and may be slowed by resource contention. Load delays were seen on very early RISC processor designs. The MIPS I ISA (implemented in the R2000 and R3000 microprocessors) suffers from this problem.

The following example is MIPS I assembly code, showing both a load delay slot and a branch delay slot.

    lw  v0,4(v1)   # load word from address v1+4 into v0
    nop            # wasted load delay slot
    jr  v0         # jump to the address specified by v0
    nop            # wasted branch delay slot

Credit: "See MIPS run" by Dominic Sweetman

Late data from load (load delay slot): Another consequence of the pipeline is that a load instruction's data arrives from the cache/memory system after the next instruction's ALU phase starts - so it is not possible to use the data from a load in the following instruction. (See Figure 1.4 for how this works.) The instruction position immediately after the load is called the load delay slot, and an optimizing compiler will try to do something useful with it. The assembler will hide this from you but may end up putting in a nop.

Approaches to Dependence Detection (I)

See next slide
 Each register in the register file has a Valid bit associated with it
 An instruction that is writing to the register resets the Valid bit
 An instruction in the Decode stage checks if all its source and destination registers are Valid
  Yes: No need to stall... no dependence
  No: Stall the instruction (the register in RF is not ready yet)

 Advantage:
  Simple: 1 bit per register

(Note: in the WB stage the Valid bit is set again - a green light at the traffic light. GO = 0, STALL = 1)

 Disadvantage:
  Need to stall for all types of dependences, not only flow dependences

(Note: we are forced to apply this to all types of dependences)

61 (Note: the register file is augmented with a bit vector)
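The Valid-bit ("scoreboard") scheme above can be sketched as follows - a minimal illustrative model, assuming a 32-register file and at most one in-flight writer per register (names and structure are mine, not the lecture's exact design):

```python
# Minimal illustrative sketch of Valid-bit dependence detection:
# one Valid bit per register in the register file.
NUM_REGS = 32
valid = [True] * NUM_REGS        # every register starts ready

def decode_can_issue(srcs, dst):
    """Decode-stage check: proceed only if all source and destination
    registers are Valid; otherwise the instruction must stall."""
    return all(valid[r] for r in srcs) and valid[dst]

def issue(dst):
    valid[dst] = False           # a writer is now in flight: reset Valid

def writeback(dst):
    valid[dst] = True            # WB stage sets the Valid bit again

issue(5)                                  # instruction writing r5 issues
print(decode_can_issue([5, 2], 3))        # False: r5 not ready -> stall
writeback(5)
print(decode_can_issue([5, 2], 3))        # True: dependence resolved
```

Note how checking the destination register as well is what makes anti and output dependences stall too - the disadvantage the slide points out.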

(1 bit per register)

Approaches to Dependence Detection (II)

 Combinational dependence check logic
  Special logic that checks if any instruction in later stages is supposed to write to any source register of the instruction that is being decoded
  Yes: stall the instruction/pipeline
  No: no need to stall... no flow dependence

 Advantage:  No need to stall on anti and output dependences

 Disadvantage:
  Logic is more complex than a scoreboard
  Logic becomes more complex as we make the pipeline deeper and wider (flash-forward: think superscalar execution)
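The combinational check just described might be sketched like this - an illustrative model only (a real design compares register fields held in pipeline latches with dedicated comparators; the function and names are assumptions):

```python
# Illustrative sketch of a combinational dependence check: compare the
# decoded instruction's source registers against the destination
# registers of instructions still in later pipeline stages.
def must_stall(decode_srcs, later_stage_dsts):
    """True if any later-stage instruction writes one of our sources
    (a flow dependence); anti and output dependences never stall here."""
    return any(dst in decode_srcs
               for dst in later_stage_dsts if dst is not None)

# EX writes r7, MEM writes r9; decoded instruction reads r7 -> stall
print(must_stall({7, 2}, [7, 9]))    # True
# Sources overlap no in-flight destination -> no stall
print(must_stall({3, 2}, [7, 9]))    # False
```

Because only source-against-destination comparisons are made, an instruction whose destination matches an in-flight destination (output dependence) or source (anti dependence) is free to proceed - the advantage over the scoreboard.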

63 (Note: an alternative to the scoreboard; no need to stall for anti dependences)

[Figure: producer and consumer instructions in the pipeline]

Once You Detect the Dependence in Hardware

 What do you do afterwards?

 Observation: Dependence between two instructions is detected before the communicated data value becomes available

 Option 1: Stall the dependent instruction right away
 Option 2: Stall the dependent instruction only when necessary -> data forwarding/bypassing
 Option 3: ...

(Note: we first need to define these concepts)

65 Forwarding (aka Bypassing)

 Use result when it is computed
  Don't wait for it to be stored in a register
  Requires extra connections in the datapath
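The forwarding idea can be sketched as an operand-select mux. This is a simplified model of an assumed 5-stage pipeline (the latch format and all names are illustrative, not from P&H's figures):

```python
# Simplified forwarding-mux sketch: take the ALU operand from the most
# recent in-flight producer instead of waiting for the register file.
def forward_operand(src_reg, ex_mem, mem_wb, regfile):
    """ex_mem / mem_wb model pipeline latches as (dest_reg, value)
    pairs, or None when the stage holds no register-writing result."""
    if ex_mem is not None and ex_mem[0] == src_reg:
        return ex_mem[1]         # forward from EX/MEM (youngest producer wins)
    if mem_wb is not None and mem_wb[0] == src_reg:
        return mem_wb[1]         # forward from MEM/WB
    return regfile[src_reg]      # no in-flight producer: read register file

regs = {1: 10, 2: 20}
# an older instruction computing r1 = 99 sits in the EX/MEM latch
print(forward_operand(1, ex_mem=(1, 99), mem_wb=None, regfile=regs))   # 99
print(forward_operand(2, ex_mem=(1, 99), mem_wb=None, regfile=regs))   # 20
```

The priority order (EX/MEM before MEM/WB) matters: when two in-flight instructions write the same register, the consumer must receive the younger value.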

Credit: P&H, CO&D, Chapter 4 - The Processor - 66

(Note: this lecture's slides end here.) We did not cover the following slides in lecture. These are for your preparation for the next lecture.

Data Forwarding/Bypassing

 Problem: A consumer (dependent) instruction has to wait in decode stage until the producer instruction writes its value in the register file

 Goal: We do not want to stall the pipeline unnecessarily

 Observation: The data value needed by the consumer instruction can be supplied directly from a later stage in the pipeline (instead of only from the register file)

 Idea: Add additional dependence check logic and data forwarding paths (buses) to supply the producer’s value to the consumer right after the value is available

 Benefit: Consumer can move in the pipeline until the point the value can be supplied -> less stalling

69 A Special Case of Data Dependence

 Control dependence
  Data dependence on the Instruction Pointer / Program Counter

70 Control Dependence

 Question: What should the fetch PC be in the next cycle?
 Answer: The address of the next instruction
  All instructions are control dependent on previous ones. Why?

 If the fetched instruction is a non-control-flow instruction:
  Next Fetch PC is the address of the next-sequential instruction
  Easy to determine if we know the size of the fetched instruction

 If the instruction that is fetched is a control-flow instruction:
  How do we determine the next Fetch PC?

 In fact, how do we know whether or not the fetched instruction is a control-flow instruction? 71
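The next-fetch-PC question above can be sketched as follows, assuming fixed 4-byte instructions as in MIPS (the function and its parameters are illustrative, not from the lecture):

```python
# Illustrative sketch: selecting the next fetch PC. Fixed 4-byte
# instructions assumed (as in MIPS), so the sequential case is easy.
INSTR_SIZE = 4

def next_fetch_pc(pc, is_control_flow, taken=False, target=None):
    if not is_control_flow:
        return pc + INSTR_SIZE      # non-control-flow: next-sequential address
    if taken:
        return target               # taken branch/jump: redirect to the target
    return pc + INSTR_SIZE          # not-taken branch: fall through

print(hex(next_fetch_pc(0x1000, is_control_flow=False)))             # 0x1004
print(hex(next_fetch_pc(0x1000, True, taken=True, target=0x2000)))   # 0x2000
```

The difficulty the slide points at is that at fetch time a real pipeline knows neither `is_control_flow` nor `taken` - supplying good guesses for both is exactly the job of branch prediction, the topic of the next lecture.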