18-447 Lecture 8: Microprogramming and Pipelined Processing

Prof. Onur Mutlu Carnegie Mellon University Spring 2012, 2/13/2012

Reminder: Homeworks

 Homework 2

 Due today

 ISA concepts, ISA vs. microarchitecture, microcoded machines

 Homework 3

 Will be out tomorrow

2 Homework 1 Grades

[Histogram: Number of Students vs. Grade, bins from 50 to 110]

Average 100, Median 103, Max 110, Min 55, Max Possible Points 110, Total number of students 56

3 Reminder: Lab Assignments

 Getting your Lab 1 fully correct

 We will allow resubmission once, just for the purposes of testing the correctness of your revised code (no regrading)

 Lab Assignment 2

 Due Friday, Feb 17, at the end of the lab

 Individual assignment

 No collaboration; please respect the honor code

 Lab Assignment 3

 Will be out Wednesday

4 Reminder: Extra Credit for Lab Assignment 2

 Complete your normal (single-cycle) implementation first, and get it checked off in lab.

 Then, implement the MIPS core using a microcoded approach similar to what we are discussing in class.

 We are not specifying any particular details of the format or the microarchitecture; you should be creative.

 For the extra credit, the microcoded implementation should execute the same programs that your ordinary implementation does, and you should demo it by the normal lab deadline.

5 Readings for Today

 Pipelining

 P&H Chapter 4.5-4.8

6 Readings for Next Lecture

 Required

 Pipelined LC-3b Microarchitecture Handout

 Optional

 Hamacher et al. book, Chapter 6, “Pipelining”

7 Announcement: Discussion Sessions

 Lab sessions are really discussion sessions

 TAs will lead recitations

 Go over past lectures

 Answer and ask questions

 Solve problems and homeworks

 Please attend any session you wish

 Tue 10:30am-1:20pm (Chris)

 Thu 1:30-4:20pm (Lavanya)

 Fri 6:30-9:20pm (Abeer)

8 An Exercise in Microcoding

9 A Simple LC-3b Control and Datapath

10 [Figure C.2: A state machine for the LC-3b]

State Machine for LDW

[Figure C.5: The microsequencer of the LC-3b base machine. Inputs: COND1, COND0; BEN, R, IR[11] (Branch, Ready, Addr. Mode); J[5:0]; IRD; 0,0,IR[15:12]. Output: the 6-bit address of the next state]

[LDW walkthrough highlights: State 18 (010010), State 33 (100001), State 35 (100011), State 32 (100000), State 6 (000110), State 25 (011001), State 27 (011011)]

If, somehow, the instruction inadvertently contained IR[15:12] = 1010 or 1011, the unused opcodes, the microarchitecture would execute a sequence of microinstructions, starting at state 10 or state 11, depending on which illegal opcode was being decoded. In both cases, the sequence of microinstructions would respond to the fact that an instruction with an illegal opcode had been fetched. Several signals necessary to control the data path and the microsequencer are not among those listed in Tables C.1 and C.2. They are DR, SR1, BEN, and R. Figure C.6 shows the additional logic needed to generate DR, SR1, and BEN. The remaining signal, R, is a signal generated by the memory in order to allow the LC-3b to operate correctly with a memory that takes multiple clock cycles to read or store a value.

[Figure C.6: Additional logic required to provide control signals. (a) DRMUX selects IR[11:9] or 111 to form DR; (b) SR1MUX selects IR[11:9] or IR[8:6] to form SR1; (c) BEN is generated from IR[11:9] and the condition codes N, Z, P]

Suppose it takes memory five cycles to read a value. That is, once MAR contains the address to be read and the microinstruction asserts READ, it will take five cycles before the contents of the specified location in memory are available to be loaded into MDR. (Note that the microinstruction asserts READ by means of three control signals: MIO.EN/YES, R.W/RD, and DATA.SIZE/WORD; see Figure C.3.) Recall our discussion in Section C.2 of the function of state 33, which accesses an instruction from memory during the fetch phase of each instruction cycle. For the LC-3b to operate correctly, state 33 must execute five times before moving on to state 35. That is, until MDR contains valid data from the memory location specified by the contents of MAR, we want state 33 to continue to re-execute. After five clock cycles, the memory has completed the “read,” resulting in valid data in MDR, so the processor can move on to state 35. What if the microarchitecture did not wait for the memory to complete the read operation before moving on to state 35? Since the contents of MDR would still be garbage, the microarchitecture would put garbage into IR in state 35. The ready signal (R) enables the memory read to execute correctly. Since the memory knows it needs five clock cycles to complete the read, it asserts a ready signal (R) throughout the fifth clock cycle. Figure C.2 shows that the next state is 33 (i.e., 100001) if the memory read will not complete in the current clock cycle and state 35 (i.e., 100011) if it will. As we have seen, it is the job of the microsequencer (Figure C.5) to produce the next state address.
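To make the next-state selection concrete, here is a minimal C sketch of the Figure C.5 next-state logic, assuming the COND encoding used in the handout (00 = unconditional, 01 = memory ready, 10 = branch, 11 = addressing mode). It is only an illustration, not the hardware description you would write for the lab.

#include <stdint.h>

/* Illustrative sketch of the Figure C.5 next-state logic (not the lab code). */
uint8_t next_state_address(uint8_t ird,   /* IRD bit of current microinstruction */
                           uint8_t cond,  /* COND[1:0]                           */
                           uint8_t j,     /* J[5:0] field                        */
                           uint8_t ben,   /* branch enable                       */
                           uint8_t r,     /* memory ready                        */
                           uint16_t ir)   /* instruction register                */
{
    if (ird)                                 /* DECODE: 16-way opcode branch     */
        return (ir >> 12) & 0xF;             /* 0,0,IR[15:12]                    */

    uint8_t addr = j & 0x3F;
    if (cond == 0x2 && ben)                  /* COND = branch, BEN asserted      */
        addr |= 1 << 2;                      /* modify J[2]                      */
    if (cond == 0x1 && r)                    /* COND = memory ready, R asserted  */
        addr |= 1 << 1;                      /* modify J[1]                      */
    if (cond == 0x3 && ((ir >> 11) & 1))     /* COND = addressing mode, IR[11]   */
        addr |= 1 << 0;                      /* modify J[0]                      */
    return addr;                             /* 6-bit address of next state      */
}

For example, state 33 has J = 100001 and COND = memory ready: while R = 0 the computed next address is again 100001 (state 33), and once the memory asserts R, bit J[1] is set and the next address becomes 100011 (state 35), exactly the behavior described above.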


[Figure C.4: The control structure of a microprogrammed implementation, overall block diagram. R, IR[15:11], and BEN feed the Microsequencer, which produces the 6-bit address of the next microinstruction; the 2^6 x 35 Control Store supplies the 35-bit Microinstruction, of which 26 bits control the datapath and 9 bits (J, COND, IRD) feed back to the microsequencer]

State 32 carries out the DECODE phase of the instruction cycle. If the IRD control signal in the microinstruction corresponding to state 32 is 1, the output MUX of the microsequencer (Figure C.5) will take its source from the six bits formed by 00 concatenated with the four opcode bits IR[15:12]. Since IR[15:12] specifies the opcode of the current LC-3b instruction being processed, the next address of the control store will be one of 16 addresses, corresponding to the 14 opcodes plus the two unused opcodes, IR[15:12] = 1010 and 1011. That is, each of the 16 next states is the first state to be carried out after the instruction has been decoded in state 32. For example, if the instruction being processed is ADD, the address of the next state is state 1, whose microinstruction is stored at location 000001. Recall that IR[15:12] for ADD is 0001.


[Figure C.7: Specification of the control store. Rows: states 0 through 63 (addresses 000000 through 111111); columns: the microinstruction control signals listed in Tables C.1 and C.2]

The Microsequencer: Some Questions

 When is the IRD signal asserted?

 What happens if an illegal instruction is decoded?

 What are condition (COND) bits for?

 How is variable latency memory handled?

 How do you do the state encoding?

 Minimize number of state variables

 Start with the 16-way branch

 Then determine constraint tables and states dependent on COND

20 The Control Store: Some Questions

 What control signals can be stored in the control store?

vs.

 What control signals have to be generated in hardwired logic?

 i.e., what signal cannot be available without processing in the datapath?

21 Variable-Latency Memory

 The ready signal (R) enables memory read/write to execute correctly

 Example: transition from state 33 to state 35 is controlled by the R bit asserted by memory when memory data is available

 Could we have done this in a single-cycle microarchitecture?

22 The Microsequencer: Advanced Questions

 What happens if the machine is interrupted?

 What if an instruction generates an exception?

 How can you implement a complex instruction using this control structure?

 Think REP MOVS

23 The Power of Abstraction

 The concept of a control store of microinstructions provides the hardware designer with a new abstraction: microprogramming

 The designer can translate any desired operation into a sequence of microinstructions (see the sketch after this list)

 All the designer needs to provide is

 The sequence of microinstructions needed to implement the desired operation

 The ability for the control logic to correctly sequence through the microinstructions

 Any additional datapath control signals needed (no need if the operation can be “translated” into existing control signals)
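As a concrete, purely hypothetical illustration of this abstraction, the sketch below models a control store in C: each entry bundles some datapath control information with sequencing information, and a tiny loop plays the role of the control logic. The field names and the three register transfers are invented for illustration; they are not the LC-3b's actual control signals.

#include <stdio.h>

/* Hypothetical microinstruction format: datapath control is abbreviated to a
 * descriptive string; sequencing is a next-address field plus a done flag. */
typedef struct {
    const char *datapath_control;   /* stand-in for the datapath control bits      */
    int         next;               /* address of the next microinstruction        */
    int         done;               /* 1 = last microinstruction of this operation */
} MicroInstr;

/* A tiny microprogram implementing one made-up multi-step operation. */
static const MicroInstr control_store[] = {
    /* 0 */ { "MAR <- address",  1, 0 },
    /* 1 */ { "MDR <- MEM[MAR]", 2, 0 },
    /* 2 */ { "DR  <- MDR",      0, 1 },
};

int main(void)
{
    int upc = 0;                              /* microprogram counter */
    for (;;) {
        const MicroInstr *ui = &control_store[upc];
        printf("uPC %d: %s\n", upc, ui->datapath_control);
        if (ui->done)
            break;                            /* operation complete */
        upc = ui->next;                       /* sequence to the next microinstruction */
    }
    return 0;
}

Adding a new operation amounts to adding rows to control_store; the sequencing loop itself does not change.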

24 Let’s Do Some Microprogramming

 Implement REP MOVS in the LC-3b microarchitecture

 What changes, if any, do you make to the

 state machine?

 datapath?

 control store?

 microsequencer?

 Show all changes and microinstructions

 Coming up in Homework 3

25 Aside: Alignment Correction in Memory

 Remember unaligned accesses

 LC-3b has byte load and byte store instructions that move data not aligned at the word-address boundary

 Convenience to the programmer/compiler

 How does the hardware ensure this works correctly?

 Take a look at state 29 for LDB

 State 17 for STB

 Additional logic to handle unaligned accesses

26 Aside: Memory Mapped I/O

 Address control logic determines whether the address specified by an LDx or STx instruction refers to memory or to an I/O device

 Correspondingly enables memory or I/O devices and sets up muxes

 Another instance where the final control signals (e.g., MEM.EN or INMUX/2) cannot be stored in the control store

 Dependent on address

27 Advantages of Microprogrammed Control

 Allows a very simple datapath to do powerful computation by controlling the datapath (using a sequencer)

 High-level ISA translated into microcode (sequence of microinstructions)

 Microcode enables a minimal datapath to emulate an ISA

 Microinstructions can be thought of as a user-invisible ISA

 Enables easy extensibility of the ISA

 Can support a new instruction by changing the ucode

 Can support complex instructions as a sequence of simple microinstructions

 If I can sequence an arbitrary instruction then I can sequence an arbitrary “program” as a microprogram sequence

 will need some new state (e.g. loop counters) in the microcode for sequencing more elaborate programs

28 Update of Machine Behavior

 The ability to update/patch microcode in the field (after a processor is shipped) enables

 Ability to add new instructions without changing the processor!

 Ability to “fix” buggy hardware implementations

 Examples

 IBM 370 Model 145: microcode stored in main memory, can be updated after a reboot

 B1700 microcode can be updated while the processor is running

 User-microprogrammable machine!

29 Microcoded Multi-Cycle MIPS Design

 P&H, Appendix D

 Any ISA can be implemented this way

 We will not cover this in class

 However, you can do an extra credit assignment for Lab 2

30 Microcoded Multi-Cycle MIPS Design

[Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.] 31 Control Logic for MIPS FSM

[Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.] 32 Microprogrammed Control for MIPS FSM

[Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.] 33 Horizontal Microcode

[Figure: horizontal microcode organization. Microcode storage, indexed by an n-bit µPC, directly outputs the k-bit datapath control signals (PCWrite, PCWriteCond, IorD, IRWrite, ..., ALUSrcA, one bit each) plus sequencing control; a microprogram counter and address select logic, with inputs from the opcode field, produce the next µPC. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

Control Store: 2^n x k bits (not including sequencing)

34 Vertical Microcode

1-bit signal means do this RT (or combination of RTs): “PC ← PC + 4”, “PC ← ALUOut”, “PC ← {PC[31:28], IR[25:0], 2'b00}”, “IR ← MEM[PC]”, “A ← RF[IR[25:21]]”, “B ← RF[IR[20:16]]”, ...

[Figure: vertical microcode organization. Microcode storage, indexed by an n-bit µPC, outputs an m-bit word (one bit per RT); a second ROM expands the m-bit word into the k-bit datapath control outputs (PCWriteCond, PCWrite, IRWrite, IorD, ..., ALUSrcA); a microprogram counter, adder, and address select logic, with inputs from the instruction register opcode field, produce the next µPC. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

If done right (i.e., m << n and m << k), the two-level organization needs far fewer control store bits than the horizontal one.
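A rough size comparison with hypothetical numbers (none of these values come from the figures): with n = 10 (1024 microinstructions), k = 64 control signals, and m = 6, a horizontal control store needs 2^10 x 64 = 65,536 bits, while the two-level vertical organization needs 2^10 x 6 + 2^6 x 64 = 6,144 + 4,096 = 10,240 bits.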

35 Nanocode and Millicode

 Nanocode: a level below µcode

 µprogrammed control for sub-systems (e.g., a complicated floating-point module) that acts as a slave in a µcontrolled datapath

 Millicode: a level above µcode

 ISA-level subroutines hardcoded into a ROM that can be called by the µcontroller to handle complicated operations

 In both cases, we avoid complicating the main µcontroller

36 Nanocode Concept Illustrated

a “µcoded” processor implementation

[ROM + µPC driving the processor datapath]

a “µcoded” FPU implementation

[ROM + µPC driving an arithmetic datapath, embedded in the µcoded processor's datapath]

We refer to this as “nanocode” when a µcoded subsystem is embedded in a µcoded system

37 Multi-Cycle vs. Single-Cycle uArch

 Advantages

 Disadvantages

 You should be very familiar with this right now

38 Microprogrammed vs. Hardwired Control

 Advantages

 Disadvantages

 You should be very familiar with this right now

39 Can We Do Better?

 What limitations do you see with the multi-cycle design?

 Limited concurrency

 Some hardware resources are idle during different phases of instruction processing cycle

 “Fetch” logic is idle when an instruction is being “decoded” or “executed”

 Most of the datapath is idle when a memory access is happening

40 Can We Use the Idle Hardware to Improve Concurrency?

 Goal: Concurrency → throughput (more “work” completed in one cycle)

 Idea: When an instruction is using some resources in its processing phase, process other instructions on idle resources not needed by that instruction

 E.g., when an instruction is being decoded, fetch the next instruction

 E.g., when an instruction is being executed, decode another instruction

 E.g., when an instruction is accessing data memory (ld/st), execute the next instruction

 E.g., when an instruction is writing its result into the register file, access data memory for the next instruction

41 Pipelining: Basic Idea

 More systematically:

 Pipeline the execution of multiple instructions

 Analogy: “Assembly line processing” of instructions

 Idea:

 Divide the instruction processing cycle into distinct “stages” of processing

 Ensure there are enough hardware resources to process one instruction in each stage

 Process a different instruction in each stage

 Instructions consecutive in program order are processed in consecutive stages

 Benefit: Increases instruction processing throughput (1/CPI)

 Downside: Start thinking about this…

42 Example: Execution of Four Independent ADDs

 Multi-cycle: 4 cycles per instruction

I1: F D E W
I2:         F D E W
I3:                 F D E W
I4:                         F D E W
(Time →)

 Pipelined: 4 cycles per 4 instructions (steady state)

I1: F D E W
I2:   F D E W
I3:     F D E W
I4:       F D E W
(Time →)
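In general, with k ideal stages and N independent instructions, the multi-cycle design needs k x N cycles while the pipeline needs k + (N - 1): here 4 x 4 = 16 cycles versus 4 + 3 = 7 cycles for the four ADDs.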

43

The Laundry Analogy

[Timeline figure: laundry tasks A, B, C, D from 6 PM to 2 AM]

 “place one dirty load of clothes in the washer”

 “when the washer is finished, place the wet load in the dryer”

 “when the dryer is finished, take out the dry load and fold”

 “when folding is finished, ask your roommate (??) to put the clothes away”

- steps to do a load are sequentially dependent
- no dependence between different loads
- different steps do not share resources

44 Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

Pipelining Multiple Loads of Laundry

[Timeline figure: four loads A, B, C, D overlapped in time, 6 PM to 2 AM]

- 4 loads of laundry in parallel
- no additional resources
- throughput increased by 4
- latency per load is the same

45 Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

Pipelining Multiple Loads of Laundry: In Practice

[Timeline figure: overlapped loads, 6 PM to 2 AM]

the slowest step decides throughput

46 Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

Pipelining Multiple Loads of Laundry: In Practice

[Timeline figure: overlapped loads using two dryers, 6 PM to 2 AM]

Throughput restored (2 loads per hour) using 2 dryers

47 Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

We did not cover the following slides in lecture. These are for your preparation for the next lecture.

An Ideal Pipeline

 Goal: Increase throughput with little increase in cost (hardware cost, in case of instruction processing)

 Repetition of identical operations

 The same operation is repeated on a large number of different inputs

 Repetition of independent operations

 No dependencies between repeated operations

 Uniformly partitionable suboperations

 Processing can be evenly divided into uniform-latency suboperations (that do not share resources)

 Good examples: automobile assembly line, doing laundry

 What about instruction processing pipeline? 49 Ideal Pipelining

One stage (F,D,E,M,W), T ps: BW ≈ 1/T

Two stages, (F,D,E) and (M,W), T/2 ps each: BW ≈ 2/T

Three stages, (F,D), (E,M), (M,W), T/3 ps each: BW ≈ 3/T

50 More Realistic Pipeline: Throughput

 Nonpipelined version with delay T: BW = 1/(T + S), where S = latch delay

[one combinational block of T ps]

 k-stage pipelined version

BW(k-stage) = 1 / (T/k + S)

BW(max) = 1 / (1 gate delay + S)

[k stages of T/k ps each, with a latch of delay S between stages]
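For a rough illustration with hypothetical numbers: if T = 800 ps and S = 50 ps, the nonpipelined design gives BW = 1/850 ps ≈ 1.2 billion instructions/s, while a k = 5 stage pipeline gives BW = 1/(160 ps + 50 ps) ≈ 4.8 billion instructions/s, roughly a 4x (not 5x) improvement because the latch delay S is paid in every stage.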

51 More Realistic Pipeline: Cost

 Nonpipelined version with combinational cost G: Cost = G + L, where L = latch cost

[one block of G gates]

 k-stage pipelined version

Cost(k-stage) = G + L x k

[k stages of G/k gates each]
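For a rough illustration with hypothetical numbers: if G = 20,000 gates and each latch stage costs L = 500 gate-equivalents, the nonpipelined cost is 20,500 while a k = 5 stage pipeline costs 20,000 + 5 x 500 = 22,500, a modest increase for the throughput gained.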

52