CSE 560 Computer Systems Architecture: Superscalar Execution

Remainder of CSE 560: Parallelism

• Last unit: pipeline-level parallelism
  • Execute one instruction in parallel with decode of the next
• Next: instruction-level parallelism (ILP)
  • Execute multiple independent instructions fully in parallel
  • Today: multiple issue
  • In a few weeks: dynamic scheduling
    • Extract much more ILP via out-of-order processing
• Data-level parallelism (DLP)
  • Single instruction, multiple data
  • Ex: one instruction, four 16-bit adds (using 64-bit registers)
• Thread-level parallelism (TLP)
  • Multiple software threads running on multiple cores

This Unit: Superscalar Execution

[Figure: system stack with applications, system software, and Mem/CPU/I/O hardware]

• Superscalar scaling issues
  • Multiple fetch and branch prediction
  • Dependence checks & stall logic
  • Wide bypassing
  • Register file & cache bandwidth
• Multiple-issue designs
  • Superscalar
  • VLIW and EPIC (Itanium)

Scalar Pipeline and the Flynn Bottleneck

[Figure: scalar pipeline datapath with I$, branch predictor (BP), register file, and D$]

• So far we have looked at scalar pipelines: 1 instruction per stage (+ control speculation, bypassing, etc.)
  – Performance limit (aka the "Flynn Bottleneck") is CPI = IPC = 1
  – The limit is never even achieved (hazards)
  – Diminishing returns from "super-pipelining" (hazards + overhead)

Multiple-Issue Pipeline

[Figure: multiple-issue pipeline datapath with I$, branch predictor (BP), register file, and D$]

• Overcome this limit using multiple issue
  • Also called superscalar
  • Two instructions per stage at once (or 3 or 4 or 8…)
• "Instruction-Level Parallelism (ILP)" [Fisher, IEEE TC'81]
• Today, typically "4-wide" (Intel Core 2, AMD Opteron)
  • Some more (Power5 is 5-issue; Itanium is 6-issue)
  • Some less (dual-issue is common for simple cores)

Superscalar Pipeline Diagrams - Ideal

• Instruction sequence: lw 0(r1) → r2; lw 4(r1) → r3; lw 8(r1) → r4; add r14,r15 → r6; add r12,r13 → r7; add r17,r16 → r8; lw 0(r18) → r9 (each passes through F, D, X, M, W with no stalls)
• Scalar pipeline: one instruction enters the pipeline per cycle, so the seven instructions take 11 cycles
• 2-way superscalar: instructions enter in pairs, so the same seven instructions take 8 cycles

Superscalar Pipeline Diagrams - Realistic

• Now the add that reads r4 depends on the load of r4, so load-use stalls appear (d* marks a cycle stalled on a data hazard; p* marks a stall propagated from a stalled older instruction)
• Scalar pipeline:
  lw 0(r1) → r2    F D X M W
  lw 4(r1) → r3    F D X M W
  lw 8(r1) → r4    F D X M W
  add r4,r5 → r6   F d* D X M W
  add r2,r3 → r7   F D X M W
  add r7,r6 → r8   F D X M W
  lw 0(r8) → r9    F D X M W
• 2-way superscalar:
  lw 0(r1) → r2    F D X M W
  lw 4(r1) → r3    F D X M W
  lw 8(r1) → r4    F D X M W
  add r4,r5 → r6   F d* d* D X M W
  add r2,r3 → r7   F p* D X M W
  add r7,r6 → r8   F D X M W
  lw 0(r8) → r9    F d* D X M W
• The wider pipeline fetches dependent instructions closer together in time, so the same hazards cost more: the dependent add stalls twice instead of once, and the final load now stalls as well

Superscalar CPI Calculations

• Base CPI for a scalar pipeline is 1
• Base CPI for an N-way superscalar pipeline is 1/N
  – Amplifies stall penalties
  – Assumes no data stalls (an overly optimistic assumption)
• Example: branch penalty calculation
  • 20% branches, 75% taken, no explicit branch prediction, 2-cycle penalty
  • Scalar pipeline: 1 + 0.2 × 0.75 × 2 = 1.3 → 1.3/1 = 1.3 → 30% slowdown
  • 2-way superscalar: 0.5 + 0.2 × 0.75 × 2 = 0.8 → 0.8/0.5 = 1.6 → 60% slowdown
  • 4-way superscalar: 0.25 + 0.2 × 0.75 × 2 = 0.55 → 0.55/0.25 = 2.2 → 120% slowdown
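The branch-penalty arithmetic above can be checked mechanically. Below is a minimal Python sketch (the function name `superscalar_cpi` and its defaults are my own, not from the slides) that reproduces the 30% / 60% / 120% slowdowns for 1-, 2-, and 4-wide pipelines under the stated assumptions (20% branches, 75% taken, 2-cycle penalty).

```python
def superscalar_cpi(width, branch_frac=0.20, taken_frac=0.75, penalty=2):
    """CPI of an N-wide pipeline: ideal 1/N plus the per-instruction branch penalty."""
    return 1.0 / width + branch_frac * taken_frac * penalty

for width in (1, 2, 4):
    cpi = superscalar_cpi(width)
    slowdown = cpi / (1.0 / width)   # relative to the ideal CPI of 1/N
    print(f"{width}-wide: CPI = {cpi:.2f}, {(slowdown - 1) * 100:.0f}% slower than ideal")
```

Note that the absolute penalty term is the same in every case; it only looks worse for wider machines because the ideal CPI it is added to keeps shrinking.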
A Typical Dual-Issue Pipeline (1)

[Figure: dual-issue pipeline datapath with I$, branch predictor (BP), register file, and D$]

• Fetch an entire 16B or 32B cache block
  • 4 to 8 instructions (assuming 4-byte fixed-length instructions)
  • Predict a single branch per cycle
• Parallel decode
  • Need to check for conflicting instructions
    • e.g., the output of I1 is an input to I2
  • Other stalls, too (for example, load-use delay)

A Typical Dual-Issue Pipeline (2)

• Multi-ported register file
  • Larger area, latency, power, cost, complexity
• Multiple execution units
  • Simple adders are easy, but bypass paths are expensive
• Memory unit
  • 1 load per cycle (stall at decode) is probably okay for dual issue
  • Alternative: add a read port to the data cache
    • Larger area, latency, power, cost, complexity

Superscalar Challenges - Front End

• Wide instruction fetch
  • Modest: need multiple instructions per cycle
  • Aggressive: predict multiple branches
• Wide instruction decode
  • Replicate decoders
• Wide instruction issue
  • Determine when instructions can proceed in parallel
  • Not all combinations possible
  • More complex stall logic - order N^2 for an N-wide machine
• Wide register read
  • One port for each register read
  • Each port needs its own set of address and data wires
  • Example: 4-wide superscalar → 8 read ports

Superscalar Challenges - Back End

• Wide instruction execution
  • Replicate arithmetic units
  • Perhaps multiple cache ports
• Wide bypass paths
  • More possible sources for data values
  • Order (N^2 × P) for an N-wide machine with execute pipeline depth P
• Wide instruction register writeback
  • One write port per instruction that writes a register
  • Example: 4-wide superscalar → 4 write ports
• Fundamental challenge:
  • Amount of ILP (instruction-level parallelism) in the program
  • Compiler must schedule code and extract parallelism

How Much ILP is There?

• The compiler tries to "schedule" code to avoid stalls
  • Hard for scalar machines (to fill the load-use delay slot)
  • Even harder for multiple-issue (superscalar) machines
• Even given unbounded ILP, superscalar has limits
  • IPC (or CPI) vs. clock frequency trade-off
• Given these challenges, what is a reasonable N? 3 or 4 today

Wide Decode

• What is involved in decoding multiple (N) insns per cycle?
• Actually doing the decoding?
  • Easy if fixed length (multiple decoders), doable if variable
• Reading input registers?
  – 2N register read ports (latency ∝ #ports)
  + Actually < 2N, since most values come from bypasses (more later)
• What about the stall logic?

N^2 Dependence Cross-Check

• Stall logic for a 1-wide pipeline with full bypassing
  • Full bypassing → load/use stalls only:
    X/M.op==LOAD && (D/X.rs1==X/M.rd || D/X.rs2==X/M.rd)
  • Two "terms": ∝ 2N
• Now: the same logic for a 2-wide pipeline:
    X/M1.op==LOAD && (D/X1.rs1==X/M1.rd || D/X1.rs2==X/M1.rd) ||
    X/M1.op==LOAD && (D/X2.rs1==X/M1.rd || D/X2.rs2==X/M1.rd) ||
    X/M2.op==LOAD && (D/X1.rs1==X/M2.rd || D/X1.rs2==X/M2.rd) ||
    X/M2.op==LOAD && (D/X2.rs1==X/M2.rd || D/X2.rs2==X/M2.rd)
  • Eight "terms": ∝ 2N^2
  • This is the N^2 dependence cross-check
• Not quite done: also need to check within the decode group
  • D/X2.rs1==D/X1.rd || D/X2.rs2==D/X1.rd (the second instruction depends on the first)
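To make the 2N^2 growth concrete, here is a small Python sketch (my own illustration, not part of the slides) that enumerates the load-use comparison terms a hypothetical N-wide decode stage would evaluate: each younger instruction's two source registers against each older load's destination.

```python
from itertools import product

def load_use_checks(width):
    """List the load-use comparisons an N-wide decode stage must make:
    every D/X source register against every X/M load's destination."""
    terms = []
    for m, d in product(range(1, width + 1), repeat=2):
        for src in ("rs1", "rs2"):
            terms.append(f"X/M{m}.op==LOAD && D/X{d}.{src}==X/M{m}.rd")
    return terms

for n in (1, 2, 4):
    print(f"{n}-wide: {len(load_use_checks(n))} comparison terms")
# 1-wide: 2 terms, 2-wide: 8, 4-wide: 32 -- the 2*N^2 growth from the slide
```

The intra-group checks (younger instruction reading an older instruction's result in the same decode group) add further terms on top of these, but they grow the same quadratic way.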
Wide Execute

• What is involved in executing N insns per cycle?
• Multiple execution units … N of every kind?
  • N ALUs? OK, ALUs are small
  • N FP dividers? No: FP dividers are huge, and fdiv is uncommon
  • How many branches per cycle? How many loads/stores per cycle?
  • Typically a mix of functional units proportional to the insn mix
    • Intel Pentium: 1 any + 1 ALU
    • Alpha 21164: 2 integer (including 2 loads) + 2 FP

Wide Memory Access

• What about multiple loads/stores per cycle?
  • Probably only necessary on processors 4-wide or wider
• More important to support multiple loads than stores
  • Insn mix: loads (~20–25%), stores (~10–15%)
• Alpha 21164: two loads or one store per cycle

Wide Register Read/Write

• How many register file ports to execute N insns per cycle?
• Nominally, 2N read + N write (2 read + 1 write per insn)
  – Latency and area ∝ (#ports)^2
• In reality, fewer than that
  • Read ports: many values come from the bypass network or immediates
  • Write ports: stores and branches (~35% of insns) don't write registers
• Replication works great for regfiles (used in the Alpha 21164)

Wide Bypass

• N^2 bypass network
  – N+1 input muxes at each ALU input
  – N^2 point-to-point connections
  – Routing lengthens wires
  – Expensive metal layer crossings
• And this is just one bypass stage (MX)!
  • There is also WX bypassing
  • Even more for deeper pipelines
• One of the big problems of superscalar
• Implemented as bit-slicing
  • 64 1-bit bypass networks
  • Mitigates the routing problem somewhat

Not All N^2 Created Equal

• N^2 bypass vs. N^2 stall logic & dependence cross-check: which is the bigger problem?
• The N^2 bypass … by far
  • 32- or 64-bit quantities (vs. 5-bit register names)
  • Multiple bypass levels (MX, WX) vs. 1 level of stall logic
  • Must fit in one clock period with the ALU (vs. not)
• The dependence cross-check is not even the 2nd biggest N^2 problem
  • The regfile is also an N^2 problem (think latency, where N is #ports)
  • And it is also more serious than the cross-check

Avoid N^2 Bypass/RegFile: Clustering

[Figure: two execution clusters, each with its own register file (RF0, RF1), sharing the data memory]

• Clustering: group ALUs into K clusters
  • Full bypassing within a cluster, limited bypassing between clusters
    • Get values from the regfile with a 1-2 cycle delay
  + Only N/K non-regfile inputs at each mux, N^2/K point-to-point paths
• Key to performance: steering dependent insns to the same cluster
  • Hurts IPC, helps clock frequency (or allows wider issue at the same clock)
• Typically uses replicated register files (1 per cluster)
• Alpha 21264: 4-way superscalar, two clusters

Wide Non-Sequential Fetch

• Two related questions
  • How many branches are predicted per cycle?
  • Can we fetch across a branch if it is predicted "taken"?
• Simplest, most common organization: "1" and "No"
  • 1 prediction, discard post-branch insns if the prediction is "taken"
  – Lowers effective fetch width and IPC
• Average number of instructions per taken branch?
  • Assume 20% branches, 50% taken → ~10 instructions
• Consider a 10-instruction loop body on an 8-issue processor
  • Without smarter fetch, ILP is limited to 5 (not 8): fetch alternates between groups of 8 and 2 instructions, averaging 5 per cycle
• The compiler can help
  • Reduce taken-branch frequency (e.g., unroll loops)

Parallel Non-Sequential Fetch

[Figure: fetch from two instruction-cache banks (b0, b1) under one branch predictor]

• Allowing "embedded" taken branches is possible

Multiple-issue CISC

• How do we apply superscalar techniques to CISC?
• Break "macro-ops" into "micro-ops"
  • Also called "µops" or "RISC-ops"
  • A typical CISCy instruction "add [r1], [r2] → [r3]" becomes:
    • Load [r1] → t1 (t1 = temp. register) …
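The slide's micro-op list is cut off above. As an illustration of what the full cracking might look like (the temporary names t1-t3 follow the slide's first micro-op; the helper function is my own, not from the slides), a memory-memory add typically becomes two loads, one ALU add, and one store:

```python
def crack_mem_mem_add(src1, src2, dst):
    """Crack a memory-memory add into RISC-like micro-ops via temp registers."""
    return [
        f"load  [{src1}] -> t1",      # read the first memory operand
        f"load  [{src2}] -> t2",      # read the second memory operand
        f"add   t1, t2  -> t3",       # the actual ALU work
        f"store t3      -> [{dst}]",  # write the result back to memory
    ]

for uop in crack_mem_mem_add("r1", "r2", "r3"):
    print(uop)
```

Once cracked, the resulting RISC-like micro-ops can be issued, bypassed, and stalled by exactly the superscalar machinery described in the rest of this unit.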
