Introduction to Instruction Scheduling


Introduction to Instruction Scheduling
15-745 Optimizing Compilers, Spring 2006
Peter Lee

Reminders
- Read: Ch. 17 (instruction scheduling)
- T3 due after Spring Break
- Watch the RSS feed for announcements about the reading list and project suggestions
- Contact Mike about proposal dates

Optimizations
IR-level optimizations are mostly machine independent; their goal is to eliminate computations. Instruction-level optimizations include register allocation (reducing data access cost) and instruction scheduling, today's topic.

Instruction scheduling
Assume all instructions are essential, i.e., we have finished optimizing the IR. Instruction scheduling attempts to rewrite the code for maximum instruction-level parallelism (ILP). Instruction scheduling (IS) is NP-complete (and bad in practice), so heuristics must be used. For example:

a = 1 + x;
b = 2 + y;
c = 3 + z;

Since all three instructions are independent, we can execute them in parallel, assuming adequate hardware processing resources.

Parallelism constraints
- Data-dependence constraints: if instruction A computes a value that is read by instruction B, then B can't execute before A is completed.
- Resource hazards: the finiteness of hardware (e.g., how many add circuits there are) means limited parallelism.

Hardware parallelism
Three forms of parallelism are found in modern hardware: pipelining, superscalar processing, and multiprocessing. Of these, the first two are commonly exploited by instruction scheduling.

Pipelining
The basic idea is to decompose an instruction's execution into a sequence of stages, so that multiple instruction executions can be overlapped; it is the same principle as the assembly line (hence the classic laundry picture). The classic 5-stage pipeline:
- IF: instruction fetch
- RF: decode and register fetch
- EX: execute on ALU
- ME: memory access
- WB: write back to register file

Pipelining speedup
Suppose a non-pipelined machine can execute an instruction in 5ns; this is a completion rate of 0.20 inst/ns. If each stage of the pipelined machine completes in 1ns, then it executes at a completion rate of 1.0 inst/ns (after 5ns to fill the pipeline): an N-fold improvement, where N is the number of stages! Pipelining works best if each stage takes the same amount of time (e.g., one clock cycle).

Pipelining illustration
[Figures: the standard von Neumann model vs. overlapped IF RF EX ME WB rows; in a given cycle, each instruction is in a different stage, but every stage is active, and the pipeline is "full".] The pipeline concept was first developed for the MIPS RISC processor by Hennessy et al., 1981.

In the ideal case, a pipelined processor will complete one instruction every cycle; this is the instruction throughput, which ideally is 1 IPC (instructions per cycle). Alternatively, we can talk in terms of CPI (cycles per instruction).
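As a worked version of the slide's arithmetic (a sketch; the 5ns and 1ns figures are the slide's):

$$\text{rate}_{\text{non-pipelined}} = \frac{1~\text{inst}}{5~\text{ns}} = 0.20~\text{inst/ns}, \qquad \text{rate}_{\text{pipelined}} = \frac{1~\text{inst}}{1~\text{ns}} = 1.0~\text{inst/ns}$$

$$\text{speedup} = \frac{1.0}{0.20} = 5 = N \quad (\text{the number of pipeline stages})$$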
Is more better?
If 5 stages is good, 8 should be better, right? And 20 even better than that! Not only do we get (potentially) more parallelism this way, but with smaller/simpler stages the clock rate can be even higher, so we win on both parallelism and cycle time.

More and more stages
Indeed, the MIPS R4000 used an 8-stage pipeline, and the Pentium 4 has 20 stages! Most consumers equate performance with clock rate, so this also sells more chips. However, the value of a stage is unclear if it is extremely simple, e.g., if it takes less than the time of an integer add.

Limitations of pipelining
- Since all stages execute in parallel, the cycle time is limited by the slowest stage.
- The deeper the pipeline, the longer it takes to fill.
- Amdahl's Law puts a limit on the available parallelism.
- Worst of all, pipeline hazards can cause the processor to suspend, or stall, progress temporarily. Stalls can have a devastating effect on performance.

Pipeline hazards
- One instruction may need the results of a previous instruction.
- A branch/jump target might not be known until later in the pipeline.
- An instruction may be waiting on a fetch from main memory.
Such inconsistencies in the pipeline that cause stalls are called hazards.
[Figure: the memory hierarchy, from CPU registers (8B, 3ns) through cache (8KB, 6ns) and main memory (60ns) to disk (8ms) and virtual memory; larger, slower, cheaper as we move away from the CPU.]

Superscalar processing
To get even more parallelism, we can go superscalar. The basic idea: multiple instructions proceed simultaneously through the same pipeline stages, i.e., multiple instructions occupy the same pipeline stage at the same time. This is accomplished by adding more hardware, both for parallel execution of stages and for dispatching instructions to them. Some versions of the PowerPC, for example, have 4 ALUs and 2 FPUs. The Intel i960 (1988) was the first commercial superscalar microprocessor; before that, 1960s-era Cray supercomputers also used the superscalar concept. [Figure: superscalar pipeline with two instructions in each IF RF EX ME WB stage.]

Scheduling complications
We would like to reorder/rewrite instructions so as to maximize parallelism, i.e., keep all hardware resources busy. The complications are:
1) data dependences
2) control dependences
3) limited hardware resources

Complication #1: Data dependences
We must be careful not to read or write a data location "too early". There are three kinds of dependence:
- True dependence (read-after-write): x = 1; y = x;
- Output dependence (write-after-write): x = 1; x = 2; (sometimes fixed via renaming)
- Anti-dependence (write-after-read): y = x; x = 1;

Data hazards
A true dependence costs pipeline time even when it is respected. In r1 = r2 + r3; r4 = r1 + r1, the value r2+r3 is available only after the EX stage (1 cycle later), and in r1 = [r2]; r4 = r1 + r1, the loaded value [r2] is available only after the ME stage (n cycles later), so the dependent instruction stalls.

Data dependences in practice
In practice, data dependences are extremely difficult to reason about:

x = a[i];
*p = 1;
y = *q;
*r = z;

There has been considerable research effort on alias analysis and pointer analysis, with limited success.

Register renaming
Sometimes data hazards are not "real", in the sense that a simple renaming of registers can eliminate them: for an output dependence, A and B both write; for an anti-dependence, A reads and B writes. The hardware can sometimes rename registers, thereby allowing reordering.

Renaming example (original; after renaming; after reordering)

r1 = r2 + 1         r7 = r2 + 1         r7 = r2 + 1
[fp+8] = r1         [fp+8] = r7         r1 = r3 + 2
r1 = r3 + 2         r1 = r3 + 2         [fp+8] = r7
[fp+12] = r1        [fp+12] = r1        [fp+12] = r1

Renaming the first definition of r1 to r7 removes the output and anti-dependences, so the instructions can be reordered. Some register allocators avoid quick re-use of registers, to get some of the benefit of renaming. Note that there is a phase-ordering problem: should scheduling run before or after register allocation?
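A compact C rendering of the three dependence kinds and the effect of renaming (a sketch; the variable names are invented for the example):

```c
#include <stdio.h>

int main(void) {
    int x, y, t;

    /* True dependence (read-after-write): the read of x must
       follow the write; no renaming can remove it. */
    x = 1;
    y = x;

    /* Anti-dependence (write-after-read) and output dependence
       (write-after-write): both are artifacts of reusing the name x. */
    t = x + y;   /* reads x */
    x = 2;       /* write-after-read on x; also write-after-write
                    with the earlier x = 1 */

    /* Renaming the second definition to a fresh name x2 removes both
       false dependences, so the two computations may be reordered. */
    int x2 = 2;

    printf("%d %d %d %d\n", y, t, x, x2);
    return 0;
}
```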
Complication #2: Control dependences
What do we do when we reach a conditional branch? An unconditional branch target is available only after the decode stage; a conditional branch target is available only after the execute stage.
- Option 1 (hardware solution): stall the pipeline.
- Option 2 (software solution): insert nop instructions.
Both make the code slower. We can try to schedule for multiple paths, but this is largely impractical for real programs.

Branch delay slots
Option 3: the branch takes effect only after its delay slots. This means some instructions get executed after the branch but before the branching takes effect. [Figure: a jump followed by delay-slot instructions inst x and inst y, then target instructions inst 2 and inst 3, overlapped in the pipeline.]

Branch prediction
Some current processors will speculatively execute past conditional branches: if the guess is correct, then great; if not, then the pipelines are flushed. The average number of instructions per basic block (in C code) is 5; what happens with a 20-stage pipeline?

Complication #3: Hardware
The hardware is finite (obviously). Also, for engineering reasons, there may be constraints on how the hardware resources may be used.

HW complication: Issue width
Superscalar allows more than one instruction to be "issued" to the processor in each cycle; however, there is a finite limit. [Figure: the same code scheduled with infinite issue width vs. issue width = 4.]

HW complication: Functional units
A 4-way superscalar might be limited in what it can issue, e.g., at most 2 integer, 1 memory, and 1 FP instruction per cycle. [Figure: original, unconstrained, and realistic schedules over integer, memory, and floating-point units, with the FP unit as the bottleneck.]

HW complication: Pipeline limits
Often there are restrictions on the pipelining within a functional unit, e.g., some FPUs allow a new floating-point division only once every 2 cycles.

Compiler or hardware?
It is possible to do some "on-the-fly" instruction re-ordering in hardware. This raises the question of whether it is better to do scheduling in the compiler or in hardware. In reality, a combination of compiler and hardware scheduling support will be used. The design space runs from compiler-centric to hardware-centric: very long instruction word (VLIW) [Itanium], in-order superscalar [early Pentium], and out-of-order superscalar [Pentium 4]. There is general agreement, however, that further improvements in superscalar control will be quite limited.

VLIW processors
Except for memory references, execution latencies for each functional unit are fixed, so the hardware typically checks data dependences dynamically only for memory references. The compiler is responsible for taking data dependences into account, and it must insert NOPs for empty slots in the very long instruction word (e.g., int, mem, and fp slots, for code like a = b + 1; c = a - d;).

Compiling for VLIW
- Idea: give full control of scheduling to the compiler.
- Why: the hardware can be simpler (and thus faster) if it isn't complicated by scheduling.
- How: expose all functional units via a "very long instruction word".
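A minimal sketch of what the compiler's output looks like for the slide's example, assuming a hypothetical 3-slot word (one integer, one memory, one FP slot); the register assignments and the whole encoding are invented for illustration:

```c
#include <stdio.h>

/* Hypothetical 3-slot VLIW word: one int, one mem, one fp operation.
   Slot contents are strings here; a real encoder would emit bits. */
typedef struct { const char *intop, *memop, *fpop; } vliw_word;

int main(void) {
    /* a = b + 1; c = a - d;  The second op depends on the first, so it
       cannot share a word with it; all unused slots hold NOPs. */
    vliw_word code[] = {
        { "add r1 = r2, 1",  "nop", "nop" },   /* a = b + 1 */
        { "sub r3 = r1, r4", "nop", "nop" },   /* c = a - d */
    };
    for (unsigned i = 0; i < sizeof code / sizeof code[0]; i++)
        printf("[ %-16s | %-4s | %-4s ]\n",
               code[i].intop, code[i].memop, code[i].fpop);
    return 0;
}
```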
Recommended publications
  • Integrating Program Optimizations and Transformations with the Scheduling of Instruction Level Parallelism*
    Integrating Program Optimizations and Transformations with the Scheduling of Instruction Level Parallelism* David A. Berson 1, Pohua Chang 1, Rajiv Gupta 2, Mary Lou Soffa 2. 1 Intel Corporation, Santa Clara, CA 95052. 2 University of Pittsburgh, Pittsburgh, PA 15260.

    Abstract. Code optimizations and restructuring transformations are typically applied before scheduling to improve the quality of generated code. However, in some cases, the optimizations and transformations do not lead to a better schedule or may even adversely affect the schedule. In particular, optimizations for redundancy elimination and restructuring transformations for increasing parallelism are often accompanied by an increase in register pressure. Therefore their application in situations where register pressure is already too high may result in the generation of additional spill code. In this paper we present an integrated approach to scheduling that enables the selective application of optimizations and restructuring transformations by the scheduler when it determines their application to be beneficial. The integration is necessary because information that is used to determine the effects of optimizations and transformations on the schedule is only available during instruction scheduling. Our integrated scheduling approach is applicable to various types of global scheduling techniques; in this paper we present an integrated algorithm for scheduling superblocks.

    1 Introduction. Compilers for multiple-issue architectures, such as superscalar and very long instruction word (VLIW) architectures, are typically divided into phases, with code optimizations, scheduling and register allocation being the latter phases. The importance of integrating these latter phases is growing with the recognition that the quality of code produced for parallel systems can be greatly improved through the sharing of information.
  • Optimal Software Pipelining: Integer Linear Programming Approach
    OPTIMAL SOFTWARE PIPELINING: INTEGER LINEAR PROGRAMMING APPROACH by Artour V. Stoutchinin, School of Computer Science, McGill University, Montréal. A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Science. Copyright © by Artour V. Stoutchinin.

    Acknowledgements. First, I thank my family: my Mom, Nina Stoutchinina, my brother, Mark Stoutchinine, and my sister-in-law, Nina Denisova, for their love, support, and infinite patience that comes with having to put up with someone like myself. I also thank my father, Viatcheslav Stoutchinin, who is no longer with us. Without them this thesis could not have happened.
  • Decoupled Software Pipelining with the Synchronization Array
    Decoupled Software Pipelining with the Synchronization Array. Ram Rangan, Neil Vachharajani, Manish Vachharajani, David I. August. Department of Computer Science, Princeton University. {ram, nvachhar, manishv, august}@cs.princeton.edu

    Abstract. Despite the success of instruction-level parallelism (ILP) optimizations in increasing the performance of microprocessors, certain codes remain elusive. In particular, codes containing recursive data structure (RDS) traversal loops have been largely immune to ILP optimizations, due to the fundamental serialization and variable latency of the loop-carried dependence through a pointer-chasing load. To address these and other situations, we introduce decoupled software pipelining (DSWP), a technique that statically splits a single-threaded sequential loop into multiple non-speculative threads, each of which performs useful computation essential for overall program correctness. The resulting threads execute on thread-parallel architectures such as simultaneous multithreaded (SMT) cores or chip multiprocessors (CMP), expose additional instruction-level parallelism.

    From the introduction: ...detrimentally affect run-time performance of the code. Out-of-order (OOO) execution mitigates this problem to an extent. Rather than stalling when it encounters a consumer whose dependences are not satisfied, the OOO processor will execute instructions from after the stalled consumer. Thus, the compiler can now safely assume average latency, since actual latencies longer than the average will not necessarily lead to execution stalls. Unfortunately, the predominant type of variable-latency instruction, memory loads, has worst-case latencies (i.e., cache-miss latencies) so large that it is often difficult to find sufficiently many independent instructions after a stalled consumer. As microprocessors become wider and the disparity between processor speeds and memory access latencies grows [8], this problem is exacerbated, since it is unlikely that dynamic scheduling windows will grow as fast as memory access latencies.
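A much-simplified software rendering of the DSWP idea (the paper uses a hardware synchronization array; the single-producer/single-consumer queue below is a stand-in, and all names are illustrative). One thread runs only the pointer-chasing traversal, the other does the per-node work, so the variable-latency loads stay off the work thread's critical path:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

typedef struct node { int val; struct node *next; } node;

#define QSIZE 1024                      /* power of two */
static node *queue[QSIZE];
static atomic_size_t head, tail;        /* monotonically increasing */

static void produce(node *n) {          /* traversal thread enqueues */
    size_t t = atomic_load_explicit(&tail, memory_order_relaxed);
    while (t - atomic_load_explicit(&head, memory_order_acquire) == QSIZE)
        ;                               /* queue full: spin */
    queue[t % QSIZE] = n;
    atomic_store_explicit(&tail, t + 1, memory_order_release);
}

static node *consume(void) {            /* work thread dequeues */
    size_t h = atomic_load_explicit(&head, memory_order_relaxed);
    while (atomic_load_explicit(&tail, memory_order_acquire) == h)
        ;                               /* queue empty: spin */
    node *n = queue[h % QSIZE];
    atomic_store_explicit(&head, h + 1, memory_order_release);
    return n;
}

static void *traverse(void *list) {     /* thread 1: only the serial
                                           pointer-chasing loads */
    for (node *n = list; n; n = n->next) produce(n);
    produce(NULL);                      /* sentinel: end of list */
    return NULL;
}

static void *work(void *arg) {          /* thread 2: off the critical path */
    (void)arg;
    long sum = 0;
    for (node *n; (n = consume()) != NULL; ) sum += n->val;
    printf("sum = %ld\n", sum);
    return NULL;
}

int main(void) {
    node c = {3, NULL}, b = {2, &c}, a = {1, &b};
    pthread_t t1, t2;
    pthread_create(&t1, NULL, traverse, &a);
    pthread_create(&t2, NULL, work, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```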
  • Computer Architecture: Static Instruction Scheduling
    Computer Architecture: Static Instruction Scheduling. Prof. Onur Mutlu (edited by Seth), Carnegie Mellon University.

    Key Questions. Q1. How do we find independent instructions to fetch/execute? Q2. How do we enable more compiler optimizations? e.g., common subexpression elimination, constant propagation, dead code elimination, redundancy elimination, ... Q3. How do we increase the instruction fetch rate? i.e., have the ability to fetch more instructions per cycle. A: Enabling the compiler to optimize across a larger number of instructions that will be executed straight line (without branches getting in the way) eases all of the above.

    VLIW (Very Long Instruction Word). Simple hardware with multiple function units: reduced hardware complexity; little or no scheduling done in hardware, e.g., in-order; hopefully, faster clock and less power. The compiler is required to group and schedule instructions (compare to OoO superscalar), with predicated instructions to help with scheduling (trace, etc.) and more registers (for software pipelining, etc.). Example machines: Multiflow, Cydra 5 (8-16 ops per VLIW); IA-64 (3 ops per bundle); TMS32xxxx (5+ ops per VLIW); Crusoe (4 ops per VLIW). Comparison between SS and VLIW: ...
  • VLIW Architectures Lisa Wu, Krste Asanovic
    CS252 Spring 2017 Graduate Computer Architecture, Lecture 10: VLIW Architectures. Lisa Wu, Krste Asanovic. http://inst.eecs.berkeley.edu/~cs252/sp17

    Last Time in Lecture 9: vector supercomputers; vector register versus vector memory; scaling performance with lanes; stripmining; chaining; masking; scatter/gather.

    Sequential ISA Bottleneck: a superscalar compiler translates sequential source code into sequential machine code; the superscalar processor must then find independent operations, check instruction dependencies, and schedule execution at run time.

    VLIW: Very Long Instruction Word. Example word: Int Op 1 | Int Op 2 | Mem Op 1 | Mem Op 2 | FP Op 1 | FP Op 2 (two integer units with single-cycle latency, two load/store units with three-cycle latency, two floating-point units with four-cycle latency). Multiple operations are packed into one instruction; each operation slot is for a fixed function; constant operation latencies are specified. The architecture requires a guarantee of parallelism within an instruction (no cross-operation RAW check) and no data use before data ready (no data interlocks).

    Early VLIW Machines: FPS AP120B (1976), a scientific attached array processor and the first commercial wide-instruction machine, with hand-coded vector math libraries using software pipelining and loop unrolling; Multiflow Trace (1987), a commercialization of ideas from Fisher's Yale group including "trace scheduling".
  • Static Instruction Scheduling for High Performance on Limited Hardware
    Static Instruction Scheduling for High Performance on Limited Hardware. Kim-Anh Tran, Trevor E. Carlson, Konstantinos Koukos, Magnus Själander, Vasileios Spiliopoulos, Stefanos Kaxiras, Alexandra Jimborean. IEEE Transactions on Computers, DOI 10.1109/TC.2017.2769641.

    Abstract—Complex out-of-order (OoO) processors have been designed to overcome the restrictions of outstanding long-latency misses at the cost of increased energy consumption. Simple, limited OoO processors are a compromise in terms of energy consumption and performance, as they have fewer hardware resources to tolerate the penalties of long-latency loads. In the worst case, these loads may stall the processor entirely. We present Clairvoyance, a compiler-based technique that generates code able to hide memory latency and better utilize simple OoO processors. By clustering loads found across basic block boundaries, Clairvoyance overlaps the outstanding latencies to increase memory-level parallelism. We show that these simple OoO processors, equipped with the appropriate compiler support, can effectively hide long-latency loads and achieve performance improvements for memory-bound applications. To this end, Clairvoyance tackles (i) statically unknown dependencies, (ii) insufficient independent instructions, and (iii) register pressure. Clairvoyance achieves a geomean execution time improvement of 14% for memory-bound applications, on top of standard O3 optimizations, while maintaining compute-bound applications' high performance.

    Index Terms—Compilers, code generation, memory management, optimization
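A much-simplified illustration of the load-clustering idea described in the abstract (not the paper's actual algorithm; `f`, the arrays, and the unroll factor of 4 are invented for the example):

```c
static inline int f(int x) { return 2 * x + 1; }

/* before: each potentially missing load is followed immediately by
   its use, so every miss latency is exposed in sequence */
void kernel_before(int *out, const int *a, const int *b, int n) {
    for (int i = 0; i < n; i++) {
        int x = a[b[i]];        /* indirect load: may miss */
        out[i] = f(x);          /* use right after the load */
    }
}

/* after (Clairvoyance-style, simplified): issue a cluster of
   independent loads first, then consume them, so the outstanding
   misses overlap; assumes n is a multiple of 4 for brevity */
void kernel_after(int *out, const int *a, const int *b, int n) {
    for (int i = 0; i < n; i += 4) {
        int x0 = a[b[i]];       /* access phase: loads clustered */
        int x1 = a[b[i + 1]];
        int x2 = a[b[i + 2]];
        int x3 = a[b[i + 3]];
        out[i]     = f(x0);     /* execute phase: uses */
        out[i + 1] = f(x1);
        out[i + 2] = f(x2);
        out[i + 3] = f(x3);
    }
}
```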
  • Lecture 1 Introduction
    Lecture 1: Introduction. Todd C. Mowry, 15-745, Carnegie Mellon.
    - What would you get out of this course?
    - Structure of a Compiler
    - Optimization Example

    What Do Compilers Do? 1. Translate one language into another, e.g., convert C++ into x86 object code; difficult for "natural" languages, but feasible for computer languages. 2. Improve (i.e., "optimize") the code, e.g., make the code run 3 times faster; this is the driving force behind modern processor design.

    What Do We Mean By "Optimization"? Informal definition: transform a computation to an equivalent but "better" form. In what way is it equivalent? In what way is it better? "Optimize" is a bit of a misnomer; the result is not actually optimal.

    How Can the Compiler Improve Performance? Execution time = Operation count * Machine cycles per operation.
    - Minimize the number of operations: arithmetic operations, memory accesses.
    - Replace expensive operations with simpler ones, e.g., replace a 4-cycle multiplication with a 1-cycle shift.
    - Minimize cache misses: both data and instruction accesses.
    - Perform work in parallel: instruction scheduling within a thread; parallel execution across multiple threads.
    - Related issue: minimize object code size (more important on embedded systems).

    Other Optimization Goals Besides Performance. Minimizing power and energy consumption. Finding (and minimizing the
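For instance, the multiply-to-shift replacement mentioned above looks like this (a minimal sketch; the cycle counts are the slide's example figures, and unsigned is used to keep the shift well-defined in C):

```c
unsigned scale8(unsigned x)     { return x * 8u; }   /* e.g., a 4-cycle multiply */
unsigned scale8_opt(unsigned x) { return x << 3; }   /* a 1-cycle shift, same result */
```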
  • Compiler Optimisation 6 – Instruction Scheduling
    Compiler Optimisation 6 – Instruction Scheduling. Hugh Leather, IF 1.18a, [email protected], Institute for Computing Systems Architecture, School of Informatics, University of Edinburgh, 2019.

    This lecture: scheduling to hide latency and exploit ILP; dependence graph; local list scheduling + priorities; forward versus backward scheduling; software pipelining of loops.

    Latency, functional units, and ILP: instructions take clock cycles to execute (latency); modern machines issue several operations per cycle; results cannot be used until ready, but other work can proceed in the meantime; execution time is order-dependent; latencies are not always constant (cache, early exit, etc.). Operation latencies in cycles: load, store 3 (load missing cache: 100s); loadI, add, shift 1; mult 2; div 40; branch 0-8.

    Machine types: in order (deep pipelining allows multiple instructions in flight); superscalar (multiple functional units, can issue more than 1 instruction); out of order (large window of instructions can be reordered dynamically); VLIW (compiler statically allocates operations to FUs).

    Effect of scheduling, superscalar with 1 FU (a new op issues each cycle if its operands are ready). Simple schedule for a := 2*a*b*c (loads/stores 3 cycles, mults 2, adds 1): loadAI rarp, @a => r1; add r1, r1 => r1; loadAI rarp, @b => r2; mult r1, r2 => r1; loadAI rarp, @c => r2; mult r1, r2 => r1; storeAI r1 => rarp, @a. The cycle-by-cycle trace shows dependent operations waiting on operands (e.g., r1 is not ready until cycle 4 after the first load issues in cycle 1).
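A minimal forward list scheduler for a single-issue machine, sketched under simple assumptions (one functional unit, fixed latencies); it illustrates the priority-driven local scheme this lecture describes, with the dependence graph and latencies hard-coded from the a := 2*a*b*c example:

```c
#include <stdio.h>

#define N 7
/* ops for a := 2*a*b*c:
   0: loadAI @a  1: add (2*a)  2: loadAI @b  3: mult
   4: loadAI @c  5: mult       6: storeAI @a */
static const int latency[N] = {3, 1, 3, 2, 3, 2, 3};
static const int pred[N][2] = {{-1,-1},{0,-1},{-1,-1},{1,2},{-1,-1},{3,4},{5,-1}};

static int prio[N];                 /* latency-weighted height in the DAG */

static int critical_path(int i) {
    if (prio[i]) return prio[i];    /* memoized */
    int best = 0;
    for (int j = 0; j < N; j++)
        for (int k = 0; k < 2; k++)
            if (pred[j][k] == i) {  /* j is a successor of i */
                int p = critical_path(j);
                if (p > best) best = p;
            }
    return prio[i] = latency[i] + best;
}

int main(void) {
    int done_at[N], scheduled[N] = {0};
    for (int i = 0; i < N; i++) critical_path(i);
    for (int cycle = 1, left = N; left > 0; cycle++) {
        int pick = -1;
        for (int i = 0; i < N; i++) {     /* ready = all preds finished */
            if (scheduled[i]) continue;
            int ready = 1;
            for (int k = 0; k < 2; k++) {
                int p = pred[i][k];
                if (p >= 0 && (!scheduled[p] || done_at[p] > cycle))
                    ready = 0;
            }
            if (ready && (pick < 0 || prio[i] > prio[pick]))
                pick = i;                 /* highest-priority ready op */
        }
        if (pick >= 0) {                  /* otherwise: stall cycle */
            scheduled[pick] = 1;
            done_at[pick] = cycle + latency[pick];
            printf("cycle %2d: issue op %d\n", cycle, pick);
            left--;
        }
    }
    return 0;
}
```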
  • Register Allocation and Instruction Scheduling in Unison
    Register Allocation and Instruction Scheduling in Unison. Roberto Castañeda Lozano (Swedish Institute of Computer Science; School of ICT, KTH Royal Institute of Technology, Sweden), Mats Carlsson (Swedish Institute of Computer Science), Gabriel Hjort Blindell and Christian Schulte (School of ICT, KTH Royal Institute of Technology, Sweden; Swedish Institute of Computer Science).

    Abstract. This paper describes Unison, a simple, flexible, and potentially optimal software tool that performs register allocation and instruction scheduling in integration using combinatorial optimization. The tool can be used as an alternative or as a complement to traditional approaches, which are fast but complex and suboptimal. Unison is most suitable whenever high-quality code is required and longer compilation times can be tolerated (such as in embedded systems or library releases), or the targeted processors are so irregular that traditional compilers fail to generate satisfactory code. [Figure 1: Unison's approach: a low-level IR is translated to Unison IR, register allocation and instruction scheduling are captured in an integrated combinatorial model built from a processor description, and a generic solver produces assembly code.]

    Categories and Subject Descriptors: D.3.4 [Programming Languages]: Processors—compilers, code generation, optimization; D.3.2 [Programming Languages]: Language Classifications—constraint and logic languages; I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search—backtracking, scheduling. Keywords: combinatorial optimization; register allocation; instruction scheduling.

    Approach. Unison approaches register allocation and instruction scheduling as shown in Figure 1, covering their full scope while robustly scaling to medium-size functions. A detailed comparison with related approaches is available in a survey by Castañeda Lozano and Schulte [2, Sect. 5].
  • Introduction to Software Pipelining in the IA-64 Architecture
    Introduction to Software Pipelining in the IA-64 Architecture. ® IA-64 Software Programs. This presentation is an introduction to software pipelining in the IA-64 architecture. It is recommended that the reader have some exposure to the IA-64 architecture prior to viewing this presentation.

    Agenda: objectives; what is software pipelining?; IA-64 architectural support; assembly code example; compiler support; summary.

    Objectives: introduce the concept of software pipelining (SWP); understand the IA-64 architectural features that support SWP; see an assembly code example; know how to enable SWP in the compiler.

    What is Software Pipelining? Software pipelining (SWP) is the term for overlapping the execution of consecutive loop iterations, exploiting instruction-level parallelism across iterations. It is a performance technique that can be applied in just about every computer architecture, and it is closely related to loop unrolling. [Figure: a conceptual block diagram of a software pipeline; the loop code is separated into four pipeline stages, and six iterations of the loop (i = 1 to 6) are shown overlapping in time.]
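A hand software-pipelined loop in C, as a sketch of the overlap idea (not IA-64 code; the function and its two stages are invented for the example). The load for iteration i+1 is issued while iteration i computes, so the two stages of consecutive iterations overlap:

```c
/* copy-and-scale with a 2-stage software pipeline:
   stage 1 = load, stage 2 = multiply and store */
void scale(float *dst, const float *src, int n, float k) {
    if (n <= 0) return;
    float x = src[0];                  /* prologue: fill the pipeline */
    for (int i = 0; i < n - 1; i++) {
        float next = src[i + 1];       /* stage 1 of iteration i+1 */
        dst[i] = k * x;                /* stage 2 of iteration i   */
        x = next;
    }
    dst[n - 1] = k * x;                /* epilogue: drain */
}
```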
  • Register Assignment for Software Pipelining with Partitioned Register Banks
    Register Assignment for Software Pipelining with Partitioned Register Banks. Jason Hiser, Steve Carr, Philip Sweany. Department of Computer Science, Michigan Technological University, Houghton MI 49931-1295, {jdhiser,carr,sweany}@mtu.edu. Steven J. Beaty, Metropolitan State College of Denver, Department of Mathematical and Computer Sciences, Campus Box 38, P. O. Box 173362, Denver, CO 80217-3362, [email protected]

    Abstract. Trends in instruction-level parallelism (ILP) are putting increased demands on a machine's register resources. Computer architects are adding increased numbers of functional units to allow more instructions to be executed in parallel, thereby increasing the number of register ports needed. This increase in ports hampers access time. Additionally, aggressive scheduling techniques, such as software pipelining, are requiring more registers to attain high levels of ILP. These two factors make it difficult to maintain a single monolithic register bank. One possible solution to the problem is to partition the register banks and connect each functional unit to a subset of the register banks, thereby decreasing the number of register ports required. In this type of architecture, if a functional unit needs access to a register to which it is not directly connected, a delay is incurred to copy the value so that the proper functional unit has access to it. As a consequence, the compiler now must not only schedule code for maximum parallelism but also for minimal delays due to copies. In this paper, we present a greedy framework for generating code for architectures with partitioned register banks. Our technique is global in nature and can be applied using any scheduling method. (This research was supported by NSF grant CCR-9870871 and a grant from Texas Instruments.)
  • The Benefit of Predicated Execution for Software Pipelining
    Published in HICSS-26 Conference Proceedings, January 1993, Vol. 1, pp. 497-506. The Benefit of Predicated Execution for Software Pipelining. Nancy J. Warter, Daniel M. Lavery, Wen-mei W. Hwu. Center for Reliable and High-Performance Computing, University of Illinois at Urbana-Champaign, Urbana, IL 61801.

    Abstract. Software pipelining is a compile-time scheduling technique that overlaps successive loop iterations to expose operation-level parallelism. An important problem with the development of effective software pipelining algorithms is how to handle loops with conditional branches. Conditional branches increase the complexity and decrease the effectiveness of software pipelining algorithms by introducing many possible execution paths into the scheduling scope. This paper presents an empirical study of the importance of an architectural support, referred to as predicated execution, ...

    From the introduction: ...implementations for the Warp [5] [3] and Cydra 5 [6] machines. Both implementations are based on the modulo scheduling techniques proposed by Rau and Glaeser [7]. The Cydra 5 has special hardware support for modulo scheduling in the form of a rotating register file and predicated operations [8]. In the absence of special hardware support in the Warp, Lam proposes modulo variable expansion and hierarchical reduction for handling register allocation and conditional constructs respectively [3]. Rau has studied the benefits of hardware support for register allocation [9]. In this paper we analyze the implications and benefits of predicated operations for software pipelining loops with conditional branches.
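As a C-level sketch of the idea (if-conversion expressed as a branchless select; real predicated execution happens at the instruction level, where the predicate guards individual operations):

```c
/* before: the conditional branch inside the loop body splits it into
   multiple basic blocks, complicating software pipelining */
void abs_loop(int *a, int n) {
    for (int i = 0; i < n; i++)
        if (a[i] < 0)
            a[i] = -a[i];
}

/* after if-conversion: both outcomes are computed and one is selected
   by the predicate, leaving a single straight-line loop body */
void abs_loop_predicated(int *a, int n) {
    for (int i = 0; i < n; i++) {
        int p = a[i] < 0;            /* predicate */
        a[i] = p ? -a[i] : a[i];     /* select; typically compiles to
                                        a conditional move */
    }
}
```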