Pipelining a Triggered Processing Element

Thomas J. Repetti, Dept. of Computer Science, Columbia University ([email protected])
João P. Cerqueira, Dept. of Electrical Engineering, Columbia University ([email protected])
Martha A. Kim, Dept. of Computer Science, Columbia University ([email protected])
Mingoo Seok, Dept. of Electrical Engineering, Columbia University ([email protected])

ABSTRACT

Programmable spatial architectures composed of ensembles of autonomous fixed-ISA processing elements offer a compelling design point between the flexibility of an FPGA and the compute density of a GPU or shared-memory many-core. The design regularity of spatial architectures demands examination of the processing element microarchitecture early in the design process to optimize overall efficiency.

This paper considers the microarchitectural issues surrounding pipelining a spatial processing element with triggered-instruction control. We propose two new techniques to mitigate pipeline hazards particular to spatial accelerators and non-program-counter architectures, evaluating them using in-vivo performance counters from an FPGA prototype coupled with a rigorous VLSI power and timing estimation methodology. We consider the effect of modern, post-Dennard-scaling CMOS technology on the energy-delay tradeoffs and identify a set of microarchitectures optimal for both high-performance and low-power application settings. Our analysis reveals the effectiveness of our hazard mitigation techniques as well as the range of microarchitectures designers might consider when selecting a processing element for triggered spatial accelerators.

CCS CONCEPTS

• Computer systems organization → Pipeline computing; Reduced instruction set computing; Multiple instruction, multiple data; Multicore architectures; Interconnection architectures; • Hardware → Power and energy;

KEYWORDS

Spatial architectures, pipeline hazards, microarchitecture, design-space exploration, low-power design

ACM Reference format:
Thomas J. Repetti, João P. Cerqueira, Martha A. Kim, and Mingoo Seok. 2017. Pipelining a Triggered Processing Element. In Proceedings of MICRO-50, Cambridge, MA, USA, October 14–18, 2017, 13 pages. https://doi.org/10.1145/3123939.3124551

1 INTRODUCTION

Spatial accelerators support important workloads such as information retrieval [22], databases [33–35], string processing [26], and neural networks [8, 9]. A general purpose spatial array of programmable processing elements can serve these and other applications with spatial parallelism and direct inter-processing-element communication. In contrast to a fixed-function accelerator, a programmable accelerator accommodates new workloads and optimization of existing ones. In such designs, triggered control [19, 20, 32] has demonstrable architectural benefits, reducing both dynamic and static instruction counts relative to program-counter-based control. This is an important element of reducing energy and delay, but it is only part of the story.

Total energy consumption is a product of dynamic instruction count and the energy expended per instruction:

    Energy/Program = Instructions/Program × Energy/Instruction.

While the architecture and workload determine Instructions/Program, the microarchitecture and circuit determine Energy/Instruction. Total delay is likewise determined by the architectural constraint of dynamic instruction count, but also by cycles per instruction (CPI) and cycles per unit time (frequency):

    Time/Program = Instructions/Program × Cycles/Instruction × Time/Cycle.

Cycles/Instruction and Time/Cycle are also properties of the microarchitecture and underlying circuit technology. Exploring these elements of energy and delay below the architectural level is the focus of this paper.
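As a concrete reading of these two identities, the sketch below multiplies out the factors for two made-up design points. Only the factorization itself comes from the text above; the function names and every number are hypothetical, chosen to show how the architecture fixes Instructions/Program while the microarchitecture and circuit move the per-instruction terms.

```python
# A minimal sketch of the energy and delay factorizations above.
# All names and numbers here are hypothetical illustrations, not
# measurements from the paper.

def energy_per_program(insns: int, energy_per_insn_pj: float) -> float:
    """Energy/Program = Instructions/Program x Energy/Instruction (pJ)."""
    return insns * energy_per_insn_pj

def time_per_program(insns: int, cpi: float, cycle_time_ns: float) -> float:
    """Time/Program = Instructions/Program x Cycles/Instruction x Time/Cycle (ns)."""
    return insns * cpi * cycle_time_ns

insns = 1_000_000  # fixed by the architecture and workload

# Two hypothetical operating points for the same pipeline: nominal supply
# voltage (shorter cycle, more energy per instruction) vs. scaled voltage.
nominal = (energy_per_program(insns, 5.0), time_per_program(insns, 1.1, 0.8))
scaled  = (energy_per_program(insns, 1.5), time_per_program(insns, 1.1, 3.0))
print(f"nominal: {nominal[0]:.3e} pJ, {nominal[1]:.3e} ns")
print(f"scaled:  {scaled[0]:.3e} pJ, {scaled[1]:.3e} ns")
```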
This work investigates the microarchitectural and circuit design space of triggered processing elements. The replication factor in a tiled architecture demands deep analysis and optimization of the central building block: the processing element (or PE). We focus on the interplay between the instruction pipeline and supply voltage scaling, as it has long been established that pipelining can improve instruction-level parallelism, timing closure, and power efficiency through voltage scaling [2, 3, 16, 27]. Once a pipeline has reduced the critical path of a circuit, additional opportunity to trade energy and delay appears. One could maintain nominal supply voltage and increase clock frequency, maintain the original clock frequency and reduce supply voltage, or apply some combination in the middle. Our results reveal the importance of such design choices. Having explored over 4,000 unique designs in this space, the energy-delay tradeoff curve spans 71x in energy – from 0.67 to 47.59 pJ/instruction – and 225x in delay – from 1.37 to 309.03 ns/instruction.

Triggered control poses unique sorts of hazards for an instruction pipeline. To launch an instruction, the front end must compare the predicate and communication queue state to a programmed set of trigger conditions, as opposed to just calculating the next address for the program counter. We present two new hazard mitigation techniques that help keep the pipeline full: speculation on upcoming predicate state, and accurate queue status accounting given the current contents of the pipeline. We find that these techniques reduce the increases in CPI that otherwise accompany deep pipelines, together reducing CPI in a 4-stage pipeline by 35%. They incur some overheads – in the worst case 1.4% area, 8% power, 20% critical path – but ultimately improve the optimal design frontier by 20-25% in both energy and delay. While predicate prediction is applicable specifically to triggered instruction architectures, the method of determining effective queue status benefits any spatial architecture with pipelined processing elements and register queues.
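To fix intuition for the two mitigations, here is an illustrative model; it is ours rather than the authors' microarchitecture, and every name and policy in it is hypothetical. It captures (1) a naive form of predicate speculation, issuing against current predicate values on the assumption that in-flight writers leave them unchanged, with a flush on mispredict, and (2) effective queue status, where raw occupancies are adjusted for in-flight enqueues and dequeues before triggers are evaluated.

```python
# Illustrative-only model of the two hazard mitigations described above.
# This is NOT the paper's implementation; names and policies are hypothetical.

from dataclasses import dataclass, field

@dataclass
class InFlight:
    """Bookkeeping for instructions between issue and writeback."""
    pred_writers: set = field(default_factory=set)  # predicate regs with a pending write
    enq: dict = field(default_factory=dict)         # output queue id -> pending pushes
    deq: dict = field(default_factory=dict)         # input queue id -> pending pops

def speculate_predicates(preds: dict, inflight: InFlight) -> dict:
    """Naive predicate speculation: predict that pending writers leave the
    bits unchanged, so triggers are evaluated on current values. If the
    write that eventually retires differs, younger stages must flush."""
    return dict(preds)

def effective_input_occupancy(q: int, raw: dict, inflight: InFlight) -> int:
    """Operands already claimed by in-flight instructions are unavailable."""
    return raw.get(q, 0) - inflight.deq.get(q, 0)

def effective_output_occupancy(q: int, raw: dict, inflight: InFlight) -> int:
    """Slots already claimed by in-flight writes count as occupied."""
    return raw.get(q, 0) + inflight.enq.get(q, 0)

def queues_ready(q_in: int, q_out: int, raw_in: dict, raw_out: dict,
                 out_depth: int, inflight: InFlight) -> bool:
    """A trigger may fire only against the state the queues will actually have."""
    return (effective_input_occupancy(q_in, raw_in, inflight) > 0 and
            effective_output_occupancy(q_out, raw_out, inflight) < out_depth)
```

Without some such accounting, a pipelined front end must either stall conservatively whenever an in-flight instruction touches a queue, or risk issuing an instruction whose operand has already been consumed.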
We have released an open-source repository to support further investigation in this area. It includes SystemVerilog implementations of both the single-cycle and pipelined microarchitectures presented here, which in turn can be used in synthesizable spatial arrays. It is supported by a toolchain that includes an assembler, functional ISA simulator, Linux driver and userspace library. All of these are governed by a single parameter file that configures the architecture (e.g., queue counts or instructions per processing element) and microarchitecture (e.g., turning on/off the aforementioned hazard mitigation techniques). Lastly, we include a set of ten triggered instruction microbenchmarks that exhibit a

2.1 Background

Triggered control was proposed by Parashar et al. in 2013 [19] as an alternative to program-counter-based control for spatial arrays of autonomous PEs. In the triggered scheme, each PE is programmed with a priority-ordered list of guarded atomic actions. This list represents a finite, statically configured local pool of datapath instructions, whose eligibility for issue in any given cycle is determined by a corresponding "trigger" condition (i.e., the guard). Each cycle, all of the triggers are compared to designated architectural state – predicate and queue status, described shortly – to determine whether the corresponding instruction has been "triggered". Instructions are ordered by priority rather than sequence, with the highest-priority triggered instruction issued for execution (Figure 2).

Each trigger-controlled PE is connected to neighboring PEs by a set of incoming and outgoing tagged data queues over an interconnect fabric. Tags encode programmable semantic information that accompanies the data communicated over these queues. For example, a tag might be used to indicate the datatype of the accompanying data word or a message to effect control flow like a termination condition. Tag values at the head of the input queues determine, in part, whether an instruction can be fired. The PE also contains a set of single-bit predicate registers, which can be updated immediately upon triggering an instruction, or as the result of a datapath operation. Each trigger's validity is determined by the state of the predicate registers, the availability of tagged input operands on the incoming queues, and capacity on the output queues for any instructions that write there. Comparison or logic instructions whose destination is a predicate register provide control flow equivalent to branching in program-counter-based ISAs.

By eliminating explicit branches, triggered control reduces the dynamic instruction count of a given task. Moreover, in a spatial context, it allows PEs to react quickly to incoming data. Together these two features help multiple PEs to work together in an efficient processing chain: each PE in the chain
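To make the issue rule concrete, the following functional sketch checks each trigger against the three conditions Section 2.1 lists (predicate state, tagged input availability, output capacity) and selects the highest-priority match. The field and function names are hypothetical, not taken from the released SystemVerilog.

```python
# Functional sketch of trigger resolution as described in Section 2.1.
# Names are hypothetical; this models the selection rule, not the hardware.

from dataclasses import dataclass
from typing import Optional

@dataclass
class QueueHead:
    tag: int
    data: int

@dataclass
class Trigger:
    pred_req: dict    # predicate reg -> required value
    in_tags: dict     # input queue id -> required tag at the queue head
    out_queues: list  # output queue ids the instruction writes

@dataclass
class Instruction:
    trigger: Trigger
    opcode: str

def is_triggered(t: Trigger, preds: dict, heads: dict, out_space: dict) -> bool:
    """A trigger fires only if predicates match, each required input queue
    has a head element with the right tag, and every output queue has room."""
    preds_ok = all(preds.get(r) == v for r, v in t.pred_req.items())
    ins_ok = all(q in heads and heads[q].tag == tag
                 for q, tag in t.in_tags.items())
    outs_ok = all(out_space.get(q, 0) > 0 for q in t.out_queues)
    return preds_ok and ins_ok and outs_ok

def select_instruction(program: list, preds: dict, heads: dict,
                       out_space: dict) -> Optional[Instruction]:
    """The program is a priority-ordered list: the first (highest-priority)
    triggered instruction issues this cycle; none may be eligible."""
    for insn in program:
        if is_triggered(insn.trigger, preds, heads, out_space):
            return insn
    return None
```

Note that nothing here computes a next address: control flow emerges from instructions that write predicate registers, which changes the set of guards that can be satisfied on subsequent cycles.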