
Appears in the proceedings of the 16th Int'l Conference on Parallel Architectures and Compilation Techniques (PACT), Sep '07

Architectural Support for the Stream Execution Model on General-Purpose Processors

Jayanth Gummaraju* Mattan Erez** Joel Coburn* Mendel Rosenblum* William J. Dally*
*Computer Systems Laboratory, Stanford University
**Electrical and Computer Engineering Department, The University of Texas at Austin

Abstract

There has recently been much interest in stream processing, both in industry (e.g., Cell, NVIDIA G80, ATI R580) and academia (e.g., Stanford Merrimac, MIT RAW), with stream programs becoming increasingly popular for both media and more general-purpose computing. Although a special style of programming called stream programming is needed to target these stream architectures, huge performance benefits can be achieved.

In this paper, we minimally add architectural features to commodity general-purpose processors (e.g., Intel/AMD) to efficiently support the stream execution model. We design the extensions to reuse existing components of the general-purpose processor hardware as much as possible by investigating low-cost modifications to the CPU caches, hardware prefetcher, and the execution core. With a less than 1% increase in die area, along with judicious use of a software runtime system, we show that we can efficiently support stream programming on traditional processor cores. We evaluate our techniques by running scientific applications on a cycle-level simulation system. The results show that our system executes stream programs as efficiently as possible, limited only by the ALU performance and the memory bandwidth needed to feed the ALUs.

1 Introduction

Recently there has been much interest in both research and the commercial marketplace for architectures that support a stream style of execution [15, 17, 23, 2, 6]. Although initially targeted at applications such as signal processing that operate on continuous streams of data, stream programming has broadened to encompass general compute-intensive applications. Research has shown that stream architectures such as Stanford Merrimac [15], the Cell Broadband Engine (Cell) [17], and general-purpose computing on graphics processing units (GP-GPUs) [11] deliver superior performance for applications that can exploit the high bandwidth and large numbers of functional units offered by these architectures.

Stream processors (SPs) require a different programming abstraction from traditional general-purpose processors (GPPs). To get performance benefits, stream processors are programmed in a style that involves bulk loading of data into a local memory, operating on the data in parallel, and bulk storing of the data back into memory. This style of programming is key to the high efficiency demonstrated by SPs.

Although current multicore GPPs, such as those from Intel and AMD, lack the peak FLOPS and bandwidth of stream processors, their likely ubiquitous deployment as part of industry-standard computing platforms makes them an attractive target for stream programming. It is desirable to effectively use these commodity general-purpose multicores rather than targeting only special-purpose stream-only processors such as Cell or GP-GPUs.

One problem with this approach is that although the peak FLOPS and memory bandwidth of general-purpose processors are improving and narrowing the gap between them and stream processors, GPPs lack some of the features that are key to the high efficiency of stream processors. In this paper we examine these differences in detail, and propose and evaluate extensions to a general-purpose core that allow it to efficiently map the stream programming style. To simplify our discussion we focus on single-core behavior, but operate under the assumption that the core is part of a multicore system used in a streaming style.

Our work shows that although stream cores and general-purpose cores appear very different to the programmer, the underlying implementations are similar enough that only relatively minor architectural extensions are needed to map stream programs efficiently. Our basic approach is to examine the key features of stream processors such as Cell and Merrimac and determine how best to emulate them on a general-purpose core. By using some architectural features in an unintended way (e.g., treating a processor cache as a software-managed local memory) and judiciously using a software runtime, we found that the only required architectural extension is a memory transfer engine to asynchronously bulk load and store operand data to and from the local memory.

In this paper we describe the design of the memory transfer engine, which we call the stream load/store unit (SLS unit). The SLS unit can be logically viewed as an extension and generalization of a traditional hardware memory prefetch and writeback unit that is able to transfer large groups of potentially non-contiguous memory locations to and from the cache memory. We also show how the SLS unit aligns data before it is transferred to the cache so that it can directly feed short-vector SIMD units such as the SSE units of an x86 processor. We claim that the SLS unit is a relatively minor extension leveraging much of the existing functionality and data-paths in a general-purpose core, requiring less than a 1% increase in die area.

We show that our extensions allow a traditional GPP core to efficiently execute the stream programming model. This means that performance will be limited by either the operation rate of the ALUs (peak FLOPS) or the memory bandwidth needed to fetch the operands, depending on whether the application is compute or memory bound. We demonstrate this with four real scientific applications that have been coded in a stream style. We also show the potential improvement we get over the same program written in a conventional style and run on the same GPP.

The paper is organized as follows. We start by comparing and contrasting the architectures used for traditional GPPs and the new breed of architectures for stream computing in Section 2. In Section 3 we show how stream programs can be mapped onto the GPP core by focusing on the SLS unit extension. In Section 4 we present the evaluation of our extensions using simulation. We present additional related work in Section 5, and conclude in Section 6.

2 General-Purpose and Stream Programming and Architectures

In this section we compare and contrast the programming model and micro-architecture of two different architecture classes: a general-purpose processor (GPP) architecture and a stream processor (SP) architecture. While the former is optimized to run applications written in the conventional von Neumann style, where parallelism and data locality are automatically extracted from sequential code, the latter is optimized to run applications written in a stream style, where both parallelism and data locality are explicitly expressed by the programmer.

2.1 General-Purpose Programming and Architecture

The programming model typically used for GPPs is exhibited by the familiar sequential languages (e.g., C, FORTRAN, Java). Conceptually, instructions execute sequentially and in program order, often with frequent control transfers. Requests to memory are performed on a per-use basis, resulting in memory accesses that are of single-word granularity. This programming model is most suited for applications that have fine-grained control and uncertainty both in control flow and data accesses.

[...] useful in drawing parallels with a stream micro-architecture (Figure 1(b)) to be discussed in the next section.

The controller is responsible for fetching instructions in control-flow order and extracting parallelism from the instruction sequence. Because the GPP needs to support arbitrary instruction sequences that potentially have little explicit parallelism, it uses several speculative hardware structures and static/dynamic scheduling to extract parallelism and drive multiple functional units. Along with conventional functional units such as ALUs/FPUs, modern processors typically feature short-vector SIMD units (e.g., SSE/Altivec/VMX units), which substantially increase the compute power of GPPs. However, the utilization of these units is usually low because it is difficult to automatically generate code to feed these units efficiently.

Global memory accesses can have an immediate effect on subsequent instructions in this programming model, so great emphasis is placed on a storage hierarchy that minimizes data access latencies rather than increasing data bandwidth. The storage hierarchy is composed of the central register file at the lowest level, followed by multiple levels of cache memories. Caches work well for most control-intensive applications that access a limited working set of data, but compute- and data-intensive applications need additional hardware structures and software techniques. A hardware prefetcher is one such structure, which attempts to predict and prefetch data using the data access pattern. If the prediction is both timely and correct, the memory access latency is completely hidden.

2.2 Stream Programming and Architecture

Stream programming, on the other hand, provides an efficient style to represent compute- or memory-intensive applications that have large amounts of data-parallelism, that are less control-intensive, and that have memory accesses that can be determined well in advance of the data use. The computation is decoupled from memory accesses to enable efficient utilization of computation units and memory bandwidth.

Although originally intended for applications that follow restricted, synchronous data flow, stream programming has recently been shown to work well for more general applications (e.g., irregular scientific applications) [32, 15]. Several software systems have been created to support the development and compilation of stream programs for stream processors (e.g., Brook [11], Sequoia [18], StreamIt [31]).

In the stream programming model, complex kernel operations execute on collections of data elements referred to as streams.