Processor/Memory/Array Size Tradeoffs in the Design of SIMD Arrays for a Spatially Mapped Workload*

Martin C. Herbordt, Anisha Anand, Owais Kidwai, Renoy Sam
Department of Electrical and Computer Engineering
University of Houston
Houston, TX 77204-4793

Charles C. Weems
Department of Computer Science
University of Massachusetts
Amherst, MA 01003

Abstract: Though massively parallel SIMD arrays continue to be promising for many computer vision applications, they have undergone few systematic empirical studies. The problems include the size of the architecture space, the lack of portability of the test programs, and the inherent complexity of simulating up to hundreds of thousands of processing elements. The latter two issues have been addressed previously; here we describe how spreadsheets and tk/tcl are used to endow our simulator with the flexibility to model a large variety of designs. The utility of this approach is shown in the second half of the paper, where results are presented on the performance of a large number of array size, datapath, register file, and application code combinations. The conclusions derived include the utility of multiplier and floating point support, the cost of virtual PE emulation, likely datapath/memory combinations, and overall designs with the most promising performance/chip area ratios.

* This work was supported in part by an IBM Fellowship; by the NSF through CAREER award #9702483; by DARPA under contract DACA76-89-C-0016, monitored by the U.S. Army Engineer Topographic Laboratory; and under contract DAAL02-91-K-0047, monitored by the U.S. Army Harry Diamond Laboratory.

1 Introduction

It is commonly recognized that the computational characteristics of many low-level, and even some intermediate-level, vision tasks map well to massively parallel SIMD arrays of processing elements. Primary reasons for this are that these vision tasks typically require processing of pixel data with respect to either nearest-neighbor or other two-dimensional patterns exhibiting locality or regularity. Tasks within this domain often require complex codes. As a consequence, gauging computational requirements is facilitated by evaluating performance with respect to real program executions. What we examine here are the effects on both cost and performance of varying the array size (number of PEs), the datapath, and the memory hierarchy with respect to a set of computer vision applications.

The design issues are as follows. The first is to rank datapath and memory designs with respect to performance, eliminating those with poor cost/performance ratios. The second is to see how the performance of these designs varies with array size. This is complicated by the fact that although the effect of the datapath on performance varies predictably with array size, that of the memory hierarchy does not. The next issue is to relate the datapath to the memory designs to find which combinations result in i) well-balanced systems and ii) systems which could be enhanced with PE cache. Finally, the overall designs (including array size, datapath, and memory) with the best performance/chip area ratios are determined.

The constraint that we wish to examine processor designs by running real application programs, together with the wide variety of designs to be evaluated, clearly requires that simulation be used. Many problems remain, however, including how to i) implement a simulator with the flexibility to emulate large numbers of models without requiring substantial recoding, ii) achieve throughput given the slowdown inherent in software simulation, and iii) guarantee fairness for a wide range of designs that may not share the same optimal algorithm for any particular task.

These issues are dealt with by the ENPASSANT environment; the latter two are discussed elsewhere [6, 4], while the former has been addressed recently and is described here. The key idea is to use spreadsheets coupled with tk/tcl and C code to specify templates (classes of architectures) relating design parameters to the performance of virtual machine constructs. The user can then specify models (specific designs) within those templates from a graphical user interface.
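To make the template/model distinction concrete, the following C sketch shows one way such a specification could be expressed; the structure, field names, and cost formula are invented for illustration and are not ENPASSANT's actual template format, which is built from spreadsheets, tk/tcl, and C code.

    /* Hypothetical sketch only: the struct describes one model (a specific
     * design) drawn from a template (the class of designs spanned by these
     * parameters), and cycles_add32() stands in for a template entry that
     * relates the parameters to the cost of one virtual machine construct. */
    #include <stdio.h>

    struct pe_model {
        int array_size;      /* number of physical PEs, 4K to 64K here     */
        int datapath_bits;   /* ALU/datapath width, e.g. 1, 8, 16, or 32   */
        int has_multiplier;  /* 1 if a hardware multiplier is present      */
        int has_fp;          /* 1 if floating point support is present     */
        int regfile_bits;    /* on-chip register file, 128 to 1600 bits    */
        int cache_bits;      /* PE cache size in bits, 0 if absent         */
    };

    /* Placeholder cost entry: a 32-bit add takes one pass per
     * datapath-width slice of the operands.                               */
    static int cycles_add32(const struct pe_model *m)
    {
        return (32 + m->datapath_bits - 1) / m->datapath_bits;
    }

    int main(void)
    {
        struct pe_model m = { 16384, 8, 1, 0, 512, 0 };   /* 16K-PE model  */
        printf("add32 on an 8-bit datapath: %d cycles\n", cycles_add32(&m));
        return 0;
    }

A full template would contain one such entry per virtual machine construct, with the user's choice of model supplying the parameter values.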
2 Architecture and Application Domains

2.1 Architecture Space

Massively Parallel SIMD Arrays (MPAs) are asymmetric, having a controller and an array consisting of from a few thousand to a few hundred thousand processing elements (PEs). The PEs synchronously execute code broadcast by the controller. One consequence is that PEs do not have individual micro-sequencers or other control circuitry; rather, their 'CPUs' consist entirely of the datapath. Within this constraint, however, there are few limitations. See Figure 1. Here we examine array sizes from 4K to 64K PEs.

The complexity of current PE datapaths ranges from the MGAP, which does not have even a complete one-bit ALU [9], to the MasPar MP2, which contains a 32-bit datapath and extensive floating point support [1]. The particular features that are examined here are the ALU/datapath width, the multiplier, and the floating point support. The effect of other features, such as the number of ports in the register file, has been found to be second order.

Figure 1: Shown is some of the MPA architecture space. Note that PEs do not have local control and that their number ranges from the thousands to hundreds of thousands.

Another consequence of SIMD control is on MPA PE memory configurations: each array instruction operates on the same registers and memory locations for each PE.¹ In this case, it is possible to model the memory hierarchy, including cache, with a simple serial model. See Figure 2.

¹ An exception is for processors which support a local indexing mode, where the register can be a pointer to a memory location; as use of this mode is not critical in our current application test suite, we postpone this analysis to a later study.

Figure 2: Shown is the MPA PE memory design space. Note that all PEs share the same controllers.

Existing MPAs and MPA designs contain on-chip memory in the range of 32 to 1024 bits.² Off-chip memory typically consists of DRAM, although SRAM has also been used. Although no MPA has yet been built with PE cache, studies have shown the clear benefits [8]. The particular features we examine here are the register file size, from 128 to 1600 bits, and the cache size. We do not examine very small register file sizes because our programmer's model requires at least three 32-bit registers. We assume enough main memory to store all the data the programs require, and test caches up to the sizes necessary to achieve a 99% hit rate.

² We refer to on-chip memory as register file because of its explicit …
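Because every PE references the same register or memory address on a given array instruction, a single serial reference stream can stand in for the entire array. A minimal sketch of such a serial memory model follows; the direct-mapped cache organization, line size, and names are assumptions made for illustration, not the model used in ENPASSANT.

    /* One reference stream models the whole array: check the register
     * file, then a small direct-mapped cache, then main memory.         */
    #include <string.h>

    #define LINE_BITS 64          /* assumed cache line size in bits      */
    #define NUM_LINES 16          /* assumed number of cache lines        */

    struct serial_mem {
        long tags[NUM_LINES];     /* direct-mapped tag store, -1 = empty  */
        long regfile_hits, cache_hits, mem_refs;
        long regfile_bits;        /* capacity of the PE register file     */
    };

    static void sm_init(struct serial_mem *m, long regfile_bits)
    {
        memset(m, 0, sizeof *m);
        memset(m->tags, 0xff, sizeof m->tags);   /* every tag = -1        */
        m->regfile_bits = regfile_bits;
    }

    /* Record one operand reference of an array instruction; 'addr' is a
     * bit address within a PE's local storage.  References that fall
     * inside the register file never leave the chip.                     */
    static void sm_access(struct serial_mem *m, long addr)
    {
        if (addr < m->regfile_bits) { m->regfile_hits++; return; }

        long line = (addr - m->regfile_bits) / LINE_BITS;
        long set  = line % NUM_LINES;
        if (m->tags[set] == line) { m->cache_hits++; }
        else                      { m->mem_refs++; m->tags[set] = line; }
    }

    int main(void)
    {
        struct serial_mem m;
        sm_init(&m, 512);        /* 512-bit register file                 */
        sm_access(&m, 100);      /* register file hit                     */
        sm_access(&m, 2048);     /* cache miss: fetch from main memory    */
        sm_access(&m, 2048);     /* cache hit on the line just fetched    */
        return 0;
    }

Driving such a model with one call per operand of each array instruction yields hit counts from which register file, cache, and main memory traffic can be weighted by their respective access times.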
2.2 Application Space

The fundamental criterion for selecting test suite programs is that they should be representative of the expected workload of the target machine. Since SIMD arrays, however, are not fully general purpose processors, general purpose benchmarks such as SPEC [12] are not appropriate. In fact, it is becoming more and more clear that, for many application domains, evaluation of parallel processors requires using application-specific test suites. That this has long been the consensus of the computer vision community is shown in [10, 11, 13].

Our test suite consists of the parts of the second IU Benchmark particularly suited to MPAs, plus other important vision tasks. These include a convolution-based correspondence matcher, a motion-from-correspondence analyzer, an image preprocessor that uses complex edge modeling, and two segmentation-based line finders.

Some general comments are as follows: they are all non-trivial in that they consist of from several hundred to several thousand lines of code; they are, with the exception of the IU Benchmark, in use in a research environment; and they span a wide variety of types of computations. As an example of the last point, some are dominated by gray-scale computation (which is mostly 8-bit integer), others by floating point. They are described in detail in [3].

2.3 Mapping Applications to Architectures: Virtual PEs

The standard MPA programmer's model maps elements of parallel variables to individual PEs. It is rarely the case, however, that a processor has enough PEs to create this precise mapping; rather, some number of elements must be mapped to each PE.

In order to align the programmer's model with physical machines, the PEs assumed by the programmer's model (referred to as virtual PEs, or VPEs) are emulated by the physical PEs. We refer to the number of VPEs each PE must emulate as the VPE/PE ratio, or VPR. This emulation can be done entirely in software, through hardware support in the array control unit (ACU), or through a combination of the two. One of the fundamental goals of this study is to assess the cost of VPE emulation.

We now briefly describe how VPE emulation is carried out. VPE variables are mapped VPR elements per PE. We refer to a single physical slice of a VPE variable across the array of physical PEs as a tile.
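As a concrete illustration, the C sketch below emulates VPEs under one assumed placement (tile-major, with virtual element v held in tile v / num_PEs on physical PE v mod num_PEs). The paper defines VPR and tiles, but this particular layout, and all names in the code, are illustrative assumptions rather than the mapping used by any specific machine.

    /* Software VPE emulation: each parallel variable is stored as VPR
     * tiles, and each virtualized array instruction is replayed once per
     * tile, so its cost scales with VPR.                                 */
    #include <stdio.h>

    #define NUM_PES  4096                  /* physical array size (assumed) */
    #define NUM_VPES 16384                 /* VPEs required by the program  */
    #define VPR      (NUM_VPES / NUM_PES)  /* VPE/PE ratio = 4 here         */

    /* A parallel variable: VPR tiles, each holding one element per PE.   */
    static int a[VPR][NUM_PES], b[VPR][NUM_PES], c[VPR][NUM_PES];

    static void vadd(void)                 /* one virtualized array add    */
    {
        for (int t = 0; t < VPR; t++)              /* replay per tile      */
            for (int pe = 0; pe < NUM_PES; pe++)   /* lockstep in hardware */
                c[t][pe] = a[t][pe] + b[t][pe];
    }

    int main(void)
    {
        for (int v = 0; v < NUM_VPES; v++) {       /* tile-major placement */
            a[v / NUM_PES][v % NUM_PES] = v;
            b[v / NUM_PES][v % NUM_PES] = 1;
        }
        vadd();
        printf("c[VPE 5000] = %d\n", c[5000 / NUM_PES][5000 % NUM_PES]);
        return 0;
    }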
