Rationale, Design and Performance of the Hydra Multiprocessor

Kunle Olukotun, Jules Bergmann, Kun-Yung Chang and Basem Nayfeh
Computer Systems Laboratory
Stanford University, CA 94305

Abstract

In Hydra, four high-performance processors communicate via a shared secondary cache. The shared cache is implemented using multichip module (MCM) packaging technology. The Hydra multiprocessor is designed to efficiently support automatically parallelized programs that have high degrees of fine-grained sharing. This paper motivates the Hydra multiprocessor design by reviewing current trends in architecture and developments in parallelizing compiler technology and implementation technology. The design of the Hydra multiprocessor is described and explained. Initial estimates of the interprocessor communication latencies show them to be much better than those of current bus-based multiprocessors. These lower latencies result in higher performance on applications with fine-grained parallelism.

1 Introduction

The performance of microprocessors has been improving at a phenomenal rate for the last ten years. This performance growth has been driven by innovation in compilers, improvements in architecture and tremendous improvements in implementation technology. Currently, high-performance microprocessors use multiple-instruction-issue superscalar design techniques; however, extensions to these superscalar design techniques cannot sustain the current growth in microprocessor performance. As the number of instructions issued every cycle increases, there will be diminishing returns in performance and significant increases in design and implementation complexity. An alternative to superscalar design is to develop multiprocessor microarchitectures that have multiple threads of control. These architectures promise to be easier to build than wide-issue superscalar machines and will offer equal or better performance.
To understand the complexities of building multiprocessor microarchitectures and to demonstrate that these architectures can achieve high performance, we are embarking on the Hydra multiprocessor project. The key elements of the Hydra multiprocessor are multiple microprocessors sharing a secondary cache and the use of multichip module packaging technology. The goal of the Hydra multiprocessor project is to demonstrate that the cost/performance of general-purpose computing can be improved by combining parallelizing compilers, high-performance single-chip microprocessors and advanced packaging technology.

The purpose of this paper is to present the rationale for the Hydra multiprocessor and to describe its operation. The remainder of the paper is organized as follows. The next section presents the rationale for the Hydra multiprocessor. This is followed by a section that describes the design and operation of the Hydra multiprocessor. Following this is a section that presents preliminary performance results indicating that the Hydra architecture will have an advantage over existing multiprocessor designs. The last section presents the conclusions.

2 Rationale for the Hydra Multiprocessor

The design of the Hydra multiprocessor is motivated by trends in microprocessor architecture, and by developments in parallelizing compilers and packaging technology. We describe these trends and developments in this section and indicate how the Hydra architecture will exploit them to improve the cost/performance of microprocessors.

2.1 Trends in single chip microprocessor design

Improvements in silicon integrated circuit technology continue to provide faster transistor switching speeds and higher integration density. Currently, transistor speed is improving at 40% per year and integration density is increasing at 60% per year [7]. The performance of microprocessors built with this technology is increasing at the unprecedented rate of 60% per year.
The increase in microprocessor speed relative to main memory speed, together with the desire to maintain microprocessor performance growth, has resulted in two trends in microprocessor design: large on-chip caches and complex microarchitectures designed to exploit instruction-level parallelism (ILP).

As the performance of processors improves, they require higher-bandwidth and lower-latency memory hierarchies. Increasing the size and complexity of the on-chip cache is one way to match improvements in processor performance with improvements in memory hierarchy latency and bandwidth. Currently, the fastest microprocessor, the DEC Alpha 21164, has a two-level cache with a total of over 100 KB of SRAM. However, there are alternatives to architectures with large on-chip caches. One example is the HP PA-RISC microprocessor architectures. These architectures have achieved high performance with very little on-chip cache by using the fastest commodity SRAMs in an off-chip cache and a highly optimized design of the electrical interface between the processor and the cache SRAMs. The result is an off-chip primary cache access time that is close to that of an on-chip cache and a cache size that is much larger than is possible on a single chip.

The other trend in microprocessor design is the use of increasingly complex architectures to exploit ILP. Two approaches are used to exploit ILP in programs: superscalar architectures and VLIW architectures. Both attempt to issue multiple instructions to independent functional units every CPU clock cycle. Superscalar architectures use hardware to dynamically find data-independent instructions in an instruction window and to issue these instructions to independent functional units. On the other hand, VLIW architectures rely on the compiler to find the ILP and to statically schedule the execution of independent instructions.
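The static-scheduling idea behind VLIW can be illustrated with a minimal sketch of a compiler pass that greedily packs data-independent instructions into fixed-width issue packets. The instruction format, the 2-wide packet size, and the dependence test below are simplifying assumptions for illustration; this is not Hydra's design nor any real VLIW compiler's algorithm.

```python
# Illustrative sketch: greedy static scheduling of data-independent
# instructions into fixed-width VLIW issue packets. Each instruction is
# (name, dest_reg, src_regs); the 2-wide packet size is a hypothetical
# machine parameter.

ISSUE_WIDTH = 2  # hypothetical number of independent functional units

def depends(a, b):
    """True if instruction b depends on a (RAW, WAR, or WAW hazard)."""
    _, a_dst, a_srcs = a
    _, b_dst, b_srcs = b
    return a_dst in b_srcs or b_dst in a_srcs or a_dst == b_dst

def pack(instrs):
    """Group instructions into packets of mutually independent operations,
    preserving program order (no instruction moves past one it depends on)."""
    packets = []
    i = 0
    while i < len(instrs):
        packet = [instrs[i]]
        j = i + 1
        while j < len(instrs) and len(packet) < ISSUE_WIDTH:
            if any(depends(p, instrs[j]) for p in packet):
                break  # stop at the first dependence to preserve order
            packet.append(instrs[j])
            j += 1
        packets.append(packet)
        i = j
    return packets

program = [
    ("load", "r1", ["r10"]),
    ("load", "r2", ["r11"]),
    ("add",  "r3", ["r1", "r2"]),  # depends on both loads
    ("mul",  "r4", ["r1", "r1"]),  # independent of the add
]

for cycle, packet in enumerate(pack(program)):
    print(f"cycle {cycle}: " + " | ".join(op for op, _, _ in packet))
```

The two loads issue together, and because the add and mul have no hazard between them they share the next packet; the dynamic-window hardware of a superscalar would discover the same groupings at run time instead.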
Superscalar architectures are appealing because they can improve the performance of existing application binaries. However, the ease with which applications can be accelerated is not without costs. Superscalar architectures are complex to design and difficult to implement. Looking for parallelism in a large window of instructions requires a significant amount of hardware and usually does not improve performance as much as one might expect. Due to the complexity of superscalar design, it is difficult to get the architecture correct and at the same time optimize the pipelines and circuits to achieve high clock frequencies. For this reason, superscalar designs with complex architectures have tended to have lower clock frequencies than simpler designs.

VLIW CPUs rely on the compiler to find packets of independent instructions. VLIW processors do not require the hardware for dynamic scheduling and so can be simpler to design and implement. However, they require significant compiler support, such as trace scheduling, to find the ILP in application programs. Because programs must be recompiled before they will run correctly on a new VLIW architecture, VLIW processors will not execute binary programs compiled for an existing instruction set architecture (ISA). This is a serious disadvantage in the computer marketplace, but one that may be overcome by the use of software for emulating existing ISAs. VLIW architectures will be preferred over superscalar architectures when the issue width is large enough that the dynamic issue hardware in the superscalar architecture is too complex and costly to build. However, even in a VLIW processor, such a wide machine will have a centralized register file which must have many ports to supply operands to the independent functional units. The access time of such a register file and the complexity of the bypass buses connecting the functional units will ultimately limit the clock frequency of a very wide VLIW implementation.
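The register-file and bypass scaling argument can be made concrete with a standard back-of-envelope count. Assuming a typical three-operand instruction format and full bypassing (assumptions of ours, not figures from this paper): with N functional units, each reading two operands and writing one result per cycle, the register file needs roughly 3N ports, and a fully bypassed datapath needs a path from each of the N result buses to each of the 2N operand inputs, i.e. 2N² bypass paths.

```python
# Back-of-envelope scaling of a centralized register file and full
# bypass network versus issue width N. Assumes a 3-operand RISC format
# (2 read ports + 1 write port per functional unit) and full bypassing
# (every result bus feeds every operand input). These are standard
# textbook assumptions, not data from the Hydra paper.

def register_ports(n_fu):
    return 3 * n_fu           # 2 reads + 1 write per functional unit

def bypass_paths(n_fu):
    return n_fu * (2 * n_fu)  # N result buses x 2N operand inputs

for n in (2, 4, 8, 16):
    print(f"{n:2d}-wide: {register_ports(n):2d} register ports, "
          f"{bypass_paths(n):3d} bypass paths")
```

The ports grow linearly but the bypass network grows quadratically, which is why the wire delay of a very wide centralized datapath eventually limits the clock.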
A final disadvantage of VLIW architectures is that VLIWs force all instructions in a packet to execute together. If one instruction in the packet stalls, the whole packet must stall. This limits the VLIW architecture's ability to deal with unpredictable events such as data accesses that miss the cache.

To understand how architects have used greater silicon resources to increase the performance of microprocessors, we have collected area and performance data for microprocessors that will have "working silicon" by the end of 1994 [14]. Their performance is measured on the SPEC92 benchmark suite [13]. Figure 1 shows the change in the utility of silicon area as the area of the microprocessor increases. Silicon utility is defined as the sum of the SPEC integer and SPEC floating-point numbers of a microprocessor divided by the normalized chip area of the microprocessor. To make the area comparisons reasonably fair, the area of each microprocessor is normalized to that of a 0.5 micron process.

Three die area regions can be identified in Figure 1: processors with areas less than 150 mm², which we will call small; processors with areas ranging from 150 mm² to 250 mm², which we will call medium; and processors with areas over 250 mm², which we will call large. If we look at the processor that achieves the highest utility of silicon area for each size range, we see that medium-size microprocessors achieve higher utility than small-size microprocessors, but large-size microprocessors do not improve utility, and may decrease it. If we investigate the reasons for these differences in utility, we see that the small and medium size processors focus on maximizing clock frequency, while the large size processors, excepting the
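The silicon-utility metric defined above can be sketched in a few lines. The sample numbers below are made-up placeholders, not the measured data behind Figure 1, and normalizing die area to a 0.5 micron process by quadratic feature-size scaling is our assumption; the paper does not spell out its normalization formula.

```python
# Sketch of the silicon-utility metric defined in the text:
#   utility = (SPECint92 + SPECfp92) / normalized die area.
# Quadratic feature-size scaling for the area normalization is an
# assumption; the example die is hypothetical, not a Figure 1 data point.

def normalized_area(area_mm2, feature_um, target_um=0.5):
    """Scale die area to a target process, assuming area ~ (feature size)^2."""
    return area_mm2 * (target_um / feature_um) ** 2

def silicon_utility(specint, specfp, area_mm2, feature_um):
    return (specint + specfp) / normalized_area(area_mm2, feature_um)

def size_class(norm_area_mm2):
    """Die-size regions used in the text (normalized mm^2 at 0.5 um)."""
    if norm_area_mm2 < 150:
        return "small"
    if norm_area_mm2 <= 250:
        return "medium"
    return "large"

# Hypothetical example: a 200 mm^2 die fabricated in a 0.6 um process.
area = normalized_area(200, 0.6)  # shrinks to ~139 mm^2 at 0.5 um
print(size_class(area), round(silicon_utility(130, 110, 200, 0.6), 2))
```

Note that normalization can move a die between size classes: 200 mm² at 0.6 micron lands in the "small" region once scaled to the 0.5 micron reference process.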
