The Impact of Exploiting Instruction-Level Parallelism on Shared-Memory Multiprocessors

Vijay S. Pai, Student Member, IEEE, Parthasarathy Ranganathan, Student Member, IEEE, Hazim Abdel-Shafi, Student Member, IEEE, and Sarita Adve, Member, IEEE

IEEE Transactions on Computers, vol. 48, no. 2, February 1999.

The authors are with the Department of Electrical and Computer Engineering, Rice University, Houston, TX 77251-1892. E-mail: {vijaypai, parthas, shafi, sarita}@rice.edu.

Abstract: Current microprocessors incorporate techniques to aggressively exploit instruction-level parallelism (ILP). This paper evaluates the impact of such processors on the performance of shared-memory multiprocessors, both without and with the latency-hiding optimization of software prefetching. Our results show that, while ILP techniques substantially reduce CPU time in multiprocessors, they are less effective in removing memory stall time. Consequently, despite the inherent latency tolerance features of ILP processors, we find memory system performance to be a larger bottleneck, and parallel efficiencies to be generally poorer, in ILP-based multiprocessors than in previous-generation multiprocessors. The main reasons for these deficiencies are insufficient opportunities in the applications to overlap multiple load misses and increased contention for resources in the system. We also find that software prefetching does not change the memory-bound nature of most of our applications on our ILP multiprocessor, mainly due to a large number of late prefetches and resource contention. Our results suggest the need for additional latency-hiding or latency-reducing techniques for ILP systems, such as software clustering of load misses and producer-initiated communication.

Index Terms: Shared-memory multiprocessors, instruction-level parallelism, software prefetching, performance evaluation.

1 INTRODUCTION

Shared-memory multiprocessors built from commodity microprocessors are being increasingly used to provide high performance for a variety of scientific and commercial applications. Current commodity microprocessors improve performance by using aggressive techniques to exploit high levels of instruction-level parallelism (ILP). These techniques include multiple instruction issue, out-of-order (dynamic) scheduling, nonblocking loads, and speculative execution. We refer to these techniques collectively as ILP techniques and to processors that exploit these techniques as ILP processors. Most previous studies of shared-memory multiprocessors, however, have assumed a simple processor with single-issue, in-order scheduling, blocking loads, and no speculation. A few multiprocessor architecture studies model state-of-the-art ILP processors [2], [7], [8], [9], but do not analyze the impact of ILP techniques.

To fully exploit recent advances in uniprocessor technology for shared-memory multiprocessors, a detailed analysis is required of how ILP techniques affect the performance of such systems and of how they interact with previous optimizations for such systems. This paper evaluates the impact of exploiting ILP on the performance of shared-memory multiprocessors, both without and with the latency-hiding optimization of software prefetching.¹ For our evaluations, we study five applications using detailed simulation, described in Section 2.

Section 3 analyzes the impact of ILP techniques on the performance of shared-memory multiprocessors without the use of software prefetching. All our applications see performance improvements from the use of current ILP techniques, but the improvements vary widely. In particular, ILP techniques successfully and consistently reduce the CPU component of execution time, but their impact on the memory stall time is lower and more application-dependent. Consequently, despite the inherent latency tolerance features integrated within ILP processors, we find memory system performance to be a larger bottleneck, and parallel efficiencies to be generally poorer, in ILP-based multiprocessors than in previous-generation multiprocessors. These deficiencies are caused by insufficient opportunities in the application to overlap multiple load misses and by increased contention for system resources from more frequent memory accesses.
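To make the overlap issue concrete, the following minimal sketch (our illustration, not code from the paper; sum_list and sum_array are hypothetical kernels) contrasts load misses that cannot overlap with load misses that can. An ILP processor with nonblocking loads and out-of-order scheduling can keep several independent misses outstanding at once, but it gains little when each miss address depends on the result of the previous miss.

/* Hypothetical illustration: how much load-miss overlap an ILP
 * processor can extract depends on the application's access pattern.
 * Assume both data structures are much larger than the caches, so
 * most loads miss. */
#include <stddef.h>

typedef struct node { struct node *next; double val; } node_t;

/* Dependent misses: the address of each load comes from the previous
 * load, so even an out-of-order processor with nonblocking loads must
 * serialize the misses; their latencies cannot overlap. */
double sum_list(const node_t *p) {
    double sum = 0.0;
    for (; p != NULL; p = p->next)
        sum += p->val;
    return sum;
}

/* Independent misses: addresses are computable in advance, so an ILP
 * processor can issue several misses to its nonblocking cache at the
 * same time and overlap their latencies.  A single-issue processor
 * with blocking loads stalls on every miss and sees no such overlap. */
double sum_array(const double *a, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

Applications dominated by access patterns of the first kind offer exactly the insufficient overlap opportunities described above.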
Software-controlled nonbinding prefetching has been shown to be an effective technique for hiding memory latency in simple processor-based shared-memory systems [6]. Section 4 analyzes the interaction between software prefetching and ILP techniques in shared-memory multiprocessors. We find that, compared to previous-generation systems, increased late prefetches and increased contention for resources cause software prefetching to be less effective in reducing memory stall time in ILP-based systems. Thus, even after adding software prefetching, most of our applications remain largely memory bound on the ILP-based system.
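As a concrete illustration of the technique, the sketch below adds software-controlled nonbinding prefetches to a simple loop using the GCC builtin __builtin_prefetch (our example; the paper's applications were annotated for a simulated SPARC system, and PF_DIST is an assumed tuning parameter, not a value from the paper). A prefetch issued too few iterations ahead arrives late: the line is still in flight when the demand access issues, so the access stalls anyway.

/* Hypothetical sketch of software prefetching with a prefetch
 * distance.  The prefetch is nonbinding: it only moves the line into
 * the cache, and the later demand access still obeys the coherence
 * protocol. */
#include <stddef.h>

#define PF_DIST 16  /* assumed distance in iterations; it must cover
                       the miss latency for prefetches to be timely */

void scale(double *restrict x, const double *restrict y,
           double a, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (i + PF_DIST < n) {
            __builtin_prefetch(&y[i + PF_DIST], 0, 1);  /* for reading */
            __builtin_prefetch(&x[i + PF_DIST], 1, 1);  /* for writing */
        }
        x[i] = a * y[i];
    }
}

Note that an ILP processor executes the loop body in fewer cycles than a simple processor, so the same distance in iterations covers less absolute time; prefetches that were timely on a previous-generation system can therefore arrive late on an ILP system, consistent with the increased late prefetches reported above.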
Overall, our results suggest that, compared to previous-generation shared-memory systems, ILP-based systems have a greater need for additional techniques to tolerate or reduce memory latency. Specific techniques motivated by our results include clustering of load misses in the applications, to increase opportunities for load misses to overlap with each other, and techniques such as producer-initiated communication that reduce latency to make prefetching more effective (Section 5).

¹ This paper combines results from two previous conference papers [11], [12], using a common set of system parameters, a more aggressive MESI (vs. MSI) cache-coherence protocol, a more aggressive compiler (the better of SPARC SC 4.2 and gcc 2.7.2 for each application, rather than gcc 2.5.8), and full simulation of private memory references.

2 METHODOLOGY

2.1 Simulated Architectures

To determine the impact of ILP techniques on multiprocessor performance, we compare two systems, ILP and Simple, that are equivalent in every respect except the processor used. The ILP system uses state-of-the-art ILP processors, while the Simple system uses simple processors (Section 2.1.1). We compare the ILP and Simple systems not to suggest any architectural trade-offs but, rather, to understand how aggressive ILP techniques impact multiprocessor performance. Therefore, the two systems have identical clock rates and include identical aggressive memory and network configurations suitable for the ILP system (Section 2.1.2). Fig. 1 summarizes all the system parameters.

Fig. 1. System parameters.

2.1.1 Processor Models

The ILP system uses state-of-the-art processors that include multiple issue, out-of-order (dynamic) scheduling, nonblocking loads, and speculative execution. The Simple system uses previous-generation simple processors with single issue, in-order (static) scheduling, and blocking loads, and represents commonly studied shared-memory systems. Since we did not have access to a compiler that schedules instructions for our in-order simple processor, we assume single-cycle functional unit latencies (as also assumed by most previous simple-processor-based shared-memory studies). Both processor models include support for software-controlled nonbinding prefetching to the L1 cache.

2.1.2 Memory Hierarchy and Multiprocessor Configuration

We simulate a hardware cache-coherent, nonuniform memory access (CC-NUMA) shared-memory multiprocessor using an invalidation-based, four-state MESI directory coherence protocol [4]. We model release consistency because previous studies have shown that it achieves the best performance [9].

The processing nodes are connected using a two-dimensional mesh network. Each node includes a processor, two levels of caches, a portion of the global shared memory and directory, and a network interface. A split-transaction bus connects the network interface, directory controller, and the rest of the system node. Both caches use a write-allocate, write-back policy. The cache sizes are chosen commensurate with the input sizes of our applications, following the methodology described by Woo et al. [14]. The primary working sets for our applications fit in the L1 cache, while the secondary working sets do not fit in the L2 cache. Both caches are nonblocking and use miss status holding registers (MSHRs) [3] to store information on outstanding misses and to coalesce multiple requests to the same cache line. All multiprocessor results reported in this paper use a configuration with eight nodes.

2.2 Simulation Environment

We use RSIM, the Rice Simulator for ILP Multiprocessors, to model the systems studied [10]. RSIM is an execution-driven simulator that models the processor pipelines, memory system, and interconnection network in detail, including contention at all resources. It takes SPARC application executables as input. To speed up our simulations, we assume that all instructions hit in the instruction cache. This assumption is reasonable since all our applications have very small instruction footprints.

... partly hidden by the out-of-order scheduling in the ILP processor.²

3 IMPACT OF ILP TECHNIQUES ON PERFORMANCE

This section analyzes the impact of ILP techniques on multiprocessor performance by comparing the Simple and ILP systems, without software prefetching.

3.1 Overall Results

Figs. 3 ...
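As a concrete illustration of the MSHR coalescing described in Section 2.1.2, here is a minimal sketch in C (our illustration under assumed parameters: eight MSHRs, 64-byte lines, and eight coalesced targets per MSHR; it is not RSIM's implementation). Coalescing prevents multiple accesses to the same cache line from generating multiple memory requests, while a full MSHR file forces further misses to stall, one source of the resource contention discussed in this paper.

/* Hypothetical MSHR lookup-and-coalesce step for a nonblocking cache,
 * in the spirit of [3].  All names and sizes are assumptions. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_MSHRS   8
#define LINE_SHIFT  6  /* 64-byte cache lines */
#define MAX_TARGETS 8  /* demand accesses that can share one MSHR */

typedef struct {
    bool     valid;
    uint64_t line_addr;    /* block address of the outstanding miss */
    int      num_targets;  /* accesses waiting on this line */
} mshr_t;

static mshr_t mshrs[NUM_MSHRS];

/* Returns true if the miss was accepted: either coalesced into an
 * existing MSHR (no new memory request) or given a free MSHR (a new
 * request is issued).  Returns false if the cache must stall the
 * access because no MSHR or target slot is available. */
bool handle_miss(uint64_t byte_addr) {
    uint64_t line = byte_addr >> LINE_SHIFT;
    int free_slot = -1;
    for (int i = 0; i < NUM_MSHRS; i++) {
        if (mshrs[i].valid && mshrs[i].line_addr == line) {
            if (mshrs[i].num_targets == MAX_TARGETS)
                return false;        /* target list full: stall */
            mshrs[i].num_targets++;  /* coalesce with in-flight miss */
            return true;
        }
        if (!mshrs[i].valid && free_slot < 0)
            free_slot = i;
    }
    if (free_slot < 0)
        return false;                /* all MSHRs busy: stall */
    mshrs[free_slot].valid = true;
    mshrs[free_slot].line_addr = line;
    mshrs[free_slot].num_targets = 1;
    /* ...issue the memory/coherence request for this line here... */
    return true;
}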
