Integrating Message-Passing and Shared-Memory: Early Experience

16 pages, PDF, 1020 KB

David Kranz, Kirk Johnson, Anant Agarwal, John Kubiatowicz, and Beng-Hong Lim
Laboratory for Computer Science
Massachusetts Institute of Technology
Cambridge, MA 02139

Abstract

This paper discusses some of the issues involved in implementing a shared-address space programming model on large-scale, distributed-memory multiprocessors. While such a programming model can be implemented on both shared-memory and message-passing architectures, we argue that the transparent, coherent caching of global data provided by many shared-memory architectures is of crucial importance. Because message-passing mechanisms are much more efficient than shared-memory loads and stores for certain types of interprocessor communication and synchronization operations, however, we argue for building multiprocessors that efficiently support both shared-memory and message-passing mechanisms. We describe an architecture, Alewife, that integrates support for shared-memory and message-passing through a simple interface; we expect the compiler and runtime system to cooperate in using the appropriate hardware mechanisms that are most efficient for specific operations. We report on both integrated and exclusively shared-memory implementations of our runtime system and two applications. The integrated runtime system drastically cuts down the cost of communication incurred by the scheduling, load balancing, and certain synchronization operations. We also present preliminary performance results comparing the two systems.
1 Introduction

Researchers in parallel computing generally agree that it is important to support a shared-address space or shared-memory programming model: programmers should not bear the responsibility for orchestrating all interprocessor communication through explicit messages. Implementations of this programming model can be divided into two broad areas depending on the target architecture. With message-passing architectures, the shared-address space is typically synthesized by low-level system software, while traditional shared-memory architectures offload this functionality into specialized hardware.

In the following section, we discuss some of the issues involved in implementing a shared-address space programming model on large-scale, distributed-memory multiprocessors. We provide a brief overview and comparison of conventional implementation strategies for message-passing and shared-memory architectures. We argue that the transparent, coherent caching of global data provided by many shared-memory architectures is of fundamental importance, because it affords low-overhead, fine-grained data sharing. We point out, however, that some types of communication are handled less efficiently via shared-memory than they might be via message-passing. Building upon this, we suggest that it is reasonable to expect that future large-scale multiprocessors will be built upon what are essentially message-passing communication substrates. In turn, this suggests that multiprocessor architectures might be designed such that processors are able to communicate with one another via either shared-memory or message-passing interfaces, using whichever is likely to be most efficient for the communication in question.

In fact, the MIT Alewife machine [1] does exactly that: interprocessor communication can be effected either through a sequentially-consistent shared-memory interface or by way of a messaging mechanism as efficient as those found in many present-day message-passing architectures. The bulk of this paper relates early experience we've had integrating shared-memory and message-passing in the Alewife machine. Section 2 overviews implementation schemes for shared-memory. Section 3 describes the basic Alewife architecture, paying special attention to the integration of the message-passing and shared-memory interfaces. Section 4 provides results comparing purely shared-memory and message-passing implementations of several components of the Alewife runtime system and two applications. We show that a hybrid thread scheduler utilizing both shared-memory and message-based communication performs up to a factor of two better than an implementation using only shared-memory for communication. Section 5 discusses related work. Finally, Section 6 summarizes the major points of the paper and outlines directions for future research.

1.1 Contributions of this Paper

In this paper, we argue for building multiprocessors that efficiently support both shared-memory and message-passing mechanisms for interprocessor communication. We argue that efficient support for fine-grained data sharing is fundamental, pointing out that traditional message-passing architectures are unable to provide such support, especially for applications with irregular and dynamic communication behavior. Similarly, we point out that while shared-memory hardware can support such communication, it is not a panacea either; we identify several other communication styles which are handled less efficiently by shared-memory than by message-passing architectures. Finally, we (i) describe how shared-memory and message-passing communication is integrated in the Alewife hardware and software systems; (ii) present some preliminary performance data obtained with a simulator of Alewife comparing the performance of software systems using both message-passing and shared-memory against implementations using exclusively shared-memory; and (iii) provide analysis and early conclusions based upon the performance data.

2 Implementing a Shared-Address Space

Implementations of a shared-address space programming model can be divided into two broad areas depending on the underlying system architecture. On message-passing architectures, a shared-address space is typically synthesized through some combination of compilers and low-level system software. In traditional shared-memory architectures, this shared-address space functionality is provided by specialized hardware. This section gives a brief comparison of these implementation schemes and their impact on application performance.

    shared-address-space-reference(location)
        if currently-cached?(location) then
            // satisfy request from cache
            load-from-cache(location)
        elsif is-local-address?(location) then
            // satisfy request from local memory
            load-from-local-memory(location)
        else
            // must load from remote memory; send remote
            // read request message and invoke any actions
            // required to maintain cache coherency
            load-from-remote-memory(location)

Figure 1: Anatomy of a shared-address space reference.
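To make the dispatch in Figure 1 concrete, the sketch below shows one way a software shared-address-space layer might perform a read on a machine with no shared-memory hardware. It is a minimal illustration rather than Alewife's implementation: the address encoding, the tiny direct-mapped cache, and remote_read_request (which stands in for sending an actual read-request message to the home node) are all assumptions made for the example, and the whole "machine" is simulated inside a single process.

    #include <stdint.h>
    #include <stdio.h>

    #define NODES      4
    #define WORDS      1024                 /* words of global memory per node */
    #define CACHE_SIZE 64                   /* tiny per-node direct-mapped cache */

    typedef uint32_t gaddr_t;               /* global address: node * WORDS + offset */

    static uint32_t memory[NODES][WORDS];   /* physically distributed memory */
    static struct cache_line { gaddr_t tag; uint32_t data; int valid; }
        cache[NODES][CACHE_SIZE];           /* per-node cache of global data */

    static int addr_node(gaddr_t a) { return (int)(a / WORDS); }
    static int addr_off(gaddr_t a)  { return (int)(a % WORDS); }

    /* Stand-in for sending a read-request message to the home node and
       waiting for the reply (plus any coherence actions that would follow). */
    static uint32_t remote_read_request(int home, gaddr_t a)
    {
        return memory[home][addr_off(a)];
    }

    static uint32_t shared_read(int self, gaddr_t a)
    {
        int line = (int)(a % CACHE_SIZE);
        if (cache[self][line].valid && cache[self][line].tag == a)
            return cache[self][line].data;            /* satisfied from cache   */

        uint32_t v;
        if (addr_node(a) == self)                     /* local/remote check     */
            v = memory[self][addr_off(a)];            /* satisfy from local mem */
        else
            v = remote_read_request(addr_node(a), a); /* remote: send a message */

        cache[self][line].tag   = a;
        cache[self][line].data  = v;
        cache[self][line].valid = 1;
        return v;
    }

    int main(void)
    {
        memory[2][17] = 42;                           /* word 17 on node 2      */
        gaddr_t a = 2 * WORDS + 17;
        printf("first read from node 0:  %u\n", (unsigned)shared_read(0, a)); /* remote miss */
        printf("second read from node 0: %u\n", (unsigned)shared_read(0, a)); /* cache hit   */
        return 0;
    }

The point of the sketch is that every check in Figure 1 becomes explicit instructions when there is no hardware support; on a shared-memory machine, the tag comparison and local/remote resolution happen in hardware on every ordinary load.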
We conclude that the efficient fine-grained data sharing afforded by hardware implementations is of fundamental importance for some applications, but also point out that some types of communication are handled less efficiently via shared-memory hardware than they could be via explicit message-passing. In either case, we assume that the physical memory is distributed across the processors.

2.1 Anatomy of a Memory Reference

When executing a multiprocessor application written assuming a shared-address space programming model, the actions indicated by the pseudocode in Figure 1 must be taken for every shared-address space reference. The pseudocode assumes that locally cached copies of locations can exist. (If shared-data caching is not supported, the first test is eliminated.) The most important parts of the code in Figure 1 are the two "local/remote" checks.

The local/remote check is the essence of the distinction between shared-memory and message-passing architectures. In the former, the instruction to reference memory is the same whether the object referenced happens to be in local or remote memory; the local/remote checks are facilitated by hardware support to determine whether a location has been cached (cache tags and comparison logic) or, if not, whether it resides in local or remote memory (local/remote address resolution logic). If the data is remote and not cached, a message will be sent to the remote node to access the data. Although the first test is unnecessary if local caching is disallowed, the second test is still required to implement a shared-address space programming model. Because shared-memory systems provide hardware support for detecting [...]

[...] others that are static. Data sharing in programs can also be fine- or coarse-grained.

Through a great deal of hard work, a programmer can often eliminate the overhead incurred by the local/remote checks shown in Figure 1 by sending explicit messages. The fixed costs of such operations are typically high enough, however, that for efficient execution the messages must be made fairly large. The result is that the granularity at which data sharing can be exploited efficiently is quite coarse. For static programs with sufficiently large-grained data sharing, these explicit messages are like assembly language: if done well, they always provide the best performance. However, as with assembly language, it is not at all clear that most programmers are better off if forced to orchestrate all communication through explicit messages. For static applications which share data at too fine a grain, even if the overhead related to local/remote checks can be eliminated, performance suffers because messages typically aren't large enough to amortize any fixed overheads inherent in message-based communication. It is worth noting, however, that for architectures like iWarp [4], which in effect support extremely low-overhead "messaging" for static communication patterns, this is not a problem.
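The amortization argument can be made concrete with a simple cost model: if every message carries a fixed overhead plus a per-word transfer cost, then sharing n words by message costs overhead/n + per-word cycles for each word, which approaches the cost of a hardware cache-line fill only when n is large. The sketch below evaluates this model; all of the constants are assumed purely for illustration and do not describe any particular machine.

    #include <stdio.h>

    /* Assumed, illustrative costs in processor cycles; not measurements. */
    #define MSG_OVERHEAD_CYCLES  500.0   /* fixed cost to build, send, receive a message */
    #define PER_WORD_CYCLES        2.0   /* incremental cost per word carried            */
    #define LINE_FILL_CYCLES      60.0   /* remote miss through coherent caches          */
    #define WORDS_PER_LINE         4.0

    int main(void)
    {
        double hw_per_word = LINE_FILL_CYCLES / WORDS_PER_LINE;
        printf("coherent shared-memory hardware: %.1f cycles/word\n\n", hw_per_word);

        printf("%10s  %16s\n", "msg words", "cycles per word");
        for (int words = 1; words <= 4096; words *= 4) {
            double per_word = MSG_OVERHEAD_CYCLES / words + PER_WORD_CYCLES;
            printf("%10d  %16.1f%s\n", words, per_word,
                   per_word <= hw_per_word ? "   <- overhead amortized" : "");
        }
        return 0;
    }

Under these assumed numbers, a one-word message costs hundreds of cycles per word while a message of a few hundred words approaches the per-word cost of hardware cache-line fills, which is exactly why explicit messaging favors coarse-grained sharing and why hardware support matters for fine-grained sharing.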