Performance Evaluation of Inter-Thread Communication Mechanisms on Multicore/Multithreaded Architectures

Massimiliano Meneghin (IBM Research Lab, Dublin, Ireland)
Davide Pasetto (IBM Research Lab, Dublin, Ireland)
Hubertus Franke (IBM TJ Watson, Yorktown Heights, NY)
Fabrizio Petrini (IBM TJ Watson, Yorktown Heights, NY)
Jimi Xenidis (IBM Research Lab, Austin, TX)

IBM Technical Report

ABSTRACT

The three major solutions for increasing the nominal performance of a CPU are: multiplying the number of cores per socket, expanding the embedded cache memories, and using multi-threading to reduce the impact of the deep memory hierarchy. Systems with tens or hundreds of hardware threads, all sharing a cache-coherent UMA or NUMA memory space, are today the de-facto standard. While these solutions can easily provide benefits in a multi-program environment, they require recoding of applications to leverage the available parallelism. Application threads must synchronize and exchange data, and the overall performance is heavily influenced by the overhead added by these mechanisms, especially as developers try to exploit finer grain parallelism to be able to use all available resources.

This paper examines two fundamental synchronization mechanisms - locks and queues - in the context of multi- and many-core systems with tens of hardware threads. Locks are typically used in non-streaming environments to synchronize access to shared data structures, while queues are mainly used as a support for streaming computational models. The analysis examines how the algorithmic aspects of the implementation, the interaction with the operating system, and the availability of supporting machine language mechanisms contribute to the overall performance. Experiments are run on Intel X86 and IBM PowerEN, a novel highly multi-threaded user-space oriented solution, and focus on fine grain parallelism - where the work performed on each data item requires only a handful of microseconds. The results presented here constitute both a selection tool for software developers and a data point for CPU architects.

General Terms

Performance, Experimentation

1. INTRODUCTION

In the past, advances in transistor count and fabrication technology have led to increased performance, typically proportional to improvements in clock rates. However, this trend has slowed due to limitations arising from power consumption, design complexity, and wire delays. In response, designers have turned to multi-core and multi-thread configurations that incorporate several cores on one or more dies. While multiple cores can readily support throughput applications, such as web servers or map-reduce searches that are embarrassingly parallel, threaded applications that operate on a shared address space to complete a unique task demand efficient synchronization and communication mechanisms. Efficient and low overhead core-to-core communication is critical for many solution domains, with countless examples in network computing, business analytics, financial markets, biology, and high-performance computing in general. As the number of cores increases, the desired grain of parallelism becomes smaller, and understanding the overhead and tradeoffs of the core-to-core communication mechanism is becoming increasingly important.

Most threaded applications have concurrent needs to access resources that can only be shared under a logical sequential consistency model. The way these contentions are resolved directly affects the system's timeliness properties. Several mechanisms are available today, and they can broadly be classified into: (1) lock-based schemes and (2) non-blocking schemes, including wait-free protocols [16] and lock-free protocols [12] [2]. Lock-based protocols, typically used in multi-threaded applications that do not follow a stream computing model, serialize accesses to shared objects by using mutual exclusion, resulting in reduced concurrency [4]. Many lock-based protocols also incur additional run-time overhead due to scheduler activations that occur when activities request locked objects.

Concurrent lock-free queues for inter-thread communication have been widely studied in the literature, since they are the basic building block of stream computing solutions. These algorithms are normally based on atomic operations, and modern processors provide all the necessary hardware primitives, such as atomic compare-and-set (CAS) and load-linked/store-conditional (LL/SC). All these primitives implicitly introduce synchronization at the hardware level and are often an order of magnitude slower, even for uncontested cache-aligned and cache-resident words, than primitive stores. With the exception of Lamport's queue [13], the focus of prior art has been on multiple-producer and/or multiple-consumer (MP/MC) queue variants. These general purpose queues have to date been limited in performance due to the high overheads of their implementations. Additionally, general purpose MP/MC variants often use linked lists, which require indirection, exhibit poor cache locality, and require additional synchronization under weak consistency models [15].

While an extensive amount of work has been performed on locks and queues, these being fundamental building blocks for threaded applications, a comprehensive comparison of their runtime performance characteristics and scalability on modern architectures is still missing. This paper addresses the following open questions:

  • Can modern CPU architectures effectively execute fine grain parallel programs that utilize all available hardware resources in a coordinated way?
  • What is the overhead and the scalability of the supporting synchronization mechanisms?
  • Which synchronization algorithm and Instruction Set Architecture level support is best for fine grain parallel programs?

The paper contains an evaluation of different types of locks and queues on large multi-core multi-threaded systems and focuses on fine grain parallelism, where the amount of work performed on each "data item" is in the order of microseconds. The implementations cover a range of different solutions, from Operating System based to fully user-space based, and look at their behaviour as the load increases. Moreover, a new Instruction Set Architecture solution for low overhead user-space memory waiting is presented, and its usage is evaluated both from a performance and an overhead reduction point of view. The next section describes the two test systems and briefly details the hardware synchronization mechanisms available in their CPU cores. Section 3 briefly describes the locking algorithms considered, while section 4 examines their runtime behavior; the following section 5 details the queueing strategies implemented; these are then evaluated in section 6. Section 7 contains some concluding remarks.

2. TARGET PLATFORMS

This paper examines the performance of synchronization mechanisms on two systems whose characteristics are summarized in Table 1; both systems run the Linux operating system.

    System             Nehalem   PowerEN
    Sockets            2         1
    Cores              6         16
    Threads per core   2         4
    Total threads      24        64

    Table 1: Systems under test.

The Intel Xeon 5570 (Nehalem-EP) is a 45nm quad-core processor whose high-level overview is shown in Figure 1. Designed for general purpose processing, each core has private L1 and L2 caches, while the L3 cache is shared across the cores on a socket. Each core supports Simultaneous Multi-Threading (SMT), allowing two threads to share processing resources in parallel on a single core. Nehalem-EP has a 32KB L1, a 256KB L2 and an 8MB L3 cache. As opposed to older processor configurations, this architecture implements an inclusive last level cache. Each cache line contains "core valid bits" that specify the state of the cache line across the processor. A set bit corresponding to a core indicates that the core may contain a copy of this line. When a core requests a cache line that is contained in the L3 cache, the bits specify which cores to snoop for the latest copy of this line, thus reducing the snoop traffic. The MESIF [10] cache-coherence protocol extends the native MESI [7] protocol to include "forwarding". This feature enables forwarding of unmodified data that is shared by two cores to a third one.

Figure 1: High-Level Overview of Intel Nehalem (per-core 32 KB L1D and 256 KB L2 caches, an 8 MB L3 cache shared per socket, integrated DDR3 memory controllers and QuickPath Interconnect).

Atomic operations on the Intel architecture are implemented using a LOCK prefix: the LOCK prefix can be applied to a number of instructions and has the effect of locking the system bus (sometimes only the local cache in recent architectures) to ensure exclusive access to the shared resource. In 2003, Intel first introduced, as part of the SSE3 instruction set extension, the MONITOR and MWAIT instructions. The MONITOR operation sets up an address range that is monitored by the hardware for special events to occur, and MWAIT waits for that event or for a general interrupt to happen. One possible use is the monitoring of store events: a spin lock's wait operation can be implemented by arming the MONITOR facility and executing the MWAIT instruction, which puts the hardware thread into an "implementation optimized state", generally implying that this hardware thread does not dispatch further instructions. When the thread holding the critical section stores to the lock variable, releasing the lock, the waiting hardware thread will be woken up. The drawback of the MONITOR/MWAIT instructions is that they are privileged in the X86 Instruction Set Architecture and thus cannot be executed by application software.
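The CAS primitive mentioned above can be exercised portably through C11 atomics, where on x86 a compare-exchange compiles to a LOCK-prefixed CMPXCHG and on Power to an LL/SC (lwarx/stwcx.) loop. As a minimal illustration, not taken from the paper, the hypothetical helper below implements a fetch-and-add with a CAS retry loop, the basic pattern behind the lock-free algorithms discussed:

```c
#include <stdatomic.h>

/* Hypothetical helper (not from the paper): add 'delta' to *p using a
 * compare-and-set retry loop and return the previous value. On x86 the
 * compare-exchange maps to LOCK CMPXCHG; on Power, to an LL/SC loop. */
static int fetch_add_cas(atomic_int *p, int delta) {
    int old = atomic_load_explicit(p, memory_order_relaxed);
    /* On failure, 'old' is reloaded with the current value and we retry. */
    while (!atomic_compare_exchange_weak_explicit(
            p, &old, old + delta,
            memory_order_acq_rel, memory_order_relaxed))
        ;
    return old;
}
```

The weak variant may fail spuriously (e.g. on LL/SC machines), which is harmless inside a retry loop; this is exactly the hardware-level synchronization cost the text refers to.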
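The lock-based mutual exclusion discussed in the introduction can be sketched with a minimal test-and-set spin lock; this is an illustrative example under C11 atomics, not one of the lock implementations evaluated in the paper. On x86 the atomic exchange compiles to a LOCK-prefixed XCHG:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Minimal test-and-set spin lock (illustrative sketch). */
typedef struct { atomic_bool held; } spinlock_t;

static void spin_lock(spinlock_t *l) {
    /* Spin until the exchange observes the lock free (false). */
    while (atomic_exchange_explicit(&l->held, true, memory_order_acquire))
        ; /* busy wait; a production lock would add a pause/backoff hint */
}

static void spin_unlock(spinlock_t *l) {
    /* A plain release store; the coherence traffic it generates is what
     * wakes the spinning (or MWAIT-ing) waiters described in the text. */
    atomic_store_explicit(&l->held, false, memory_order_release);
}
```

The release store in `spin_unlock` is the "store to the lock variable" that the MONITOR/MWAIT mechanism above would detect.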
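Lamport's queue [13], the single-producer/single-consumer exception noted above, avoids LOCK-prefixed atomics on its fast path: correctness relies only on ordered loads and stores of the head and tail indices. A sketch of a Lamport-style ring buffer follows; the names, the capacity, and the memory-order choices are illustrative assumptions, not the paper's implementation:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define QSIZE 64  /* capacity; must be a power of two */

/* Lamport-style SPSC ring buffer: one producer advances 'tail',
 * one consumer advances 'head'; no CAS needed on the fast path. */
typedef struct {
    void         *buf[QSIZE];
    atomic_size_t head;   /* next slot the consumer reads  */
    atomic_size_t tail;   /* next slot the producer writes */
} spsc_queue_t;

static bool spsc_push(spsc_queue_t *q, void *item) {
    size_t t = atomic_load_explicit(&q->tail, memory_order_relaxed);
    size_t h = atomic_load_explicit(&q->head, memory_order_acquire);
    if (t - h == QSIZE)                 /* full */
        return false;
    q->buf[t & (QSIZE - 1)] = item;
    /* Release publishes the slot write before the new tail is visible. */
    atomic_store_explicit(&q->tail, t + 1, memory_order_release);
    return true;
}

static bool spsc_pop(spsc_queue_t *q, void **item) {
    size_t h = atomic_load_explicit(&q->head, memory_order_relaxed);
    size_t t = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (t == h)                         /* empty */
        return false;
    *item = q->buf[h & (QSIZE - 1)];
    atomic_store_explicit(&q->head, h + 1, memory_order_release);
    return true;
}
```

The contiguous array also sidesteps the indirection and poor cache locality attributed to linked-list MP/MC variants [15], though head and tail still bounce between the producer's and consumer's caches.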
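Because MONITOR/MWAIT is privileged on the x86 ISA described here, user-space code that wants to wait on a memory location typically falls back to a plain load loop with a PAUSE hint. The helper below is a hypothetical sketch of that fallback (it assumes an x86 compiler providing `_mm_pause`), not a mechanism from the paper:

```c
#include <stdatomic.h>
#include <immintrin.h>  /* _mm_pause; assumes an x86 target */

/* Hypothetical user-space fallback for MONITOR/MWAIT: spin on a load
 * until *addr holds 'expected'. PAUSE tells the core this is a
 * spin-wait loop, reducing pipeline and power cost, but unlike MWAIT
 * the hardware thread keeps dispatching instructions. */
static void wait_for_store(atomic_int *addr, int expected) {
    while (atomic_load_explicit(addr, memory_order_acquire) != expected)
        _mm_pause();
}
```

This contrast is the motivation for the low-overhead user-space memory-waiting ISA support the paper evaluates: MWAIT quiesces the hardware thread, while the loop above merely throttles it.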
