
Università di Pisa, Dipartimento di Informatica
Technical Report: TR-10-20
arXiv:1012.1824v1 [cs.DS] 8 Dec 2010
November 30, 2010
ADDRESS: Largo B. Pontecorvo 3, 56127 Pisa, Italy. TEL: +39 050 2212700 FAX: +39 050 2212726

Single-Producer/Single-Consumer Queues on Shared Cache Multi-Core Systems

Massimo Torquati
Computer Science Department, University of Pisa, Italy.
Email: [email protected]

Abstract

Using efficient point-to-point communication channels is critical for implementing fine-grained parallel programs on modern shared-cache multi-core architectures. This report discusses in detail several implementations of the wait-free Single-Producer/Single-Consumer queue (SPSC), and presents a novel and efficient algorithm for the implementation of an unbounded wait-free SPSC queue (uSPSC). The correctness proof of the new algorithm, and several performance measurements based on simple synthetic benchmarks and microbenchmarks, are also discussed.

1 Introduction

This report focuses on Producer-Consumer coordination, and in particular on Single-Producer/Single-Consumer (SPSC) coordination. The producer and the consumer are concurrent entities, i.e. processes or threads. The first one produces items, placing them in a shared structure, whereas the second one consumes these items by removing them from the shared structure. Different kinds of shared data structures provide different fairness guarantees. Here, we consider a queue data structure that provides First-In-First-Out fairness (FIFO queue), and we assume that the Producer and the Consumer share a common address space, that is, we assume threads as concurrent entities.
At the end of the '70s, Leslie Lamport proved that, under the Sequential Consistency memory model [10], a Single-Producer/Single-Consumer circular buffer¹ can be implemented without using explicit synchronization mechanisms between the producer and the consumer [9]. Lamport's circular buffer is a wait-free algorithm. A wait-free algorithm is guaranteed to complete after a finite number of steps, regardless of the timing behavior of other operations. Differently, a lock-free algorithm guarantees only that, after a finite number of steps, some operation completes. Wait-freedom is a stronger condition than lock-freedom, and both conditions are strong enough to preclude the use of blocking constructs such as locks.

With minimal modifications, Lamport's wait-free SPSC algorithm is correct also under Total-Store-Order and other weaker consistency models, but it fails under weakly ordered memory models such as those used in IBM's Power and Intel's Itanium architectures. On such systems, expensive memory barrier (also known as memory fence) instructions are needed in order to ensure correct load/store instruction ordering.

Maurice Herlihy in his seminal paper [5] formally proves that a few simple hardware atomic instructions are enough for building any wait-free data structure for any number of concurrent entities. The simplest and most widely used primitive is compare-and-swap (CAS). Over the years, many works have been proposed with a focus on lock-free/wait-free Multiple-Producer/Multiple-Consumer (MPMC) queues [13, 8, 12]. They use CAS-like primitives in order to guarantee a correct implementation.

Unfortunately, the CAS-like hardware primitives used in these implementations introduce non-negligible overhead on modern shared-cache architectures, so even the best MPMC queue implementation is not able to obtain better performance than Lamport's circular buffer in cases with just 1 producer and 1 consumer.

¹ A circular buffer can be used to implement a FIFO queue.
FIFO queues are typically used to implement streaming networks [2, 14]. Streams are directional channels of communication that behave as FIFO queues. In many cases streams are implemented using a circular buffer instead of a pointer-based dynamic queue in order to avoid excessive memory usage. However, when complex streaming networks with multiple nested cycles have to be implemented, the use of bounded-size queues as the basic data structure requires more complex and costly communication protocols in order to avoid deadlock situations. Unbounded-size queues are particularly interesting in these complex cases, and in all the cases where it is extremely difficult to choose a suitable queue size.

As we shall see, it is possible to implement a wait-free unbounded SPSC queue by using Lamport's algorithm and dynamic memory allocation. Unfortunately, dynamic memory allocation/deallocation is costly, because allocators use locks to protect their internal data structures, and hence introduce costly memory barriers.

This report presents an efficient implementation of an unbounded wait-free SPSC FIFO queue which makes use only of a modified version of Lamport's circular buffer, without requiring any additional memory barrier and, at the same time, minimizing the use of dynamic memory allocation. The novel unbounded queue implementation presented here is able to speed up producer-consumer coordination and, in turn, provides the basic mechanisms for implementing complex streaming networks of cooperating entities.

The remainder of this paper is organized as follows. Section 2 recalls

     1 bool push(data) {                     8 bool pop(data) {
     2   if ((tail+1 mod N)==head)           9   if (head==tail)
     3     return false; // buffer full     10     return false; // buffer empty
     4   buffer[tail]=data;                 11   data = buffer[head];
     5   tail = tail+1 mod N;               12   head = head+1 mod N;
     6   return true;                       13   return true;
     7 }                                    14 }

Figure 1: Lamport's circular buffer push and pop methods pseudo-code. At the beginning head=tail=0.
     1 bool push(data) {                    10 bool pop(data) {
     2   if (buffer[tail]==BOTTOM) {        11   if (buffer[head]!=BOTTOM) {
     3     buffer[tail]=data;               12     data = buffer[head];
     4     tail = tail+1 mod N;             13     buffer[head] = BOTTOM;
     5     return true;                     14     head = head+1 mod N;
     6   }                                  15     return true;
     7   return false; // buffer full      16   }
     8 }                                    17   return false; // buffer empty
     9                                      18 }

Figure 2: P1C1-buffer pseudo-code. Modified version of the code presented in [6]. The buffer is initialized to BOTTOM and head=tail=0 at the beginning.

Lamport's algorithm and also shows the necessary modifications to make it work efficiently on modern shared-cache multiprocessors. Section 3 discusses the extension of Lamport's algorithm to the unbounded case. Section 4 presents the new implementation with a proof of correctness. Section 5 presents some performance results, and Sec. 6 concludes.

2 Lamport's circular buffer

In Fig. 1 the pseudo-code of the push and pop methods of Lamport's circular buffer algorithm is sketched. The buffer is implemented as an array of N entries. Lamport proved that, under Sequential Consistency [10], no locks are needed around the pop and push methods, thus resulting in a concurrent wait-free queue implementation. If the Sequential Consistency requirement is relaxed, it is easy to see that Lamport's algorithm fails. This happens, for example, with the PowerPC architecture, where write-to-write relaxation is allowed (W → W, using the same notation as in [1]), i.e. two distinct writes to different memory locations may be executed out of program order. In fact, the consumer may pop a value out of the buffer before the data is effectively written into it, because the update of the tail pointer (modified only by the producer) can be seen by the consumer before the producer writes in the tail position of the buffer. In this case, the test at line 9 of Fig. 1 would be passed even though buffer[head] contains stale data.
A few simple modifications to the basic Lamport algorithm allow correct execution even under weakly ordered memory consistency models. To the best of our knowledge, such modifications have been presented and formally proved correct for the first time by Higham and Kavalsh in [6]. The idea mainly consists in tightly coupling control and data information into a single buffer operation by using a known value (called BOTTOM), which cannot be used by the application. The BOTTOM value is used to indicate an empty buffer slot, which in turn indicates available room in the buffer to the producer and the empty-buffer condition to the consumer. With the circular buffer implementation sketched in Fig. 2, the consistency problem described for Lamport's algorithm cannot occur, provided that the generic store buffer[i]=data is seen in its entirety by a processor, or not at all, i.e. a single memory store operation is executed atomically. To the best of our knowledge, this condition is satisfied in any modern general-purpose processor for aligned memory-word stores.

As shown by Giacomoni et al. in [3], Lamport's circular buffer algorithm results in cache-line thrashing on shared-cache multiprocessors, as the head and tail buffer pointers are shared between consumer and producer. Modifications of the pointers, at lines 5 and 12 of Fig. 1, result in cache-line invalidation (or update) traffic among processors, thus introducing unexpected overhead. With the implementation in Fig. 2, the head and tail buffer pointers are always in the local cache of the consumer and the producer respectively, without incurring cache-coherence overhead, since they are not shared.

When transferring references through the buffer rather than plain data values, a memory fence is required on processors with a weak memory consistency model, in which stores can be executed out of program order.
In fact, without a memory fence, the write of the reference into the buffer could be visible to the consumer before the referenced data has been committed in memory. In the code in Fig. 2, a write-memory-barrier (WMB) must be inserted between lines 2 and 3, i.e. just before the reference is stored into the buffer.