The Data Locality of Work Stealing

Umut A. Acar, School of Computer Science, Carnegie Mellon University, [email protected]
Guy E. Blelloch, School of Computer Science, Carnegie Mellon University, [email protected]
Robert D. Blumofe, Department of Computer Sciences, University of Texas at Austin, [email protected]

Abstract

This paper studies the data locality of the work-stealing scheduling algorithm on hardware-controlled shared-memory machines. We present lower and upper bounds on the number of cache misses using work stealing, and introduce a locality-guided work-stealing algorithm along with experimental validation.

As a lower bound, we show that there is a family of multithreaded computations each member of which requires Θ(n) total operations (work), for which, when using work stealing, the total number of cache misses on one processor is constant, while even on two processors the total number of cache misses is Ω(n). For nested-parallel computations, however, we show that on P processors the expected additional number of cache misses beyond those on a single processor is bounded by O(C⌈m/s⌉PT∞), where m is the execution time of an instruction incurring a cache miss, s is the steal time, C is the size of the cache, and T∞ is the number of nodes on the longest chain of dependences. Based on this we give strong bounds on the total running time of nested-parallel computations using work stealing.

For the second part of our results, we present a locality-guided work-stealing algorithm that improves the data locality of multithreaded computations by allowing a thread to have an affinity for a processor. Our initial experiments on iterative data-parallel applications show that the algorithm matches the performance of static partitioning under traditional work loads but improves the performance by up to 50% over static partitioning under multiprogrammed work loads. Furthermore, locality-guided work stealing improves the performance of work stealing by up to 80%.

1 Introduction

Many of today's parallel applications use sophisticated, adaptive algorithms which are best realized with parallel programming systems that support dynamic, lightweight threads, such as Cilk [8], Nesl [5], Hood [10], and many others [3, 16, 17, 21, 32]. The core of these systems is a thread scheduler that balances load among the processes. In addition to a good load balance, however, good data locality is essential in obtaining high performance from modern parallel systems.

Several researchers have studied techniques to improve the data locality of multithreaded programs. One class of such techniques is based on software-controlled distribution of data among the local memories of a distributed shared memory system [15, 22, 26]. Another class of techniques is based on hints supplied by the programmer so that "similar" tasks might be executed on the same processor [15, 31, 34]. Both these classes of techniques rely on the programmer or compiler to determine the data access patterns in the program, which may be very difficult when the program has complicated data access patterns. Perhaps the earliest class of techniques was to attempt to execute threads that are close in the computation graph on the same processor [1, 9, 20, 23, 26, 28]. The work-stealing algorithm is the most studied of these techniques [9, 11, 19, 20, 24, 36, 37]. Blumofe et al. showed that fully-strict computations achieve provably good data locality [7] when executed with the work-stealing algorithm on a dag-consistent distributed shared memory system. In recent work, Narlikar showed that work stealing improves the performance of space-efficient multithreaded applications by increasing the data locality [29]. None of this previous work, however, has studied upper or lower bounds on the data locality of multithreaded computations executed on existing hardware-controlled shared memory systems.

In this paper, we present theoretical and experimental results on the data locality of work stealing on hardware-controlled shared memory systems (HSMSs). Our first set of results is upper and lower bounds on the number of cache misses in multithreaded computations executed by the work-stealing algorithm. Let M_1(C) denote the number of cache misses in the uniprocessor execution and M_P(C) denote the number of cache misses in a P-processor execution of a multithreaded computation by the work-stealing algorithm on an HSMS with cache size C. Then, for a multithreaded computation with T_1 work (total number of instructions) and T∞ critical path (longest sequence of dependences), we show the following results for the work-stealing algorithm running on an HSMS.

  • Lower bounds on the number of cache misses for general computations: We show that there is a family of computations with T_1 = Θ(n) such that M_1(C) = 3C, while even on two processors the number of misses M_2(C) = Ω(n).

  • Upper bounds on the number of cache misses for nested-parallel computations: For a nested-parallel computation, we show that M_P(C) ≤ M_1(C) + 2Cτ, where τ is the number of steals in the P-processor execution. We then show that the expected number of steals is O(⌈m/s⌉PT∞), where m is the time for a cache miss and s is the time for a steal.

  • Upper bound on the execution time of nested-parallel computations: We show that the expected execution time of a nested-parallel computation on P processors is O(T_1(C)/P + m⌈m/s⌉CT∞ + (m + s)T∞), where T_1(C) is the uniprocessor execution time of the computation including cache misses.

As in previous work [6, 9], we represent a multithreaded computation as a directed, acyclic graph (dag) of instructions. Each node in the dag represents a single instruction and the edges represent ordering constraints. A nested-parallel computation [5, 6] is a race-free computation that can be represented with a series-parallel dag [33]. Nested-parallel computations include computations consisting of parallel loops and forks and joins and any nesting of them. This class includes most computations that can be expressed in Cilk [8], and all computations that can be expressed in Nesl [5]. Our results show that nested-parallel computations have much better locality characteristics under work stealing than do general computations. We also briefly consider another class of computations, computations with futures [12, 13, 14, 20, 25], and show that they can be as bad as general computations.

The second part of our results is on further improving the data locality of multithreaded computations with work stealing. In work stealing, a processor steals a thread from a randomly (with uniform distribution) chosen processor when it runs out of work. In certain applications, such as iterative data-parallel applications, random steals may cause poor data locality. Locality-guided work stealing is a heuristic modification to work stealing that allows a thread to have an affinity for a process.

Figure 1 shows the speedups of work stealing, locality-guided work stealing, and static partitioning for a simple over-relaxation algorithm on a 14-processor Sun Ultra Enterprise. The over-relaxation algorithm iterates over a 1-dimensional array, performing a 3-point stencil computation on each step. The superlinear speedup for static partitioning and locality-guided work stealing is due to the fact that the data for each run does not fit into the L2 cache of one processor but fits into the collective L2 cache of several processors. For this benchmark the following can be seen from the graph.

1. Locality-guided work stealing does significantly better than standard work stealing, since on each step the cache is pre-warmed with the data it needs.

2. Locality-guided work stealing does approximately as well as static partitioning for up to 14 processes.

3. When trying to schedule more than 14 processes on 14 processors, static partitioning has a serious performance drop. The initial drop is due to load imbalance caused by the coarse-grained partitioning. The performance then approaches that of work stealing as the partitioning gets more fine-grained.

[Figure 1: The speedup obtained by three different over-relaxation algorithms.]

We are interested in the performance of work-stealing computations on hardware-controlled shared memory systems (HSMSs). We model an HSMS as a group of identical processors, each of which has its own cache, together with a single shared memory. Each cache contains C blocks and is managed by the memory subsystem automatically. We allow for a variety of cache organizations and replacement policies, including both direct-mapped and associative caches. We assign a server process to each processor and associate the cache of a processor with that process. One limitation of our work is that we assume that there is no false sharing.

2 Related Work

As mentioned in Section 1, there are three main classes of techniques that researchers have suggested to improve the data locality of multithreaded programs. In the first class, the program data is distributed among the nodes of a distributed shared-memory system by the programmer, and a thread in the computation is scheduled on the node that holds the data that the thread accesses [15, 22, 26]. In the second class, data-locality hints supplied by the programmer are used in thread scheduling [15, 31, 34]. Techniques from both classes are employed in distributed shared memory systems such as COOL and Illinois Concert [15, 22] and are also used to improve the data locality of sequential programs [31]. However, the first class of techniques does not apply directly to HSMSs, because HSMSs do not allow software-controlled distribution of data among the caches. Furthermore, both classes of techniques rely on the programmer to determine the data access patterns in the program.
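To make the scheduling discipline concrete, the following is a minimal sketch of randomized work stealing as a sequential simulation in Python. All names here (WorkStealingScheduler, spawn, run) are our own illustrative inventions, not an API from the paper or from any of the systems it cites, and the real algorithm runs concurrently with nonblocking deques; the sketch only shows the core discipline: each process treats its own deque as a stack, and an idle process steals the oldest task from a uniformly random victim.

```python
import random
from collections import deque

class WorkStealingScheduler:
    """Toy sequential simulation of randomized work stealing.

    Each worker owns a deque: the owner pushes and pops work at one
    end (LIFO), while idle workers steal the oldest task from the
    other end of a uniformly random victim's deque. Tasks here are
    plain values and do not spawn subtasks.
    """

    def __init__(self, num_workers, seed=0):
        self.deques = [deque() for _ in range(num_workers)]
        self.rng = random.Random(seed)

    def spawn(self, worker, task):
        self.deques[worker].append(task)  # owner pushes at its own end

    def run(self):
        executed = []
        while any(self.deques):
            for worker, dq in enumerate(self.deques):
                if dq:
                    task = dq.pop()  # owner pops its most recent task
                else:
                    victims = [i for i, d in enumerate(self.deques)
                               if d and i != worker]
                    if not victims:
                        continue  # nothing to steal this round
                    victim = self.rng.choice(victims)  # uniform victim
                    task = self.deques[victim].popleft()  # steal oldest
                executed.append(task)
        return executed
```

The owner-LIFO, thief-FIFO arrangement is the standard design choice: the owner keeps working on recently created (cache-warm) tasks, while thieves take the oldest tasks, which tend to represent the largest remaining chunks of work.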

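As an aside, the kernel of the over-relaxation benchmark mentioned above can be pictured with a sketch like the following. This is our own minimal sequential version, written only to show the shape of the computation; the paper's exact update rule, coefficients, and boundary handling are not given here, so the averaging update and fixed boundaries below are assumptions.

```python
def relax_step(a):
    """One 3-point stencil sweep over a 1-D array: each interior
    element becomes the average of itself and its two neighbors.
    Boundary elements are held fixed (an assumption of this sketch)."""
    return ([a[0]]
            + [(a[i - 1] + a[i] + a[i + 1]) / 3.0
               for i in range(1, len(a) - 1)]
            + [a[-1]])

def relax(a, steps):
    """Iterate the sweep. In the benchmark, each sweep is the unit of
    work that is statically partitioned or stolen across processes;
    affinity matters because a process that updated a subarray on one
    sweep already has it in cache for the next."""
    for _ in range(steps):
        a = relax_step(a)
    return a
```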