Fundamental Parallel Algorithms for Private-Cache Chip Multiprocessors

Lars Arge∗
MADALGO, University of Aarhus, Aarhus, Denmark
[email protected]

Michael T. Goodrich
University of California, Irvine, Irvine, CA 92697, USA
[email protected]

Michael Nelson
University of California, Irvine, Irvine, CA 92697, USA
[email protected]

Nodari Sitchinava
University of California, Irvine, Irvine, CA 92697, USA
[email protected]

∗Center for Massive Data Algorithmics – a Center of the Danish National Research Foundation

ABSTRACT

In this paper, we study parallel algorithms for private-cache chip multiprocessors (CMPs), focusing on methods for foundational problems that can scale to hundreds or even thousands of cores. By focusing on private-cache CMPs, we show that we can design efficient algorithms that need no additional assumptions about the way that cores are interconnected, for we assume that all inter-processor communication occurs through the memory hierarchy. We study several fundamental problems, including prefix sums, selection, and sorting, which often form the building blocks of other parallel algorithms. Indeed, we present two sorting algorithms, a distribution sort and a mergesort. All algorithms in the paper are asymptotically optimal in terms of parallel cache accesses and space complexity under reasonable assumptions about the relationships between the number of processors, the size of memory, and the size of cache blocks. In addition, we study sorting lower bounds in a computational model, which we call the parallel external-memory (PEM) model, that formalizes the essential properties of our algorithms for private-cache chip multiprocessors.

Categories and Subject Descriptors

F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems—Sorting and searching

General Terms

Algorithms

Keywords

Parallel External Memory, PEM, private-cache CMP, PEM sorting

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
SPAA'08, June 14–16, 2008, Munich, Germany.
Copyright 2008 ACM 978-1-59593-973-9/08/06 ...$5.00.

1. INTRODUCTION

Advances in multi-core architectures are showing great promise at demonstrating the benefits of parallelism at the chip level. Current architectures have 2, 4, or 8 cores on a single die, but industry insiders are predicting orders of magnitude larger numbers of cores in the not too distant future [12, 20, 23]. Such advances naturally imply a number of paradigm shifts, not the least of which is the impact on algorithm design. That is, the coming multicore revolution implies a compelling need for algorithmic techniques that scale to hundreds or even thousands of cores. Parallelism extraction at the compiler level may be able to handle part of this load, but part of the load will also need to be carried by parallel algorithms. This paper is directed at this latter goal.

There is a sizable literature on algorithms for shared-memory parallel models, most notably for variations of the PRAM model (e.g., see [17, 18, 24]). Indeed, some researchers (e.g., see [26]) advocate that PRAM algorithms can be directly implemented on multicores, since separate cores share some levels of the memory hierarchy, e.g., the L2 cache or main memory. After all, an argument can be made that ignoring the memory hierarchy during algorithm design worked reasonably well for single-processor architectures: in spite of recent developments in cache-optimal models, most algorithms implemented and used by an average user are designed in the RAM model, due to the small size of average input sets and the relative simplicity of RAM algorithms. However, we feel that to take advantage of the parallelism provided by multicore architectures, problems will have to be partitioned across a large number of processors. Therefore, the latency of the shared memory will have a bigger impact on the overall speed of execution of the algorithms, even if the original problem fits into the memory of a single processor. The PRAM model contains no notion of the memory hierarchy or private memories beyond a few registers in the CPUs. It does not model the differences in access speeds between the private cache on each processor and the shared memory that is addressable by all the processors in a system. Therefore, it cannot accurately model the actual execution time of algorithms on modern multicore architectures.

At the other extreme, the LogP model [9, 19] and the bulk-synchronous parallel (BSP) model [10, 13, 25] assume a parallel architecture where each processor has its own internal memory of size at least N/P. In these models there is no shared memory, and inter-processor communication is assumed to occur via message passing through some interconnection network.

The benefit of utilizing local, faster memory in single-processor architectures was advocated by researchers almost 20 years ago. Aggarwal and Vitter [1] introduced the external memory model (see also [5, 16, 27]), which counts the number of block I/Os to and from external memory. They also describe a parallel disk model (PDM), where D blocks can simultaneously be read or written from/to D different disks; however, they do not consider multiple processors. Nodine and Vitter [22] describe several efficient sorting algorithms for the parallel disk model. Interestingly, Nodine and Vitter [21] also consider a multiprocessor version of the parallel disk model, but not in a way that is appropriate for multicores, since they assume that the processors are interconnected via a PRAM-type network or share the entire internal memory (see, e.g., [28, 29, 30]). Assuming that P processors share internal memory does not fit the current practice of multicores, but it does greatly simplify parallel algorithms for problems like sorting, since it allows elements in internal memory to be sorted using a well-known parallel sorting algorithm, such as Cole's parallel merge sort [6].

Cormen and Goodrich [8] advocated in a 1996 position paper that there was a need for a bridging model between the external-memory and bulk-synchronous models, but they did not mention anything specific. Dehne et al. [11] described a bulk-synchronous parallel external memory model, but in their model the memory hierarchy was private to each processor with no shared memory, while the inter-processor communication was conducted via message passing, as in the regular BSP model.

More recently, several models have been proposed emphasizing the use of caches of modern CMPs [2, 4, 3]. Bender et al. [2] propose a concurrent cache-oblivious model, but focus on the correctness of the data structures rather than efficiency. Blelloch et al. [3] (building on the work of Chen et al. [4]) consider thread scheduling algorithms with optimal cache utilization for a wide range of divide-and-conquer problems. However, the solutions they consider are of a more moderate level of parallelism.

In contrast to this prior work, in this paper we consider designing massively parallel algorithms (in the PRAM sense) in the presence of caches. Indeed, our algorithms are extremely scalable: if we set the cache sizes to be constant or non-existent, our algorithms turn into the corresponding PRAM algorithms. At the other extreme, if we have only a single processor, our algorithms turn into solutions for the single-disk, single-processor external-memory model of Aggarwal and Vitter [1]. We present two parallel sorting algorithms: a distribution sort and a mergesort. We also provide a sorting lower bound for our model, which shows that our sorting algorithms are optimal for reasonable numbers of processors.

In the following section we formally define the parallel external-memory model. Next we present algorithms for several fundamental building blocks in parallel algorithm design: prefix sums and multi-way selection and partitioning. In Section 4 we describe two PEM sorting algorithms: a distribution sort and a mergesort. In Section 5 we prove lower bounds on sorting in the PEM model. We conclude with some discussion and open problems.

2. THE MODEL

The model we propose is a cache-aware version of the model proposed by Bender et al. [2]. It can be viewed as a variation of the single-disk, multiple-processor version of the model of Nodine and Vitter [21]. We call it the parallel external memory (PEM) model to underline the similarity of the relationship of the PEM model and the single-processor, single-disk external memory (EM) model of Aggarwal and Vitter [1] to the PRAM and the RAM models, respectively.

The Parallel External Memory (PEM) model is a computational model with P processors and a two-level memory hierarchy. The memory hierarchy consists of the external memory (main memory) shared by all the processors and P internal memories (caches). Each cache is of size M, is partitioned into blocks of size B, and is exclusive to each processor, i.e., processors cannot access other processors' caches. To perform any operation on the data, a processor must have the data in its cache. The data is transferred between the main memory and the cache in blocks of size B. (See Figure 1.)

Figure 1: The PEM model: P processors (CPU 1, CPU 2, ..., CPU P), each with a private cache holding M/B blocks of size B, connected to a shared main memory.

The model's complexity measure, the I/O complexity, is the number of parallel block transfers between the main memory and the caches by the processors. For example, an algorithm with each of the P processors simultaneously reading one (but potentially different) block from the main memory has an I/O complexity of O(1), not O(P).
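To make the cost measure concrete, the parallel I/O complexity of the simplest PEM primitive, a scan of N contiguous items, can be sketched as follows. This is our own illustration rather than code from the paper; the function name `pem_scan_io` and the assumption that the input blocks are divided evenly among the processors are ours.

```python
from math import ceil

def pem_scan_io(N: int, P: int, B: int) -> int:
    """Parallel block I/Os for P processors to scan N contiguous items
    in the PEM model, assuming the ceil(N/B) input blocks are divided
    as evenly as possible among the processors.  A round in which all
    P processors each transfer one block counts as a single parallel
    I/O, so the cost is the number of rounds, not the number of blocks."""
    total_blocks = ceil(N / B)      # blocks occupied by the input
    return ceil(total_blocks / P)   # rounds of simultaneous block reads

# The two extremes discussed above fall out of the same formula:
print(pem_scan_io(N=1024, P=1, B=16))   # 64: single-processor EM scan, N/B
print(pem_scan_io(N=1024, P=64, B=1))   # 16: PRAM-like counting, N/P
print(pem_scan_io(N=1024, P=8, B=16))   # 8:  the PEM bound O(N/(PB))
```

Setting B to a constant recovers PRAM-style operation counting, while P = 1 recovers the Aggarwal–Vitter EM scan bound, mirroring the two degenerate cases of the model noted above.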
