Sequential Random Permutation, List Contraction and Tree Contraction Are Highly Parallel

Julian Shun (Carnegie Mellon University, [email protected]), Yan Gu (Carnegie Mellon University, [email protected]), Guy E. Blelloch (Carnegie Mellon University, [email protected]), Jeremy T. Fineman (Georgetown University, [email protected]), Phillip B. Gibbons (Intel Labs and Carnegie Mellon University, [email protected])

Copyright © 2015 by the Society for Industrial and Applied Mathematics.

Abstract

We show that simple sequential randomized iterative algorithms for random permutation, list contraction, and tree contraction are highly parallel. In particular, if iterations of the algorithms are run as soon as all of their dependencies have been resolved, the resulting computations have logarithmic depth (parallel time) with high probability. Our proofs make an interesting connection between the dependence structure of two of the problems and random binary trees. Building upon this analysis, we describe linear-work, polylogarithmic-depth algorithms for the three problems. Although asymptotically no better than the many prior parallel algorithms for the given problems, their advantages include very simple and fast implementations, and returning the same result as the sequential algorithm. Experiments on a 40-core machine show reasonably good performance relative to the sequential algorithms.

1 Introduction

Over the past several decades there has been significant research on deriving new parallel algorithms for a variety of problems, with the goal of designing highly parallel (polylogarithmic depth), work-efficient (linear in the sequential running time) algorithms. For some problems, however, one might ask if perhaps a standard sequential algorithm is already highly parallel if we simply execute sub-computations opportunistically when they no longer depend on any other uncompleted sub-computations. This approach is particularly applicable in iterative or greedy algorithms that iterate (loop) once through a sequence of steps (or elements), each step depending on the results or effects of only a subset of previous steps. In such algorithms, instead of waiting for its turn in the sequential order, a given step can run immediately once all previous steps it depends on have been completed. The approach allows steps to run in parallel while performing the same computation on each step as the sequential algorithm, and consequently returning the same result. Surprisingly, this question has rarely been studied.

As an example, which we will cover in this paper, consider the well-known algorithm for randomly permuting a sequence of n values [13, 26]. The algorithm iterates through the sequence from the end to the beginning (or the other way) and for each location i, it swaps the value at i with the value at a random target location j at or before i. In the algorithm, each step can depend on previous steps since on step i the value at i and/or its target j might have already been swapped by a previous step. The question is: What does this dependence structure look like? Also, can the above approach be used to derive a highly parallel, work-efficient parallelization of the sequential algorithm?

In this paper, we study these questions for three fundamental problems: random permutation, list contraction, and tree contraction [25, 24, 36]. For all three problems, we analyze the dependencies of simple randomized sequential algorithms, and show that the algorithms are efficient in parallel if we simply allow each step to run as soon as it no longer depends on previous steps. To this end, we define the notion of an iteration dependence graph that captures the dependencies among the steps, and then we analyze the depth of these graphs, which we refer to as the iteration depth. We also study how to use low-depth iteration dependence graphs to design efficient implementations. This involves being able to efficiently recognize when a step no longer depends on any uncompleted previous step.

Beyond the intellectual curiosity of whether sequential algorithms are inherently parallel, the approach has several important benefits for the design of parallel algorithms. Firstly, it can lead to very simple parallel algorithms. In particular, if there is an easy way to check for dependencies, then the parallel algorithm will be very similar to the sequential one. We use a framework based on deterministic reservations [6] that makes programming such algorithms particularly easy. Secondly, the approach can lead to very efficient parallel algorithms. We show that if a sufficiently small prefix of the uncompleted iterations is processed at a time, then most steps do not depend on each other and can run immediately. This reduces the overhead of repeated checks and leads to work that is hardly any greater than that of the sequential algorithm. Finally, the parallelization of the sequential algorithm will be deterministic, returning the same result on each execution (assuming the same source of random numbers). The result of the algorithm will therefore be independent of how many threads are used, how the scheduler works, or any other non-determinism in the underlying hardware and software, which can make debugging and reasoning about parallel programs much easier [8, 6].

For the random permutation problem we consider the algorithm described above. We show that the algorithm leads to iteration dependence graphs that follow the same distribution over the random choices as do random binary search trees. Hence the iteration depth is Θ(log n) with high probability.¹ For this algorithm we also show that recognizing when a step no longer depends on previous steps is easy, leading to a straightforward linear-work polylogarithmic-depth implementation. Therefore the "sequential" algorithm is effectively parallel.

The list contraction problem is to contract a set of linked lists, each into a single node (possibly combining values), and has many applications including list ranking and Euler tours [24]. The sequential algorithm that we consider simply iterates over the nodes in random order, splicing each one out.² We show that for this algorithm each list has an iteration dependence graph that follows the same distribution as random binary search trees. The iteration depth is therefore O(log n) w.h.p. For this algorithm, determining when a step no longer depends on previous steps is trivial: it is simply when the node's neighbors in the list are both later in the ordering. This leads to a straightforward linear-work parallel implementation of the algorithm.

The tree contraction problem is to contract a tree into a single node (possibly combining node values), and again has many applications [28, 29, 24]. We assume that the tree is a rooted binary tree. The sequential algorithm that we consider steps over the leaves of the tree in random order and, for each leaf, it splices the leaf and its parent out. We show that the iteration depth for this algorithm is again O(log n) w.h.p. For this algorithm, however, there seems to be no easy on-line way to determine when a step no longer depends on any other uncompleted steps. We show, however, that with some pre-processing we can identify the dependencies. This leads to a linear-work parallelization of the algorithm. We also show how to relax the dependencies such that the contraction is still correct and deterministic, but does not necessarily contract …

Straightforward implementations of the algorithms use O(n log n) random bits. By making use of a pseudorandom generator for space-bounded computation by Nisan [31], we show that the algorithms for random permutation and list contraction require only a polylogarithmic number of random bits w.h.p. This result is based on showing that our algorithms can be simulated in polylogarithmic space.

We have implemented all three of our algorithms in the deterministic reservations framework [6]. Our implementations for random permutation and list contraction contain under a dozen lines of C code, and tree contraction is just a few dozen lines. We have experimented with these implementations on a shared-memory multi-core machine with 40 cores, obtaining reasonably good speedups relative to the sequential iterative algorithms, on problems for which it is hard to compete with sequential algorithms.

Related Work. Beyond significant prior work on algorithms for the problems that we consider, which is mentioned in the sections on each problem, there has been prior work on understanding the parallelism in iterative (or greedy) sequential algorithms, including work on the maximal independent set and maximal matching problems [7], and on graph coloring [21].

Contributions. The main contributions of this paper are as follows. We show that the standard sequential algorithm for random permutation has low sequential dependence (Θ(log n) w.h.p.). For list contraction and tree contraction, we show that the sequential algorithms also have a dependence that is logarithmic w.h.p. given a random ordering of the input. We show natural parallel implementations of these algorithms, where steps are processed as soon as all of their dependencies have been resolved. These parallel algorithms give the same result as the sequential algorithms, which is useful for deterministic parallel programming. We show linear-work parallel implementations of these algorithms using deterministic reservations and activation-based approaches. We also prove that our algorithms for random permutation and list contraction require only a polylogarithmic number of random bits w.h.p., in contrast to O(n log n) random bits in a straightforward implementation. Finally, our experimental results show that our implementations of the parallel algorithms achieve good speedup on a 40-core machine and outperform the sequential algorithms with just a modest number of cores.
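The sequential random-permutation algorithm discussed above is short enough to state directly in code. The following C sketch is illustrative rather than the paper's implementation: it draws the random targets H[i] (with 0 ≤ H[i] ≤ i) ahead of time, which makes the point that step i depends on earlier steps only through the current contents of locations i and H[i], and that the same random choices could be replayed by a parallel execution.

```c
#include <stdlib.h>

/* Sequential random permutation: a Knuth shuffle run from the end of
 * the sequence toward the beginning. H[i] is a pre-drawn random target
 * with 0 <= H[i] <= i; on step i, locations i and H[i] are swapped.
 * Function and array names here are ours, not the paper's. */
void seq_permute(int *a, const int *H, int n) {
  for (int i = n - 1; i > 0; i--) {
    int j = H[i];                        /* random target at or before i */
    int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
  }
}

/* Convenience wrapper that draws H on the fly; rand() % (i+1) is good
 * enough for a demo, though slightly biased for large i. */
void random_permute(int *a, int n) {
  for (int i = n - 1; i > 0; i--) {
    int j = rand() % (i + 1);
    int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
  }
}
```

Fixing H in advance is what makes the dependence structure a well-defined object: two steps conflict exactly when they touch a common location, independent of any scheduling decisions.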
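For list contraction, the readiness test stated above (a node may be spliced out once both of its current neighbors come later in the random order) can be simulated round by round to measure the iteration depth of that schedule. The sketch below is a minimal sequential simulation of the parallel rounds, assuming a doubly linked list given by prv/nxt arrays with -1 marking a missing neighbor; all names are ours. Note that two adjacent nodes can never be ready in the same round, so applying a round's splices one after another matches applying them simultaneously.

```c
#define MAXN 64

/* Round-based schedule for list contraction. pri[v] is node v's
 * position in the random splice order (smaller = earlier). In each
 * round, every live node whose live neighbors all come later in the
 * order splices itself out. Returns the number of rounds needed to
 * remove all n nodes, i.e., the iteration depth of this schedule. */
int list_contract_rounds(int *prv, int *nxt, const int *pri, int n) {
  int alive = n, rounds = 0;
  int dead[MAXN] = {0};
  while (alive > 0) {
    int ready[MAXN], r = 0;
    for (int v = 0; v < n; v++) {        /* find this round's ready set */
      if (dead[v]) continue;
      int p = prv[v], q = nxt[v];
      if ((p == -1 || pri[p] > pri[v]) && (q == -1 || pri[q] > pri[v]))
        ready[r++] = v;
    }
    for (int k = 0; k < r; k++) {        /* ready nodes are never adjacent */
      int v = ready[k], p = prv[v], q = nxt[v];
      if (p != -1) nxt[p] = q;           /* splice v out */
      if (q != -1) prv[q] = p;
      dead[v] = 1;
    }
    alive -= r;
    rounds++;
  }
  return rounds;
}
```

Since the node of minimum remaining priority is always ready, every round makes progress; the paper's analysis says that for a random order the number of rounds is O(log n) w.h.p.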
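The sequential tree contraction step (splice out a leaf and its parent, promoting the sibling) can likewise be sketched in a few lines. This is only the sequential algorithm, since the parallel readiness test requires the pre-processing the paper describes; it assumes a rooted binary tree in which every internal node has exactly two children, stored in parent/child arrays of our own devising.

```c
/* Sequential tree contraction sketch. child[p][0..1] are p's children
 * (-1 if none) and parent[root] == -1. leaf_order lists the tree's
 * leaves in a random order; each step removes one leaf and its parent,
 * hooking the sibling to the grandparent. The last leaf in the order is
 * skipped, and is returned as the single node the tree contracts to. */
int tree_contract(int *parent, int child[][2], int root,
                  const int *leaf_order, int nleaves) {
  for (int k = 0; k < nleaves - 1; k++) {
    int v = leaf_order[k];
    int p = parent[v];                               /* always internal */
    int s = (child[p][0] == v) ? child[p][1] : child[p][0];  /* sibling */
    int g = parent[p];
    if (g == -1) {                                   /* p was the root */
      root = s; parent[s] = -1;
    } else {
      if (child[g][0] == p) child[g][0] = s; else child[g][1] = s;
      parent[s] = g;
    }
  }
  return leaf_order[nleaves - 1] == root ? root : leaf_order[nleaves - 1];
}
```

Original leaves stay leaves throughout (no splice ever gives them a child), so any fixed order over the input's leaves is valid for this sequential version.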
