A High-Performance Connected Components Implementation for GPUs

Jayadharini Jaiganesh and Martin Burtscher
Department of Computer Science, Texas State University, San Marcos, TX 78666, USA
[email protected], [email protected]

ABSTRACT

Computing connected components is an important graph algorithm that is used, for example, in medicine, image processing, and biochemistry. This paper presents a fast connected-components implementation for GPUs called ECL-CC. It builds upon the best features of prior algorithms and augments them with GPU-specific optimizations. For example, it incorporates a parallelism-friendly version of pointer jumping to speed up union-find operations and uses several compute kernels to exploit the multiple levels of hardware parallelism. The resulting CUDA code is asynchronous and lock free, employs load balancing, visits each edge exactly once, and only processes edges in one direction. It is 1.8 times faster on average than the fastest prior GPU implementation running on a Titan X and faster on most of the eighteen real-world and synthetic graphs we tested.

CCS Concepts

• Computing methodologies → Concurrent algorithms; Massively parallel algorithms

Keywords

connected components; graph algorithm; union-find; parallelism; GPU implementation

ACM Reference format:
J. Jaiganesh and M. Burtscher. 2018. A High-Performance Connected Components Implementation for GPUs. In Proceedings of the International ACM Symposium on High-Performance Parallel and Distributed Computing, Tempe, Arizona, USA, June 2018 (HPDC'18), 13 pages. DOI: 10.1145/3208040.3208041

1 INTRODUCTION

A connected component of an undirected graph G = (V, E) is a maximal subset S of its vertices that are all reachable from each other. In other words, ∀ a, b ∈ S there exists a path between a and b. The subset is maximal, i.e., no vertices that are not in the subset are reachable from it. The goal of the connected-components (CC) algorithms we study in this paper is to identify all such subsets and to label each vertex with the ID of the component to which it belongs. A non-empty graph must have at least one connected component and can have as many as n, where n = |V|.

For illustration purposes, assume the cities on Earth are the vertices of a graph and the roads connecting the cities are the edges. In this example, the road network of an island without bridges to it forms a connected component.

Computing connected components is a fundamental graph algorithm with applications in many fields. For instance, in medicine, it is used for cancer and tumor detection (the malignant cells are connected to each other) [16]. In computer vision, it is used for object detection (the pixels of an object are typically connected) [14]. In biochemistry, it is used for drug discovery and protein genomics studies (interacting proteins are connected in the PPI network) [39]. Fast CC implementations, in particular parallel implementations, are desirable to speed up these and other important computations.

Determining CCs serially is straightforward using the following algorithm. After marking all vertices as unvisited, visit them in any order. If a visited vertex is still marked as unvisited, use it as the source of a depth-first search (DFS) or breadth-first search (BFS) during which all reached vertices are marked as visited and their labels are set to the unique ID of the source vertex. This takes O(n + m) time, where m = |E|, as each vertex must be visited and each edge traversed.
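As a concrete illustration of this serial BFS approach (a sketch for exposition, not code from the paper), the labeling can be written as follows in C++, assuming the graph is stored in CSR form in two hypothetical arrays: nidx (n + 1 per-vertex offsets) and nlist (concatenated adjacency lists).

  #include <queue>
  #include <vector>

  // Serial BFS-based CC labeling (illustrative sketch).
  // Vertex v's neighbors are nlist[nidx[v]] .. nlist[nidx[v + 1] - 1].
  std::vector<int> serial_cc_bfs(int n, const std::vector<int>& nidx, const std::vector<int>& nlist)
  {
    std::vector<int> label(n, -1);            // -1 means "unvisited"
    for (int src = 0; src < n; src++) {
      if (label[src] != -1) continue;         // already belongs to a component
      label[src] = src;                       // the source ID becomes the component ID
      std::queue<int> queue;
      queue.push(src);
      while (!queue.empty()) {
        const int v = queue.front(); queue.pop();
        for (int i = nidx[v]; i < nidx[v + 1]; i++) {
          const int w = nlist[i];
          if (label[w] == -1) {               // each vertex is enqueued at most once
            label[w] = src;
            queue.push(w);
          }
        }
      }
    }
    return label;                             // label[v] = component ID of vertex v
  }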
Alternatively, CCs can be found with the following serial algorithm. First, iterate over all vertices and place each of them in a separate disjoint set, using the vertex's ID as the representative (label) of the set. Then, visit each edge (u, v) and combine the set to which vertex u belongs with the set to which vertex v belongs. This algorithm returns a collection of sets where each set is a connected component of the graph. Determining which set a vertex belongs to and combining it with another set can be done very efficiently using a disjoint-set data structure (aka a union-find data structure) [9]. Asymptotically, this algorithm is only slower than the DFS/BFS-based algorithm by the tiny factor α(n), i.e., the inverse of the Ackermann function.
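A minimal sketch of this disjoint-set variant is shown below (again illustrative rather than the paper's code). It assumes the edges are given as an explicit list of (u, v) pairs, hooks the larger representative under the smaller one so that the minimum vertex ID becomes the component ID, and uses path halving, a simple form of the path compression discussed in Section 2.

  #include <algorithm>
  #include <utility>
  #include <vector>

  // Find the representative of v's set, halving the path as we go.
  static int find_rep(std::vector<int>& parent, int v)
  {
    while (parent[v] != v) {
      parent[v] = parent[parent[v]];          // skip every other element on the path
      v = parent[v];
    }
    return v;
  }

  // Serial union-find CC (illustrative sketch): one singleton set per vertex,
  // then every edge unions the sets of its two endpoints.
  std::vector<int> serial_cc_unionfind(int n, const std::vector<std::pair<int, int>>& edges)
  {
    std::vector<int> parent(n);
    for (int v = 0; v < n; v++) parent[v] = v;                      // singleton sets
    for (const auto& e : edges) {
      const int ru = find_rep(parent, e.first);
      const int rv = find_rep(parent, e.second);
      if (ru != rv) parent[std::max(ru, rv)] = std::min(ru, rv);    // hook higher ID under lower ID
    }
    // Flatten so every vertex points directly at its representative (= component ID).
    for (int v = 0; v < n; v++) parent[v] = find_rep(parent, v);
    return parent;
  }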
Whereas there are efficient parallel implementations that are based on the BFS approach, for example Ligra+ BFSCC [31], the majority of the parallel CC algorithms are based on the following "label propagation" approach. Each vertex has a label to hold the component ID to which it belongs. Initially, this label is set to the vertex ID, that is, each vertex is considered a separate component, which can trivially be done in parallel. Then the vertices are iteratively processed in parallel to determine the connected components. For instance, each vertex's label can be updated with the smallest label among its neighbors. This process repeats until there are no more updates, at which point all vertices in a connected component have the same label. In this example, the ID of the minimum vertex in each component serves as the component ID, which guarantees uniqueness. To speed up the computation, the labels are often replaced by "parent" pointers that form a union-find data structure.

Our ECL-CC implementation described in this paper is based on this approach. However, ECL-CC combines features and optimizations in a new and unique way to yield a very fast GPU implementation. The paper makes the following main contributions.

• It presents the ECL-CC algorithm for finding and labeling the connected components of an undirected graph. Its parallel GPU implementation is faster than the fastest prior CPU and GPU codes on average and on most of the eighteen tested graphs.
• It shows that "intermediate pointer jumping", an approach to speed up the find operation in union-find computations, is faster than other pointer-jumping versions and essential to the performance of our GPU implementation.
• It describes important code optimizations for CC computations that are necessary for good performance but that have heretofore never been combined in the way ECL-CC does.

The CUDA code is available to the research and education community at http://cs.txstate.edu/~burtscher/research/ECL-CC/.

The rest of the paper is organized as follows. Section 2 provides background information and discusses related work. Section 3 describes the ECL-CC implementation. Section 4 explains the evaluation methodology. Section 5 presents the experimental results. Section 6 concludes the paper with a summary.

2 BACKGROUND AND RELATED WORK

A large amount of related work exists on computing connected components. In this section, we primarily focus on the shared-memory solutions that we compare against in the result section as well as the work on which they are based. This paper does not consider strongly connected components as they are of interest in directed graphs.

Fig. 1 illustrates the parallel label-propagation approach outlined in the introduction, where the labels are propagated through neighboring vertices until all vertices in the same component are labelled with the same ID. First, the labels (small digits) are initialized with the unique vertex IDs (large digits) as shown in panel (a). Next, each vertex's label is updated in parallel with the lowest label among the neighboring vertices and itself (b). This step is iterated until all labels in the component have the same value (c).

Fig. 1. Example of label propagation (large digits are unique vertex IDs, small digits are label values): (a) initial state, (b) intermediate state, (c) final state
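To make one update round of this label-propagation scheme concrete, a possible CUDA kernel is sketched below. It is only illustrative and is not the ECL-CC kernel (ECL-CC replaces repeated propagation with parent pointers, as described next). It assumes the same hypothetical CSR arrays nidx and nlist as before.

  // One label-propagation round (illustrative sketch, not the ECL-CC kernel).
  // Each thread handles one vertex and adopts the smallest label among itself
  // and its neighbors. Only thread v writes label[v]; stale reads of neighbor
  // labels are harmless and merely delay convergence.
  __global__ void propagate_labels(int n, const int* __restrict__ nidx,
                                   const int* __restrict__ nlist,
                                   int* label, int* changed)
  {
    const int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= n) return;
    int best = label[v];
    for (int i = nidx[v]; i < nidx[v + 1]; i++) {
      const int lbl = label[nlist[i]];
      if (lbl < best) best = lbl;
    }
    if (best < label[v]) {
      label[v] = best;
      *changed = 1;          // signal the host to run another round
    }
  }

On the host, label[v] is initialized to v and the kernel is relaunched, with the changed flag reset and read back after each round, until no thread sets the flag.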
For performance reasons, the labels are typically not propagated in this manner. Instead, they are indirectly represented by a parent pointer in each vertex that, together, form a union-find data structure. Starting from any vertex, we can follow the parent pointers until we reach a vertex that points to itself. This final vertex is the "representative". Every vertex's implicit label is the ID of its representative and, as before, all vertices with the same label (aka representative) belong to the same component. Thus, making the representative u point to v indirectly changes the labels of all vertices whose representative used to be u. This makes union operations very fast. A chain of parent pointers leading to a representative is called a "path". To also make the traversals of these paths, i.e., the find operations, very fast, the paths are sporadically shortened by making earlier elements skip over some elements and point directly to later elements. This process is referred to as "path compression".
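For reference, a conventional two-pass find with full path compression is sketched below. This is a generic serial sketch, not the "intermediate pointer jumping" variant highlighted in the contributions; a concurrent GPU version would additionally need atomic updates to tolerate simultaneous modifications of the parent array.

  // Conventional find with full path compression (illustrative, serial sketch).
  // The first pass locates the representative; the second pass makes every
  // vertex on the traversed path point directly to it, shortening future finds.
  int find_compress(int* parent, int v)
  {
    int rep = v;
    while (parent[rep] != rep) rep = parent[rep];   // walk up to the representative
    while (parent[v] != rep) {                      // compress the traversed path
      const int next = parent[v];
      parent[v] = rep;
      v = next;
    }
    return rep;
  }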
Fig. 2. Example of hooking operation: (a) before hooking, (b) after hooking

Shiloach and Vishkin proposed one of the first parallel CC algorithms …