SWARMGRAPH: Analyzing Large-Scale In-Memory Graphs on GPUs

Yuede Ji (George Washington University), Hang Liu (Stevens Institute of Technology), H. Howie Huang (George Washington University)
[email protected] [email protected] [email protected]

Abstract—Graph computation has attracted a significant amount of attention since much real-world data comes in the form of graphs. Conventional graph platforms are designed to accommodate a single large graph at a time, while simultaneously processing a large number of in-memory graphs, whose sizes are small enough to fit into the device memory, is ignored. In fact, such a computation framework is needed because in-memory graphs are ubiquitous in many applications, e.g., code graphs, paper citation networks, chemical compound graphs, and biology graphs. In this paper, we design the first large-scale in-memory graph computation framework on Graphics Processing Units (GPUs), SWARMGRAPH. To accelerate single graph computation, SWARMGRAPH comes with two new techniques, i.e., memory-aware data placement and kernel invocation reduction. These techniques leverage the fast memory components in the GPU, remove the expensive global memory synchronization, and reduce the costly kernel invocations. To rapidly compute many graphs, SWARMGRAPH pipelines the graph loading and computation. Evaluations on large-scale real-world and synthetic graphs show that SWARMGRAPH significantly outperforms the state-of-the-art CPU- and GPU-based graph frameworks by orders of magnitude.

[Figure 1: Graph size distribution of the 1.8 million control flow graphs for binary code vulnerability detection from [11], [12]. The y-axis shows the number of graphs (log10 scale); the x-axis shows the number of vertices (0 to 2500).]

1. Introduction

A graph, made up of vertices and edges, is a natural representation for many real-world applications [1], [2], [3], [4]. As graph algorithms can help to mine useful and even hidden knowledge, they are of particular importance. Towards that need, we have witnessed a surge of interest in graph computation in various directions, including shared memory, distributed systems, and graphics processing units (GPUs) [5], [6], [7].

1.1. Motivation

While the vast majority of graph computation frameworks, by default, assume to process a single large graph, this paper unveils that rapidly processing a zoo of in-memory graphs is equally important. Here, we regard a graph as in-memory if it is able to reside in the GPU shared memory (up to 96KB in an Nvidia Tesla V100 GPU). Such an in-memory graph framework is needed for the following three reasons.

First, in-memory graphs are popular in real-world applications. Here, we elaborate on four of them. (1) In paper citation, the graph sizes (in terms of vertex count) are smaller than 100 for the 19.5 thousand graphs from a DBLP dataset [8]. (2) In chemical compounds, the graph sizes are smaller than 200 for all the 8.2 thousand graphs from the NCI1 and NCI109 datasets [8], [9]. (3) In biology, the graph sizes of a protein dataset with 1.1 thousand graphs are smaller than 600 [8], [10]. (4) In binary code vulnerability detection, the control flow graph (CFG) comes with a small number of vertices, as shown in Figure 1. Note that each function inside the binary code is represented as a CFG, where a vertex denotes a code block and an edge denotes the control flow relationship between code blocks [11], [12], [13].

Second, concurrently processing a collection of in-memory graphs is widely needed, while a system dedicated to this direction is, to the best of our knowledge, absent. Still using binary code vulnerability detection as an example, for each control flow graph one needs to compute betweenness centrality (BC) and other algorithms to find the important vertices and edges [11], [12], [13]. However, current graph frameworks, which run one algorithm on a single graph at a time, will take an absurdly long time to process a million control flow graphs, which is the case in the binary vulnerability detection works [11], [12], [13].

Third, although recent graph computation methods on GPUs greatly improve performance [14], [15], they usually aim at a single large graph, which causes low GPU resource utilization when applied to a large number of in-memory graphs. To begin with, conventional graph frameworks routinely assign all GPU threads to work on one graph at a time, while this single in-memory graph is usually not able to saturate all the GPU resources, i.e., threads. In addition, due to the limited support for global thread synchronization, these frameworks have to either terminate the kernel after each iteration or use expensive GPU device-level synchronization, e.g., cooperative groups [16], to synchronize all the threads. Further, existing frameworks place computation metadata in slow global memory because they assume it is too big to be stored in the fast but small GPU memory components, e.g., shared memory. (Footnote 1: We regard the original vertex and edge information as the graph data, and the intermediate information of the vertices and edges as metadata [14].)

[Figure 2: Overview of SWARMGRAPH. A task scheduler distributes a number of in-memory graphs across the GPUs (GPU_0, ..., GPU_i); on each GPU, pipelining, memory-aware data placement, and kernel invocation reduction produce the results.]

1.2. Contribution

In this work, we have designed a new GPU-based graph computation framework for processing a large number of in-memory graphs, namely SWARMGRAPH. As shown in Figure 2, given many in-memory graphs, SWARMGRAPH leverages the task scheduler to distribute the workload to each GPU. On each GPU, SWARMGRAPH applies our newly designed pipelining technique to pipeline the graph loading and computation for many graphs. For each graph, our memory-aware data placement strategy carefully places different data in different memory components to reduce the memory access latency. Further, our kernel invocation reduction (KIR) strategy carefully schedules the GPU threads to reduce the number of expensive kernel invocations and avoid global memory synchronizations. The contributions are three-fold.

First, observing that conventional GPU platforms are not efficient for computing one in-memory graph, we design two new techniques, memory-aware data placement and kernel invocation reduction. They are able to leverage the fast memory components in the GPU, remove the expensive global memory synchronization, and reduce the number of costly kernel invocations. With these optimizations, SWARMGRAPH is able to outperform both CPU- and GPU-based graph frameworks by several orders of magnitude.
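To make these two techniques concrete, the following is a minimal sketch of how a single thread block could traverse one in-memory graph entirely out of shared memory within one kernel invocation. It is our illustration, not SWARMGRAPH's actual code: the `Csr` layout, the size caps `MAX_V`/`MAX_E`, the kernel name, and the choice of vertex 0 as the source are assumptions made for the example.

```cuda
// Illustrative sketch (not SWARMGRAPH's code): one thread block processes one in-memory
// graph. The CSR topology and the per-vertex distances (metadata) are staged in shared
// memory (memory-aware data placement), and the level-by-level traversal runs inside a
// single kernel using __syncthreads() barriers instead of one kernel launch plus global
// synchronization per level (kernel invocation reduction).
#include <climits>
#include <cuda_runtime.h>

#define MAX_V 512            // assumed cap so one graph fits in shared memory
#define MAX_E 2048

struct Csr {                 // one small graph in CSR form, stored in global memory
    int n, m;
    int row[MAX_V + 1];
    int col[MAX_E];
};

// Launch example: bfs_per_block<<<num_graphs, 128>>>(d_graphs, d_dist);
__global__ void bfs_per_block(const Csr *graphs, int *dist_out) {
    __shared__ int row[MAX_V + 1];   // graph topology in shared memory
    __shared__ int col[MAX_E];
    __shared__ int dist[MAX_V];      // metadata in shared memory instead of global memory
    __shared__ int changed;

    const Csr &g = graphs[blockIdx.x];                    // block b owns graph b
    for (int i = threadIdx.x; i <= g.n; i += blockDim.x) row[i] = g.row[i];
    for (int i = threadIdx.x; i < g.m; i += blockDim.x)  col[i] = g.col[i];
    for (int i = threadIdx.x; i < g.n; i += blockDim.x)
        dist[i] = (i == 0) ? 0 : INT_MAX;                 // source = vertex 0 (assumed)
    __syncthreads();

    int again = 1;
    while (again) {                  // the whole traversal is ONE kernel invocation
        if (threadIdx.x == 0) changed = 0;
        __syncthreads();
        for (int u = threadIdx.x; u < g.n; u += blockDim.x) {
            if (dist[u] == INT_MAX) continue;
            for (int e = row[u]; e < row[u + 1]; ++e)
                if (dist[u] + 1 < dist[col[e]]) {         // relax edge (u -> col[e]) of weight 1
                    atomicMin(&dist[col[e]], dist[u] + 1);
                    changed = 1;
                }
        }
        __syncthreads();
        again = changed;             // every thread reads the flag ...
        __syncthreads();             // ... before thread 0 may reset it in the next round
    }

    for (int i = threadIdx.x; i < g.n; i += blockDim.x)
        dist_out[blockIdx.x * MAX_V + i] = dist[i];       // one write-back of the results
}
```

The point of the sketch is the structure: the graph and its metadata are copied into shared memory once, and per-level synchronization becomes an intra-block barrier rather than a kernel relaunch with global-memory synchronization.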
Second, to accelerate the computation of many graphs, we design a pipelining technique motivated by the fact that a data dependency exists only between the loading and the computation of the same graph, and does not exist between different graphs. The pipelining technique brings a 1.9× speedup for the computation of many graphs.
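As an illustration of this loading/computation overlap (again our sketch, not the paper's implementation), the snippet below reuses the placeholder `Csr`, `MAX_V`, and `bfs_per_block` from the previous sketch and double-buffers them across two CUDA streams, so the host-to-device copy of one graph proceeds while the kernel for the previous graph is still running.

```cuda
// Sketch of the pipelining idea: overlap the loading of graph i with the computation of
// graph i-1 using two CUDA streams and double-buffered device copies.
#include <cuda_runtime.h>

void process_many_graphs(const Csr *host_graphs, size_t num_graphs, int *d_dist) {
    // For the copies to be truly asynchronous, host_graphs is assumed to be in pinned
    // memory (e.g., allocated with cudaMallocHost or registered with cudaHostRegister).
    cudaStream_t stream[2];
    Csr *d_buf[2];                                   // double-buffered device copies
    for (int s = 0; s < 2; ++s) {
        cudaStreamCreate(&stream[s]);
        cudaMalloc(&d_buf[s], sizeof(Csr));
    }

    for (size_t i = 0; i < num_graphs; ++i) {
        int s = i % 2;                               // alternate buffer/stream pairs
        // Load graph i; this copy overlaps with the kernel still running for graph i-1
        // on the other stream. Work queued on one stream executes in order, so the
        // kernel below starts only after its own copy has finished, and a buffer is
        // never overwritten before the kernel that reads it has completed.
        cudaMemcpyAsync(d_buf[s], &host_graphs[i], sizeof(Csr),
                        cudaMemcpyHostToDevice, stream[s]);
        bfs_per_block<<<1, 128, 0, stream[s]>>>(d_buf[s], d_dist + i * MAX_V);
    }

    for (int s = 0; s < 2; ++s) {
        cudaStreamSynchronize(stream[s]);
        cudaFree(d_buf[s]);
        cudaStreamDestroy(stream[s]);
    }
}
```

In practice one would batch many graphs per launch and use more streams, but the two-stream structure captures the essence of the pipelining.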
Third, we have implemented SWARMGRAPH with three frequently used graph algorithms: all-pairs shortest path (APSP), closeness centrality (CC), and betweenness centrality (BC). We tested SWARMGRAPH with two graph benchmarks, i.e., protein graphs and one million synthetic graphs. For one-graph computation on the protein graphs, SWARMGRAPH achieves 314×, 93×, 24×, and 19× speedups over the parallel CPU implementation, the conventional GPU implementation, Gunrock, and Hybrid BC, respectively. Further, for many-graph computation on the one million synthetic graphs, SWARMGRAPH is able to significantly reduce the total runtime of the three algorithms from 248.5 to 34.4 minutes.

Due to the page limit, we moved out some detailed descriptions, which can be found in the full technical report [17].

The rest of the paper is organized as follows: Section 2 introduces the background. Section 3 presents the design of SWARMGRAPH. Section 4 evaluates SWARMGRAPH. Section 5 summarizes the related works. Section 6 provides a discussion, and Section 7 concludes the paper.

2. Background

This section discusses the background of the CUDA programming model and GPU architecture.

The Nvidia CUDA programming model includes the grid, the cooperative thread array (CTA), the warp, and the thread [18], [19]. A grid consists of multiple CTAs, which are also known as blocks. A grid is mapped to one GPU chip from the hardware point of view, as shown in Figure 3. A GPU chip includes many multiple processors (MPs), and each MP handles the computation of one or more blocks. Each MP is composed of many streaming processors (SMs) that handle the computation of one or more threads.

In each grid, all the threads share the same global memory, texture memory, and constant memory, as shown in Figure 3. The texture memory and constant memory are read-only, while the global memory can be read and written. In each block, all the threads share the same shared memory, which can be read and written; it can be accessed by the threads in the same block, but not by outside threads. The configurable shared memory and L1 cache share the same total size (64KB on a Tesla K40 GPU). Each thread is assigned its own registers for reading and writing.

[Figure 3: GPU architecture and CUDA programming. The programming model (a grid of blocks, each block a set of threads with registers and shared memory, plus global, texture, and constant memory) is mapped onto the hardware (a GPU chip with multiple processors, each containing streaming processors).]

TABLE 1: Data placement comparison with conventional graph platforms. GDDR means GPU global memory; latency is measured in clock cycles.

Memory (Tesla K40)                 Conventional graph
Name          Size      Latency    platforms [14], [5]    SWARMGRAPH
Register      65,536    -          on-demand              Graph, metadata
L1 + Shared   64KB      ~20        Hub vertex             Metadata (graph)
L2 cache      1.5MB     ~80        on-demand              on-demand
GDDR          12GB      ~400       Graph, metadata        Graph, metadata

[Figure 4: Computing all-pairs shortest path on the example graph shown in (a), where every edge has weight 1, via the single-source shortest path traversals from vertices a, b, and c shown in (b).]
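To tie the memory spaces of this section and Table 1 to code, here is a small, self-contained example of our own (with assumed names and sizes, not taken from SWARMGRAPH) that touches each space: registers, shared memory, constant memory, and global memory, launched with an explicit grid/block configuration.

```cuda
// Minimal demo of the CUDA memory spaces described in Section 2. The latencies in the
// comments are the approximate Tesla K40 figures from Table 1.
#include <cstdio>
#include <cuda_runtime.h>

__constant__ float scale;                 // constant memory: read-only from the kernel

__global__ void memory_spaces_demo(const float *g_in, float *g_out, int n) {
    __shared__ float tile[256];           // shared memory (~20 cycles), one copy per block
    int local = threadIdx.x;              // registers: private to each thread

    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    if (gid < n) tile[local] = g_in[gid]; // global memory (GDDR, ~400 cycles)
    __syncthreads();                      // barrier among the threads of this block only

    if (gid < n) g_out[gid] = tile[local] * scale;
}

int main() {
    const int n = 1024;
    float h[n], s = 2.0f;
    for (int i = 0; i < n; ++i) h[i] = float(i);

    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpyToSymbol(scale, &s, sizeof(float));

    // A grid of 4 blocks (CTAs) with 256 threads each expresses the grid/block/thread
    // hierarchy of the programming model.
    memory_spaces_demo<<<4, 256>>>(d_in, d_out, n);
    cudaMemcpy(h, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("out[10] = %f\n", h[10]);      // expect 20.0

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```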
