
ANALYZING HYBRID ARCHITECTURES FOR MASSIVELY PARALLEL GRAPH ANALYSIS

A Dissertation
Presented to
The Academic Faculty

by

David Ediger

In Partial Fulfillment
of the Requirements for the Degree
Doctor of Philosophy in the
School of Electrical and Computer Engineering

Georgia Institute of Technology
May 2013

Copyright © 2013 by David Ediger

Approved by:

Dr. George Riley, Committee Chair
School of Electrical and Computer Engineering
Georgia Institute of Technology

Dr. Linda Wills
School of Electrical and Computer Engineering
Georgia Institute of Technology

Dr. David A. Bader, Advisor
School of Electrical and Computer Engineering
Georgia Institute of Technology

Dr. Rich Vuduc
School of Computational Science and Engineering
Georgia Institute of Technology

Dr. Bo Hong
School of Electrical and Computer Engineering
Georgia Institute of Technology

Date Approved: 25 March 2013

To my wife, Michelle, and my parents, Mark and Merrilee, for encouraging a passion for learning.

ACKNOWLEDGEMENTS

I want to thank my friends and family who believed in me from the day I decided to go to graduate school. I want to make special mention of my fellow graduate students, past and present, for walking the road with me and sharing so much of themselves.

Thank you to my high performance computing colleagues, especially Rob McColl, Oded Green, Jason Riedy, Henning Meyerhenke, Seunghwa Kang, Aparna Chandramowlishwaran, and Karl Jiang. I will miss solving the world's most pressing problems with all of you.

Thank you to my advisor, David Bader, for introducing me to the world of graphs, and for inspiring and challenging me to complete my doctoral degree. Your encouragement and wisdom have helped me grow immensely as a researcher.

I want to thank the members of my committee: Professor Rich Vuduc, Professor Linda Wills, Professor Bo Hong, and Professor George Riley, who chaired the committee.

This work was supported in part by the Pacific Northwest National Lab (PNNL) Center for Adaptive Supercomputing Software for MultiThreaded Architectures (CASS-MT). I want to acknowledge the research staff at PNNL, Sandia National Laboratories, and Cray for their mentorship throughout my graduate studies.

TABLE OF CONTENTS

DEDICATION
ACKNOWLEDGEMENTS
LIST OF TABLES
LIST OF FIGURES
SUMMARY

I    INTRODUCTION
II   ORIGIN AND HISTORY OF THE PROBLEM
     2.1  Relevant Applications
          2.1.1  Computational Biology
          2.1.2  Business Analytics
          2.1.3  Security
          2.1.4  Social Networks
     2.2  Architectures and Programming Models
          2.2.1  Cray XMT
          2.2.2  MapReduce
     2.3  High Performance Graph Research
          2.3.1  Parallel Boost Graph Library
          2.3.2  GraphCT
          2.3.3  Sandia Multithreaded Graph Library
          2.3.4  Knowledge Discovery Toolbox
          2.3.5  Google Pregel
III  GRAPHCT
     3.1  Connected Components
     3.2  Clustering Coefficients
     3.3  k-Betweenness Centrality
IV   STINGER
     4.1  Experimental Setup
     4.2  Optimizations
     4.3  Streaming Clustering Coefficients
     4.4  Streaming Connected Components
V    ALTERNATIVE PROGRAMMING MODELS
     5.1  MapReduce
          5.1.1  Data-parallelism and Locality
          5.1.2  Load Balance
          5.1.3  Resilience
          5.1.4  Communication Costs
          5.1.5  Complexity
          5.1.6  Experimental Results
     5.2  Bulk Synchronous Parallel and Pregel
          5.2.1  Experimental Method
          5.2.2  Connected Components
          5.2.3  Breadth-first Search
          5.2.4  Clustering Coefficients
          5.2.5  Discussion
     5.3  Random Access and Storage Performance
          5.3.1  Measuring Data Access Time
          5.3.2  Building a Model for Access Time
          5.3.3  Estimating Concurrency versus I/O Time
VI   A HYBRID SYSTEM ARCHITECTURE
     6.1  A Hierarchy of Data Analytics
     6.2  Modeling a Hybrid System
          6.2.1  Experimental Method
          6.2.2  Results
          6.2.3  Discussion
VII  RISE OF THE MACROARCHITECTURE
     7.1  Parallelism
     7.2  Shared Memory and BSP
     7.3  Latency-oriented Analytics
     7.4  Memory Technologies
     7.5  Reliability & Resilience
     7.6  Integration
VIII CONCLUSION

REFERENCES

VITA

LIST OF TABLES

1   Graph analysis packages and frameworks currently under development.
2   Running times in seconds for connected components on a 128-processor Cray XMT.
3   The number of vertices ranked in selected percentiles for k = 1 and k = 2 whose betweenness centrality score was 0 for k = 0 (traditional BC). There were 14,320 vertices whose traditional BC score was 0, but whose BC_k score for k = 1 was greater than 0. The ND-www graph contains 325,729 vertices and 1,497,135 edges.
4   Summary of update algorithms.
5   Comparison of single edge versus batched edge updates on 32 Cray XMT processors, in updates per second, on a scale-free graph with approximately 16 million vertices and 135 million edges.
6   Updates per second on a graph starting with 16 million vertices and approximately 135 million edges on 32 processors of a Cray XMT.
7   Execution times on a 128-processor Cray XMT for an undirected, scale-free graph with 16 million vertices and 268 million edges.
8   Number of data elements that can be read per second as determined by microbenchmarks.
9   Estimated execution of breadth-first search on a graph with 17.2 billion vertices and 137 billion edges.
10  Experimental parameters.

LIST OF FIGURES

1   GraphCT is an open source framework for developing scalable multithreaded graph analytics in a cross-platform environment.
2   An example user analysis workflow in which the graph is constructed, the vertices are labeled according to their connected components, and a single component is extracted for further analysis using several complex metrics, such as betweenness centrality.
3   This MTA pragma instructs the Cray XMT compiler that the loop iterations are independent.
4   Scalability of Shiloach-Vishkin connected components with and without tree-climbing optimization on the Cray XMT. The input graph is scale-free with approximately 2 billion vertices and 17 billion edges. The speedup is 113x on 128 processors.
5   There are two triplets around v in this unweighted, undirected graph. The triplet (m, v, n) is open; there is no edge ⟨m, n⟩. The triplet (i, v, j) is closed.
6   Scalability of the local clustering coefficients kernel on the Cray XMT. On the left, the input graph is an undirected, scale-free graph with approximately 16 million vertices and 135 million edges. The speedup is 94x on 128 processors. On the right, the input graph is the USA road network with 24 million vertices and 58 million edges. The speedup is 120x on 128 processors. Execution times in seconds are shown in blue.
7   Scalability of the transitivity coefficients kernel on the Cray XMT. The input graph is a directed, scale-free graph with approximately 16 million vertices and 135 million edges. Execution times in seconds are shown in blue. On 128 processors, we achieve a speedup of 90x.
8   Parallel scaling on the Cray XMT, for a scale-free, undirected graph with approximately 16 million vertices and 135 million edges. Scaling is nearly linear up to 96 processors and speedup is roughly 78 on all 128 processors. k = 1 with 256 random sources (single node time 318 minutes). Execution times in seconds shown in blue.
9   Per-vertex betweenness centrality scores for k = 0, 1, and 2, sorted in ascending order for k = 0. Note the number of vertices whose score is several orders of magnitude larger for k = 1 or 2 than for traditional betweenness centrality.
10  Per-vertex betweenness centrality scores for k = 0, 1, and 2, sorted independently in ascending order for each value of k.
11  Updates per second on a scale-free graph with approximately 16 million vertices and 270 million edges and a batch size of 100,000 edge updates.
12  Increasing batch size results in better performance on the 128-processor Cray XMT. The initial graph is a scale-free graph with approximately 67 million vertices and 537 million edges.
13  There are two triplets around v in this unweighted, undirected graph. The triplet (m, v, n) is open; there is no edge ⟨m, n⟩. The triplet (i, v, j) is closed.
14  Speedup of incremental, local updates relative to recomputing over the entire graph.
15  Updates per second by algorithm.
16  Update performance for a synthetic, scale-free graph with 1 million vertices (left) and 16 million vertices (right) and edge factor 8 on 32 processors of a 128-processor Cray XMT.
17  Workflow of a single MapReduce job.
18  The number of reads and writes per iteration performed by connected components algorithms on a scale-free graph with 2 million vertices and 131 million edges.
19  Execution time per iteration performed by connected components algorithms. Scale is the base-2 logarithm of the number of vertices and the edge factor is 8. For Scale 21, shared memory completes in 8.3 seconds, while BSP takes 12.7 seconds.
20  Connected components execution time by iteration for an undirected, scale-free graph with 16 million vertices and 268 million edges. On the 128-processor Cray XMT, BSP execution time is 5.40 seconds and GraphCT execution time is 1.31 seconds.