Graphs in the Language of Linear Algebra: Applications, Software, and Challenges

John R. Gilbert, University of California, Santa Barbara
Graph Algorithm Building Blocks, May 19, 2014
Support: Intel, Microsoft, DOE Office of Science, NSF

Thanks …
Lucas Bang (UCSB), Jon Berry (Sandia), Eric Boman (Sandia), Aydin Buluc (LBL), John Conroy (CCS), Kevin Deweese (UCSB), Erika Duriakova (Dublin), Armando Fox (UCB), Shoaib Kamil (MIT), Jeremy Kepner (MIT), Tristan Konolige (UCSB), Adam Lugowski (UCSB), Tim Mattson (Intel), Brad McRae (TNC), Dave Mizell (YarcData), Lenny Oliker (LBL), Carey Priebe (JHU), Steve Reinhardt (YarcData), Lijie Ren (Google), Eric Robinson (Lincoln), Viral Shah (UIDAI), Veronika Strnadova (UCSB), Yun Teng (UCSB), Joshua Vogelstein (Duke), Drew Waranis (UCSB), Sam Williams (LBL)

Outline
• A few sample applications
• Sparse matrices for graph algorithms
• Software: CombBLAS, KDT, QuadMat
• Challenges, issues, and questions

Large-scale genomic mapping and sequencing [Strnadova, Buluc, Chapman, G, Gonzalez, Jegelska, Rokhsar, Oliker 2014]
– Problem: scale to millions of markers times thousands of individuals, with "unknown" rates > 50%
– Tools used or desired: spanning trees, approximate TSP, incremental connected components, spectral and custom clustering, k-nearest neighbors
– Results: using more data gives better genomic maps

Alignment and matching of brain scans [Conroy, G, Kratzer, Lyzinski, Priebe, Vogelstein 2014]
– Problem: match functional regions across individuals
– Tools: Laplacian eigenvectors, geometric spectral partitioning, clustering, and more

Landscape connectivity modeling [McRae et al.]
• Habitat quality, gene flow, corridor identification, conservation planning
• Targeting larger problems: Yellowstone-to-Yukon corridor
• Tools: graph contraction, connected components, Laplacian linear systems
(Figures courtesy of Brad McRae, NCEAS)

Combinatorial acceleration of Laplacian solvers [Boman, Deweese, G 2014]
(B^(-1/2) A B^(-1/2)) (B^(1/2) x) = B^(-1/2) b
– Problem: approximate the target graph by a sparse subgraph
– Ax = b in nearly linear time in theory [ST08, KMP10, KOSZ13]
– Tools: spanning trees, subgraph extraction and contraction, breadth-first search, shortest paths, …

The middleware challenge for graph analysis
[Diagram: continuous physical modeling maps onto linear algebra; discrete structure analysis maps onto graph theory; both ultimately run on computers.]

The middleware challenge for graph analysis
• By analogy to the Basic Linear Algebra Subroutines (BLAS) of numerical scientific computing: C = A*B, y = A*x, µ = x^T y. [Plot: Ops/sec vs. matrix size]
• What should the combinatorial BLAS look like?

Sparse array primitives for graph manipulation
• Sparse matrix-matrix multiplication (SpGEMM)
• Sparse matrix-dense vector multiplication
• Element-wise operations (.*)
• Sparse matrix indexing
• Matrices over various semirings: (+ . x), (min . +), (or . and), …
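To make the semiring point concrete before the "Examples of semirings" slide that follows, here is a minimal sketch, not CombBLAS code, of a sparse matrix-vector product parameterized by an (add, multiply, zero) triple. The toy graph, the dict-of-rows storage, and the helper name semiring_spmv are illustrative assumptions, not anything from the talk.

```python
# A minimal sketch (not CombBLAS) of "matrices over various semirings":
# the same sparse matrix-vector product drives different graph computations
# once the scalar (add, multiply, zero) triple is swapped.
INF = float("inf")

def semiring_spmv(rows, n, x, add, mul, zero):
    """y = A*x over a semiring; A is a dict-of-rows sparse matrix."""
    y = [zero] * n
    for i, row in rows.items():
        acc = zero
        for j, a_ij in row.items():
            acc = add(acc, mul(a_ij, x[j]))
        y[i] = acc
    return y

# Small weighted directed graph, stored by destination row:
# rows[i][j] = weight of edge j -> i, so y = A*x pushes values along edges.
rows = {
    1: {0: 5.0},
    2: {0: 2.0, 1: 1.0},
    3: {2: 4.0},
}
n = 4

# Boolean semiring (or, and): one frontier-expansion step of BFS from vertex 0.
frontier = [True, False, False, False]
reached = semiring_spmv(rows, n, frontier,
                        add=lambda a, b: a or b,
                        mul=lambda a, b: bool(a) and b,  # any stored weight counts as "edge present"
                        zero=False)
print(reached)   # [False, True, True, False]: vertices reachable in exactly one hop

# Tropical semiring (min, +): one relaxation step of single-source shortest paths.
dist = [0.0, INF, INF, INF]
relaxed = semiring_spmv(rows, n, dist,
                        add=min,
                        mul=lambda a, b: a + b,
                        zero=INF)
print(relaxed)   # [inf, 5.0, 2.0, inf]: tentative one-hop distances from vertex 0
```

Swapping only the scalar operations turns the same kernel from one hop of Boolean breadth-first search into one (min, +) relaxation step of shortest paths, which is the flexibility the primitives slide above is pointing at.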
Examples of semirings in graph algorithms
• Real field (R, +, x): classical numerical linear algebra
• Boolean algebra ({0,1}, |, &): graph traversal
• Tropical semiring (R ∪ {∞}, min, +): shortest paths
• (S, select, select): select a subgraph, or contract nodes to form a quotient graph
• (edge/vertex attributes, vertex data aggregation, edge data processing): schema for user-specified computation at vertices and edges

Multiple-source breadth-first search
[Figure: a 7-vertex example graph; multiplying A^T by a sparse matrix X of start vectors gives the next frontiers A^T X.]
• Sparse array representation => space efficient
• Sparse matrix-matrix multiplication => work efficient
• Three possible levels of parallelism: searches, vertices, edges

Graph contraction via sparse triple product
[Figure: a 6-vertex graph contracted to supernodes A1, A2, A3 by the triple product S A S^T, where S is the 3x6 indicator matrix of the partition.]

Subgraph extraction via sparse triple product
[Figure: an induced subgraph extracted by the triple product P A P^T, where P selects the chosen vertices.]

Counting triangles (clustering coefficient)
Clustering coefficient:
• Pr(wedge i-j-k makes a triangle with edge i-k)
• 3 * #triangles / #wedges
• 3 * 4 / 19 ≈ 0.63 in the example
• may want to compute for each vertex j

Inefficient way to count triangles with matrices:
• A = adjacency matrix
• #triangles = trace(A^3) / 6
• but A^3 is likely to be pretty dense

Cohen's algorithm to count triangles (sketched in code below):
– Count triangles by lowest-degree vertex.
– Enumerate "low-hinged" wedges.
– Keep wedges that close.

A = L + U (hi->lo + lo->hi)
L × U = B (wedge, low hinge)
B ∧ A = C (closed wedge)
sum(C) / 2 = 4 triangles in the example
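The masked product in Cohen's recipe above maps directly onto the sparse primitives from the earlier slide. Below is a minimal sketch using SciPy sparse matrices rather than CombBLAS; the small example graph is made up (it is not the graph on the slide), and for brevity the split into L and U uses vertex numbering instead of the degree ordering the slide calls for.

```python
# A minimal sketch (not the CombBLAS implementation) of Cohen's recipe:
# split A = L + U, form the wedge matrix B = L*U, keep only the wedges
# that close (elementwise mask with A), and sum.
import numpy as np
import scipy.sparse as sp

# Symmetric 0/1 adjacency matrix of a small undirected graph with exactly
# two triangles: (0,1,2) and (1,2,3).
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
n = 5
r = [i for i, j in edges] + [j for i, j in edges]
c = [j for i, j in edges] + [i for i, j in edges]
A = sp.csr_matrix((np.ones(len(r)), (r, c)), shape=(n, n))

# Split on vertex number for brevity; the slide orders vertices by degree
# so that each triangle is charged to its lowest-degree vertex.
L = sp.tril(A, k=-1).tocsr()   # "hi->lo" edges
U = sp.triu(A, k=1).tocsr()    # "lo->hi" edges

B = L @ U                      # B[i,k] = number of wedges i-j-k with low hinge j
C = B.multiply(A)              # keep only wedges whose endpoints are adjacent
print(C.sum() / 2)             # each triangle is counted twice -> 2.0
```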
A few other graph algorithms we've implemented in linear algebraic style
• Maximal independent set (KDT/SEJITS) [BDFGKLOW 2013]
• Peer-pressure clustering (SPARQL) [DGLMR 2013]
• Time-dependent shortest paths (CombBLAS) [Ren 2012]
• Gaussian belief propagation (KDT) [LABGRTW 2011]
• Markov clustering (CombBLAS, KDT) [BG 2011, LABGRTW 2011]
• Betweenness centrality (CombBLAS) [BG 2011]
• Hybrid BFS/bully connected components (CombBLAS) [Konolige, in progress]
• Geometric mesh partitioning (Matlab) [GMT 1998]

Graph algorithms in the language of linear algebra
• Kepner et al. study [2006]: fundamental graph algorithms including min spanning tree, shortest paths, independent set, max flow, clustering, …
• SSCA#2 / centrality [2008]
• Basic breadth-first search / Graph500 [2010]
• Beamer et al. [2013]: direction-optimizing breadth-first search, implemented in CombBLAS

Combinatorial BLAS
http://gauss.cs.ucsb.edu/~aydin/CombBLAS
An extensible distributed-memory library offering a small but powerful set of linear algebraic operations specifically targeting graph analytics.
• Aimed at graph algorithm designers/programmers who are not expert in mapping algorithms to parallel hardware.
• Flexible templated C++ interface.
• Scalable performance from laptop to 100,000-processor HPC.
• Open source software.
• Version 1.4.0 released January 16, 2014.

Some Combinatorial BLAS functions
Function | Parameters | Returns | Math notation
SpGEMM | sparse matrices A and B; unary functors (op) | sparse matrix | C = op(A) * op(B)
SpM{Sp}V (Sp: sparse) | sparse matrix A; sparse/dense vector x | sparse/dense vector | y = A * x
SpEWiseX | sparse matrices or vectors; binary functor and predicate | in place, or sparse matrix/vector | C = A .* B
Reduce | sparse matrix A and functors | dense vector | y = sum(A, op)
SpRef | sparse matrix A; index vectors p and q | sparse matrix | B = A(p,q)
SpAsgn | sparse matrices A and B; index vectors p and q | none | A(p,q) = B
Scale | sparse matrix A; dense matrix or vector X | none | (check manual)
Apply | any matrix or vector X; unary functor (op) | none | op(X)

Combinatorial BLAS: distributed-memory reference implementation
[Class diagram: Combinatorial BLAS functions and operators sit on top of DistMat (which has a CommGrid), FullyDistVec, and related classes; SpDistVec, DenseDistVec, SpDistMat, and DenseDistMat are polymorphic variants; SpMat enforces an interface only, backed by DCSC, CSC, Triples, or CSB storage.]

2D layout for sparse matrices & vectors
[Figure: matrix blocks A_{1,1} … A_{3,3} and vector pieces x_{1,1} … x_{3,3} laid out on a pr × pc processor grid.]
• Matrix and vector distributions, interleaved on each other.
• Default distribution in Combinatorial BLAS.
• Scalable with increasing number of processes.
– 2D matrix layout wins over 1D with large core counts and with limited bandwidth/compute
– 2D vector layout sometimes important for load balance

Combinatorial BLAS "users" (Sep 2013)
IBM (T.J. Watson, Zurich, & Tokyo), Microsoft, Intel, Cray, Stanford, UC Berkeley, Carnegie-Mellon, Georgia Tech, Ohio State, Columbia, U Minnesota, UC Merced, King Fahd U, Tokyo Inst of Technology, Chinese Academy of Sciences, U Ghent (Belgium), Bilkent U (Turkey), U Canterbury (New Zealand), Purdue, Indiana U, Mississippi State

QuadMat shared-memory data structure [Lugowski, G]
• Subdivide by dimension on power-of-2 indices (see the code sketch below).
• Blocks store enough matrix elements for meaningful computation; denser parts of the matrix have more blocks.
[Figure: an m-row by n-column matrix recursively subdivided into blocks.]

QuadMat example: Scale-10 RMAT
• Scale-10 RMAT matrix (887 x 887, 21304 non-nulls), up to 1024 non-nulls per block, vertices in order of increasing degree.
• Blue blocks: uint16_t indices; green blocks: uint8_t indices.

Pair-List QuadMat SpGEMM algorithm
– Problem: natural recursive matrix multiplication is inefficient due to a deep tree of sparse matrix additions.
– Solution: rearrange into block inner-product pair lists.
– A single matrix element can participate in pair lists with different block sizes.
– Symbolic phase followed by computational phase.
– Multithreaded implementation in Intel TBB.

QuadMat compared to CSparse & CombBLAS
[Performance comparison figure.]

Knowledge Discovery Toolbox
http://kdt.sourceforge.net/
A general graph library with operations based on linear algebraic primitives
• Aimed at domain experts who know their problem well but don't know how to program a supercomputer
• Easy-to-use Python interface
• Runs on a laptop as well as a cluster with 10,000 processors
• Open source software (New …
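Returning to the QuadMat data structure a few slides above: the following is a minimal sketch, not the actual QuadMat code, of its subdivision idea, splitting on power-of-2 index boundaries until a block holds few enough non-nulls to work on directly. The leaf threshold leaf_nnz, the flat list of leaf blocks, and the triple representation are illustrative assumptions.

```python
# A minimal sketch (not QuadMat itself) of quadtree-style subdivision on
# power-of-2 index boundaries: split until a block holds at most `leaf_nnz`
# non-nulls, so denser regions are covered by more, smaller blocks.
from dataclasses import dataclass

@dataclass
class Block:
    row0: int       # top-left corner (row)
    col0: int       # top-left corner (column)
    size: int       # side length of this square block (a power of 2)
    triples: list   # (i, j, value) non-nulls falling inside the block

def build_quadtree(triples, row0, col0, size, leaf_nnz=4):
    """Return a flat list of leaf blocks covering the given non-nulls."""
    if len(triples) <= leaf_nnz or size == 1:
        return [Block(row0, col0, size, triples)]
    half = size // 2
    quads = [[[], []], [[], []]]
    for i, j, v in triples:
        quads[int(i - row0 >= half)][int(j - col0 >= half)].append((i, j, v))
    blocks = []
    for qi in (0, 1):
        for qj in (0, 1):
            if quads[qi][qj]:   # empty quadrants produce no blocks at all
                blocks += build_quadtree(quads[qi][qj],
                                         row0 + qi * half, col0 + qj * half,
                                         half, leaf_nnz)
    return blocks

# Example: a dense 4x4 cluster in the top-left corner plus a few scattered entries.
triples = [(i, j, 1.0) for i in range(4) for j in range(4)]
triples += [(9, 2, 1.0), (3, 12, 1.0), (13, 13, 1.0)]
for b in build_quadtree(triples, 0, 0, 16):
    print(f"block at ({b.row0},{b.col0}), size {b.size}, {len(b.triples)} non-nulls")
```

On the example input, the dense cluster in the corner ends up covered by several small blocks while each scattered non-null sits alone in a large block, matching the "denser parts of the matrix have more blocks" behavior described above.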
