
The Graph BLAS effort and its implications for Exascale

David Bader (GA Tech), Aydın Buluç (LBNL), John Gilbert (UCSB), Joseph Gonzalez (UCB), Jeremy Kepner (MIT LL), Tim Mattson (Intel)

SIAM Workshop on Exascale Applied Mathematics Challenges and Opportunities (EX14), July 6, 2014

Graphs matter in Applied Math

Whole genome assembly / Graph theoretical analysis of brain connectivity

Vertices: reads

Vertices: k-mers

Whole genome assembly: 26 billion (8B of which are non-erroneous) unique k-mers (vertices) in the hexaploid wheat genome W7984 for k = 51. Brain connectivity: potentially millions of neurons and billions of edges.

Schatz et al. (2010) Perspective: Assembly of large genomes w/2nd-gen seq. Genome Res. (figure reference)

Graphs matter in Applied Math

[Figure: a 5×5 sparse matrix A and its row-permuted version PA]

Matching in bipartite graphs: permuting to heavy diagonal or block triangular form

Graph partitioning: dynamic load balancing in parallel simulations

Picture (left) credit: Sanders and Schulz

Problem size: as big as the sparse linear system to be solved or the simulation to be performed

Graphs as middleware

[Diagram: Continuous physical modeling → Linear algebra → Computers; Discrete structure analysis → Graph theory → Computers]

By analogy to the Basic Linear Algebra Subroutines (BLAS) of numerical scientific computing:

C = A*B,  y = A*x,  µ = xᵀy   [plot: Ops/Sec vs. matrix size]

What should the Combinatorial BLAS look like?

The case for sparse matrices

Many irregular applications contain coarse-grained parallelism that can be exploited by abstractions at the proper level.

Traditional graph computations vs. graphs in the language of linear algebra:

• Data driven, unpredictable communication → fixed communication patterns
• Irregular and unstructured, poor locality of reference → operations on matrix blocks exploit the memory hierarchy
• Fine grained data accesses, dominated by latency → coarse grained parallelism, bandwidth limited

Linear-algebraic primitives for graphs

• Sparse matrix-sparse matrix multiplication
• Sparse matrix-sparse vector multiplication
• Element-wise operations
• Sparse matrix indexing

The Combinatorial BLAS implements these, and more, on arbitrary semirings, e.g. (×, +), (and, or), (+, min)

Examples of semirings in graph algorithms

• Real field (R, +, ×): classical numerical linear algebra
• Boolean algebra ({0, 1}, |, &): graph traversal
• Tropical semiring (R ∪ {∞}, min, +): shortest paths
• (S, select, select): select subgraph, or contract nodes to form quotient graph
• (edge/vertex attributes, vertex data aggregation, edge data processing): schema for user-specified computation at vertices and edges
• (R, max, +): graph matching and network alignment
• (R, min, ×): maximal independent set

• Shortened semiring notation: (Set, Add, Multiply). Both identities omitted.
• Add: traverses edges; Multiply: combines edges/paths at a vertex
• Neither add nor multiply needs to have an inverse.
• Both add and multiply are associative; multiply distributes over add
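To make the notation concrete, here is a minimal sketch (plain sequential C++; the names and dense layout are illustrative, not the Combinatorial BLAS API) of a semiring as a type and a generic matrix-vector product over it:

    #include <limits>
    #include <vector>

    // (R ∪ {∞}, min, +): "add" = min combines paths at a vertex,
    // "multiply" = + extends a path along an edge.
    template <typename T>
    struct TropicalSemiring {
        static T zero() { return std::numeric_limits<T>::infinity(); }
        static T add(T a, T b) { return a < b ? a : b; }
        static T mul(T a, T b) { return a + b; }
    };

    // y = A ⊕.⊗ x over an arbitrary semiring SR (dense, for illustration).
    template <typename SR, typename T>
    std::vector<T> semiring_spmv(const std::vector<std::vector<T>>& A,
                                 const std::vector<T>& x) {
        std::vector<T> y(A.size(), SR::zero());
        for (size_t i = 0; i < A.size(); ++i)
            for (size_t j = 0; j < x.size(); ++j)
                y[i] = SR::add(y[i], SR::mul(A[i][j], x[j]));
        return y;
    }

One such product over the tropical semiring, e.g. semiring_spmv<TropicalSemiring<double>, double>(A, dist), is a single Bellman-Ford relaxation step; over ({0, 1}, |, &) it is one breadth-first frontier expansion.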

Multiple-source breadth-first search

[Figure: a graph on vertices 1-7; the transposed adjacency matrix Aᵀ times a sparse matrix B of frontier columns yields the next frontiers Aᵀ · B]

• Sparse array representation => space efficient
• Sparse matrix-matrix multiplication => work efficient
• Three possible levels of parallelism: searches, vertices, edges
• Highly-parallel implementation for Betweenness Centrality* (*: a measure of influence in graphs, based on shortest paths)
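A minimal sketch (sequential C++ with an assumed data layout, not the Combinatorial BLAS API) of one frontier-expansion step, i.e. one Aᵀ · B over the Boolean (|, &) semiring:

    #include <set>
    #include <vector>

    // radj[u] lists the out-neighbors of u (nonzeros of column u of A^T);
    // B[s] holds the current frontier of search s (nonzero rows of column s).
    using Graph = std::vector<std::vector<int>>;
    using Frontiers = std::vector<std::set<int>>;

    Frontiers expand(const Graph& radj, const Frontiers& B) {
        Frontiers next(B.size());
        for (size_t s = 0; s < B.size(); ++s)   // level 1: each search
            for (int u : B[s])                  // level 2: each frontier vertex
                for (int v : radj[u])           // level 3: each incident edge
                    next[s].insert(v);          // Boolean OR-accumulate
        return next;
    }

A full BFS would additionally mask out previously visited vertices after each step; the three loops correspond exactly to the three levels of parallelism listed above.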

Graph comparison (LA: linear algebra). Slide inspiration: Jeremy Kepner (MIT LL)

Algorithm (Problem)               Canonical Complexity   LA-Based Complexity   Critical Path (for LA)
Breadth-first search              Θ(m)                   Θ(m)                  Θ(diameter)
Betweenness Centrality (unwtd.)   Θ(mn)                  Θ(mn)                 Θ(diameter)
All-pairs shortest-paths (dense)  Θ(n³)                  Θ(n³)                 Θ(n)
Prim (MST)                        Θ(m + n log n)         Θ(n²)                 Θ(n)
Borůvka (MST)                     Θ(m log n)             Θ(m log n)            Θ(log² n)
Edmonds-Karp (Max Flow)           Θ(m²n)                 Θ(m²n)                Θ(mn)
Greedy MIS (MIS)                  Θ(m + n log n)         Θ(mn + n²)            Θ(n)
Luby (MIS)                        Θ(m + n log n)         Θ(m log n)            Θ(log n)

The majority of the selected algorithms (n = |V| and m = |E|) can be represented with array-based constructs with equivalent complexity.

Combinatorial BLAS: http://gauss.cs.ucsb.edu/~aydin/CombBLAS

An extensible distributed-memory library offering a small but powerful set of linear algebraic operations specifically targeting graph analytics.

• Aimed at graph algorithm designers/programmers who are not experts in mapping algorithms to parallel hardware.
• Flexible templated C++ interface; 2D data decomposition.
• Scalable performance from laptop to 100,000-processor HPC.
• Open source software (v1.4.0 released January 2014).

Matrix times Matrix over semiring

Implements C ⊕= A ⊕.⊗ B, i.e.
    for j = 1 : N
        C(i,k) = C(i,k) ⊕ (A(i,j) ⊗ B(j,k))

Inputs:
• matrix A: M×N (sparse or dense)
• matrix B: N×L (sparse or dense)
• scalar "add" function ⊕
• scalar "multiply" function ⊗
• transpose flags for A, B, C

Optional inputs:
• matrix C: M×L (sparse or dense). If input C is omitted, implements C = A ⊕.⊗ B.

Outputs:
• matrix C: M×L (sparse or dense)

Notes:
• S is the set of scalars, user-specified; defaults to IEEE double float
• ⊕ defaults to floating-point +; ⊗ defaults to floating-point *
• Transpose flags specify operation on Aᵀ, Bᵀ, and/or Cᵀ instead

Specific cases and function names:
• SpGEMM: sparse matrix times sparse matrix
• SpMSpV: sparse matrix times sparse vector
• SpMV: sparse matrix times dense vector
• SpMM: sparse matrix times dense matrix
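A reference-semantics sketch of the loop nest above (dense, no transpose flags; the function name and layout are assumptions for illustration):

    #include <functional>
    #include <vector>

    template <typename T>
    using Dense = std::vector<std::vector<T>>;

    // C ⊕= A ⊕.⊗ B with user-supplied scalar add (⊕) and multiply (⊗).
    template <typename T>
    void semiring_mxm(const Dense<T>& A,            // M×N
                      const Dense<T>& B,            // N×L
                      Dense<T>& C,                  // M×L, accumulated in place
                      std::function<T(T, T)> add,   // ⊕
                      std::function<T(T, T)> mul) { // ⊗
        for (size_t i = 0; i < A.size(); ++i)
            for (size_t j = 0; j < B.size(); ++j)
                for (size_t k = 0; k < B[0].size(); ++k)
                    C[i][k] = add(C[i][k], mul(A[i][j], B[j][k]));
    }

Passing the defaults + and * recovers ordinary GEMM; passing min and + gives one tropical (shortest-path) matrix multiply.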

Can we standardize a "Graph BLAS"?

No, it's not reasonable to define a universal set of building blocks.

Huge diversity in matching graph algorithms to hardware platforms. No consensus on data structures or linguistic primitives. Lots of graph algorithms remain to be discovered. Early standardization can inhibit innovation.

Yes, it is reasonable to define a common set of building blocks… … for graphs as linear algebra.

Representing graphs in the language of linear algebra is a mature field. Algorithms, high-level interfaces, and implementations vary. But the core primitives are well established.

The Graph BLAS effort

Abstract-- It is our view that the state of the art in constructing a large collection of graph algorithms in terms of linear algebraic operations is mature enough to support the emergence of a standard set of primitive building blocks. This paper is a position paper defining the problem and announcing our intention to launch an open effort to define this standard.

• The Graph BLAS Forum: http://istc-bigdata.org/GraphBlas/ • Graph Algorithms Building Blocks (GABB workshop at IPDPS’14): http://www.graphanalysis.org/workshop2014.html

Challenges at Exascale

"New algorithms need to be developed that identify and leverage more concurrency and that reduce synchronization and communication" - ASCR Applied Mathematics Research for Exascale Computing Report

The high-performance requirement is the invariant at {any}scale.

Challenges specific to Exascale and beyond:
• Power/Energy
• Data Locality
• Extreme Concurrency/Parallelism
• Resilience/Fault tolerance

Performance of Linear Algebraic Graph Algorithms

Combinatorial BLAS was the fastest among all tested graph-processing frameworks on 3 out of 4 benchmarks in an independent study by Intel.

The linear algebra abstraction enables high performance, within 4X of native performance for PageRank and collaborative filtering.

Satish, Nadathur, et al. "Navigating the Maze of Graph Analytics Frameworks using Massive Graph Datasets", in SIGMOD'14

Energy and data locality challenges

"Data movement is overtaking computation as the most dominant cost of a system both in terms of dollars and in terms of energy consumption. Consequently, we should be more explicit about reasoning about data movement."

Data movement (communication) costs energy

Image courtesy of Kogge & Shalf

Kogge, Peter and Shalf, John. "Exascale Computing Trends: Adjusting to the 'New Normal' for Computer Architecture". Computing in Science & Engineering, 15, 16-26 (2013)

Communication-avoidance motivation

Two kinds of costs: arithmetic (FLOPs) and communication (moving data). Goal: develop faster algorithms that minimize communication (attaining the lower bound if possible).

Running time = γ · #FLOPs + β · #Words + (α · #Messages)

[Figure: sequential and distributed-memory models with fast/local memory of size M per each of P processors; CPU speeds improve ~59%/year while bandwidth improves only ~23-26%/year; the 2004 trend transition to multi-core added further communication costs]

Communication crucial for graphs

- Often no surface to volume ratio
- Very little data reuse in existing algorithmic formulations
- Already heavily communication bound

[Figure: percentage of time spent in computation vs. communication for 2D sparse matrix-matrix multiply on 1 to 4096 cores of Cray XT4 (Franklin, NERSC); communication dominates as core counts grow. The multiply emulates graph contraction and AMG restriction operations: a scale-23 R-MAT (scale-free) graph times an order-4 restriction operator]

Buluç, Gilbert. Parallel sparse matrix-matrix multiplication and indexing: Implementation and experiments. SIAM Journal of Scientific Computing, 2012

Reduced Communication Graph Algorithms

Communication-avoiding approaches in linear algebra:
[A] Exploiting extra available memory (2.5D algorithms): typically applies to matrix-matrix operations
[B] Communicating once every k steps (k-step Krylov methods): typically applies to iterative sparse methods

Good news: We successfully generalized [A] to sparse matrix-matrix multiplication (graph contraction, multi-source BFS, clustering, etc.) and all-pairs shortest paths (Isomap).

Unknown: whether [B] can be applied to iterative graph algorithms.

Manifold Learning

Isomap (nonlinear dimensionality reduction): preserves the intrinsic geometry of the data by using the geodesic distances on the manifold between all pairs of points.

Tools used or desired:
- K-nearest neighbors
- All-pairs shortest paths (APSP)
- Top-k eigenvalues

Tenenbaum, Joshua B., Vin de Silva, and John C. Langford. "A global geometric framework for nonlinear dimensionality reduction." Science 290.5500 (2000): 2319-2323

All pairs shortest paths at scale

R-Kleene: a recursive APSP algorithm that is rich in semiring matrix-matrix multiplications, where + is "min" and × is "add":

A = A*;   % recursive call
B = AB;  C = CA;
D = D + CB;
D = D*;   % recursive call
B = BD;  C = DC;
A = A + BC;

[Figure: the matrix partitioned into blocks A, B, C, D over vertex sets V1 and V2]
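A compact sketch of this recursion (dense, sequential C++ over the (min, +) semiring, n a power of two, absent edges stored as +infinity; illustrative only, not the tuned distributed implementation):

    #include <algorithm>
    #include <vector>

    using Mat = std::vector<std::vector<double>>;

    // Min-plus multiply-accumulate on s-by-s blocks of the same matrix M:
    // block(ci,cj) = min(block(ci,cj), block(ai,aj) min.+ block(bi,bj)).
    // In-place reads of freshly updated entries are safe for (min, +):
    // every write is the length of some real path, never an underestimate.
    void minplus(Mat& M, int ai, int aj, int bi, int bj, int ci, int cj, int s) {
        for (int i = 0; i < s; ++i)
            for (int j = 0; j < s; ++j)
                for (int k = 0; k < s; ++k)
                    M[ci + i][cj + k] = std::min(M[ci + i][cj + k],
                        M[ai + i][aj + j] + M[bi + j][bj + k]);
    }

    // In-place closure M := M* on the n-by-n block starting at (r, r).
    // Blocks: A = (r,r), B = (r,q), C = (q,r), D = (q,q), with q = r + s.
    void kleene(Mat& M, int r, int n) {
        if (n == 1) { M[r][r] = std::min(M[r][r], 0.0); return; }
        int s = n / 2, q = r + s;
        kleene(M, r, s);                  // A = A*   (recursive call)
        minplus(M, r, r, r, q, r, q, s);  // B = AB
        minplus(M, q, r, r, r, q, r, s);  // C = CA
        minplus(M, q, r, r, q, q, q, s);  // D = D + CB
        kleene(M, q, s);                  // D = D*   (recursive call)
        minplus(M, r, q, q, q, r, q, s);  // B = BD
        minplus(M, q, q, q, r, q, r, s);  // C = DC
        minplus(M, r, q, q, r, r, r, s);  // A = A + BC
    }

Calling kleene(M, 0, n) on the weight matrix leaves all-pairs shortest-path distances in M; each minplus call is a semiring matrix-matrix multiply that the distributed algorithm replaces with 2.5D SpGEMM.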

Using the "right" semiring and 2.5D communication-avoiding matrix multiplication (with c replicas): Bandwidth = O(n²/√(cp)), Latency = O(√(cp) · log²(p))

Communication-avoiding APSP on distributed memory

A 65K-vertex dense problem is solved in about two minutes.

[Figure: GFlops vs. number of compute nodes (1 to 1024) for the 2.5D semiring matrix multiply (c = replication factor: 1, 4, 16) at n = 4096 and n = 8192, on Jaguar (Titan without GPUs) at ORNL; nodes have 24 cores each]

Solomonik, Buluç, and Demmel. "Minimizing communication in all-pairs shortest paths", IPDPS 2013

Fault Tolerance of Linear Algebraic Graph Algorithms

Literature exists on fault-tolerant linear algebra operations. Overhead: O(dN) to detect/correct O(d) errors on N-by-N matrices.

[Figure: A (augmented with a column-checksum row) × B (augmented with a row-checksum column) = C (carrying both checksums)]

Good news: the overhead can often be tolerated in sparse matrix-matrix operations for graph algorithms. Unknown: the techniques are for fields/rings; how do they apply to semiring algebra?
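A toy sketch of the checksum idea over the real field (plain C++; the names are hypothetical, and for brevity it only detects, rather than corrects, a corrupted row of C):

    #include <cmath>
    #include <vector>

    using Mat = std::vector<std::vector<double>>;

    // Huang-Abraham-style check: since sum_k C[i][k] must equal
    // sum_j A[i][j] * (sum_k B[j][k]), a corrupted entry of C shows up
    // as a mismatch between the row sums of C and the checksums
    // propagated through the multiplication.
    bool check_product(const Mat& A, const Mat& B, const Mat& C, double tol) {
        for (size_t i = 0; i < A.size(); ++i) {
            double got = 0, expect = 0;
            for (double c : C[i]) got += c;          // actual row sum of C
            for (size_t j = 0; j < B.size(); ++j) {  // checksum of row i
                double rowsum = 0;
                for (double b : B[j]) rowsum += b;
                expect += A[i][j] * rowsum;
            }
            if (std::fabs(expect - got) > tol) return false;
        }
        return true;
    }

Note that the check relies on + having an inverse (subtraction) to compare sums; whether analogous checksums exist for, say, the (min, +) semiring is exactly the open question above.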

Extreme Concurrency/Parallelism

Linear algebra is the right abstraction for exploiting the multiple levels of parallelism available in many graph algorithms.

[Figure: multiple-source BFS as the sparse product Aᵀ · B]

Encapsulates three levels of parallelism:
1. columns(B): multiple BFS searches in parallel
2. columns(Aᵀ) + rows(B): parallel over frontier vertices in each BFS
3. rows(Aᵀ): parallel over incident edges of each frontier vertex

Buluç and J.R. Gilbert. The Combinatorial BLAS: Design, implementation, and applications. IJHPCA, 2011
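For instance, reusing the earlier frontier-expansion sketch (a shared-memory illustration with assumed layout; CombBLAS itself uses a 2D distributed decomposition), level 1 maps directly onto a parallel loop over searches, with levels 2 and 3 as the inner loops:

    #include <omp.h>
    #include <set>
    #include <vector>

    using Graph = std::vector<std::vector<int>>;
    using Frontiers = std::vector<std::set<int>>;

    Frontiers expand_parallel(const Graph& radj, const Frontiers& B) {
        Frontiers next(B.size());
        #pragma omp parallel for schedule(dynamic)    // level 1: searches
        for (long s = 0; s < (long)B.size(); ++s)
            for (int u : B[s])                        // level 2: frontier vertices
                for (int v : radj[u])                 // level 3: incident edges
                    next[s].insert(v);                // thread s owns column s: no races
        return next;
    }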

Conclusions

• We believe the state of the art for "graphs in the language of linear algebra" is mature enough to define a common set of building blocks.
• Linear algebra has not solved its exascale challenges yet, but it is not clueless either.
• All other graph abstractions (think like a vertex, gather-apply-scatter, visitors) are clueless/ignorant about addressing exascale challenges.
• If graph algorithms ever scale to exascale, it will most likely be in the language of linear algebra.
• Come join our next event at HPEC'14.

Acknowledgments

• Ariful Azad (LBL)
• Grey Ballard (UCB)
• Jarrod Chapman (JGI)
• Jim Demmel (UC Berkeley)
• John Gilbert (UCSB)
• Evangelos Georganas (UCB)
• Laura Grigori (INRIA)
• Ben Lipshitz (UCB)
• Adam Lugowski (UCSB)
• Sang-Yun Oh (LBL)
• Lenny Oliker (Berkeley Lab)
• Steve Reinhardt (Cray)
• Dan Rokhsar (JGI/UCB)
• Oded Schwartz (UCB)
• Harsha Simhadri (LBL)
• Edgar Solomonik (UCB)
• Veronika Strnadova (UCSB)
• Sivan Toledo (Tel Aviv Univ)
• Dani Ushizima (Berkeley Lab)
• Kathy Yelick (Berkeley Lab/UCB)

My work is funded by: