Chapel Hypergraph Library (CHGL)
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
- Practical Parallel Hypergraph Algorithms. Julian Shun ([email protected]), MIT CSAIL.
  Abstract: While there has been significant work on parallel graph processing, there has been surprisingly little work on high-performance hypergraph processing. This paper presents a collection of efficient parallel algorithms for hypergraph processing, including algorithms for betweenness centrality, maximal independent set, k-core decomposition, hypertrees, hyperpaths, connected components, PageRank, and single-source shortest paths. For these problems, we either provide new parallel algorithms or more efficient implementations than prior work. Furthermore, our algorithms are theoretically efficient in terms of work and depth. To implement our algorithms, we extend the Ligra graph processing framework to support hypergraphs, and our implementations benefit from graph optimizations including switching between sparse and dense traversals based on the frontier size, edge-aware parallelization, using buckets to prioritize processing of vertices, and compression. Our experiments on a 72-core machine show that our algorithms obtain excellent parallel speedups, and are significantly faster than algorithms in existing hypergraph processing frameworks. [Figure 1: an example hypergraph representing the groups {v0, v1, v2}, {v1, v2, v3}, and {v0, v3}, and its bipartite representation.] CCS Concepts: Computing methodologies → Parallel algorithms; Shared memory algorithms.
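  As a concrete illustration of the bipartite representation in Figure 1, here is a minimal Python sketch (not CHGL or Ligra code; all names are illustrative) of a hypergraph stored as hyperedge membership sets plus the transposed vertex-to-hyperedge map, with one frontier-expansion step of the kind such traversal frameworks optimize:

```python
# Minimal sketch: the Figure 1 hypergraph as a bipartite incidence
# structure, i.e. hyperedge -> member vertices plus the transposed
# map vertex -> incident hyperedges.
from collections import defaultdict

# Hyperedges e0, e1, e2 as vertex sets (the groups from Figure 1).
hyperedges = {
    "e0": {"v0", "v1", "v2"},
    "e1": {"v1", "v2", "v3"},
    "e2": {"v0", "v3"},
}

# Transpose: vertex -> set of incident hyperedges.
incident = defaultdict(set)
for e, members in hyperedges.items():
    for v in members:
        incident[v].add(e)

# One traversal step alternates sides of the bipartite graph: from a
# frontier of vertices, gather incident hyperedges, then expand back
# to their member vertices (one "hypergraph hop").
def hyper_neighbors(frontier):
    edges = set().union(*(incident[v] for v in frontier))
    return set().union(*(hyperedges[e] for e in edges)) - frontier

print(hyper_neighbors({"v0"}))  # {'v1', 'v2', 'v3'}
```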
- Data Structure. EDUSAT Learning Resource Material on Data Structure (for 3rd Semester CSE & IT). Contributors: Er. Subhanga Kishore Das, Sr. Lect. CSE; Mrs. Pranati Pattanaik, Lect. CSE; Mrs. Swetalina Das, Lect. CA; Mrs. Manisha Rath, Lect. CA; Er. Dillip Kumar Mishra, Lect.; Ms. Supriti Mohapatra, Lect.; Ms. Soma Paikaray, Lect. Copyright DTE&T, Odisha.
  Syllabus — Semester & Branch: 3rd sem CSE/IT. Theory: 4 periods per week; total 60 periods per semester; examination: 3 hours. Marks: Teachers' Assessment 10, Class Test 20, End Semester Exam 70; total 100 marks.
  Objective: the effectiveness of any application implemented on a computer depends mainly on how effectively its information can be stored, and various data structures serve this purpose. This paper exposes students to fundamental structures (arrays, stacks, queues, trees, etc.) and to fundamental I/O and manipulation techniques such as sorting and searching.
  1.0 Introduction (04 periods): data, information, data types; data structures and their operations; abstract data types; algorithms and their complexity; time/space tradeoff.
  2.0 String Processing (03 periods): basic terminology, storing strings; the character data type; string operations.
  3.0 Arrays (07 periods): introduction to arrays; linear arrays and their representation in memory; traversing linear arrays, inserting and deleting elements; multidimensional arrays, representation of two-dimensional arrays in memory (row-major and column-major order, shown in the sketch after this item), and pointers; sparse matrices.
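  As a small worked example for unit 3.4 above, here is an illustrative Python sketch of row-major vs. column-major address computation; the function names, base address, element size, and 1-based indexing convention are assumptions for illustration, not material from the syllabus:

```python
# Address of element (i, j) of an m x n array stored in linear memory,
# 1-based indexing.

def addr_row_major(base, elem_size, n_cols, i, j):
    # Rows laid out one after another: skip (i-1) full rows, then (j-1) cells.
    return base + elem_size * ((i - 1) * n_cols + (j - 1))

def addr_col_major(base, elem_size, n_rows, i, j):
    # Columns laid out one after another: skip (j-1) full columns, then (i-1) cells.
    return base + elem_size * ((j - 1) * n_rows + (i - 1))

# A 4x5 array of 4-byte ints starting at address 1000:
print(addr_row_major(1000, 4, 5, 2, 3))  # 1000 + 4*(1*5 + 2) = 1028
print(addr_col_major(1000, 4, 4, 2, 3))  # 1000 + 4*(2*4 + 1) = 1036
```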
- Data Structure Invariants. Lecture 8 Notes, Data Structures. 15-122: Principles of Imperative Computation (Spring 2016). Frank Pfenning, André Platzer, Rob Simmons.
  1 Introduction. In this lecture we introduce the idea of imperative data structures. So far, the only interfaces we've used carefully are pixels and string bundles. Both of these interfaces had the property that, once we created a pixel or a string bundle, we weren't interested in changing its contents. In this lecture, we'll talk about an interface that mimics the arrays that are primitively available in C0. To implement this interface, we'll need to round out our discussion of types in C0 by discussing pointers and structs, two great tastes that go great together. We will discuss using contracts to ensure that pointer accesses are safe. Relating this to our learning goals, we have — Computational Thinking: we illustrate the power of abstraction by considering both the client side and the library side of the interface to a data structure. Algorithms and Data Structures: the abstract arrays will be one of our first examples of abstract datatypes. Programming: introduction of structs and pointers, use and design of interfaces.
  2 Structs. So far in this course, we've worked with five different C0 types — int, bool, char, string, and arrays t[] (there is an array type t[] for every type t). The character, Boolean, and integer values that we manipulate, store locally, and pass to functions are just the values themselves. For arrays (and strings), the things we store in assignable variables or pass to functions are addresses: references to the place where the data stored in the array can be accessed.
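  The notes' own examples are in C0 and are not reproduced here; as a rough analogue in Python, the sketch below shows the contract idea the notes describe — an abstract array whose interface checks a precondition on every access rather than trusting the client. The class and method names are illustrative assumptions:

```python
# Rough Python analogue (not the notes' C0 code) of an "abstract array"
# interface guarded by contracts: every access checks its precondition,
# mirroring how contracts keep pointer/array accesses safe.

class AbstractArray:
    def __init__(self, size, default=0):
        assert size >= 0, "precondition: size must be non-negative"
        self._data = [default] * size
        self._size = size

    def get(self, i):
        assert 0 <= i < self._size, "precondition: index in bounds"
        return self._data[i]

    def set(self, i, value):
        assert 0 <= i < self._size, "precondition: index in bounds"
        self._data[i] = value

a = AbstractArray(10)
a.set(3, 42)
print(a.get(3))   # 42
# a.get(10)       # would fail the contract instead of corrupting memory
```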
- Multilevel Hypergraph Partitioning with Vertex Weights Revisited. Tobias Heuer, Nikolai Maas, Sebastian Schlag (Karlsruhe Institute of Technology, Karlsruhe, Germany).
  Abstract: The balanced hypergraph partitioning problem (HGP) is to partition the vertex set of a hypergraph into k disjoint blocks of bounded weight, while minimizing an objective function defined on the hyperedges. Whereas real-world applications often use vertex and edge weights to accurately model the underlying problem, the HGP research community commonly works with unweighted instances. In this paper, we argue that, in the presence of vertex weights, current balance constraint definitions either yield infeasible partitioning problems or allow unnecessarily large imbalances, and propose a new definition that overcomes these problems. We show that state-of-the-art hypergraph partitioners often struggle considerably with weighted instances and tight balance constraints (even with our new balance definition). Thus, we present a recursive-bipartitioning technique that is able to reliably compute balanced (and hence feasible) solutions. The proposed method balances the partition by pre-assigning a small subset of the heaviest vertices to the two blocks of each bipartition (using an algorithm originally developed for the job scheduling problem) and optimizes the actual partitioning objective on the remaining vertices. We integrate our algorithm into the multilevel hypergraph…
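  For orientation, here is a hedged Python sketch of the conventional balance constraint the abstract alludes to — each block's weight at most (1 + ε) times the average block weight, rounded up. The paper's revised definition is not reproduced here, and the function and parameter names are illustrative:

```python
import math

# Conventional HGP balance check: block weight <= (1 + eps) * ceil(total / k).
# The paper argues this definition misbehaves on weighted instances.

def is_balanced(blocks, eps):
    """blocks: one list of vertex weights per block."""
    total = sum(sum(b) for b in blocks)
    k = len(blocks)
    lmax = (1 + eps) * math.ceil(total / k)   # upper bound per block
    return all(sum(b) <= lmax for b in blocks)

# Two blocks, eps = 0.03: total weight 200, bound = 1.03 * 100 = 103.
print(is_balanced([[60, 43], [50, 47]], 0.03))  # True  (103 and 97)
print(is_balanced([[60, 50], [50, 40]], 0.03))  # False (110 > 103)
```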
- On Decomposing a Hypergraph into k Connected Sub-hypergraphs. Egerváry Research Group on Combinatorial Optimization, Technical Report TR-2001-02. Published by the Egerváry Research Group, Pázmány P. sétány 1/C, H-1117 Budapest, Hungary. Web site: www.cs.elte.hu/egres. ISSN 1587-4451. András Frank, Tamás Király, and Matthias Kriesell. February 2001, revised July 2001.
  Abstract: By applying the matroid partition theorem of J. Edmonds [1] to a hypergraphic generalization of graphic matroids, due to M. Lorea [3], we obtain a generalization of Tutte's disjoint trees theorem for hypergraphs. As a corollary, we prove for positive integers k and q that every (kq)-edge-connected hypergraph of rank q can be decomposed into k connected sub-hypergraphs, a well-known result for q = 2. Another by-product is a connectivity-type sufficient condition for the existence of k edge-disjoint Steiner trees in a bipartite graph. Keywords: Hypergraph; Matroid; Steiner tree.
  1 Introduction. An undirected graph G = (V, E) is called connected if there is an edge connecting X and V − X for every nonempty, proper subset X of V. Connectivity of a graph is equivalent to the existence of a spanning tree. As a connected graph on t nodes contains at least t − 1 edges, one has the following alternative characterization of connectivity.
  Proposition 1.1. A graph G = (V, E) is connected if and only if, for every partition P := {V1, V2, ..., Vt} of V into non-empty subsets, the number of edges connecting distinct classes of P is at least t − 1.
  [Affiliation footnote, truncated in the source: Department of Operations Research, Eötvös University, Kecskeméti u.]
- Hypergraph Packing and Sparse Bipartite Ramsey Numbers. David Conlon.
  Abstract: We prove that there exists a constant c such that, for any integer ∆, the Ramsey number of a bipartite graph on n vertices with maximum degree ∆ is less than 2^{c∆} n. A probabilistic argument due to Graham, Rödl and Ruciński implies that this result is essentially sharp, up to the constant c in the exponent. Our proof hinges upon a quantitative form of a hypergraph packing result of Rödl, Ruciński and Taraz.
  1 Introduction. For a graph G, the Ramsey number r(G) is defined to be the smallest natural number n such that, in any two-colouring of the edges of K_n, there exists a monochromatic copy of G. That these numbers exist was proven by Ramsey [19] and rediscovered independently by Erdős and Szekeres [10]. Whereas the original focus was on finding the Ramsey numbers of complete graphs, in which case it is known that √2^t ≤ r(K_t) ≤ 4^t, the field has broadened considerably over the years. One of the most famous results in the area to date is the theorem, due to Chvátal, Rödl, Szemerédi and Trotter [7], that if a graph G on n vertices has maximum degree ∆, then r(G) ≤ c(∆) n, where c(∆) is just some appropriate constant depending only on ∆. Their proof makes use of the regularity lemma, and because of this the bound it gives on the constant c(∆) is (and is necessarily [11]) very bad. The situation was improved somewhat by Eaton [8], who proved, by using a variant of the regularity lemma, that the function c(∆) may be taken to be of the form 2^{2^{c∆}}.
- Hypernetwork Science: From Multidimensional Networks to Computational Topology. Cliff A. Joslyn, Sinan Aksoy, Tiffany J. Callahan, Lawrence E. Hunter, Brett Jefferson, Brenda Praggastis, Emilie A. H. Purvine, Ignacio J. Tripodi. March 2020.
  Abstract: As data structures and mathematical objects used for complex systems modeling, hypergraphs sit nicely poised between, on the one hand, the world of network models, and on the other, that of higher-order mathematical abstractions from algebra, lattice theory, and topology. They are able to represent complex systems interactions more faithfully than graphs and networks, while also being some of the simplest classes of systems representing topological structures as collections of multidimensional objects connected in a particular pattern. In this paper we discuss the role of (undirected) hypergraphs in the science of complex networks, and provide a mathematical overview of the core concepts needed for hypernetwork…
  From the introduction: …communications, and physical infrastructure often afford a representation as such a set of entities with binary relationships, and hence may be analyzed utilizing graph-theoretical methods. Graph models benefit from simplicity and a degree of universality. But as abstract mathematical objects, graphs are limited to representing pairwise relationships between entities, while real-world phenomena in these systems can be rich in multi-way relationships involving interactions among more than two entities, dependencies between more than two variables, or properties of collections of more than two objects. Representing group interactions is not possible in graphs natively, but rather requires either more complex mathematical objects, or coding schemes like "reification" or semantic labeling in bipartite graphs.
- Adjacency Matrix. CSE 373 Graphs 3: Implementation (reading: Weiss Ch. 9). Slides created by Marty Stepp, http://www.cs.washington.edu/373/. © University of Washington, all rights reserved.
  Implementing a graph. If we wanted to program an actual data structure to represent a graph, what information would we need to store for each vertex? For each edge? What kinds of questions would we want to be able to answer quickly: about a vertex? About edges and neighbors? About paths? About what edges exist in the graph? We'll explore three common graph implementation strategies: edge list, adjacency list, adjacency matrix.
  Edge list: an unordered list of all edges in the graph, stored in an array, array list, or linked list. Advantages: easy to loop/iterate over all edges. Disadvantages: hard to quickly tell if an edge exists from vertex A to B; hard to quickly find the degree of a vertex (how many edges touch it). [Example: a graph on vertices 1-7 with edge list (1, 2) (1, 4) (1, 7) (2, 3) (2, 5) (3, 6) (4, 7) (5, 6) (6, 7).]
  Graph operations. Using an edge list, how would you find all neighbors of a given vertex? The degree of a given vertex? Whether there is an edge from A to B? Whether there are any loops (self-edges)? What is the Big-Oh of each operation?
  Adjacency matrix: an N × N matrix where the non-diagonal entry a[i,j] is the number of edges joining vertex i and vertex j (or the weight of the edge joining vertex i and vertex j).
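  To make the trade-offs concrete, here is an illustrative Python sketch (not the slides' code) of the same example graph as an edge list and as an adjacency matrix, with the operation costs the slides ask about noted in comments:

```python
# n = number of vertices, m = number of edges.
n = 7
edges = [(1, 2), (1, 4), (1, 7), (2, 3), (2, 5), (3, 6), (4, 7), (5, 6), (6, 7)]

# Edge list: O(m) to iterate over all edges, but also O(m) just to
# answer "is there an edge (a, b)?" or "what is the degree of v?".
def has_edge_list(a, b):
    return (a, b) in edges or (b, a) in edges

def degree_list(v):
    return sum(v in e for e in edges)

# Adjacency matrix: O(n^2) space, but O(1) edge lookup and O(n) degree.
adj = [[0] * (n + 1) for _ in range(n + 1)]   # 1-based vertices
for a, b in edges:
    adj[a][b] = adj[b][a] = 1                 # undirected graph

def has_edge_matrix(a, b):
    return adj[a][b] == 1

print(has_edge_list(2, 5), has_edge_matrix(2, 5))  # True True
print(degree_list(1))                               # 3
```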
- Deep Hyperedges: A Framework for Transductive and Inductive Learning on Hypergraphs. Josh Payne, IBM T. J. Watson Research Center, Yorktown Heights, NY 10598. [email protected].
  Abstract: From social networks to protein complexes to disease genomes to visual data, hypergraphs are everywhere. However, the scope of research studying deep learning on hypergraphs is still quite sparse and nascent, as there has not yet existed an effective, unified framework for using hyperedge and vertex embeddings jointly in the hypergraph context, despite a large body of prior work that has shown the utility of deep learning over graphs and sets. Building upon these recent advances, we propose Deep Hyperedges (DHE), a modular framework that jointly uses contextual and permutation-invariant vertex membership properties of hyperedges in hypergraphs to perform classification and regression in transductive and inductive learning settings. In our experiments, we use a novel random walk procedure and show that our model achieves and, in most cases, surpasses state-of-the-art performance on benchmark datasets. Additionally, we study our framework's performance on a variety of diverse, non-standard hypergraph datasets and propose several avenues of future work to further enhance DHE.
  1 Introduction and Background. As data becomes more plentiful, we find ourselves with access to increasingly complex networks that can be used to express many different types of information. Hypergraphs have been used to recommend music [1], model cellular and protein-protein interaction networks [2, 3], classify images and perform 3D object recognition [4, 5], diagnose Alzheimer's disease [6], analyze social relationships [7, 8], and classify gene expression [9] — the list goes on.
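  The abstract does not spell out the paper's novel random walk procedure, so the Python sketch below shows only the generic alternating vertex-hyperedge walk commonly used to generate embedding contexts on hypergraphs; all names are illustrative assumptions:

```python
import random

# Generic hypergraph random walk: from the current vertex, hop to a
# random incident hyperedge, then to a random member vertex of that
# hyperedge, and repeat.

hyperedges = {
    "e0": ["v0", "v1", "v2"],
    "e1": ["v1", "v2", "v3"],
    "e2": ["v0", "v3"],
}
incident = {}
for e, members in hyperedges.items():
    for v in members:
        incident.setdefault(v, []).append(e)

def random_walk(start, length, rng=random):
    walk = [start]
    for _ in range(length):
        e = rng.choice(incident[walk[-1]])      # random incident hyperedge
        walk.append(rng.choice(hyperedges[e]))  # random member vertex
    return walk

print(random_walk("v0", 5))  # e.g. ['v0', 'v2', 'v1', 'v3', 'v0', 'v1']
```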
- Introduction to Graphs. Multidimensional Arrays & Graphs, CMSC 420: Lecture 3.
  Mini-review. Abstract data types: list, stack, queue, deque, dictionary, set. Implementations: linked lists, circularly linked lists, doubly linked lists, XOR doubly linked lists, ring buffers, double stacks, bit vectors. Techniques: sentinels, zig-zag scan, link inversion, bit twiddling, self-organizing lists, constant-time initialization.
  Constant-time initialization. Design problem: suppose you have a long array in which most values are 0, you want constant-time access and update, and you have as much space as you need. Creating a big array (a = new int[LARGE_N]) and zeroing it with for(i=0; i < LARGE_N; i++) a[i] = 0 is too slow; we want to somehow implicitly initialize all values to 0 in constant time. The trick keeps three arrays and a counter: Data[] holds the values; Where[] holds the indices of the elements changed so far; When[] maps an index i to the time step at which item i was first changed; Count holds the number of elements changed. Access(i): if 0 ≤ When[i] < Count and Where[When[i]] == i: return Data[i]; else: return DEFAULT. (A runnable sketch of this structure follows this item.) [Figure: worked example with Count = 3 and Where[] = 6, 13, 12.]
  Multidimensional arrays. Often it's more natural to index data items by keys that have several dimensions, e.g. (longitude, latitude), (row, column) of a matrix, or an (x, y, z) point in 3D space. (Aside: why is a plane "2-dimensional"?)
  Row-major vs. column-major order. Two-dimensional arrays can be mapped to linear memory in two ways: row by row or column by column. [Figure: a 4×5 array numbered 1-20 in row-major vs. column-major order.] For that 4×5 example, row-major order gives Addr(i, j) = Base + 5(i−1) + (j−1), while column-major order gives Addr(i, j) = Base + (i−1) + 4(j−1).
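  Here is a Python sketch of the constant-time-initialization structure described above, following the lecture's Data/Where/When/Count scheme; note that Python lists cannot actually be left uninitialized, so the allocations below are conceptual stand-ins:

```python
# Data holds values, Where holds the indices changed so far, When[i]
# records the "time step" (position in Where) at which index i was
# first written, Count is the number of changed entries. No array is
# ever zeroed; unwritten slots may hold garbage.

class ConstantInitArray:
    def __init__(self, n, default=0):
        self.default = default
        # Conceptually uninitialized; Python lists must hold something.
        self.data = [0] * n
        self.where = [0] * n
        self.when = [0] * n
        self.count = 0

    def _initialized(self, i):
        t = self.when[i]
        return 0 <= t < self.count and self.where[t] == i

    def access(self, i):
        return self.data[i] if self._initialized(i) else self.default

    def update(self, i, value):
        if not self._initialized(i):
            self.when[i] = self.count       # record first-write time for i
            self.where[self.count] = i      # remember which index changed
            self.count += 1
        self.data[i] = value

a = ConstantInitArray(1_000_000)
a.update(12, 7)
print(a.access(12), a.access(99))  # 7 0
```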
- Using Machine Learning to Improve Dense and Sparse Matrix Multiplication Kernels. Brandon Groth, Iowa State University. Graduate Theses and Dissertations, 2019. Part of the Applied Mathematics Commons and the Computer Sciences Commons.
  Recommended citation: Groth, Brandon, "Using machine learning to improve dense and sparse matrix multiplication kernels" (2019). Graduate Theses and Dissertations. 17688. https://lib.dr.iastate.edu/etd/17688.
  A dissertation submitted to the graduate faculty in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Major: Applied Mathematics. Program of Study Committee: Glenn R. Luecke (Major Professor), James Rossmanith, Zhijun Wu, Jin Tian, Kris De Brabanter. The student author, whose presentation of the scholarship herein was approved by the program of study committee, is solely responsible for the content of this dissertation. The Graduate College will ensure this dissertation is globally accessible and will not permit alterations after a degree is conferred. Iowa State University, Ames, Iowa, 2019. Copyright © Brandon Micheal Groth, 2019. All rights reserved.
  Dedication: I would like to dedicate this thesis to my wife Maria and our puppy Tiger.