Survey and Taxonomy of Lossless Graph Compression and Space-Efficient Graph Representations Towards Understanding of Modern Graph Processing, Storage, and Analytics
Recommended publications
-
Lecture 24 1 Kolmogorov Complexity
6.441 Transmission of Information, Spring 2006. Lecture 24, May 18, 2006. Lecturer: Madhu Sudan. Scribe: Chung Chan.

1 Kolmogorov complexity

Shannon's notion of compressibility is closely tied to a probability distribution. However, the probability distribution of the source is often unknown to the encoder, and sometimes we are interested in the compressibility of a specific sequence, e.g. how compressible is the Bible? The Lempel-Ziv universal compression algorithm can compress any string without knowledge of the underlying probabilities, under the sole assumption that the string comes from a stochastic source. But is it the best compression possible for each deterministic bit string? This notion of compressibility/complexity of a deterministic bit string has been studied by Solomonoff in 1964, Kolmogorov in 1966, Chaitin in 1967 and Levin.

Consider the following n-bit sequence: 0100011011000001010011100101110111 ··· Although it may appear random, it is the enumeration of all binary strings. A natural way to compress it is to represent it by the procedure that generates it: enumerate all strings in binary, and stop at the n-th bit. The compression achieved in bits is

|compression length| ≤ 2 log n + O(1)

More generally, with the universal Turing machine we can encode data as a computer program that generates it. The length of the smallest program that produces the bit string x is called the Kolmogorov complexity of x.

Definition 1 (Kolmogorov complexity) For every language L, the Kolmogorov complexity of the bit string x with respect to L is

K_L(x) = min_{p : L(p) = x} l(p)

where p is a program represented as a bit string, L(p) is the output of the program with respect to the language L, and l(p) is the length of the program, or more precisely, the point at which the execution halts.
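A small Python sketch (mine, not from the lecture notes) of the generating procedure: the code below plus the single number n describes the entire n-bit prefix, which is the intuition behind the 2 log n + O(1) bound.

```python
from itertools import count, product

def enumeration_sequence(n):
    """First n bits of the concatenation 0, 1, 00, 01, 10, 11, 000, 001, ...

    The whole prefix is described by this short procedure plus the single
    number n, which is the intuition behind the 2 log n + O(1) bound above.
    """
    bits = []
    for length in count(1):                      # string lengths 1, 2, 3, ...
        for s in product("01", repeat=length):   # all binary strings of that length
            bits.extend(s)
            if len(bits) >= n:
                return "".join(bits[:n])

print(enumeration_sequence(34))  # 0100011011000001010011100101110111
```
-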
A Complexity Theory for Implicit Graph Representations
A Complexity Theory for Implicit Graph Representations
Maurice Chandoo, Leibniz Universität Hannover, Theoretical Computer Science, Appelstr. 4, 30167 Hannover, Germany, [email protected]

Abstract. In an implicit graph representation the vertices of a graph are assigned short labels such that adjacency of two vertices can be algorithmically determined from their labels. This allows for a space-efficient representation of graph classes that have asymptotically fewer graphs than the class of all graphs, such as forests or planar graphs. A fundamental question is whether every smaller graph class has such a representation. We consider how restricting the computational complexity of such representations affects what graph classes can be represented. A formal language can be seen as an implicit graph representation, and therefore complexity classes such as P or NP can be understood as sets of graph classes. We investigate this complexity landscape and introduce complexity classes defined in terms of first order logic that capture surprisingly many of the graph classes for which an implicit representation is known. Additionally, we provide a notion of reduction between graph classes which reveals that trees and interval graphs are 'complete' for certain fragments of first order logic in this setting.

1998 ACM Subject Classification: G.2.2 Graph Theory, F.1.3 Complexity Measures and Classes. Keywords and phrases: adjacency labeling schemes, complexity of graph properties.

We call a graph class small if it has at most 2^{O(n log n)} graphs on n vertices. Many important graph classes such as interval graphs are small. Using adjacency matrices or lists to represent such classes is not space-efficient since this requires n^2 bits (resp.
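As a concrete toy example of an adjacency labeling scheme (my illustration, not taken from the paper): an interval graph can be stored implicitly by labeling each vertex with the two endpoints of its interval, O(log n) bits each, and deciding adjacency from the two labels alone.

```python
def interval_labels(intervals):
    """Implicit representation of an interval graph: each vertex's label is
    just the two endpoints of its interval (O(log n) bits when endpoints are
    drawn from {1, ..., 2n})."""
    return dict(intervals)

def adjacent(label_u, label_v):
    """Decide adjacency from the two labels alone: intervals intersect
    unless one ends before the other begins."""
    (lo_u, hi_u), (lo_v, hi_v) = label_u, label_v
    return not (hi_u < lo_v or hi_v < lo_u)

labels = interval_labels({"a": (1, 4), "b": (3, 6), "c": (5, 8)})
print(adjacent(labels["a"], labels["b"]))  # True:  [1,4] and [3,6] overlap
print(adjacent(labels["a"], labels["c"]))  # False: [1,4] and [5,8] do not
```
-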
Abstracting Abstraction in Search II: Complexity Analysis
Proceedings of the Fifth Annual Symposium on Combinatorial Search
Abstracting Abstraction in Search II: Complexity Analysis
Christer Bäckström and Peter Jonsson, Department of Computer Science, Linköping University, SE-581 83 Linköping, Sweden, [email protected] [email protected]

Abstract. Modelling abstraction as a function from the original state space to an abstract state space is a common approach in combinatorial search. Sometimes this is too restricted, though, and we have previously proposed a framework using a more flexible concept of transformations between labelled graphs. We also proposed a number of properties to describe and classify such transformations. This framework enabled the modelling of a number of different abstraction methods in a way that facilitated comparative analyses. It is of particular interest that these properties can be used to capture the concept of refinement without backtracking between levels; how to do this has been an open question for at least twenty years.

In our earlier publication (Bäckström and Jonsson 2012) we have demonstrated the usefulness of this framework in various ways. We have shown that a certain combination of our transformation properties exactly captures the DPP concept by Zilles and Holte (2010). A related concept is that of path/plan refinement without backtracking to the abstract level. Partial solutions to capturing this concept have been presented in the literature, for instance, the ordered monotonicity criterion by Knoblock, Tenenberg, and Yang (1991), the downward refinement property (DRP) by Bacchus and Yang (1994), and the simulation-based approach by Bundy et al. (1996). However, how to define general conditions
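To make the phrase "abstraction as a function between state spaces" concrete, here is a toy Python sketch (my illustration, not the authors' formalism): an abstraction maps concrete states to abstract states, and one basic sanity check is that every concrete transition is mapped onto an abstract transition or collapsed into a single abstract state. The paper's transformation properties for labelled graphs are more general than this homomorphism condition.

```python
def respects_transitions(alpha, concrete_edges, abstract_edges):
    """Check that the abstraction `alpha` (concrete state -> abstract state)
    maps every concrete transition either onto an abstract transition or
    onto a single abstract state (the transition is collapsed)."""
    return all(
        alpha[s] == alpha[t] or (alpha[s], alpha[t]) in abstract_edges
        for s, t in concrete_edges
    )

# A 4-state chain abstracted onto a 2-state chain by merging pairs of states.
alpha = {"s0": "A", "s1": "A", "s2": "B", "s3": "B"}
concrete_edges = [("s0", "s1"), ("s1", "s2"), ("s2", "s3")]
abstract_edges = {("A", "B")}
print(respects_transitions(alpha, concrete_edges, abstract_edges))  # True
```
-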
Implicit Structures for Graph Neural Networks
Implicit Structures for Graph Neural Networks
Fangda Gu, Electrical Engineering and Computer Sciences, University of California at Berkeley. Technical Report No. UCB/EECS-2020-185, http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-185.html, November 23, 2020.

Copyright © 2020, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.

Acknowledgement. I would like to express my most sincere gratitude to Professor Laurent El Ghaoui and Professor Somayeh Sojoudi for their support in my academic life. The helpful discussions with them on my research have guided me through the past two successful years at University of California, Berkeley. I want to thank Heng Chang and Professor Wenwu Zhu for their help in the implementation and verification of the work. I also want to thank Shirley Salanio for her strong logistical and psychological support.

Research project submitted to the Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, in partial satisfaction of the requirements for the degree of Master of Science, Plan II. Approval for the Report and Comprehensive Examination — Committee: Professor Laurent El Ghaoui, Research Advisor; Professor Somayeh Sojoudi, Second Reader (11/19/2020).

Abstract. Graph Neural Networks (GNNs) are widely used deep learning models that learn meaningful representations from graph-structured data.
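The report builds on implicit (equilibrium) formulations of GNNs, in which node representations are defined as the fixed point of an update equation rather than by a fixed number of layers. Below is a minimal NumPy sketch of that general idea, assuming the common update X = ReLU(W X A + B U) and plain fixed-point iteration; it illustrates the equilibrium concept only, not the report's exact model or training procedure.

```python
import numpy as np

def implicit_gnn_forward(A, U, W, B, tol=1e-6, max_iter=500):
    """Compute node states as the fixed point of X = relu(W @ X @ A + B @ U).

    A: (n, n) normalized adjacency, U: (p, n) node features,
    W: (d, d), B: (d, p) weights. Iteration converges when the update map
    is a contraction, which is why W is rescaled below.
    """
    X = np.zeros((W.shape[0], A.shape[0]))
    bias = B @ U
    for _ in range(max_iter):
        X_new = np.maximum(W @ X @ A + bias, 0.0)   # ReLU equilibrium update
        if np.linalg.norm(X_new - X) < tol:
            break
        X = X_new
    return X

rng = np.random.default_rng(0)
n, p, d = 5, 3, 4
A = np.ones((n, n)) / n                        # toy normalized adjacency (spectral norm 1)
U = rng.normal(size=(p, n))
W = rng.normal(size=(d, d))
W *= 0.5 / np.linalg.norm(W, 2)                # spectral norm 0.5 => contraction
B = rng.normal(size=(d, p))
print(implicit_gnn_forward(A, U, W, B).shape)  # (4, 5): one equilibrium state per node
```
-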
Large-Scale Directed Model Checking LTL
Large-Scale Directed Model Checking LTL
Stefan Edelkamp and Shahid Jabbar, University of Dortmund, Otto-Hahn Straße 14, {stefan.edelkamp,shahid.jabbar}@cs.uni-dortmund.de

Abstract. To analyze larger models for explicit-state model checking, directed model checking applies error-guided search, external model checking uses secondary storage media, and distributed model checking exploits parallel exploration on multiple processors. In this paper we propose an external, distributed and directed on-the-fly model checking algorithm to check general LTL properties in the model checker SPIN. Previous attempts were restricted to checking safety properties. The worst-case I/O complexity is bounded by O(sort(|F||R|)/p + l · scan(|F||S|)), where S and R are the sets of visited states and transitions in the synchronized product of the Büchi automata for the model and the property specification, F is the number of accepting states, l is the length of the shortest counterexample, and p is the number of processors. The algorithm we propose returns minimal lasso-shaped counterexamples and includes refinements for property-driven exploration.

1 Introduction

The core limitation to the exploration of systems is bounded main memory resources. Relying on virtual memory slows down the exploration due to excessive page faults. External algorithms [31] exploit hard disk space and organize the access to secondary memory. Originally designed for explicit graphs, external search algorithms have shown considerably good performances in the large-scale breadth-first and guided exploration of games [22, 12] and in the analysis of model checking problems [24]. Directed explicit-state model checking [13] enhances the error-reporting capabilities of model checkers.
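For intuition about the counterexamples involved: an LTL violation shows up in the product automaton as a lasso, i.e. a path from the initial state to an accepting state plus a cycle through that state. The Python sketch below captures that notion with plain in-memory BFS; it is not the paper's external, distributed, heuristically guided algorithm, and it does not guarantee the globally shortest lasso.

```python
from collections import deque

def bfs_path(src, goal, successors):
    """Shortest path (list of states) from src to a state satisfying goal."""
    parent, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if goal(u):
            path = [u]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for v in successors(u):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None

def find_lasso(init, accepting, successors):
    """Return (prefix, cycle): a path from init to an accepting state s and a
    cycle from s back to s, i.e. a lasso-shaped counterexample candidate."""
    for s in accepting:
        prefix = bfs_path(init, lambda u, s=s: u == s, successors)
        if prefix is None:
            continue
        for t in successors(s):
            back = bfs_path(t, lambda u, s=s: u == s, successors)
            if back is not None:
                return prefix, [s] + back
    return None

# Product automaton as a plain graph: state 2 is accepting and lies on a cycle.
graph = {0: [1], 1: [2], 2: [1, 3], 3: []}
print(find_lasso(0, accepting={2}, successors=lambda u: graph[u]))
# ([0, 1, 2], [2, 1, 2])
```
-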
Shannon Entropy and Kolmogorov Complexity
Information and Computation: Shannon Entropy and Kolmogorov Complexity
Satyadev Nandakumar, Department of Computer Science, IIT Kanpur. October 19, 2016

Shannon Entropy. Definition: Let X be a random variable taking finitely many values, and P be its probability distribution. The Shannon entropy of X is

H(X) = Σ_{i ∈ X} p(i) log₂ (1 / p(i)).

This measures the average uncertainty of X in terms of the number of bits.

The Triad. [Figures: Claude Shannon, A. N. Kolmogorov, Alan Turing]

Just Electrical Engineering. "Shannon's contribution to pure mathematics was denied immediate recognition. I can recall now that even at the International Mathematical Congress, Amsterdam, 1954, my American colleagues in probability seemed rather doubtful about my allegedly exaggerated interest in Shannon's work, as they believed it consisted more of techniques than of mathematics itself. However, Shannon did not provide rigorous mathematical justification of the complicated cases and left it all to his followers. Still his mathematical intuition is amazingly correct." — A. N. Kolmogorov, as quoted in [Shi89].

Kolmogorov and Entropy. Kolmogorov's later work was fundamentally influenced by Shannon's. 1. Foundations: Kolmogorov complexity — using the theory of algorithms to give a combinatorial interpretation of Shannon entropy. 2. Analogy: Kolmogorov-Sinai entropy, the only finitely-observable isomorphism-invariant property of dynamical systems.
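A direct Python rendering of the entropy definition above (an illustration, not part of the slides):

```python
from math import log2

def shannon_entropy(p):
    """H(X) = sum_i p(i) * log2(1/p(i)): average uncertainty in bits."""
    assert abs(sum(p) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(pi * log2(1.0 / pi) for pi in p if pi > 0)

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit   (fair coin)
print(shannon_entropy([0.9, 0.1]))   # ~0.469 bits (biased coin: less uncertain)
print(shannon_entropy([0.25] * 4))   # 2.0 bits  (uniform over 4 outcomes)
```
-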
Information Theory
Information Theory
Professor John Daugman, University of Cambridge. Computer Science Tripos, Part II, Michaelmas Term 2016/17

[Figure: entropy diagram relating H(X,Y), I(X;Y), H(X|Y), H(Y|X), H(X), H(Y)]

Outline of Lectures
1. Foundations: probability, uncertainty, information.
2. Entropies defined, and why they are measures of information.
3. Source coding theorem; prefix, variable-, and fixed-length codes.
4. Discrete channel properties, noise, and channel capacity.
5. Spectral properties of continuous-time signals and channels.
6. Continuous information; density; noisy channel coding theorem.
7. Signal coding and transmission schemes using Fourier theorems.
8. The quantised degrees-of-freedom in a continuous signal.
9. Gabor-Heisenberg-Weyl uncertainty relation. Optimal "Logons".
10. Data compression codes and protocols.
11. Kolmogorov complexity. Minimal description length.
12. Applications of information theory in other sciences.

Reference book (*): Cover, T. & Thomas, J. Elements of Information Theory (second edition). Wiley-Interscience, 2006

Overview: what is information theory?
Key idea: The movements and transformations of information, just like those of a fluid, are constrained by mathematical and physical laws. These laws have deep connections with:
- probability theory, statistics, and combinatorics
- thermodynamics (statistical physics)
- spectral analysis, Fourier (and other) transforms
- sampling theory, prediction, estimation theory
- electrical engineering (bandwidth; signal-to-noise ratio)
- complexity theory (minimal description length)
- signal processing, representation, compressibility
As such, information theory addresses and answers the two fundamental questions which limit all data encoding and communication systems:
1. What is the ultimate data compression? (answer: the entropy of the data, H, is its compression limit.)
2.
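The quantities in the entropy diagram above satisfy I(X;Y) = H(X) + H(Y) − H(X,Y). A small Python sketch (mine, not from the lecture slides) that computes this from a joint distribution:

```python
from math import log2

def entropy(probs):
    return sum(p * log2(1.0 / p) for p in probs if p > 0)

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint distribution {(x, y): p};
    this is the overlap region in the entropy diagram."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return entropy(px.values()) + entropy(py.values()) - entropy(joint.values())

# A noiseless binary channel: Y always equals X, so I(X;Y) = H(X) = 1 bit.
joint = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_information(joint))  # 1.0
```
-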
KolmoLD: Data Modelling for the Modern Internet
KolmoLD: Data Modelling for the Modern Internet*
Dmitry Borzov, Huawei Canada; Tim Tingqiu Yuan, Huawei; Mikhail Ignatovich, Huawei Canada; Jian Li, Futurewei
*work performed before May 2019

Challenges: Peak Traffic Composition
- 73%: content that is sizable, fanned-out, and static — streaming services (Netflix, Hulu, YouTube, Spotify), software distribution services, file storage.
- 26%: everything else (instant messaging, VoIP, social media).
Source: Sandvine Global Internet Phenomena reports for 2009, 2010, 2011, 2012, 2013, 2015, 2016, October 2018.
[1] https://qz.com/1001569/the-cdn-heavy-internet-in-rich-countries-will-be-unrecognizable-from-the-rest-of-the-worlds-in-five-years/

[Slide: "Technologies to define the revolution of the internet" — a multi-column comparison, garbled in extraction, covering ChunkStream (a video codec founded in 2016), content-addressable network protocols based on cryptohash naming schemes, and a browser-targeted runtime; it mentions a 2014 founding, a 2014 MIT research paper, implementation and support by all major browsers as an IETF standard, open-source and P2P status, and founding companies that are YCombinator graduates with backing of high-profile SV investors.]

Our Proposal: A data model for interoperable protocols — KolmoLD
Content addressing through hashes has become a widely-used means of connecting data in distributed systems, from the blockchains that run your favorite cryptocurrencies, to the commits that back your code, to the web's content at large. Yet, whilst all of these tools rely on some common primitives, their specific underlying data structures are not interoperable.
- Addressable: connecting layer, inspired by the principles of Kolmogorov complexity theory
- Composable: sending data as code, where code efficiency is theoretically bounded by Kolmogorov complexity
- Computable: sandboxed computability by treating data as code
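A minimal sketch of the content-addressing primitive the proposal builds on — naming data by a cryptohash of its content — assuming SHA-256 and an in-memory store purely for illustration; KolmoLD's actual data model is not reproduced here.

```python
import hashlib

def content_address(data: bytes) -> str:
    """Name a chunk by the hash of its content (a cryptohash naming scheme):
    identical content always gets the same address, and anyone who fetches
    the data can verify it against the address."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

store = {}

def put(data: bytes) -> str:
    addr = content_address(data)
    store[addr] = data          # deduplicated for free: same content, same key
    return addr

def get(addr: str) -> bytes:
    data = store[addr]
    assert content_address(data) == addr, "content does not match its address"
    return data

addr = put(b"hello, content-addressed world")
print(addr)
print(get(addr))
```
-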
Maximizing T-complexity
Fundamenta Informaticae XX (2015) 1–19, DOI 10.3233/FI-2012-0000, IOS Press

Maximizing T-complexity
Gregory Clark, U.S. Department of Defense, gsclark@tycho.ncsc.mil
Jason Teutsch, Penn State University, teutsch@cse.psu.edu

Abstract. We investigate Mark Titchener's T-complexity, an algorithm which measures the information content of finite strings. After introducing the T-complexity algorithm, we turn our attention to a particular class of "simple" finite strings. By exploiting special properties of simple strings, we obtain a fast algorithm to compute the maximum T-complexity among strings of a given length, and our estimates of these maxima show that T-complexity differs asymptotically from Kolmogorov complexity. Finally, we examine how closely de Bruijn sequences resemble strings with high T-complexity.

1. Introduction

How efficiently can we generate random-looking strings? Random strings play an important role in data compression since such strings, characterized by their lack of patterns, do not compress well. For practical compression applications, compression must take place in a reasonable amount of time (typically linear or nearly linear time). The degree of success achieved with the classical compression algorithms of Lempel and Ziv [9, 21, 22] and Welch [19] depends on the complexity of the input string; the length of the compressed string gives a measure of the original string's complexity. In the same monograph in which Lempel and Ziv introduced the first of these algorithms [9], they gave an example of a succinctly describable class of strings which yield optimally poor compression up to a logarithmic factor, namely the class of lexicographically least de Bruijn sequences for each length.
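The lexicographically least de Bruijn sequences mentioned above can be produced by the classic FKM (Lyndon-word concatenation) algorithm; a short Python sketch, included for illustration and not taken from the paper:

```python
def de_bruijn(k, n):
    """Lexicographically least de Bruijn sequence of order n over the
    alphabet {0, ..., k-1}, built by the classic FKM algorithm
    (concatenation of Lyndon words whose length divides n)."""
    a = [0] * (k * n)
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(map(str, sequence))

print(de_bruijn(2, 3))  # 00010111: every binary string of length 3 occurs once cyclically
```
-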
Shorter Implicit Representation for Planar Graphs and Bounded Treewidth Graphs
Shorter Implicit Representation for Planar Graphs and Bounded Treewidth Graphs
Cyril Gavoille and Arnaud Labourel, LaBRI, Université de Bordeaux, France, {gavoille,labourel}@labri.fr

Abstract. Implicit representation of graphs is a coding of the structure of graphs using distinct labels so that adjacency between any two vertices can be decided by inspecting their labels alone. All previous implicit representations of planar graphs were based on the classical three-forests decomposition technique (a.k.a. Schnyder's trees), yielding asymptotically a 3 log n-bit label representation where n is the number of vertices of the graph. We propose a new implicit representation of planar graphs using asymptotically 2 log n-bit labels. As a byproduct we have an explicit construction of a graph with n^{2+o(1)} vertices containing all n-vertex planar graphs as induced subgraph; the best previous size of such an induced-universal graph was O(n^3). More generally, for graphs excluding a fixed minor, we construct a 2 log n + O(log log n) implicit representation. For treewidth-k graphs we give a log n + O(k log log(n/k)) implicit representation, improving the O(k log n) representation of Kannan, Naor and Rudich [18] (STOC '88). Our representations for planar and treewidth-k graphs are easy to implement, all the labels can be constructed in O(n log n) time, and support constant time adjacency testing.

1 Introduction

How to represent a graph in memory is a basic and fundamental data structure question. The two basic ways are adjacency matrices and adjacency lists. The latter representation is space efficient for sparse graphs, but adjacency queries require searching in the list, whereas matrices allow fast queries at the price of super-linear space.
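For intuition about the forest-decomposition technique mentioned above, here is an illustrative Python sketch (mine, not the paper's construction): if the edges of a graph are partitioned into rooted forests, labeling each vertex with its own id plus its parent id in each forest lets adjacency be decided from two labels alone. With k forests this naive labeling costs (k+1) log n bits per vertex; the paper's contribution is getting planar graphs (k = 3) down to about 2 log n bits.

```python
def make_labels(n, forests):
    """Label each vertex with (its id, its parent id in each forest).

    `forests` is a list of parent dicts {child: parent}; the graph's edges
    are exactly the (child, parent) pairs across all forests.
    """
    return {v: (v, tuple(f.get(v) for f in forests)) for v in range(n)}

def adjacent(label_u, label_v):
    """Adjacency is decided from the two labels alone: u and v are adjacent
    iff one is the parent of the other in some forest."""
    (u, parents_u), (v, parents_v) = label_u, label_v
    return v in parents_u or u in parents_v

# A 4-cycle 0-1-2-3-0 decomposed into two forests (paths).
forest1 = {1: 0, 2: 1}          # edges 0-1, 1-2
forest2 = {3: 2, 0: 3}          # edges 2-3, 3-0
labels = make_labels(4, [forest1, forest2])
print(adjacent(labels[0], labels[1]))  # True:  0-1 is an edge
print(adjacent(labels[0], labels[2]))  # False: 0 and 2 are not adjacent
```
-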
1952 Washington UFO Sightings • Psychic Pets and Pet Psychics • the Skeptical Environmentalist Skeptical Inquirer the MAGAZINE for SCIENCE and REASON Volume 26,.No
1952 Washington UFO Sightings • Psychic Pets and Pet Psychics • The Skeptical Environmentalist
Skeptical Inquirer, THE MAGAZINE FOR SCIENCE AND REASON, Volume 26, No. 6 • November/December 2002
THE COMMITTEE FOR THE SCIENTIFIC INVESTIGATION OF CLAIMS OF THE PARANORMAL, AT THE CENTER FOR INQUIRY-INTERNATIONAL (ADJACENT TO THE STATE UNIVERSITY OF NEW YORK AT BUFFALO) • AN INTERNATIONAL ORGANIZATION
Paul Kurtz, Chairman; professor emeritus of philosophy, State University of New York at Buffalo. Barry Karr, Executive Director. Joe Nickell, Senior Research Fellow. Massimo Polidoro, Research Fellow. Richard Wiseman, Research Fellow. Lee Nisbet, Special Projects Director.
FELLOWS (the multi-column masthead listing is garbled in extraction; recoverable entries include): James E. Alcock,* psychologist, York Univ., Toronto; Jerry Andrus, magician and inventor, Albany, Oregon; Marcia Angell, M.D., former editor-in-chief, New England Journal of Medicine; Robert A. Baker, psychologist, Univ. of Kentucky; Stephen Barrett, M.D., psychiatrist, author, consumer advocate, Allentown, Pa.; Barry Beyerstein,* biopsychologist, Simon Fraser Univ., Vancouver, B.C., Canada; Susan Haack, Cooper Senior Scholar in Arts and Sciences, prof. of philosophy, University of Miami; C. E. M. Hansel, psychologist, Univ. of Wales; Al Hibbs, scientist, Jet Propulsion Laboratory; Douglas Hofstadter, professor of human understanding and cognitive science, Indiana Univ.; Gerald Holton, Mallinckrodt Professor of Physics and professor of history of science, Harvard Univ.; Ray Hyman,* psychologist, Univ.; Irmgard Oepen, professor of medicine (retired), Marburg, Germany; Loren Pankratz, psychologist, Oregon Health Sciences Univ.; John Paulos, mathematician, Temple Univ.; Steven Pinker, cognitive scientist, MIT; Massimo Polidoro, science writer, author, executive director CICAP, Italy; Milton Rosenberg, psychologist, Univ. of Chicago; Wallace Sampson, M.D., clinical professor of medicine, Stanford Univ., editor, Scientific
-
Marion REVOLLE
Algorithmic information theory for automatic classification
Marion Revolle, [email protected]; Nicolas le Bihan, [email protected]; François Cayre, [email protected]

Main objective. Files: any byte string in a computer (text, music, ...). Similarity measure in a non-probabilist context: a similarity metric. Algorithmic information theory: Kolmogorov complexity.

1.1 Complexity. Given x, a file string of size |x| defined on the alphabet A_x of size α_x.
A- Simple complexity. K(x), the Kolmogorov complexity: the length of a shortest binary program to compute x. L(x), the Lempel-Ziv complexity: the minimal number of operations making insert/copy from x's past to generate x.
B- Conditional complexity. K(x|y), the conditional Kolmogorov complexity: the length of a shortest binary program to compute x when y is furnished as an auxiliary input.

1.2 GZIP. GZIP: compression algorithm = DEFLATE + Huffman.
A- DEFLATE. Dictionary compression based on LZ77: make references to the past. DEFLATE(x) generates two kinds of symbols: L(a), insert the element a in A_x (a Literal); Z(-i→j), paste j elements starting i elements before (a Reference of length j).
B- Complexity. Number of symbols needed to compress x with DEFLATE.

1.3 Examples.
A- x = ABCA BCCB CABC, L(x) = 6: x : L(A) L(B) L(C) Z(-3→3) L(C) Z(-6→5), i.e. ABC ABCC BCABC.
B- y = ABCA BCAB CABC, L(y) = 5: y : L(A) L(B) L(C) Z(-3→3) Z(-6→6), i.e. ABC ABC ABCABC. L(y|x) = 2: y|x : Z(-12→6) Z(-12→6), i.e. ABCABC ABCABC.
C- z = MNOM NOMN OMNO, L(z) = 5
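The same compression-based idea can be turned into a practical similarity score. A sketch using zlib's DEFLATE and the well-known normalized compression distance (NCD) of Cilibrasi and Vitányi, shown only as an illustration of compression-based similarity; the poster's own measure is built on conditional DEFLATE parses such as L(y|x) rather than NCD.

```python
import zlib

def compressed_size(data: bytes) -> int:
    """Size of the DEFLATE-compressed data (zlib), a computable stand-in
    for the uncomputable Kolmogorov complexity K."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: smaller means more similar."""
    cx, cy, cxy = compressed_size(x), compressed_size(y), compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

x = b"ABCABCCBCABC" * 50
y = b"ABCABCABCABC" * 50
z = b"MNOMNOMNOMNO" * 50
print(round(ncd(x, y), 3))  # relatively small: x and y share most of their structure
print(round(ncd(x, z), 3))  # larger: z is built from a different alphabet
```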