On Using SVM and Kolmogorov Complexity for Spam Filtering


On Using SVM and Kolmogorov Complexity for Spam Filtering
Proceedings of the Twenty-First International FLAIRS Conference (2008)
Sihem Belabbes² and Gilles Richard¹,²
¹British Institute of Technology and E-commerce, Avecina House, 258-262 Romford Road, London E7 9HZ, UK
²Institut de Recherche en Informatique de Toulouse, 118 Rte de Narbonne, 31062 Toulouse, France
[email protected], {belabbes,richard}[email protected]

Abstract

As a side effect of e-marketing strategies, the number of spam e-mails is rocketing, and so are the time and cost needed to deal with spam. Spam filtering is one of the most difficult tasks among diverse kinds of text categorization, a sad consequence of spammers' dynamic efforts to escape filtering. In this paper, we investigate the use of Kolmogorov complexity theory as a backbone for spam filtering, avoiding the burden of text analysis, keywords and blacklist updates. Exploiting the fact that we can estimate a message's information content through compression techniques, we represent an e-mail as a multi-dimensional real vector and then implement a support vector machine classifier to classify new incoming e-mails. Our first results exhibit interesting accuracy rates and emphasize the relevance of our idea.

Introduction

Detecting and filtering spam e-mails faces a number of complex challenges due to the dynamic and malicious nature of spam. A truly effective spam filter must block the maximum number of unwanted e-mails while minimizing the number of legitimate messages wrongly identified as spam (namely false positives). However, individual users may not share the same view on what a spam really is.

A great deal of existing methods, generally rule-based, proceed by checking the content of incoming e-mail looking for specific keywords (dictionary approach), and/or comparing with blacklists of hosts and domains known for issuing spam (see (Graham 2002) for a survey). In the first case, the user can define his own dictionary, thus adapting the filter to his own use. In the second, the blacklist needs to be regularly updated.

Anyway, getting rid of spam remains a classification problem, and it is quite natural to apply machine learning methods. The main idea on which they rely is to train a learner on a sample set of e-mails (known as the witness or training set) clearly identified as spam or ham (legitimate e-mail), and then to use the system output as a classifier for the next incoming e-mails.

Amongst the most successful approaches to deal with spam is the so-called Bayesian technique, which is based on the probabilistic notion of Bayesian networks (Pearl & Russell 2003) and has so far proved to be a powerful filtering tool. First described by P. Graham (Graham 2002), Bayesian filters block over 90% of spam. Due to their statistical nature they are adaptive, but they can suffer statistical attacks simply by adding a huge number of relevant ham words to a spam message (see (Lowd & Meek 2005)). Next-generation spam filtering systems may thus depend on merging rule-based practices (like blacklists and dictionaries) and machine learning techniques.

A useful underlying classification notion is the mathematical concept of distance. Roughly speaking, this relies on the fact that when two objects are close to one another, they share some similarity. In spam filtering settings, if an e-mail m is close to a junk e-mail, then m is likely a junk e-mail as well. Such a distance takes parameters like sender domain or occurrences of specific words. An e-mail is then represented as a point in a vector space, and spam is identified by simply implementing a distance-based classifier. At first glance an e-mail can be identified as spam by looking at its origin. We definitely think that this can be refined by considering the whole e-mail content, including header and body. An e-mail is classified as spam when its informative content is similar or close to that of another spam. So we are interested in a distance between two e-mails ensuring that, when they are close, we can conclude that they have similar informative content, without distinction between header and body. One can wonder about the formal meaning of that 'informative content' concept.
Kolmogorov (also known as Kolmogorov-Chaitin) complexity is a strong tool for this purpose. In essence, Kolmogorov's work considers the informative content of a string s to be the size of the ultimate compression of s, noted K(s).

In this paper we aim at showing that Kolmogorov complexity and its associated distance can be pretty reliable in classifying spam. Most importantly, this can be achieved:

• without any body analysis
• without any header analysis

The originality of our approach is the use of support vector machines for classification, such that we represent every e-mail in the training set and in the testing set as a multi-dimensional real vector. This can be done by considering a set of typical e-mails previously identified as spam or ham.

The rest of this paper is organized as follows: we briefly review Kolmogorov theory and its very meaning for a given string s. Then we describe the main idea behind practical applications of Kolmogorov theory, which is defining a suitable information distance. Since K(s) is an ideal number, we explain how to estimate K and the relationship with commonly used compression techniques. We then exhibit our first experiment with a simple k-nearest neighbors algorithm and show why we move to more sophisticated tools. After introducing support vector machine classifiers, we describe our multi-dimensional real vector representation of an e-mail. We exhibit and comment on our new results. We discuss related approaches before concluding and outlining future perspectives.

What is Kolmogorov complexity?

The data a computer deals with are digitalized, namely described by finite binary strings. In our context we focus on pieces of text. Kolmogorov complexity K(s) is a measure of the descriptive complexity contained in an object or string s. A good introduction to Kolmogorov complexity is contained in (Kolmogorov 1965), with a solid treatment in (Li & Vitányi 1997; Kirchherr, Li, & Vitányi 1997). Kolmogorov complexity is related to Shannon entropy (Shannon 1948) in that the expected value of K(s) for a random sequence is approximately the entropy of the source distribution for the process generating this sequence. However, Kolmogorov complexity differs from entropy in that it is related to the specific string being considered rather than to the source distribution. Kolmogorov complexity can be roughly described as follows, where T represents a universal computer (Turing machine), p represents a program, and s represents a string:

K(s) is the size |p| of the smallest program p such that T(p) = s

Thus p can be considered as the essence of s, since there is no way to get s with a shorter program than p. It is logical to consider p as the most compressed form of s. The size of p, namely K(s), is a measure of the amount of information contained in s. Consequently K(s) is the lower bound of all possible compressions of s: it is the ultimate compression size of the string. Random strings have rather high Kolmogorov complexity. It remains to understand how to build a suitable distance starting from K: this is our aim in the next section.
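K(s) itself is uncomputable, so any practical use falls back on a real compressor, whose output length upper-bounds K(s). As a rough illustration of the estimation step the outline announces (a minimal sketch; zlib is a stand-in here, not necessarily the compressor the authors used):

```python
import os
import zlib

def K_estimate(s: bytes, level: int = 9) -> int:
    """Upper-bound estimate of Kolmogorov complexity K(s):
    the length in bytes of the zlib-compressed string."""
    return len(zlib.compress(s, level))

# A repetitive string compresses far below its raw length,
# while random-looking bytes barely compress at all.
print(K_estimate(b"spam " * 1000))    # small: highly redundant input
print(K_estimate(os.urandom(5000)))   # close to (slightly above) 5000
```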
Information distance

When considering data mining and knowledge discovery, mathematical distances are powerful tools to classify new data. The theory around Kolmogorov complexity helps to define a distance called the 'information distance' or 'Bennett's distance' (see (Bennett et al. 1998) for a complete study). The informal idea is that, given two strings or files a and b, we can say:

K(a) = K(a \ b) + K(a ∩ b)
K(b) = K(b \ a) + K(a ∩ b)

It is important to point out that these two equations do not belong to the theoretical framework but are approximate formulas allowing one to understand the final computation. The first equation says that a's complexity (its information content) is the sum of a's proper information content, denoted a \ b, and the content common with b, denoted a ∩ b. Concatenating a with b yields a new file, denoted a.b, whose complexity is K(a.b):

K(a.b) = K(a \ b) + K(b \ a) + K(a ∩ b),

since there is no redundancy with Kolmogorov compression. So the following number:

m(a,b) = K(a) + K(b) − K(a.b) = K(a ∩ b)

is a relevant measure of the information content common to a and b. Normalizing this number, in order to avoid side effects due to string sizes, helps in defining the information distance:

d(a,b) = 1 − m(a,b) / max(K(a), K(b))

Let us understand the very meaning of d. If a = b, then K(a) = K(b) and m(a,b) = K(a), thus d(a,b) = 0. On the opposite side, if a and b do not have any common information, then m(a,b) = 0 and d(a,b) = 1. Formally, d is a metric satisfying d(a,a) = 0, d(a,b) = d(b,a) and d(a,b) ≤ d(a,c) + d(c,b) over the set of finite strings; d is called the information distance, or the Bennett distance.
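Plugging the compressor-based estimate into the formulas above yields a directly computable distance. The sketch below is an illustration under that substitution, not the paper's code; the sample strings are invented:

```python
import zlib

def K(s: bytes) -> int:
    # compressor-based upper bound on Kolmogorov complexity
    return len(zlib.compress(s, 9))

def info_distance(a: bytes, b: bytes) -> float:
    """d(a,b) = 1 - m(a,b)/max(K(a),K(b)),
    with m(a,b) = K(a) + K(b) - K(a.b) estimated by compression."""
    m = K(a) + K(b) - K(a + b)          # a + b is the concatenation a.b
    return 1.0 - m / max(K(a), K(b))

spam1 = b"Buy cheap meds now! Limited offer, click here..."
spam2 = b"Cheap meds, buy now, limited offer, click!!!"
ham   = b"Attached are the minutes of Tuesday's project meeting."
print(info_distance(spam1, spam2))  # smaller: more shared content
print(info_distance(spam1, ham))    # larger: little shared content
```

In practice a real compressor only approximates the identities above, so the computed d can drift slightly outside [0, 1]; implementations typically clamp it.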
Recommended publications
  • Lecture 24: Kolmogorov Complexity
    6.441 Transmission of Information, Spring 2006. May 18, 2006. Lecture 24. Lecturer: Madhu Sudan. Scribe: Chung Chan.

    1 Kolmogorov complexity

    Shannon's notion of compressibility is closely tied to a probability distribution. However, the probability distribution of the source is often unknown to the encoder. Sometimes, we are interested in the compressibility of a specific sequence, e.g. how compressible is the Bible? We have the Lempel-Ziv universal compression algorithm that can compress any string without knowledge of the underlying probability, except the assumption that the string comes from a stochastic source. But is it the best compression possible for each deterministic bit string? This notion of compressibility/complexity of a deterministic bit string has been studied by Solomonoff in 1964, Kolmogorov in 1966, Chaitin in 1967, and Levin. Consider the following n-sequence:

    0100011011000001010011100101110111 ···

    Although it may appear random, it is the enumeration of all binary strings. A natural way to compress it is to represent it by the procedure that generates it: enumerate all strings in binary, and stop at the n-th bit. The compression achieved in bits is

    |compression length| ≤ 2 log n + O(1)

    More generally, with the universal Turing machine, we can encode data as a computer program that generates it. The length of the smallest program that produces the bit string x is called the Kolmogorov complexity of x.

    Definition 1 (Kolmogorov complexity) For every language L, the Kolmogorov complexity of the bit string x with respect to L is

    K_L(x) = min_{p : L(p) = x} l(p)

    where p is a program represented as a bit string, L(p) is the output of the program with respect to the language L, and l(p) is the length of the program, or more precisely, the point at which the execution halts.
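    The scribe's compression argument can be made concrete: a constant-size procedure plus the index n regenerates any prefix of the sequence, which is where the 2 log n + O(1) bound comes from. A quick sketch (not from the lecture notes):

    ```python
    from itertools import count, product

    def enumeration_prefix(n: int) -> str:
        """First n bits of 0 1 00 01 10 11 000 ... :
        all binary strings listed in length-lexicographic order."""
        bits = []
        for length in count(1):
            for s in product("01", repeat=length):
                bits.extend(s)
                if len(bits) >= n:
                    return "".join(bits[:n])

    print(enumeration_prefix(34))  # 0100011011000001010011100101110111
    ```

    Describing this prefix costs only the bits of n plus the constant-size generator, regardless of how random the sequence looks.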
  • Shannon Entropy and Kolmogorov Complexity
    Information and Computation: Shannon Entropy and Kolmogorov Complexity. Satyadev Nandakumar, Department of Computer Science, IIT Kanpur. October 19, 2016.

    Shannon Entropy. Definition: Let X be a random variable taking finitely many values, and let P be its probability distribution. The Shannon entropy of X is

    H(X) = Σ_{i∈X} p(i) log₂ (1/p(i)).

    This measures the average uncertainty of X in terms of the number of bits.

    The Triad. [Figures: Claude Shannon, A. N. Kolmogorov, Alan Turing.]

    Just Electrical Engineering. "Shannon's contribution to pure mathematics was denied immediate recognition. I can recall now that even at the International Mathematical Congress, Amsterdam, 1954, my American colleagues in probability seemed rather doubtful about my allegedly exaggerated interest in Shannon's work, as they believed it consisted more of techniques than of mathematics itself. However, Shannon did not provide rigorous mathematical justification of the complicated cases and left it all to his followers. Still his mathematical intuition is amazingly correct." (A. N. Kolmogorov, as quoted in [Shi89].)

    Kolmogorov and Entropy. Kolmogorov's later work was fundamentally influenced by Shannon's.
    1. Foundations: Kolmogorov complexity, using the theory of algorithms to give a combinatorial interpretation of Shannon entropy.
    2. Analogy: Kolmogorov-Sinai entropy, the only finitely-observable isomorphism-invariant property of dynamical systems.
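    As a quick illustration of the definition (a sketch, not part of the slides), H(X) is a one-line fold over the distribution:

    ```python
    from math import log2

    def shannon_entropy(p: list[float]) -> float:
        """H(X) = sum_i p(i) * log2(1/p(i)); terms with p(i) = 0 contribute 0."""
        return sum(pi * log2(1.0 / pi) for pi in p if pi > 0)

    print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: a fair coin
    print(shannon_entropy([0.9, 0.1]))   # ~0.469 bits: biased, less uncertain
    print(shannon_entropy([0.25] * 4))   # 2.0 bits: four equally likely values
    ```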
  • Information Theory
    Information Theory. Professor John Daugman, University of Cambridge. Computer Science Tripos, Part II, Michaelmas Term 2016/17.

    [Cover figure: Venn diagram of the entropy measures H(X,Y), I(X;Y), H(X|Y), H(Y|X), H(X), H(Y).]

    Outline of Lectures
    1. Foundations: probability, uncertainty, information.
    2. Entropies defined, and why they are measures of information.
    3. Source coding theorem; prefix, variable-, and fixed-length codes.
    4. Discrete channel properties, noise, and channel capacity.
    5. Spectral properties of continuous-time signals and channels.
    6. Continuous information; density; noisy channel coding theorem.
    7. Signal coding and transmission schemes using Fourier theorems.
    8. The quantised degrees-of-freedom in a continuous signal.
    9. Gabor-Heisenberg-Weyl uncertainty relation. Optimal "Logons".
    10. Data compression codes and protocols.
    11. Kolmogorov complexity. Minimal description length.
    12. Applications of information theory in other sciences.

    Reference book (*): Cover, T. & Thomas, J. Elements of Information Theory (second edition). Wiley-Interscience, 2006.

    Overview: what is information theory? Key idea: the movements and transformations of information, just like those of a fluid, are constrained by mathematical and physical laws. These laws have deep connections with:
    • probability theory, statistics, and combinatorics
    • thermodynamics (statistical physics)
    • spectral analysis, Fourier (and other) transforms
    • sampling theory, prediction, estimation theory
    • electrical engineering (bandwidth; signal-to-noise ratio)
    • complexity theory (minimal description length)
    • signal processing, representation, compressibility

    As such, information theory addresses and answers the two fundamental questions which limit all data encoding and communication systems:
    1. What is the ultimate data compression? (Answer: the entropy of the data, H, is its compression limit.)
    2. What is the ultimate transmission rate of communication? (Answer: the channel capacity, C, is its rate limit.)
  • KolmoLD: Data Modelling for the Modern Internet
    KolmoLD: Data Modelling for the Modern Internet*. Dmitry Borzov, Huawei Canada; Tim Tingqiu Yuan, Huawei; Mikhail Ignatovich, Huawei Canada; Jian Li, Futurewei. (*work performed before May 2019)

    Challenges: peak traffic composition. Content that is sizable, fanned-out, and static (streaming services such as Netflix, Hulu, YouTube and Spotify, software distribution, and file storage services) accounts for roughly 73% of peak traffic, versus 26% for everything else (instant messaging, VoIP, social media) [1]. Source: Sandvine Global Internet Phenomena reports for 2009, 2010, 2011, 2012, 2013, 2015, 2016, October 2018. [1] https://qz.com/1001569/the-cdn-heavy-internet-in-rich-countries-will-be-unrecognizable-from-the-rest-of-the-worlds-in-five-years/

    Technologies to define the revolution of the internet:
    • ChunkStream: founded in 2016; video codec and browser-targeted runtime; YCombinator graduate.
    • A content-addressable network protocol based on a cryptohash naming scheme: founded in 2014, based on a 2014 MIT research paper; open-source project; YCombinator graduate with backing of high-profile SV investors.
    • A content-addressable network protocol based on a cryptohash naming scheme: implemented and supported by all major browsers; an IETF standard; founding company is a P2P project.

    Our proposal: a data model for interoperable protocols. Content addressing through hashes has become a widely-used means of connecting data in distributed systems, from the blockchains that run your favorite cryptocurrencies, to the commits that back your code, to the web's content at large. Yet, whilst all of these tools rely on some common primitives, their specific underlying data structures are not interoperable. KolmoLD is:
    • Addressable: a connecting layer, inspired by the principles of Kolmogorov complexity theory
    • Composable: sending data as code, where code efficiency is theoretically bounded by Kolmogorov complexity
    • Computable: sandboxed computability by treating data as code
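    The primitive shared by all the systems above is content addressing: an object's name is a cryptographic hash of its bytes, so any holder of the data can verify it against its address. A minimal sketch follows (an illustration only, not KolmoLD's actual scheme; the in-memory store is hypothetical):

    ```python
    import hashlib

    store: dict[str, bytes] = {}

    def put(data: bytes) -> str:
        """Store data under its content address: the SHA-256 of the bytes."""
        addr = hashlib.sha256(data).hexdigest()
        store[addr] = data
        return addr

    def get(addr: str) -> bytes:
        data = store[addr]
        # Content addresses are self-verifying: rehash and compare.
        assert hashlib.sha256(data).hexdigest() == addr
        return data

    addr = put(b"hello, content-addressed world")
    print(addr, get(addr))
    ```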
  • Maximizing T-Complexity
    Fundamenta Informaticae XX (2015) 1–19. DOI 10.3233/FI-2012-0000. IOS Press.

    Maximizing T-complexity. Gregory Clark, U.S. Department of Defense, gsclark@tycho.ncsc.mil; Jason Teutsch, Penn State University, teutsch@cse.psu.edu.

    Abstract. We investigate Mark Titchener's T-complexity, an algorithm which measures the information content of finite strings. After introducing the T-complexity algorithm, we turn our attention to a particular class of "simple" finite strings. By exploiting special properties of simple strings, we obtain a fast algorithm to compute the maximum T-complexity among strings of a given length, and our estimates of these maxima show that T-complexity differs asymptotically from Kolmogorov complexity. Finally, we examine how closely de Bruijn sequences resemble strings with high T-complexity.

    1. Introduction

    How efficiently can we generate random-looking strings? Random strings play an important role in data compression since such strings, characterized by their lack of patterns, do not compress well. For practical compression applications, compression must take place in a reasonable amount of time (typically linear or nearly linear time). The degree of success achieved with the classical compression algorithms of Lempel and Ziv [9, 21, 22] and Welch [19] depends on the complexity of the input string; the length of the compressed string gives a measure of the original string's complexity. In the same monograph in which Lempel and Ziv introduced the first of these algorithms [9], they gave an example of a succinctly describable class of strings which yield optimally poor compression up to a logarithmic factor, namely the class of lexicographically least de Bruijn sequences for each length.
  • 1952 Washington UFO Sightings • Psychic Pets and Pet Psychics • The Skeptical Environmentalist (Skeptical Inquirer, Vol. 26, No. 6)
    Skeptical Inquirer: The Magazine for Science and Reason. Volume 26, No. 6, November/December 2002. Cover stories: 1952 Washington UFO Sightings; Psychic Pets and Pet Psychics; The Skeptical Environmentalist.

    The Committee for the Scientific Investigation of Claims of the Paranormal, at the Center for Inquiry-International (adjacent to the State University of New York at Buffalo); an international organization. Paul Kurtz, Chairman; professor emeritus of philosophy, State University of New York at Buffalo. Barry Karr, Executive Director. Joe Nickell, Senior Research Fellow. Massimo Polidoro, Research Fellow. Richard Wiseman, Research Fellow. Lee Nisbet, Special Projects Director.

    FELLOWS: James E. Alcock,* psychologist, York Univ., Toronto; Jerry Andrus, magician and inventor, Albany, Oregon; Marcia Angell, M.D., former editor-in-chief, New England Journal of Medicine; Robert A. Baker, psychologist, Univ. of Kentucky; Stephen Barrett, M.D., psychiatrist, author, consumer advocate, Allentown, Pa.; Barry Beyerstein,* biopsychologist, Simon Fraser Univ., Vancouver, B.C., Canada; Susan Haack, Cooper Senior Scholar in Arts and Sciences, prof. of philosophy, University of Miami; C. E. M. Hansel, psychologist, Univ. of Wales; Al Hibbs, scientist, Jet Propulsion Laboratory; Douglas Hofstadter, professor of human understanding and cognitive science, Indiana Univ.; Gerald Holton, Mallinckrodt Professor of Physics and professor of history of science, Harvard Univ.; Ray Hyman,* psychologist, Univ. …; Irmgard Oepen, professor of medicine (retired), Marburg, Germany; Loren Pankratz, psychologist, Oregon Health Sciences Univ.; John Paulos, mathematician, Temple Univ.; Steven Pinker, cognitive scientist, MIT; Massimo Polidoro, science writer, author, executive director CICAP, Italy; Milton Rosenberg, psychologist, Univ. of Chicago; Wallace Sampson, M.D., clinical professor of medicine, Stanford Univ., editor, Scientific …
  • Algorithmic Information Theory for Automatic Classification (Marion Revolle)
    Algorithmic information theory for automatic classification. Marion Revolle, [email protected]; Nicolas le Bihan, [email protected]; François Cayre, [email protected].

    Main objective. Files: any byte string in a computer (text, music, ...). Goal: a similarity measure in a non-probabilist context, i.e. a similarity metric grounded in algorithmic information theory (Kolmogorov complexity).

    1.1 Complexity.
    A- Simple complexity. K(x): Kolmogorov complexity: the length of a shortest binary program to compute x. L(x): Lempel-Ziv complexity: the minimal number of insert/copy-from-the-past operations needed to generate x.
    B- Conditional complexity. K(x|y): conditional Kolmogorov complexity: the length of a shortest binary program to compute x when y is furnished as an auxiliary input.

    1.2 GZIP. GZIP: compression algorithm = DEFLATE + Huffman. Given x, a file string of size |x| defined on the alphabet A_x of size α_x. DEFLATE: dictionary compression based on LZ77: make references to the past. DEFLATE(x) generates two kinds of symbols: L(a), insert the element a in A_x (a Literal); and Z(-i→j), paste j elements starting i elements back (a Reference of length j). Complexity: the number of symbols needed to compress x with DEFLATE.

    1.3 Examples.
    A- x = ABCA BCCB CABC; L(x) = 6: x parses as L(A) L(B) L(C) Z(-3→3) L(C) Z(-6→5), i.e. ABC · ABC · C · BCABC.
    B- y = ABCA BCAB CABC; L(y) = 5: y parses as L(A) L(B) L(C) Z(-3→3) Z(-6→6), i.e. ABC · ABC · ABCABC. Conditionally, L(y|x) = 2: y|x parses as Z(-12→6) Z(-12→6), i.e. ABCABC · ABCABC.
    C- z = MNOM NOMN OMNO; L(z) = 5.
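    The poster's token streams can be reproduced with a small greedy LZ77-style parser. The sketch below is an illustration under two simplifications of my own (a minimum match length of 2, and matches restricted to lie wholly inside the already-parsed prefix), not the authors' code; it recovers exactly the six tokens of example A and the five of example B:

    ```python
    def lz_tokens(x: str, min_match: int = 2) -> list[str]:
        """Greedy LZ77-style parse into literals L(a) and references
        Z(-i->j) that copy j symbols starting i positions back."""
        tokens, pos = [], 0
        while pos < len(x):
            best_len, best_off = 0, 0
            for start in range(pos):
                length = 0
                while (pos + length < len(x) and start + length < pos
                       and x[start + length] == x[pos + length]):
                    length += 1
                if length > best_len:
                    best_len, best_off = length, pos - start
            if best_len >= min_match:
                tokens.append(f"Z(-{best_off}->{best_len})")
                pos += best_len
            else:
                tokens.append(f"L({x[pos]})")
                pos += 1
        return tokens

    print(lz_tokens("ABCABCCBCABC"))  # 6 tokens, matching L(x) = 6
    print(lz_tokens("ABCABCABCABC"))  # 5 tokens, matching L(y) = 5
    ```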
  • On One-Way Functions and Kolmogorov Complexity
    On One-way Functions and Kolmogorov Complexity. Yanyi Liu, Cornell University, [email protected]; Rafael Pass*, Cornell Tech, [email protected]. September 24, 2020.

    Abstract. We prove the equivalence of two fundamental problems in the theory of computing. For every polynomial t(n) ≥ (1 + ε)n, ε > 0, the following are equivalent:
    • One-way functions exist (which in turn is equivalent to the existence of secure private-key encryption schemes, digital signatures, pseudorandom generators, pseudorandom functions, commitment schemes, and more);
    • t-time-bounded Kolmogorov complexity, K^t, is mildly hard-on-average (i.e., there exists a polynomial p(n) > 0 such that no PPT algorithm can compute K^t for more than a 1 − 1/p(n) fraction of n-bit strings).
    In doing so, we present the first natural, and well-studied, computational problem characterizing the feasibility of the central private-key primitives and protocols in Cryptography.

    *Supported in part by NSF Award SATC-1704788, NSF Award RI-1703846, AFOSR Award FA9550-18-1-0267, and a JP Morgan Faculty Award. This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via 2019-19-020700006. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.

    1 Introduction

    We prove the equivalence of two fundamental problems in the theory of computing: (a) the existence of one-way functions, and (b) mild average-case hardness of the time-bounded Kolmogorov complexity problem.
  • Complexity Measurement Based on Information Theory and Kolmogorov Complexity
    Complexity Measurement Based on Information Theory and Kolmogorov Complexity. Leong Ting Lui (University of Nottingham), Germán Terrazas (University of Nottingham), Hector Zenil (University of Sheffield), Cameron Alexander (University of Nottingham), Natalio Krasnogor (Newcastle University).

    Abstract. In the past decades many definitions of complexity have been proposed. Most of these definitions are based either on Shannon's information theory or on Kolmogorov complexity; these two are often compared, but very few studies integrate the two ideas. In this article we introduce a new measure of complexity that builds on both of these theories. As a demonstration of the concept, the technique is applied to elementary cellular automata and simulations of the self-organization of porphyrin molecules.

    Keywords: complex systems, measures of complexity, open-ended evolution, cellular automata, molecular computation.

    1 Introduction

    The notion of complexity has been used in various disciplines. However, the very definition of complexity is still without consensus. One of the reasons for this is that there are many intuitive interpretations of the concept of complexity that can be applied in different domains of science. For instance, the Lorenz attractor can be simply described by three interacting differential equations, but it is often seen as producing complex chaotic dynamics. The iconic fractal image generated by the Mandelbrot set can be seen as a simple mathematical algorithm having a complex phase diagram. Yet another characteristic of complex systems is that they consist of many interacting components, such as genetic regulatory networks, electronic components, and ecological food chains. Complexity is also associated with processes that generate complex outcomes, evolution being the best example.
  • Kolmogorov Complexity and Its Applications
    CHAPTER 4. Kolmogorov Complexity and its Applications.

    Ming Li, Aiken Computation Laboratory, Harvard University, Cambridge, MA 02138, USA. Paul M.B. Vitányi, Centrum voor Wiskunde en Informatica, Kruislaan 413, 1098 SJ Amsterdam, Netherlands, and Faculteit Wiskunde en Informatica, Universiteit van Amsterdam, Amsterdam, Netherlands.

    Contents
    1. Introduction 189
    2. Mathematical theory 196
    3. Applications of compressibility 214
    4. Example of an application in mathematics: weak prime number theorems 221
    5. Applications of incompressibility: proving lower bounds 222
    6. Resource-bounded Kolmogorov complexity and its applications 236
    7. Conclusion 247
    Acknowledgment 247
    References 248

    Handbook of Theoretical Computer Science, edited by J. van Leeuwen. © Elsevier Science Publishers B.V., 1990.

    1. Introduction

    In everyday language we identify the information in an individual object with the essentials of a description for it. We can formalize this by defining the amount of information in a finite object (like a string) as the size (i.e., number of bits) of the smallest program that, starting with a blank memory, outputs the string and then terminates. A similar definition can be given for infinite strings, but in this case the program produces element after element forever. Thus, 1^n (a string of n ones) contains little information because a program of size about log n outputs it (like "print n 1s"). Likewise, the transcendental number π = 3.1415..., an infinite sequence of seemingly "random" decimal digits, contains O(1) information. (There is a short program that produces the consecutive digits of π forever.) Such a definition would appear to make the amount of information in a string depend on the particular programming language used.
  • Minimum Complexity Pursuit: Stability Analysis
    Minimum Complexity Pursuit: Stability Analysis. Shirin Jalali, Center for Mathematics of Information, California Institute of Technology, Pasadena, California 91125; Arian Maleki, Department of Electrical Engineering, Rice University, Houston, Texas 77005; Richard Baraniuk, Department of Electrical Engineering, Rice University, Houston, Texas 77005.

    Abstract — A host of problems involve the recovery of structured signals from a dimensionality-reduced representation such as a random projection; examples include sparse signals (compressive sensing) and low-rank matrices (matrix completion). Given the wide range of different recovery algorithms developed to date, it is natural to ask whether there exist "universal" algorithms for recovering "structured" signals from their linear projections. We recently answered this question in the affirmative in the noise-free setting. In this paper, we extend our results to the case of noisy measurements.

    I. INTRODUCTION

    Data compressors are ubiquitous in the digital world. They are built based on the premise that text, images, videos, etc. … Can structured signals be recovered from their linear measurements without having knowledge of the underlying structure? Does this ignorance incur a cost in the sampling rate? In algorithmic information theory, Kolmogorov complexity, introduced by Solomonoff [12], Kolmogorov [13], and Chaitin [14], defines a universal notion of complexity for finite-alphabet sequences. Given a finite-alphabet sequence x, the Kolmogorov complexity of x, K(x), is defined as the length of the shortest computer program that prints x and halts. In [15], extending the notion of Kolmogorov complexity to real-valued signals by their proper quantization, we addressed some of the above questions. We introduced the minimum complexity pursuit (MCP) algorithm for recovering …
  • Prefix-Free Kolmogorov Complexity
    Information Theory and Statistics. Lecture 7: Prefix-free Kolmogorov complexity. Łukasz Dębowski, [email protected]. Ph.D. Programme 2013/2014. Project co-financed by the European Union within the framework of the European Social Fund.

    Towards algorithmic information theory

    In Shannon's information theory, the amount of information carried by a random variable depends on the ascribed probability distribution. Andrey Kolmogorov (1903–1987), the founding father of modern probability theory, remarked that it is extremely hard to imagine a reasonable probability distribution for utterances in natural language, but we can still estimate their information content. For that reason Kolmogorov proposed to define the information content of a particular string in a purely algorithmic way. A similar approach had been proposed a year earlier by Ray Solomonoff (1926–2009), who sought optimal inductive inference.

    Kolmogorov complexity: an analogue of Shannon entropy

    The fundamental object of algorithmic information theory is the Kolmogorov complexity of a string, an analogue of Shannon entropy. It is defined as the length of the shortest program for a Turing machine such that the machine prints out the string and halts. The Turing machine itself is defined as a deterministic finite-state automaton which moves along one or more infinite tapes filled with symbols from a fixed finite alphabet and which may read and write individual symbols. The concrete value of Kolmogorov complexity depends on the Turing machine used, but many properties of Kolmogorov complexity are universal.