
Algorithmic Information Theory

Peter D. Grünwald
CWI, P.O. Box 94079
NL-1090 GB Amsterdam, The Netherlands
E-mail: [email protected]

Paul M.B. Vitányi
CWI, P.O. Box 94079
NL-1090 GB Amsterdam, The Netherlands
E-mail: [email protected]

July 30, 2007

Abstract

We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining ‘information’. We discuss the extent to which Kolmogorov’s and Shannon’s information theory have a common purpose, and where they are fundamentally different. We indicate how recent developments within the theory allow one to formally distinguish between ‘structural’ (meaningful) and ‘random’ information as measured by the Kolmogorov structure function, which leads to a mathematical formalization of Occam’s razor in inductive inference. We end by discussing some of the philosophical implications of the theory.

Keywords: Kolmogorov complexity, algorithmic information theory, Shannon information theory, mutual information, data compression, Kolmogorov structure function, Minimum Description Length Principle.

1 Introduction

How should we measure the amount of information about a phenomenon that is given to us by an observation concerning the phenomenon? Both ‘classical’ (Shannon) information theory (see the chapter by Harremoës and Topsøe [2007]) and algorithmic information theory start with the idea that this amount can be measured by the minimum number of bits needed to describe the observation. But whereas Shannon’s theory considers description methods that are optimal relative to some given probability distribution, Kolmogorov’s algorithmic theory takes a different, nonprobabilistic approach: any computer program that first computes (prints) the string representing the observation, and then terminates, is viewed as a valid description. The amount of information in the string is then defined as the size (measured in bits) of the shortest computer program that outputs the string and then terminates. A similar definition can be given for infinite strings, but in this case the program produces element after element forever. Thus, a long sequence of 1s such as

    11...1   (10000 times)                                         (1)

contains little information because a program of size about log 10000 bits outputs it: for i := 1 to 10000; print 1. Likewise, the transcendental number π = 3.1415..., an infinite sequence of seemingly ‘random’ decimal digits, contains but a few bits of information (there is a short program that produces the consecutive digits of π forever).
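To make this concrete, here is a minimal sketch of such a loop program, written in Python purely for illustration (the pseudocode "for i := 1 to 10000; print 1" above is language-neutral, and any universal language would do). The only part of the source that depends on the string is the numeral 10000, which takes about log 10000 bits to write down; everything else is a fixed overhead.

    # Prints the 10000-bit string 11...1 of (1) and then halts. Apart from the
    # numeral "10000" (about log 10000 bits of information), the source code
    # is a fixed constant, so the string has a very short description.
    n = 10000
    print("1" * n)

By contrast, for a typical ‘random’ 10000-bit string no program essentially shorter than the string itself will do.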
Such a definition would appear to make the amount of information in a string (or other object) depend on the particular programming language used. Fortunately, it can be shown that all reasonable choices of programming language lead to a quantification of the amount of ‘absolute’ information in individual objects that is invariant up to an additive constant. We call this quantity the ‘Kolmogorov complexity’ of the object. While regular strings have small Kolmogorov complexity, random strings have Kolmogorov complexity about equal to their own length. Measuring complexity and information in terms of program size has turned out to be a very powerful idea with applications in areas such as theoretical computer science, logic, probability theory, statistics and physics.

This Chapter  Kolmogorov complexity was introduced independently and with different motivations by R.J. Solomonoff (born 1926), A.N. Kolmogorov (1903–1987) and G. Chaitin (born 1943) in 1960/1964, 1965 and 1966 respectively [Solomonoff 1964; Kolmogorov 1965; Chaitin 1966]. During the last forty years, the subject has developed into a major and mature area of research. Here, we give a brief overview of the subject geared towards an audience specifically interested in the philosophy of information. With the exception of the recent work on the Kolmogorov structure function and parts of the discussion on philosophical implications, all material we discuss here can also be found in the standard textbook [Li and Vitányi 1997]. The chapter is structured as follows: we start with an introductory section in which we define Kolmogorov complexity and list its most important properties. We do this in a much simplified (yet formally correct) manner, avoiding both technicalities and all questions of motivation (why this definition and not another one?). This is followed by Section 3, which provides an informal overview of the more technical topics discussed later in this chapter, in Sections 4–6. The final Section 7, which discusses the theory’s philosophical implications, as well as Section 6.3, which discusses the connection to inductive inference, are less technical again, and should perhaps be glossed over before delving into the technicalities of Sections 4–6.

2 Kolmogorov Complexity: Essentials

The aim of this section is to introduce our main notion in the fastest and simplest possible manner, avoiding, to the extent that this is possible, all technical and motivational issues. Section 2.1 provides a simple definition of Kolmogorov complexity. We list some of its key properties in Section 2.2. Knowledge of these key properties is an essential prerequisite for understanding the advanced topics treated in later sections.

2.1 Definition

The Kolmogorov complexity K will be defined as a function from finite binary strings of arbitrary length to the natural numbers N. Thus, K : {0,1}* → N is a function defined on ‘objects’ represented by binary strings. Later the definition will be extended to other types of objects such as numbers (Example 3), sets, functions and probability distributions (Example 7).

As a first approximation, K(x) may be thought of as the length of the shortest computer program that prints x and then halts. This computer program may be written in Fortran, Java, LISP or any other universal programming language. By this we mean a general-purpose programming language in which a universal Turing machine can be implemented. Most languages encountered in practice have this property. For concreteness, let us fix some universal language (say, LISP) and define Kolmogorov complexity with respect to it. The invariance theorem discussed below implies that it does not really matter which one we pick.

Computer programs often make use of data. Such data are sometimes listed inside the program. An example is the bitstring "010110..." in the program

    print "01011010101000110...010"                                (2)
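Programs of the form (2), which simply contain the data verbatim, already give a useful upper bound: no string can have Kolmogorov complexity much larger than its own length. The following sketch (in Python, with lengths counted in characters rather than bits, and an arbitrary example string of our own) illustrates the idea.

    # Any string x has a description of the form (2): a program that contains
    # x literally and prints it. Its length is l(x) plus a small constant,
    # so K(x) never exceeds the length of x by more than a constant.
    def literal_program(x):
        return 'print("' + x + '")'      # the trivial, data-as-program description

    x = "0101101010100011"               # an arbitrary example string
    src = literal_program(x)
    print(len(src) - len(x))             # the constant overhead (here 9 characters)
    exec(src)                            # running the description reproduces x
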
In other cases, such data are given as additional input to the program. To prepare for later extensions such as conditional Kolmogorov complexity, we should allow for this possibility as well. We thus extend our initial definition of Kolmogorov complexity by considering computer programs with a very simple input-output interface: while running, a program is provided with a stream of bits, which it can read one bit at a time. There are no end-markers in the bit stream, so that, if a program p halts on input y and outputs x, then it will also halt on any input yz, where z is a continuation of y, and still output x. We write p(y) = x if, on input y, p prints x and then halts. We define the Kolmogorov complexity relative to a given language as the length of the shortest program p plus input y such that, when given input y, p computes (outputs) x and then halts. Thus:

    K(x) := min { l(p) + l(y) : p(y) = x },                        (3)

where l(p) denotes the length of program p, and l(y) denotes the length of input y, both expressed in bits. To make this definition formally entirely correct, we need to assume that the program p runs on a computer with unlimited memory, and that the language in use has access to all this memory. Thus, while the definition (3) can be made formally correct, it does obscure some technical details which need not concern us now. We return to these in Section 4.

2.2 Key Properties of Kolmogorov Complexity

To gain further intuition about K(x), we now list five of its key properties. Three of these concern the size of K(x) for commonly encountered types of strings. The fourth is the invariance theorem, and the fifth is the fact that K(x) is uncomputable in general. Henceforth, we use x to denote finite bitstrings. We abbreviate l(x), the length of a given bitstring x, to n. We use boldface x to denote an infinite binary string. In that case, x[1:n] is used to denote the initial n-bit segment of x.

1(a). Very Simple Objects: K(x) = O(log n).  K(x) must be small for ‘simple’ or ‘regular’ objects x. For example, there exists a fixed-size program that, when input n, outputs the first n bits of π and then halts. As is easy to see (Section 4.2), specification of n takes O(log n) bits. Thus, when x consists of the first n bits of π, its complexity is O(log n). Similarly, we have K(x) = O(log n) if x represents the first n bits of a sequence like (1) consisting of only 1s. We also have K(x) = O(log n) for the first n bits of e, written in binary; or even for the first n bits of a sequence whose i-th bit is the i-th bit of e^2.3 if the (i−1)-st bit was a one, and the i-th bit of 1/π if the (i−1)-st bit was a zero. For certain ‘special’ lengths n, we may have K(x) even substantially smaller than O(log n). For example, suppose n = 2^m for some m ∈ N. Then we can describe n by first describing m and then describing a program implementing the function f(z) = 2^z; since describing m takes only O(log m) = O(log log n) bits, the complexity of x drops to O(log log n).
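To make property 1(a) concrete, here is a sketch of such a fixed-size program, again in Python and using Machin's formula (our choice, purely for illustration). The program itself has constant length; only the input n varies, and writing n down takes about log n bits, so the first n bits of π are described in O(log n) bits overall.

    # A fixed-size program that reads n and prints the first n bits of the
    # binary expansion of pi (11.001001...), then halts. Machin's formula is
    # evaluated in integer arithmetic; the extra guard bits absorb truncation
    # error for moderate n (a sketch, not a certified bit-exact routine).
    def arccot(x, unity):
        # fixed-point arctan(1/x), scaled by `unity`, via its Taylor series
        power, total, k, sign = unity // x, 0, 1, 1
        while power:
            total += sign * (power // k)
            power //= x * x
            k += 2
            sign = -sign
        return total

    def pi_bits(n, guard=32):
        unity = 1 << (n + guard)
        pi_fixed = 16 * arccot(5, unity) - 4 * arccot(239, unity)   # pi * 2**(n+guard)
        return bin(pi_fixed >> guard)[2:][:n]

    n = int(input())          # specifying n costs about log n bits
    print(pi_bits(n))

The same scheme works for e, or for any other efficiently computable constant, which is why all such prefixes have complexity O(log n).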