DISCRETE AND CONTINUOUS DYNAMICAL SYSTEMS
Volume 7, Number 3, July 2001, pp. 477–486
Website: http://math.smsu.edu/journal

ORBIT COMPLEXITY AND DATA COMPRESSION

Stefano Galatolo
Dipartimento di Matematica, Università di Pisa
Via Buonarroti 2, 56127 Pisa, Italy

(Communicated by Konstantin Mischaikow)

Abstract. We consider data compression algorithms as a tool to obtain an approximate measure of the quantity of information contained in a string. This makes it possible to define a notion of orbit complexity for topological dynamical systems. In compact ergodic dynamical systems, entropy is almost everywhere equal to orbit complexity. The use of compression algorithms allows a direct estimation of the information content of the orbits.

1. Introduction. In [6] Brudno gives a definition of orbit complexity for a topological dynamical system. The complexity of an orbit is a measure of the amount of information that is necessary to describe the orbit. In the literature many relations are proved between orbit complexity and other measures of the complexity and of the chaotic behavior of a dynamical system. One of the most important ([6]) states that in an ergodic, compact dynamical system the complexity of almost each orbit is equal to the metric entropy of the system. Brudno’s construction translates the orbit into a set of strings whose complexity defines the complexity of the orbit. The complexity of those strings is defined by the tools of algorithmic information theory (Kolmogorov complexity); see e.g. [8], [18]. Unfortunately, the Kolmogorov complexity of a string is not a computable function, and for this reason Brudno’s orbit complexity is not in general directly computable or useful in concrete applications. One could think, for example, of estimating the entropy of an unknown system through an estimation of the complexity of some orbit (whose behavior is given by an experimental observation or a computer simulation); this is encouraged by the above relation between orbit complexity and entropy, but it is made impossible by the uncomputability of Kolmogorov complexity. In this paper we use data compression algorithms to give a definition of orbit complexity which has the same relation with entropy as Brudno’s one and is computable. We consider a universal coding procedure E and we define the E-orbit complexity by replacing the Kolmogorov complexity of a string with the length of the string after it has been compressed by E, which we consider as an approximate estimation of the information contained in the string. The E-complexity of the orbit of a point is invariant under isomorphisms of dynamical systems and is defined independently of the choice of an invariant measure or of the knowledge of other global features of the system under consideration. For this reason we think that our definition of orbit complexity can give an estimation of the complexity of a dynamical system from

1991 Mathematics Subject Classification. 28D20, 58F13, 68P30.
Key words and phrases. Topological dynamical systems, information, coding, entropy, orbit complexity.

its behavior. Moreover, orbit complexity can give information on the complexity of the dynamics even in cases where other traditional measures of complexity (entropy, Lyapunov exponents...) are not defined or are trivial (see e.g. [3] or [11]). The main result we prove is the analogue of Brudno’s relation between entropy and orbit complexity: in an ergodic, compact dynamical system the E-complexity of almost each orbit is equal to the metric entropy of the system.

In Section 2 we give some notions from algorithmic information theory and the definition of orbit complexity given by Brudno [6], with related results.

In Section 3 we present some essential information about universal coding algorithms. A universal coding algorithm is an algorithm for which the coding of a string depends only on the string itself and not on the statistical properties of the information source which outputs the string. We give the definition of ideal coding schemes: a class of coding procedures which includes the classical Lempel-Ziv 1978 (LZ78) coding scheme [13]. Each ideal coding scheme gives a definition of orbit complexity for which the main theorems hold.

In Section 4 we give the definition of E-orbit complexity for dynamical systems.

In Section 5 we prove the main results, linking orbit complexity to entropy. Theorem 21 is analogous to the main theorem of [6] (Theorem 3.1).

2. Algorithmic Information Theory and orbit complexity. We give the basic definitions of the concepts coming from algorithmic information theory that will be used in the following sections; a detailed exposition can be found in [8] or [18]. Let Σ = {0, 1}* be the set of finite (possibly empty) binary strings. Let us consider a Turing machine C. By the notation C(p) = s we mean that C, starting with input p (the program), stops with output s (C defines a recursive function C : Σ → Σ). If an input gives a never ending computation the output (the value of the recursive function) is not defined. If some input gives a never ending computation (so that the function C is not defined for all p ∈ Σ) we say that C defines a partial recursive function C : Σ → Σ. If the computation performed by C stops for each input then we say that C defines a total recursive function C : Σ → Σ. The following definition, due to Kolmogorov and Chaitin, is the basis of algorithmic information theory.

Definition 1. The Kolmogorov complexity or algorithmic information content of a string s given C is the length of the smallest program p giving s as the output:

$$K_C(s) = \min_{C(p)=s} |p|;$$

if s is not a possible output for the computer C then $K_C(s) = \infty$.

Definition 2. A Turing machine U is said to be asymptotically optimal if for each Turing machine F and each binary string s we have $K_U(s) \leq K_F(s) + c_{U,F}$, where the constant $c_{U,F}$ depends on F and not on s.

It can be proved that an asymptotically optimal Turing machine exists. If we choose an asymptotically optimal Turing machine, the complexity of a string is defined independently of the given Turing machine, up to a constant. In the next sections we consider the value of fractions like $\frac{K_U(s)}{|s|}$; when the length of s goes to infinity the value of the constant becomes irrelevant. For the rest of the paper we suppose that an asymptotically optimal Turing machine U is chosen once and for all.
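Since $K_U$ is uncomputable, in practice one replaces it with the output length of a concrete compression algorithm, which (Lemma 6 below) bounds $K_U$ from above up to an additive constant. The following is a minimal illustrative sketch of this idea, with the zlib compressor standing in for a generic coding procedure (our choice, not part of the paper):

```python
import os
import zlib

def compressed_length(s: bytes, level: int = 9) -> int:
    """Length of s after compression: a computable upper bound
    (up to an additive constant) on the Kolmogorov complexity of s."""
    return len(zlib.compress(s, level))

def information_rate(s: bytes) -> float:
    """Approximate information content per input byte."""
    return compressed_length(s) / len(s)

print(information_rate(b"01" * 5000))       # regular string: ratio near 0
print(information_rate(os.urandom(10000)))  # random string: ratio near 1
```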

2.1. Brudno’s definition of orbit complexity. Here we sketch Brudno’s definition of orbit complexity. For a more detailed introduction see [6] or [15]. As it was stated in the introduction, the orbit is translated into a set of strings. Let us consider a topological dynamical system (X,T): X is a metric space and T is a continuous onto mapping X → X. Let us consider a finite cover $\beta = \{B_0, B_1, ..., B_{N-1}\}$ of X. The sets $B_i$ are measurable sets whose union is X and may have non-empty intersections. We use this cover to code the orbits of (X,T) into a set of infinite strings. If x ∈ X let us define the set of symbolic orbits of x with respect to β as:

$$\varphi_\beta(x) = \{\omega \in \{0, 1, ..., N-1\}^{\mathbb{N}} : \forall n \in \mathbb{N},\ T^n(x) \in B_{\omega(n)}\}.$$

The set $\varphi_\beta(x)$ is the set of all the possible codings of the orbit of x relative to the cover β. Many codings are indeed possible since the sets may have non-empty intersections. The complexity of the orbit of x ∈ X relative to β is defined as:

$$K(x, T|\beta) = \limsup_{n\to\infty} \min_{\omega \in \varphi_\beta(x)} \frac{K_U(\omega^n)}{n},$$

where $\omega^n$ is the string containing the first n digits of ω. We remark that $\omega^n$ is not a binary string; it is easy to imagine how the definition of Kolmogorov complexity can be extended to strings made of digits coming from a finite alphabet. This is the definition of orbit complexity with respect to a general measurable cover; it includes the two interesting cases of measurable covers: measurable partitions and open covers. Open covers are important to give a meaningful definition of orbit complexity which is independent of the choice of a given cover. Taking the supremum over the set of all finite open covers β of the metric space X it is possible to define the complexity of the orbit of x:

$$K(x, T) = \sup_{\beta} K(x, T|\beta).$$

This definition associates to a point x ∈ X a real number which is a measure of the complexity of the orbit of x. For example, if a point is periodic or its orbit converges to some fixed point then its orbit complexity is 0. We remark that it is important to suppose that the sets in the covers are open; if we allow non-open covers there are dynamical systems with points having high orbit complexity while the orbit converges to a point (one that lies on the boundary of more than one set of the cover). Orbit complexity is invariant under topological conjugation:

Theorem 3. ([6]) If the dynamical systems (X,T) and (Y,S) are topologically conjugate, π : X → Y is the conjugating homeomorphism, and π(x) = y, then K(x, T) = K(y, S).

In the literature (see e.g. [10], [6], [15], [4], [11]) many relations have been proved between orbit complexity and other forms of complexity of a dynamical system (Kolmogorov entropy, topological entropy and others) and with other problems concerning orbits of a dynamical system. The following is of particular interest:

Theorem 4. ([6]) Let (X,T) be a dynamical system over a compact space. If µ is an ergodic probability measure on (X,T), then

$$K(x, T) = h_\mu(T) \quad \text{for } \mu\text{-almost each } x \in X,$$

where $h_\mu(T)$ is the Kolmogorov entropy of (X,T) with respect to the invariant measure µ.
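As a concrete illustration of the symbolic coding just described (our example, not taken from the paper), the following sketch codes orbits of the doubling map $T(x) = 2x \bmod 1$ with respect to the partition $\{[0, 1/2), [1/2, 1)\}$; since this cover is a partition, the symbolic orbit of each point is unique (it is the binary expansion of x):

```python
from fractions import Fraction

def symbolic_orbit(x: Fraction, n: int) -> str:
    """First n symbols of the coding of the orbit of x under the
    doubling map T(x) = 2x mod 1, with respect to the partition
    {[0, 1/2), [1/2, 1)}. Exact rational arithmetic avoids the
    floating-point collapse of dyadic orbits."""
    symbols = []
    for _ in range(n):
        symbols.append('0' if x < Fraction(1, 2) else '1')
        x = (2 * x) % 1
    return ''.join(symbols)

print(symbolic_orbit(Fraction(1, 3), 12))  # '010101010101' (periodic orbit)
print(symbolic_orbit(Fraction(3, 7), 12))  # '011011011011' (period 3)
```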

3. Universal coding algorithms. A universal coding scheme E is a family of total recursive functions $E_l : A^* \to \Sigma$ with l ∈ ℕ. By writing $E_l(s) = p$ we mean that the coding scheme $E_l$ gets a string $s \in A^*$, made of symbols from a finite alphabet A, as an input and outputs a binary string p which codifies the initial string s.¹ The index l is a sort of accuracy parameter: for example, in some coding schemes l is the length of a buffer where the words of the string to be coded are memorized, while in other coding schemes the code for a word w is decided only as a function of the l characters preceding w in the string. Since $E_l$ is recursive there is a partial recursive function $F_l$ that decodes the strings coded by $E_l$, i.e. such that $E_l(s) = p \Rightarrow F_l(p) = s$.

Definition 5. The $E_l$-complexity $K^E(s, l)$ of s is the length of $E_l(s)$.

As can be expected, E-complexity is greater than Kolmogorov complexity (up to an additive constant).

Lemma 6. For each $E_l$ there is a constant $c_l$ such that for all finite strings $s^n$,

$$K^E(s^n, l) + c_l \geq K_U(s^n).$$

Proof. If $F_l$ is a partial recursive function which decodes the strings coded by $E_l$, then by the asymptotic optimality of U we have, for all $s^n$, $K_{F_l}(s^n) + c(U, F_l) \geq K_U(s^n)$. On the other hand, $K_{F_l}(s^n) = \min_{F_l(p)=s^n} |p| \leq K^E(s^n, l) = |E_l(s^n)|$ because $F_l(E_l(s^n)) = s^n$.

Let $s \in A^{\mathbb{N}}$ be an infinite string.

Definition 7. The asymptotic compression ratio of the E coding scheme on the string s is defined as:

$$\rho_E(s, l) = \limsup_{n\to\infty} \frac{K^E(s^n, l)}{n},$$

$$\rho_E(s) = \limsup_{l\to\infty} \rho_E(s, l),$$

where $s^n$ is the string containing the first n digits of s.

A universal coding scheme is called ideal if it satisfies the following two properties. The first property says that the coding scheme is asymptotically optimal: asymptotically it reaches the best possible average compression rate, the entropy of the source.

Property 8. If s is drawn from a stationary ergodic source, then

$$\Pr(\{\rho_E(s) = H\}) = 1,$$

where H is the entropy of the source. For example, the LZ77 and LZ78 coding schemes satisfy Property 8 (see [17], [13], [16]). The second property states that if we simplify a string by simplifying our alphabet, the compression ratio does not increase.

¹The coding does not depend on the properties of the information source which outputs the string but only on the string s (universal coding).

Property 9. If A, B are two alphabets and f : B → A is surjective, $(s_i)_{i\in\mathbb{N}}$ is an infinite string in $B^{\mathbb{N}}$ and $(s'_i)_{i\in\mathbb{N}}$, with $s'_i = f(s_i)$, is the string in $A^{\mathbb{N}}$ obtained by identifying each element of B with some element of A, then

$$\rho_E(s') \leq \rho_E(s).$$

The set of ideal schemes is not empty: the Lempel-Ziv 78 finite coding scheme is ideal, as we will see in what follows. In [13] Lempel and Ziv described their very famous algorithm for data compression. We do not describe its internal working here; we only state the features of the algorithm that are used in this paper, and refer to [13] for detailed information. Among the many algorithms for universal coding, the Lempel-Ziv algorithms are very fast and allow the coding of very long strings; for this reason, among others, these algorithms or one of their variants are used in almost all data compression software (Gzip, Winzip...).

By writing $LZ78_l(s) = p$ we mean that the algorithm LZ78 gets a string $s \in A^*$ as an input and outputs a binary string p. The LZ78 algorithm first divides the string s into blocks of length l. Then each block is parsed into words of variable length and each word is codified by a binary string in some efficient way. When a block is codified the machine resets and begins to codify the next block independently of the previous one. This procedure allows the algorithm to be executed by a finite state machine. Roughly speaking, when we consider very long (infinite) strings the coding becomes more and more efficient as the length of the blocks increases.

Now, in order to show that LZ78 is an ideal coding scheme we recall the main results of [13]. We define the quantity $\hat{H}(s)$: if s is an infinite string, $\hat{H}(s)$ is a sort of Shannon entropy of the single string. Let us consider $w \in A^l$, a string from the alphabet A with length l. Let α = #(A) be the cardinality of A. Let $s^{(m,n)} \in A^{n-m}$ be the string containing the segment of s starting from the m-th digit to the n-th digit. Let

$$\sigma(s^{(i+1,i+l)}, w) = \begin{cases} 1, & \text{if } s^{(i+1,i+l)} = w \\ 0, & \text{otherwise} \end{cases} \qquad (0 \leq i \leq n-l);$$

the relative frequency of w in $s^{(1,n)}$ is

$$P(s^{(1,n)}, w) = \frac{1}{n-l+1} \sum_{i=0}^{n-l} \sigma(s^{(i+1,i+l)}, w).$$

This can be interpreted as an “empirical” probability measure of w relative to the string $s^{(1,n)}$. Then the corresponding entropy is

$$\hat{H}_l(s^{(1,n)}) = -\frac{1}{l \log \alpha} \sum_{w \in A^l} P(s^{(1,n)}, w) \log P(s^{(1,n)}, w);$$

then

$$\hat{H}_l(s) = \limsup_{n\to\infty} \hat{H}_l(s^{(1,n)})$$

and

$$\hat{H}(s) = \lim_{l\to\infty} \hat{H}_l(s).$$

The existence of this limit is proved in [13]. Let ρ(s) be the finite state compressibility of s, that is, the best compression ratio attainable coding the infinite string s by a finite state machine (see [13], page

531, for a precise definition). Let $\rho_{LZ78}(s)$ be the compression ratio of the LZ78 coding scheme (Definition 7). The main results of [13] can be summarized in our notation as follows:

Theorem 10. ([13], theorem 2 statement (iii), theorem 3) For every infinite sequence s,

$$\rho_{LZ78}(s) = \rho(s) = \hat{H}(s).$$

Theorem 11. ([13], theorem 4) If s is drawn from an ergodic source with entropy H then $\Pr[\rho(s) = H] = 1$.

Theorem 11 shows directly that LZ78 satisfies Property 8. The LZ78 coding scheme also satisfies Property 9:

Proposition 12. If A, B are two alphabets and f : B → A is surjective, $(s_i)_{i\in\mathbb{N}}$ is an infinite string in $B^{\mathbb{N}}$ and $(s'_i)_{i\in\mathbb{N}}$ is such that $s'_i = f(s_i)$, then

$$\rho_{LZ78}(s') \leq \rho_{LZ78}(s).$$

Proof. The proof follows immediately from Theorem 10 and the easy observation that, by the standard convexity properties of the entropy, $\hat{H}(s') \leq \hat{H}(s)$.
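For concreteness, here is a sketch of the incremental parsing at the heart of LZ78 (our simplified rendering: the real algorithm of [13] also divides the input into blocks and emits an efficient binary code), together with the standard estimate of roughly $\log_2 c + \log_2 \alpha$ bits per parsed phrase, where c is the number of phrases:

```python
from math import log2

def lz78_parse(s: str):
    """Incremental (LZ78) parsing: split s into the shortest phrases
    not seen before; each phrase = (index of longest known prefix, new symbol)."""
    dictionary = {"": 0}
    phrases = []
    current = ""
    for ch in s:
        if current + ch in dictionary:
            current += ch
        else:
            phrases.append((dictionary[current], ch))
            dictionary[current + ch] = len(dictionary)
            current = ""
    if current:  # leftover: emit as (prefix index, last symbol)
        phrases.append((dictionary[current[:-1]], current[-1]))
    return phrases

def lz78_ratio(s: str, alphabet_size: int = 2) -> float:
    """Rough code length per symbol: each of the c phrases costs
    about log2(c) + log2(alphabet_size) bits."""
    c = len(lz78_parse(s))
    return c * (log2(max(c, 2)) + log2(alphabet_size)) / len(s)

print(lz78_ratio("01" * 5000))  # low: few distinct phrases, highly compressible
```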

4. Definition of E-orbit complexity. The definition is similar to Brudno’s definition of orbit complexity. The geometrical construction is almost the same; the main difference is the way the complexity of the strings is measured. In the following, E will be an ideal coding scheme. Using the notation of Sections 2 and 3 we define orbit complexity as:

Definition 13. The E-complexity of the orbit of x ∈ X relative to β is defined as:

$$K^E(x, T|\beta) = \inf_{\omega \in \varphi_\beta(x)} \rho_E(\omega).$$

The complexity of an orbit with respect to different covers α, β has the following property:

Proposition 14. If E is ideal and β is a refinement of α then $K^E(x, T|\alpha) \leq K^E(x, T|\beta)$.

Proof. Suppose $\alpha = \{A_1, ..., A_n\}$ and $\beta = \{B_1, ..., B_m\}$ are covers of X with β ≥ α; let us choose a function I : {1, ..., m} → {1, ..., n} such that I(j) = i implies $B_j \subset A_i$. By renaming the sets of α we can suppose that I({1, ..., m}) = {1, ..., k} with k ≤ n. Let us fix ε > 0 and choose $\omega_\beta$, a symbolic orbit for the cover β, such that $\rho_E(\omega_\beta) - \epsilon < \inf_{\omega \in \varphi_\beta(x)} \rho_E(\omega) = K^E(x, T|\beta)$. $\omega_\beta$ is an infinite string of symbols coming from the alphabet {1, ..., m}, and the string $\omega_\alpha = ((\omega_\alpha)_i)_{i\in\mathbb{N}}$ with $(\omega_\alpha)_i = I((\omega_\beta)_i)$ is a symbolic orbit for x with respect to α. By Property 9 we have $\rho_E(\omega_\beta) \geq \rho_E(\omega_\alpha)$. Since $K^E(x, T|\alpha) \leq \rho_E(\omega_\alpha)$, then $K^E(x, T|\alpha) < K^E(x, T|\beta) + \epsilon$ for each ε, and the statement is proved.

E-orbit complexity is greater than Brudno’s orbit complexity; this will be used in the proof of Theorem 21.

Lemma 15. $K^E(x, T|\beta) \geq K(x, T|\beta)$, where $K(x, T|\beta)$ is Brudno’s orbit complexity of x.

Proof. For all finite strings $s^n$ we have from Lemma 6 that, for all l, $K^E(s^n, l) + c_l \geq K_U(s^n)$; but

$$K^E(x, T|\beta) = \inf_{s \in \varphi_\beta(x)} \limsup_{l\to\infty} \limsup_{n\to\infty} \frac{K^E(s^n, l)}{n}$$

and

$$K(x, T|\beta) = \limsup_{n\to\infty} \min_{s \in \varphi_\beta(x)} \frac{K_U(s^n)}{n}.$$

If s is a symbolic orbit such that $\rho_E(s) \leq \inf_{\omega \in \varphi_\beta(x)} \rho_E(\omega) + \epsilon$, we have, fixing l, for each n,

$$\frac{K^E(s^n, l) + c_l}{n} \geq \min_{\omega \in \varphi_\beta(x)} \frac{K_U(\omega^n)}{n},$$

$$\limsup_{n\to\infty} \frac{K^E(s^n, l) + c_l}{n} \geq \limsup_{n\to\infty} \min_{\omega \in \varphi_\beta(x)} \frac{K_U(\omega^n)}{n},$$

and then $\rho_E(s) \geq K(x, T|\beta)$; hence $K^E(x, T|\beta) + \epsilon \geq K(x, T|\beta)$ for each ε.

As was done for Brudno’s orbit complexity, by considering the supremum over all finite open covers we define the complexity of the orbit of x:

Definition 16. The E-orbit complexity of the orbit of x is defined as

$$K^E(x, T) = \sup_{\beta \in \{\text{finite open covers of } X\}} K^E(x, T|\beta).$$

For example, if the point x is periodic or its orbit converges to some fixed point then its orbit complexity is 0. E-orbit complexity is also invariant under topological conjugation:

Theorem 17. If the dynamical systems (X,T) and (Y,S) are topologically conjugate, π : X → Y is the conjugating homeomorphism, and π(x) = y, then $K^E(x, T) = K^E(y, S)$.

It is also easy to see that by Lemma 15 the following holds:

Lemma 18. For all x ∈ X, $K^E(x, T) \geq K(x, T)$.
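Definition 13 suggests a direct numerical experiment (our own setup, with zlib as an approximate stand-in for an ideal scheme): code a long orbit segment with respect to a partition and measure the compression ratio. A zero-entropy irrational rotation should give a ratio tending to 0, while a fair-coin symbolic system should give about 1 bit per symbol:

```python
import random
import zlib

def rotation_orbit(x0: float, alpha: float, n: int) -> bytes:
    """Symbolic orbit of the rotation x -> x + alpha (mod 1) with
    respect to the partition {[0, 1/2), [1/2, 1)}, as ASCII '0'/'1'."""
    out, x = bytearray(), x0
    for _ in range(n):
        out.append(48 if x < 0.5 else 49)  # ord('0'), ord('1')
        x = (x + alpha) % 1.0
    return bytes(out)

def compression_ratio_bits(s: bytes) -> float:
    """Compressed size in bits per coded symbol: a computable
    stand-in for the E-complexity of the orbit."""
    return 8 * len(zlib.compress(s, 9)) / len(s)

n = 200_000
print(compression_ratio_bits(rotation_orbit(0.1, 2 ** 0.5 % 1, n)))  # small
rng = random.Random(0)
coin = bytes(rng.choice(b"01") for _ in range(n))
print(compression_ratio_bits(coin))  # near 1 bit/symbol (= log2 2)
```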

5. E-orbit complexity and entropy.

5.1. Symbolic Dynamics. Let $A^{\mathbb{N}}$ be the set of infinite strings of symbols coming from the alphabet A. A is a finite set, equipped with the discrete topology; $A^{\mathbb{N}}$ will be equipped with the product topology. On $A^{\mathbb{N}}$ the shift map $\sigma : A^{\mathbb{N}} \to A^{\mathbb{N}}$ is defined in the following way: if $s = (s_i)_{i\in\mathbb{N}}$ and $t = (t_i)_{i\in\mathbb{N}}$ then $t = \sigma(s) \iff t_i = s_{i+1}$. If $X \subset A^{\mathbb{N}}$ is a Borel set invariant under σ and µ is an invariant Borel measure we call (X, µ, σ) a symbolic dynamical system.

To (X, µ, σ) there is naturally associated an information source $(X_i, \Omega, \mu')$. $X_i$ is a random sequence on the probability space $(\Omega, \mu')$ defined in the following way: $\Omega = A^{\mathbb{N}}$, $\mu'$ is defined by $B \subset A^{\mathbb{N}} \Rightarrow \mu'(B) = \mu(B \cap X)$, and finally the $X_i$ are the projections on Ω: if $\omega = (\omega_i)_{i\in\mathbb{N}} \in \Omega$ then $X_i(\omega) = \omega_i$. The properties of the dynamical system translate into properties of the source:

• if µ is invariant then $(X_i, \Omega, \mu')$ is a stationary source;
• if (X, µ, T) is ergodic then $(X_i, \Omega, \mu')$ is an ergodic source.

If (X, µ, T) is a symbolic dynamical system and x ∈ X then x is an infinite sequence $(x_i)$ of symbols in A. The cylinders $C_0^j = \{x : x_0 = A_j\}$ are Borel sets. Moreover they form a measurable partition $\beta = \{C_0^1, ..., C_0^j\}$ of X. From Property 8 we have:

Lemma 19. If E is ideal and (X, µ, σ) is an ergodic symbolic dynamical system, then for µ-almost each x

$$K^E(x, \sigma|\beta) = h_\mu(\sigma|\beta) = h_\mu(\sigma),$$

where $h_\mu(\sigma)$ is the metric entropy of (X, µ, σ).

Proof. As seen before, to X there is associated a stationary ergodic information source, and the orbits of X are trajectories of the source. By definition we have that $K^E(x, \sigma|\beta) = \rho_E(x)$. Property 8 implies that for µ-almost all x, $K^E(x, \sigma|\beta) = \rho_E(x) = H$ (the Shannon entropy of the associated source), but $H = h_\mu(\sigma|\beta) = h_\mu(\sigma)$ (the Kolmogorov entropy of (X, µ, σ)).

5.2. Measurable Partitions. We recall that if (X, µ, T) is a dynamical system and $\alpha = \{A_1, ..., A_n\}$ is a measurable partition of X then we can associate to (X, µ, T) a symbolic dynamical system $(\Omega_\alpha, \mu_\alpha, \sigma)$ (called a symbolic model of (X,T)). $\Omega_\alpha$ is a subset of $\{1, ..., n\}^{\mathbb{N}}$ (the space of infinite strings made of symbols from the alphabet {1, ..., n}). To a point x ∈ X there is associated a string $\omega = \varphi_\alpha(x)$² (with $\omega = (\omega_i)_{i\in\mathbb{N}}$), and $\Omega_\alpha = \cup_{x\in X}\, \varphi_\alpha(x)$. We define a notation for finite segments of strings and the associated cylinders:

$$\omega^{(k,n)} = (\omega_i)_{k\leq i\leq n} = (\omega_k, \omega_{k+1}, ..., \omega_n),$$

$$C(\omega^{(k,n)}) = \{\bar{\omega} = (\bar{\omega}_i) : \bar{\omega}_i = \omega_i \text{ for } k \leq i \leq n-1\};$$

$C(\omega^{(k,n)})$ is the cylinder with center in $\omega^{(k,n)}$. The measure $\mu_\alpha$ is defined first on the cylinders $C(\omega^{(k,n)})$ by

$$\mu_\alpha(C(\omega^{(k,n)})) = \mu\Big(\bigcap_{i=k}^{n-1} T^{-i}(A_{\omega_i})\Big)$$

and then extended by the classical Kolmogorov theorem to a measure $\mu_\alpha$ on $\Omega_\alpha$.

Theorem 20. If E is an ideal coding scheme, (X, µ, T) is an ergodic dynamical system and α is a measurable partition of X, then for µ-almost all x

$$K^E(x, T|\alpha) = h_\mu(T|\alpha),$$

where $h_\mu(T|\alpha)$ is the metric entropy of (X, µ, T) with respect to the measurable partition α.

Proof. To a dynamical system (X, µ, T) and a measurable partition α there is associated a symbolic dynamical system $(\Omega_\alpha, \mu_\alpha, \sigma)$ as seen before. If (X, µ, T) is ergodic then $(\Omega_\alpha, \mu_\alpha, \sigma)$ is ergodic, and $h_\mu(X, T|\alpha) = h_{\mu_\alpha}(\Omega_\alpha, \sigma)$ (see e.g. [6]). Now by Lemma 19 the E-orbit complexity of almost all points in $\Omega_\alpha$ is equal to $h_{\mu_\alpha}(\Omega_\alpha, \sigma)$. We define $Q_{\Omega_\alpha} = \{\omega \in \Omega_\alpha : \rho_E(\omega) = h_{\mu_\alpha}(\Omega_\alpha, \sigma)\}$. If we consider $Q = \varphi_\alpha^{-1}(Q_{\Omega_\alpha})$ we have, for all x ∈ Q,

$$K^E(x, T|\alpha) = \rho_E(\varphi_\alpha(x)) = h_{\mu_\alpha}(\Omega_\alpha, \sigma);$$

²See section 3.

by the way the measure $\mu_\alpha$ is constructed we have $\mu(Q) = \mu_\alpha(Q_{\Omega_\alpha}) = 1$.

5.3. Topological Dynamics. Now we prove the main theorem: we prove that if we assume compactness of the space and ergodicity of the system, the E-complexity (Definition 16) of the orbit of almost all points is equal to the entropy of the system.

Theorem 21. If E is an ideal compression scheme, (X, µ, T) is ergodic and X is compact, then for µ-almost all x

$$K^E(x, T) = h_\mu(T).$$

Before presenting the proof of Theorem 21 we give some preliminary remarks and notation. From Proposition 14 and the compactness of X it follows:

Remark 22. If X is compact and for a family of finite open covers $\alpha_i$ we have $\lim_{i\to\infty} d(\alpha_i) = 0$, where $d(\alpha_i)$ is the diameter of the cover $\alpha_i$, then

$$K^E(x, T) = \sup_{i\in\mathbb{N}} K^E(x, T|\alpha_i).$$

If $U = \{U_1, ..., U_n\}$ is an open cover we denote by $\beta(U) = \{B_1, ..., B_n\}$ the partition generated by U in the following way:

$$B_1 = U_1,$$

$$B_2 = U_2 - U_1,\ \ \ldots,\ \ B_n = U_n - \bigcup_{i=1}^{n-1} U_i.$$

Proof of Theorem 21. We divide the proof in two steps: first we prove that $K^E(x, T) \leq h_\mu(T)$ for µ-almost all x ∈ X. The proof can be obtained by the easy observation that if U is an open cover of X then $K^E(x, T|U) \leq K^E(x, T|\beta(U))$, which follows directly from the definition of E-orbit complexity. If $U_i$ is a family of covers such that $\lim_{i\to\infty} d(U_i) = 0$ then by Remark 22 we have that

$$K^E(x, T) = \sup_i K^E(x, T|U_i);$$

by the above observation $\sup_i K^E(x, T|U_i) \leq \sup_i K^E(x, T|\beta(U_i))$, and then we have that

$$K^E(x, T) \leq \sup_i K^E(x, T|\beta(U_i)) \leq \sup_i h_\mu(T|\beta(U_i)) \leq h_\mu(T) \quad \text{a.e.},$$

where the middle inequality holds µ-almost everywhere by Theorem 20 applied to each partition $\beta(U_i)$.

Now we prove that $K^E(x, T) \geq h_\mu(T)$ for µ-almost all x ∈ X. By Lemma 18 we have that $K^E(x, T) \geq K(x, T)$, but by Lemma 3.15 of [6] the set $\{x \in X : K(x, T) < h_\mu(T)\}$ has zero measure, and the statement is proved.
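As a numerical sanity check of Theorem 21 (an illustration under our own choices: the logistic map at parameter 4, whose metric entropy for its absolutely continuous invariant measure is log 2; partitions into $2^k$ equal intervals; zlib in place of an ideal scheme; floating-point pseudo-orbits), one can watch the compression-based estimate approach the entropy as the partition is refined:

```python
import zlib

def logistic(x: float) -> float:
    return 4.0 * x * (1.0 - x)  # metric entropy log 2 = 1 bit per iterate

def coded_orbit(x0: float, n: int, k: int) -> bytes:
    """Orbit of the logistic map coded by the partition of [0, 1]
    into 2^k equal intervals (one byte per symbol)."""
    out, x = bytearray(), x0
    for _ in range(n):
        out.append(min(int(x * 2 ** k), 2 ** k - 1))
        x = logistic(x)
    return bytes(out)

def rate_bits(s: bytes) -> float:
    return 8 * len(zlib.compress(s, 9)) / len(s)

# Refining the partition (k = 1, 2, 4, 6), the estimate should settle
# near h = 1 bit per symbol, up to compressor overhead.
for k in (1, 2, 4, 6):
    print(k, rate_bits(coded_orbit(0.123, 300_000, k)))
```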

6. Final remarks. As was said in the introduction, our results allow an estimation of the entropy of a dynamical system from its behavior. Even more interesting is the case of the study of dynamical systems where invariant measures are trivial or give 0-entropy dynamics. This is for example the case of sporadic dynamics [11]. Such systems, such as the Manneville map, are of physical interest (see e.g. [1], [2], [14]). In [3] the orbit complexity approach was applied to the Manneville map. By using a particular coding scheme we obtained a direct measure of the information content of the orbit that agrees with the theoretical predictions of Gaspard and Wang ([11]).
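For readers who wish to experiment along the lines of [3] (the sketch below uses our own choice of partition and is not the particular coding scheme used there), the Manneville map is $x \mapsto x + x^z \pmod 1$; for z > 2 the dynamics is sporadic and the information content of the coded orbit is predicted to grow like a power of n with exponent less than 1 ([11]), so the compression ratio tends to 0 while the compressed length still diverges:

```python
def manneville(x: float, z: float = 2.5) -> float:
    """Manneville map x -> x + x^z (mod 1); intermittent near x = 0."""
    return (x + x ** z) % 1.0

def manneville_orbit(x0: float, n: int, z: float = 2.5) -> bytes:
    """Code the orbit with 0 on the laminar region [0, 1/2)
    and 1 on [1/2, 1) (our choice of partition)."""
    out, x = bytearray(), x0
    for _ in range(n):
        out.append(48 if x < 0.5 else 49)
        x = manneville(x, z)
    return bytes(out)

# Compress the output with any scheme (e.g. zlib, as in the earlier
# sketches) to estimate the information content of the orbit.
```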

REFERENCES

[1] Allegrini P., Barbi M., Grigolini P., West B.J., Dynamical model for DNA sequences, Phys. Rev. E 52, no. 5, 5281-5297 (1995).
[2] Allegrini P., Grigolini P., West B.J., A dynamical approach to DNA sequences, Phys. Lett. A 211, 217-222 (1996).
[3] Argenti F., Benci V., Cerrai P., Cordelli A., Galatolo S., Menconi G., Information and dynamical systems: a concrete measurement on sporadic dynamics, work in preparation.
[4] Batterman R., White H., Chaos and algorithmic complexity, Found. Phys. 26 (1996), no. 3, 307-336.
[5] Benci V., Alcune riflessioni su informazione, entropia e complessità, in Modelli matematici nelle scienze biologiche, P. Freguglia ed., QuattroVenti, Urbino, 1998.
[6] Brudno A.A., Entropy and the complexity of the trajectories of a dynamical system, Trans. Moscow Math. Soc. 2, 127-151 (1983).
[7] Carlini P., La compressione dei dati, Hoepli Informatica (1999).
[8] Chaitin G.J., Information, randomness and incompleteness. Papers on algorithmic information theory, World Scientific, Singapore, 1987.
[9] Ford J., Directions in classical chaos, in Directions in chaos, Vol. 1, 1-16, World Sci. Publishing, Singapore, 1987.
[10] Galatolo S., Orbit complexity by computable structures, preprint, Dipartimento di Matematica, Università di Pisa (1999).
[11] Gaspard P., Wang X.-J., Sporadicity: between periodic and chaotic dynamical behaviours, Proc. Natl. Acad. Sci. USA 85, 4591-4595 (1988).
[12] Khinchin A.I., Mathematical Foundations of Information Theory, Dover Publications, New York.
[13] Lempel A., Ziv J., Compression of individual sequences via variable-rate coding, IEEE Trans. Inform. Theory IT-24, 530-536 (1978).
[14] Manneville P., Intermittency, self-similarity and 1/f spectrum in dissipative dynamical systems, J. Physique 41, 1235-1243 (1980).
[15] White H., Algorithmic complexity of points in dynamical systems, Ergodic Theory Dynam. Systems 13, 807-830 (1993).
[16] Wyner A., Ziv J., The sliding-window Lempel-Ziv algorithm is asymptotically optimal, Proc. IEEE 82 (1994), 872-877.
[17] Ziv J., Lempel A., A universal algorithm for sequential data compression, IEEE Trans. Inform. Theory IT-23 (1977), no. 3, 337-343.
[18] Zvonkin A.K., Levin L.A., The complexity of finite objects and the algorithmic-theoretic foundations of the notion of information and randomness, Russ. Math. Surv. 25 (1970).

Revised version received from the editor November 2000.

E-mail address: [email protected]