Finite Automata and Random Sequences

C. P. Schnorr and H. Stimm
Institut für Angewandte Mathematik der Johann Wolfgang Goethe-Universität
D-6000 Frankfurt 1, Robert-Mayer-Str. 10
Bundesrepublik Deutschland

Abstract. We consider the behaviour of finite automata on infinite binary sequences and study the class of random tests which can be carried out by finite automata. We give several equivalent characterizations of those infinite binary sequences which are random relative to finite automata. These characterizations are based on the concepts of selection rules, martingales and invariance properties defined by finite automata.

1 Introduction

Stimulated by Kolmogorov and Chaitin, several attempts have been made recently, by Martin-Löf, Loveland, and Schnorr, to base the notion of an infinite binary random sequence on the theory of algorithms. In Schnorr [13] an intuitive concept of an effective random test is proposed. The different approaches investigated so far have proven to be equivalent to this one. It thus provides a robust definition of the notion of randomness. We say a sequence is random if it passes every effective random test.

Random sequences necessarily cannot be computed effectively, i.e., they are not recursive. The problem, which is important for applications, of computing sequences that are as "random" as possible can therefore only be addressed in the sense that random sequences are approximated by recursive sequences. Recursive sequences that behave "randomly" are called pseudo-random sequences. From a pseudo-random sequence one would expect that it at least fulfills the "most important" probability laws. The assumption is that the algorithmically simple random tests correspond to particularly important probability laws. Among algorithms, those represented by finite automata are to be regarded as particularly simple. This raises the question of whether there is a distinguished class of sequences whose randomness can be tested by finite automata.

To answer this question, we restrict some of the approaches to effective random tests described in [13] and consider only those tests that can be carried out by finite automata. We consider the following randomness concepts from [13]:

(I) the principle of the excluded gambling system,
(II) the principle of unpredictability, and
(III) invariance properties of random sequences.

In each of these cases the so-called Bernoulli sequences (also called normal, non-reactive sequences or admissible numbers) fulfill all random tests which can be carried out by finite automata. Thus precisely the Bernoulli sequences are classified as random with respect to finite automata. We restrict ourselves here to the case of the uniform distribution over a finite alphabet. The transfer to finite probability spaces with arbitrary probabilities is straightforward. The results of this work generalize the results of the investigations by Agafonoff [1] on the selection rules generated by finite automata.

2 Bernoulli sequences and ergodic states of finite automata

Let $X$ be a finite set, and let $X^*$ ($X^\infty$) be the set of finite (infinite) sequences over $X$. Let $\Lambda \in X^*$ be the empty sequence, and let $|u|$ denote the length of $u \in X^*$. For $u \in X^*$ ($z \in X^\infty$) and $i \in \mathbb{N}$ ($\mathbb{N}$ is the set of nonnegative integers), $u(i)$ ($z(i)$) is the initial segment of length $i$ of $u$ (respectively $z$). For $u \in X^*$ and $y \in X^* \cup X^\infty$, $uy \in X^* \cup X^\infty$ denotes the concatenation of $u$ and $y$. This extends naturally to the product $AB \subseteq X^* \cup X^\infty$ of sets $A \subseteq X^*$ and $B \subseteq X^* \cup X^\infty$.

Let $\chi_A : B \to \{0, 1\}$ be the characteristic function (indicator function) of a set $A \subseteq B$, and let $\|A\|$ denote the number of elements of the set $A$. In the following $\|X\| = p$; we always assume $p \ge 2$.

Definition 2.1. We say $z \in X^\infty$ is a Bernoulli sequence (for the uniform distribution) if
$$\lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} \chi_{X^* u}(z(i)) = p^{-|u|}$$
for all $u \in X^*$. Let $B \subseteq X^\infty$ be the set of Bernoulli sequences.

That is, the relative frequency with which $u$ occurs as a subword of $z$ is exactly the probability of $u$. The Bernoulli sequences defined here have also been studied by Borel, Copeland, Reichenbach and Popper under the names normal numbers, admissible numbers, and non-reactive sequences.

Let $\bar\mu$ denote the product probability measure on $X^\infty$ induced by the uniform distribution on $X$. The following is a standard result of probability theory.

Corollary 2.2. $\bar\mu(B) = 1$.
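For illustration only (this example is not part of the original paper), the following Python sketch estimates the limit in Definition 2.1 on a finite prefix of the base-2 Champernowne sequence (the concatenation of the binary expansions of 1, 2, 3, ...), a classical example of a Bernoulli sequence for $p = 2$. The function names and the prefix length are our own choices.

```python
from itertools import islice

def champernowne_bits():
    """Base-2 Champernowne sequence: concatenate the binary expansions of 1, 2, 3, ..."""
    n = 1
    while True:
        for bit in bin(n)[2:]:
            yield int(bit)
        n += 1

def relative_frequency(prefix, u):
    """(1/n) * #{ i <= n : z(i) ends with u }, i.e. the relative frequency
    with which the word u occurs in the prefix z(n) (cf. Definition 2.1)."""
    n = len(prefix)
    hits = sum(1 for i in range(len(u), n + 1) if prefix[i - len(u):i] == u)
    return hits / n

z_n = list(islice(champernowne_bits(), 100_000))          # the prefix z(n) for n = 100000
for u in ([0], [1], [0, 1], [1, 1], [0, 1, 0]):
    print(u, round(relative_frequency(z_n, u), 4), "expected", 2.0 ** -len(u))
```

The printed frequencies should lie close to $2^{-|u|}$, and the deviation shrinks as the prefix length grows.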
Now let us establish a first connection between Bernoulli sequences and finite automata.

Definition 2.3. A finite automaton is a 4-tuple $U = (X, S, \delta, s_1)$, where $X$ and $S$ are finite sets, namely the input alphabet $X$ and the state set $S$; $s_1 \in S$ is the initial state, and $\delta : X \times S \to S$ is the transition function.

If the automaton reads the input symbol $x \in X$ in state $s \in S$, it passes into the state $\delta(x, s)$. We describe the behaviour of the automaton on a sequence recursively:
$$\delta^*(\Lambda, s) = s, \qquad \delta^*(ux, s) = \delta(x, \delta^*(u, s)).$$

We now turn to Proposition 2.5, which will later allow us, when testing with finite automata, to restrict ourselves to strongly connected automata. Some preliminary considerations are necessary.

Let $U$ be a finite automaton and $s, t \in S$. We say $t$ is reachable from $s$ (or $s$ can be transformed into $t$), written $s \to t$, if $(\exists u \in X^*)\ \delta^*(u, s) = t$. The relation $\to$ is reflexive and transitive. From $\to$ we obtain an equivalence relation $\leftrightarrow$ by
$$s \leftrightarrow t \iff s \to t \wedge t \to s.$$
For $s \in S$ we denote by $\bar s \subseteq S$ the equivalence class of $s$ under $\leftrightarrow$, and by $S/{\leftrightarrow}$ the set of equivalence classes. We say $U$ is strongly connected if $S$ consists of exactly one equivalence class. We can define the following partial order on $S/{\leftrightarrow}$:
$$\bar s \ge \bar t \iff (\exists a \in \bar s)(\exists b \in \bar t)\ a \to b.$$
By the definition of $\leftrightarrow$, this is the same as
$$\bar s \ge \bar t \iff (\forall a \in \bar s)(\forall b \in \bar t)\ a \to b.$$

Since $S$ is finite, $S/{\leftrightarrow}$ possesses minimal elements. These minimal elements have the following ergodic property:

Corollary 2.4. Let $\bar s \in S/{\leftrightarrow}$ be minimal. Then
$$(\forall a \in \bar s)(\forall u \in X^*)\ \delta^*(u, a) \in \bar s.$$

Proof. Assume there are $a \in \bar s$ and $u \in X^*$ with $\delta^*(u, a) \notin \bar s$. Let $t := \delta^*(u, a)$. Then $\bar t \ne \bar s$ and $\bar s \ge \bar t$, which contradicts the minimality of $\bar s$.

The minimal elements of $S/{\leftrightarrow}$ are called ergodic sets in the theory of Markov chains; the states belonging to such sets are called ergodic.
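As a concrete aside (not from the paper), the following Python sketch computes the ergodic states of a finite automaton directly from the definitions above: a state $s$ is ergodic exactly when every state reachable from $s$ can reach $s$ again, so that $\delta^*(u, s)$ never leaves $\bar s$ (Corollary 2.4). The three-state automaton at the end is a hypothetical example.

```python
def reachable(delta, alphabet, s):
    """All states reachable from s, i.e. { delta*(u, s) : u in X* }."""
    seen, stack = {s}, [s]
    while stack:
        a = stack.pop()
        for x in alphabet:
            b = delta[(x, a)]
            if b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

def ergodic_states(delta, states, alphabet):
    """States s whose class under <-> is minimal in S/<-> :
    every state reachable from s can reach s again, so delta*(u, s)
    stays in the class of s for every word u (cf. Corollary 2.4)."""
    reach = {s: reachable(delta, alphabet, s) for s in states}
    return {s for s in states if all(s in reach[t] for t in reach[s])}

# Hypothetical three-state example over the alphabet {0, 1}:
# state 1 can escape to {2, 3}, which together form the only ergodic set.
alphabet = [0, 1]
states = [1, 2, 3]
delta = {(0, 1): 1, (1, 1): 2,
         (0, 2): 3, (1, 2): 2,
         (0, 3): 2, (1, 3): 3}
print(ergodic_states(delta, states, alphabet))   # prints {2, 3}
```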
This notion is important for the following statement.

Proposition 2.5. Let $z$ be a Bernoulli sequence and $U$ a finite automaton. Then for every state $s$ there is an $n$ such that $\delta^*(z(n), s)$ is an ergodic state.

The proof of the proposition is based on:

Lemma 2.6. For every finite automaton there is a $u \in X^*$ such that $\delta^*(u, s)$ is an ergodic state for every state $s$.

Proof. Let $S = \{s_1, \ldots, s_m\}$ be the state set of the automaton. We show by induction on $i$ that for every $i \le m$ there is a $u^{(i)}$ such that $\delta^*(u^{(i)}, s_j)$ is ergodic for all $j \le i$.

Case $i = 1$: Let $\bar t \in S/{\leftrightarrow}$ be a minimal element with $\bar s_1 \ge \bar t$. Then $s_1 \to t$, so there is a $u^{(1)} \in X^*$ with $\delta^*(u^{(1)}, s_1) = t$; hence $\delta^*(u^{(1)}, s_1)$ is ergodic.

From $i$ to $i+1$: Let $\bar t \in S/{\leftrightarrow}$ be a minimal element with $\overline{\delta^*(u^{(i)}, s_{i+1})} \ge \bar t$. Then $\delta^*(u^{(i)}, s_{i+1}) \to t$. Choose $v \in X^*$ with $\delta^*(v, \delta^*(u^{(i)}, s_{i+1})) = t$ and set $u^{(i+1)} := u^{(i)} v$. Then $\delta^*(u^{(i+1)}, s_{i+1}) = t$, and since $\bar t \in S/{\leftrightarrow}$ is minimal, $t$ is ergodic. Moreover, $\delta^*(u^{(i+1)}, s_j)$ is ergodic for all $j \le i$ as well, since by the induction hypothesis $\delta^*(u^{(i)}, s_j)$ is ergodic for $j \le i$, and from an ergodic state only ergodic states can be reached (Corollary 2.4).

Proof of Proposition 2.5. Let $z$ be a Bernoulli sequence and $U$ a finite automaton. Let $u \in X^*$ be as in Lemma 2.6. Since $z$ is a Bernoulli sequence, $u$ occurs as a subword of $z$; choose $n$ such that $z(n) \in X^* u$. Then $z(n) = wu$ for some $w \in X^*$, and $\delta^*(z(n), s) = \delta^*(u, \delta^*(w, s))$ is ergodic by Lemma 2.6.

3 Asset functions generated by finite automata

We want to consider games (in a naive sense) on sequences $u \in X^*$. The result of a game over the sequence $u$ is a non-negative real number $V(u)$, which indicates the player's capital after he has played through the sequence. Altogether, a function $V : X^* \to \mathbb{R}^+$, where $\mathbb{R}^+$ is the set of non-negative real numbers, arises from a fixed "playing strategy" of the player, which he follows while playing through a sequence $u \in X^*$. A game is generally regarded as "fair" if the function $V$ satisfies the martingale property (M):
$$V(u) = \frac{1}{p} \sum_{x \in X} V(ux) \qquad (u \in X^*). \tag{M}$$
In this case the chance of winning is just as great as the chance of losing. In the following, by an asset function we always mean a function satisfying property (M); in today's terminology, these are martingales. The intuition behind the asset function is this: at the beginning of the game the gambler has capital $V(\Lambda) \in \mathbb{R}^+$.
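As a small illustration (ours, not from the paper), the following Python sketch realizes one particular asset function for $p = 2$: a hypothetical strategy that always bets a fixed fraction of the current capital on the next symbol being 1, with a fair payoff of $p$ times the stake for a correct guess. The final loop checks property (M) numerically on all short words.

```python
from itertools import product

P = 2              # alphabet X = {0, 1}
FRACTION = 0.5     # hypothetical strategy: always bet this fraction of the capital on '1'

def asset(u, start_capital=1.0):
    """Capital V(u) after playing through the word u: at each step a fixed
    fraction of the current capital is staked on the next symbol being 1,
    and a correct guess pays P times the stake (a fair payoff for p = P)."""
    v = start_capital
    for x in u:
        stake = FRACTION * v
        v = v - stake + (P * stake if x == 1 else 0.0)
    return v

# Numerical check of the martingale property (M):  V(u) = (1/p) * sum_x V(ux)
for n in range(4):
    for u in product(range(P), repeat=n):
        lhs = asset(u)
        rhs = sum(asset(u + (x,)) for x in range(P)) / P
        assert abs(lhs - rhs) < 1e-12
print("property (M) holds on all words of length < 4")
```

Any strategy of this kind, whose stakes depend only on the word read so far and which is paid $p$ times the stake for a correct guess, yields a function satisfying (M), since averaging over the $p$ equally likely next symbols returns exactly the current capital.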