Finite Automata and Random Sequences

C. P. Schnorr and H. Stimm
Institut für Angewandte Mathematik der Johann Wolfgang Goethe-Universität
D-6000 Frankfurt 1, Robert-Mayer-Str. 10
Bundesrepublik Deutschland

Abstract. We consider the behaviour of finite automata on infinite binary sequences and study the class of random tests which can be carried out by finite automata. We give several equivalent characterizations of those infinite binary sequences which are random relative to finite automata. These characterizations are based on the concepts of selection rules, martingales and invariance properties defined by finite automata.

1 Introduction

Stimulated by Kolmogorov and Chaitin, several attempts have been made recently, by Martin-Löf, Loveland, and Schnorr, to base the notion of an infinite binary random sequence on the theory of algorithms. In Schnorr [13] an intuitive concept of the effective random test is proposed; the different approaches investigated so far have proven to be equivalent to it. It thus provides a robust definition of the notion of randomness: a sequence is random if it passes every effective random test.

Random sequences cannot, in general, be computed effectively, i.e. they are not recursive. The problem, important for applications, of computing "random" sequences can therefore only be treated in the sense that random sequences are approximated by recursive sequences. Recursive sequences that behave "randomly" are called pseudo-random sequences. From a pseudo-random sequence one expects that it at least fulfills the "most important" probability laws. The underlying assumption is that algorithmically simple random tests correspond to particularly important probability laws. Among algorithms, those represented by finite automata may be regarded as particularly simple. This raises the question of whether there is a distinguished class of probability laws that can be tested by finite automata.

To answer this question, we restrict several of the approaches to effective random tests described in [13] and consider only those tests that can be carried out by finite automata. We consider the following randomness concepts from [13]: (I) the principle of the excluded gambling system, (II) the principle of unpredictability, and (III) invariance properties of random sequences.

In each of these cases it is the so-called Bernoulli sequences (also called normal or non-reactive sequences, or admissible numbers) that fulfill all random tests which can be carried out by finite automata. Thus, precisely the Bernoulli sequences are classified as random with respect to finite automata. We restrict ourselves here to the case of the uniform distribution over a finite alphabet; the transfer to finite probability spaces with arbitrary probabilities is straightforward. The results of this work generalize the results of Agafonoff [1] on selection rules generated by finite automata.

2 Bernoulli sequences and ergodic states of finite automata

Let X be a finite set and let X∗ (X∞) be the set of finite (infinite) sequences over X. Let Λ ∈ X∗ be the empty sequence, and let |u| denote the length of u ∈ X∗. For u ∈ X∗ (z ∈ X∞) and i ∈ N (N the set of nonnegative integers), u(i) (z(i)) denotes the initial segment of length i of u (of z). For u ∈ X∗ and y ∈ X∗ ∪ X∞, let uy ∈ X∗ ∪ X∞ denote the concatenation of u and y. This yields in the natural way a product AB ⊆ X∗ ∪ X∞ of sets A ⊆ X∗ and B ⊆ X∗ ∪ X∞. For A ⊆ B, let χ_A : B → {0, 1} be the characteristic function (indicator function) of A, and let ‖A‖ denote the number of elements of A. In the following ‖X‖ = p, and we assume p ≥ 2 throughout.

Definition 2.1. We say z ∈ X∞ is a Bernoulli sequence (for the uniform distribution) if

\[ \lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^{n} \chi_{X^{*}u}\bigl(z(i)\bigr) = p^{-|u|} \]

for all u ∈ X∗. Let B ⊆ X∞ denote the set of Bernoulli sequences.

In other words, the relative frequency with which u occurs as a subword of z is exactly the probability of u. The Bernoulli sequences defined here have also been studied by Borel, Copeland, Reichenbach and Popper under the names normal numbers, admissible numbers, and non-reactive sequences.
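Definition 2.1 is a purely combinatorial condition on the initial segments of z: for every block u one counts how many of the segments z(1), …, z(n) end in u. The following Python sketch (our illustration, not part of the paper; the function name and the choice of test sequence are ours) estimates these relative frequencies on a finite prefix and compares them with the target value p^{-|u|}. The prefix used is the binary Champernowne-type sequence 1 10 11 100 101 …, a standard example of a sequence that is normal, hence Bernoulli, in base 2.

```python
from itertools import product

def block_frequencies(prefix, alphabet, max_len=2):
    """For every block u over `alphabet` with |u| <= max_len, estimate the
    relative frequency of initial segments of `prefix` that end in u.
    For a Bernoulli sequence these frequencies tend to p**(-|u|)."""
    p = len(alphabet)
    n = len(prefix)
    freqs = {}
    for length in range(1, max_len + 1):
        for block in product(alphabet, repeat=length):
            u = "".join(block)
            # count the i with 1 <= i <= n such that z(i) ends in u
            hits = sum(1 for i in range(length, n + 1) if prefix[i - length:i] == u)
            freqs[u] = (hits / n, p ** -length)
    return freqs

# Prefix of the binary Champernowne-type sequence 1 10 11 100 101 ...
prefix = "".join(format(k, "b") for k in range(1, 2000))
for u, (empirical, target) in sorted(block_frequencies(prefix, "01").items()):
    print(f"{u}: empirical {empirical:.4f}   target p^-|u| = {target:.4f}")
```

On a finite prefix one of course only sees an approximation; the defining limit concerns the behaviour as n → ∞.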
Let μ̄ denote the product probability measure on X∞ induced by the uniform distribution on X. Standard probability theory then yields:

Corollary 2.2. μ̄(B) = 1.

Now let us establish a first connection between Bernoulli sequences and finite automata.

Definition 2.3. A finite automaton is a 4-tuple U = (X, S, δ, s_1), where X and S are finite sets, namely the input alphabet X and the state set S, where s_1 ∈ S is the initial state, and where δ : X × S → S is the transition function.

If the automaton reads the input symbol x ∈ X in state s ∈ S, it passes into the state δ(x, s). The behaviour of the automaton on a sequence is described by the recursion

\[ \delta^{*}(\Lambda, s) = s, \qquad \delta^{*}(ux, s) = \delta\bigl(x, \delta^{*}(u, s)\bigr). \]

We now prepare Proposition 2.5, which will later allow us to restrict ourselves to strongly connected automata when testing with finite automata. Some preliminary considerations are necessary.

Let U be a finite automaton and s, t ∈ S. We say s is transformable into t (or: t is reachable from s), written s → t, if there is a u ∈ X∗ with δ∗(u, s) = t. The relation → is reflexive and transitive. It can be refined to an equivalence relation ↔ as follows:

s ↔ t  ⟺  s → t ∧ t → s.

For s ∈ S let s̄ ⊆ S denote the equivalence class of s under ↔, and let S/↔ denote the set of equivalence classes. We call U strongly connected if S consists of exactly one equivalence class. On S/↔ we define the partial order

s̄ > t̄  ⟺  (∃a ∈ s̄)(∃b ∈ t̄) a → b,

which, by the definition of ↔, is the same as

s̄ > t̄  ⟺  (∀a ∈ s̄)(∀b ∈ t̄) a → b.

Since S is finite, S/↔ has a minimal element. The minimal elements of S/↔ have the following ergodic property:

Corollary 2.4. Let s̄ ∈ S/↔ be minimal. Then (∀a ∈ s̄)(∀u ∈ X∗) δ∗(u, a) ∈ s̄.

Proof. Assume there are a ∈ s̄ and u ∈ X∗ with δ∗(u, a) ∉ s̄. Let t := δ∗(u, a). Then t̄ ≠ s̄ and s̄ > t̄, which contradicts the minimality of s̄.

The minimal elements of S/↔ are called ergodic sets in the theory of Markov chains, and the elements of such sets are called ergodic states. This notion enters the following statement.
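The relation →, the classes of ↔, and hence the ergodic states can be computed from δ by a simple reachability search: a state s is ergodic exactly when every state reachable from s can reach s back. The sketch below (our illustration; the example automaton and all names are hypothetical) extends δ to δ∗ as above and lists the ergodic states of a small three-state automaton.

```python
def delta_star(delta, u, s):
    """delta*(u, s): iterate the transition function over the word u."""
    for x in u:
        s = delta[(x, s)]
    return s

def ergodic_states(states, alphabet, delta):
    """States whose equivalence class under <-> is minimal in S/<->."""
    # reach[s] = set of states reachable from s (reflexive and transitive)
    reach = {}
    for s in states:
        seen, frontier = {s}, [s]
        while frontier:
            t = frontier.pop()
            for x in alphabet:
                nxt = delta[(x, t)]
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        reach[s] = seen
    # s is ergodic iff every state reachable from s can reach s back
    return {s for s in states if all(s in reach[t] for t in reach[s])}

# A small example over X = {"0", "1"}: once state "a" is left, the
# automaton stays inside {"b", "c"}, which is the unique ergodic set.
states = {"a", "b", "c"}
delta = {("0", "a"): "b", ("1", "a"): "a",
         ("0", "b"): "c", ("1", "b"): "b",
         ("0", "c"): "b", ("1", "c"): "c"}
print(ergodic_states(states, "01", delta))   # -> {'b', 'c'}
print(delta_star(delta, "0110", "a"))        # -> 'c', an ergodic state
```

In the example, state "a" can be left for good, so it is not ergodic, while {"b", "c"} is closed under all transitions, in line with Corollary 2.4.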
Proposition 2.5. Let z be a Bernoulli sequence and U a finite automaton. Then for every state s there is an n such that δ∗(z(n), s) is an ergodic state.

The proof of the proposition rests on the following lemma.

Lemma 2.6. For every finite automaton there is a u ∈ X∗ such that δ∗(u, s) is an ergodic state for every state s.

Proof. Let S = {s_1, …, s_m} be the state set of the automaton. We show by induction on i that for every i ≤ m there is a word u^(i) such that δ∗(u^(i), s_j) is ergodic for all j ≤ i.

Case i = 1: Let t̄ ∈ S/↔ be a minimal element with s̄_1 > t̄ and choose t ∈ t̄; then s_1 → t. Hence there is a u^(1) ∈ X∗ with δ∗(u^(1), s_1) = t, and δ∗(u^(1), s_1) is ergodic.

From i to i + 1: Put r := δ∗(u^(i), s_{i+1}), let t̄ ∈ S/↔ be a minimal element with r̄ > t̄, and choose t ∈ t̄; then r → t. Choose v ∈ X∗ with δ∗(v, r) = t and set u^(i+1) := u^(i) v. Then δ∗(u^(i+1), s_{i+1}) = t, and since t̄ ∈ S/↔ is minimal, t is ergodic. For all j ≤ i, δ∗(u^(i+1), s_j) is ergodic as well, since by the induction hypothesis δ∗(u^(i), s_j) is ergodic and ergodic states pass only into ergodic states.

Proof of Proposition 2.5. Let z be a Bernoulli sequence and U a finite automaton, and let u ∈ X∗ be as in Lemma 2.6. Since z is a Bernoulli sequence, u occurs as a subword of z; choose n with z(n) ∈ X∗u. By Lemma 2.6, δ∗(z(n), s) is then ergodic for every state s.

3 Asset functions generated by finite automata

We want to consider games (in a naive sense) on sequences u ∈ X∗. The result of a game over the sequence u is a non-negative real number V(u), which indicates the capital of the player after he has played over the sequence. Altogether this yields a function V : X∗ → R⁺, where R⁺ is the set of non-negative real numbers, determined by the fixed "game strategy" which the player follows when playing over a sequence u ∈ X∗. A game is generally regarded as "fair" if the function V satisfies the martingale property (M):

\[ V(u) = \frac{1}{p} \sum_{x \in X} V(ux), \qquad u \in X^{*}. \tag{M} \]

In this case the chance of winning is exactly as great as the chance of losing. In the following, an asset function always means a function satisfying property (M); in today's terminology, such functions are martingales.

Intuition behind the asset function: at the beginning of the game the gambler has the capital V(Λ) ∈ R⁺.
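Property (M) is a local condition that can be verified word by word. As a concrete illustration (ours, not from the paper; all names and the particular strategy are chosen only for this example), the sketch below builds the asset function of a gambler who, before each symbol, stakes half of his current capital on the next symbol being "1" at fair odds, and checks property (M) on all short words.

```python
from itertools import product

ALPHABET = "01"
P = len(ALPHABET)          # p = ||X|| = 2

def asset(u, fraction=0.5):
    """Capital V(u) after playing over u: before each symbol the player
    stakes `fraction` of the current capital on the next symbol being "1";
    a correct guess pays back p times the stake (fair odds)."""
    capital = 1.0          # V(Lambda), the initial capital
    for x in u:
        stake = fraction * capital
        capital -= stake                 # the stake is put on the table
        if x == "1":
            capital += P * stake         # correct guess: p times the stake back
    return capital

def satisfies_M(V, max_len=6):
    """Check V(u) = (1/p) * sum_{x in X} V(ux) for all words u with |u| < max_len."""
    for length in range(max_len):
        for w in product(ALPHABET, repeat=length):
            u = "".join(w)
            if abs(sum(V(u + x) for x in ALPHABET) / P - V(u)) > 1e-12:
                return False
    return True

print(satisfies_M(asset))   # -> True: this strategy defines an asset function
```

Since a correct guess multiplies the stake by p while a wrong guess loses it, the average of V(ux) over x ∈ X equals V(u), which is exactly condition (M).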