
RELIABLE CELLULAR AUTOMATA WITH SELF-ORGANIZATION

PETER GÁCS

ABSTRACT. In a probabilistic cellular automaton in which all local transitions have positive probability, the problem of keeping a bit of information indefinitely is nontrivial, even in an infinite automaton. Still, there is a solution in 2 dimensions, and this solution can be used to construct a simple 3-dimensional discrete-time universal fault-tolerant cellular automaton. This technique does not help much to solve the following problems: remembering a bit of information in 1 dimension; computing in dimensions lower than 3; computing in any dimension with non-synchronized transitions. Our more complex technique organizes the cells in blocks that perform a reliable simulation of a second (generalized) cellular automaton. The cells of the latter automaton are also organized in blocks, simulating even more reliably a third automaton, etc. Since all this (a possibly infinite hierarchy) is organized in "software", it must be under repair all the time from damage caused by errors. A large part of the problem is essentially self-stabilization: recovering from a mess of arbitrary size and content. The present paper constructs an asynchronous one-dimensional fault-tolerant cellular automaton, with the further feature of "self-organization". The latter means that, unless a large amount of input information must be given, the initial configuration can be chosen homogeneous.

Date: May 5, 2005 (last printing).
1991 Mathematics Subject Classification. 60K35, 68Q80, 82C22, 37B15.
Key words and phrases. probabilistic cellular automata, interacting particle systems, renormalization, ergodicity, reliability, fault-tolerance, error-correction, simulation, hierarchy, self-organization.
Partially supported by NSF grant CCR-920484. The author also thanks the IBM Almaden Research Center and the Center for Wiskunde and Informatica (Amsterdam) for their support during the long gestation of this project.

CONTENTS

1. Introduction 4
1.1. Historical remarks 4
1.2. Hierarchical constructions 5
2. Cellular automata 8
2.1. Deterministic cellular automata 9
2.2. Fields of a local state 10
2.3. Probabilistic cellular automata 12
2.4. Continuous-time probabilistic cellular automata 13
2.5. Perturbation 13
3. Codes 15
3.1. Colonies 15
3.2. Block codes 16
3.3. Generalized cellular automata (abstract media) 17
3.4. Block simulations 19
3.5. Single-fault-tolerant block simulation 21
3.6. General simulations 22
3.7. Remembering a bit: proof from an amplifier assumption 25
4. Hierarchy 27
4.1. Hierarchical codes 27
4.2. The active level 37
4.3. Major difficulties 38
5. Main theorems in discrete time 41
5.1. Relaxation time and ergodicity 41
5.2. Information storage and computation in various dimensions 44
6. Media 48
6.1. Trajectories 48
6.2. Canonical simulations 51
6.3. Primitive variable-period media 55
6.4. Main theorems (continuous time) 56
6.5. Self-organization 57
7. Some simulations 58
7.1. Simulating a cellular automaton by a variable-period medium 58
7.2. Functions defined by programs 60
7.3. The rule language 62
7.4. A basic block simulation 65
8. Robust media 70
8.1. Damage 70
8.2. Computation 72
8.3. Simulating a medium with a larger reach 76
9. Amplifiers 78
9.1. Amplifier frames 78
9.2. The existence of amplifiers 82
9.3. The application of amplifiers 83
10. Self-organization 87
10.1. Self-organizing amplifiers 87
10.2. Application of self-organizing amplifiers 88
11. General plan of the program 91
11.1. Damage rectangles 91
11.2. Timing 93
11.3. Cell kinds 94
11.4. Refreshing 95
11.5. A colony work period 95
11.6. Local consistency 97
11.7. Plan of the rest of the proof 98
12. Killing and creation 100
12.1. Edges 100
12.2. Killing 100
12.3. Creation, birth, and arbitration 101
12.4. Animation, parents, growth 104
12.5. Healing 105
12.6. Continuity 107
13. Gaps 110
13.1. Paths 110
13.2. Running gaps 112
14. Attribution and progress 119
14.1. Non-damage gaps are large 119
14.2. Attribution 122
14.3. Progress 124
15. Healing 125
15.1. Healing a gap 125
16. Computation and legalization 128
16.1. Coding and decoding 128
16.2. Refreshing 130
16.3. Computation rules 133
16.4. Finishing the work period 134
16.5. Legality 137
17. Communication 141
17.1. Retrieval rules 141
17.2. Applying the computation rules 144
17.3. The error parameters 149
18. Germs 150
18.1. Control 150
18.2. The program of a germ 153
18.3. Proof of self-organization 154
19. Some applications and open problems 161
19.1. Non-periodic Gibbs states 161
19.2. Some open problems 161
References 163
Notation 165
Index 169

1. INTRODUCTION

A cellular automaton is a homogeneous array of identical, locally communicating finite-state automata. The model is also called an interacting particle system. Fault-tolerant computation and information storage in cellular automata is a natural and challenging mathematical problem, but there are also some arguments indicating an eventual practical significance of the subject, since there are advantages in a uniform structure for parallel computers.

Fault-tolerant cellular automata (FCA) belong to the larger category of reliable computing devices built from unreliable components, in which the error probability of the individual components is not required to decrease as the size of the device increases. In such a model it is essential that the faults are assumed to be transient: they change the local state but not the local transition function.

A fault-tolerant computer of this kind must use massive parallelism. Indeed, information stored anywhere during computation is subject to decay and therefore must be actively maintained. It does not help to run two computers simultaneously, comparing their results periodically, since faults will occur in both of them between comparisons with high probability. The self-correction mechanism must be built into each part of the computer. In cellular automata, it must be a property of the transition function of the cells.

Due to the homogeneity of cellular automata, since large groups of errors can destroy large parts of any kind of structure, "self-stabilization" techniques are needed in conjunction with traditional error-correction.

1.1. Historical remarks. The problem of reliable computation with unreliable components was addressed in [29] in the context of Boolean circuits. Von Neumann's solution, as well as its improved versions in [9] and [23], rely on high connectivity and non-uniform constructs. The best currently known result of this type is in [25], where redundancy has been substantially decreased for the case of computations whose computing time is larger than the storage requirement.

Of particular interest to us are those probabilistic cellular automata in which all local transition probabilities are positive (let us call such automata noisy), since such an automaton is obtained by way of "perturbation" from a deterministic cellular automaton.
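To make the notion of a noisy automaton concrete, here is a minimal sketch, not taken from the paper: the rule, the neighborhood radius, and all names are illustrative assumptions. It shows a one-dimensional deterministic cellular automaton on a ring together with a perturbed version in which every cell disobeys the rule with a small probability eps, so that every local transition has positive probability.

    import random

    # Illustrative sketch only: a 1D cellular automaton on a ring of n cells.
    # 'rule' maps the triple (left neighbor, own state, right neighbor) to a new state.
    def deterministic_step(config, rule):
        n = len(config)
        return [rule(config[(i - 1) % n], config[i], config[(i + 1) % n]) for i in range(n)]

    # The "perturbation": after applying the deterministic rule, each cell
    # independently replaces its state by a uniformly random one with probability eps.
    # For eps > 0 every local transition probability is positive, i.e. the automaton is noisy.
    def perturbed_step(config, rule, eps, states=(0, 1)):
        new = deterministic_step(config, rule)
        return [random.choice(states) if random.random() < eps else s for s in new]

With eps = 0 this is the original deterministic automaton; the question discussed below is whether, for small positive eps, such a system can still retain any information about its initial configuration.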
The automaton may have, e.g., two distinguished initial configurations: x_0, in which all cells have state 0, and x_1, in which all have state 1 (there may be other states besides 0 and 1). Let p_i(x, t) be the probability that, starting from initial configuration x_i, the state of cell x at time t is i. If p_i(x, t) is bigger than, say, 2/3 for all x and t, then we can say that the automaton remembers the initial configuration forever.

Informally speaking, a probabilistic cellular automaton is called mixing if it eventually forgets all information about its initial configuration. Finite noisy cellular automata are always mixing. In the example above, one can define the "relaxation time" as the time by which the probability decreases below 2/3. If an infinite automaton is mixing, then the relaxation time of the corresponding finite automaton is bounded independently of size. A minimal requirement of fault-tolerance is therefore that the infinite automaton be non-mixing.

The difficulty in constructing non-mixing noisy one-dimensional cellular automata is that large blocks of errors, which we might call islands, will eventually occur at random. We can try to design a transition function that (except for a small error probability) attempts to decrease these islands. It is a natural idea that the function should replace the state of each cell, at each transition time, with the majority of the cell states in some neighborhood. However, majority voting among the five nearest neighbors (including the cell itself) seems to lead to a mixing transition function, even in two dimensions, if the "failure" probabilities are not symmetric with respect to the interchange of 0's and 1's, and has not been proved to be non-mixing even in the symmetric case. Perturbations of the one-dimensional majority voting function were actually shown to be mixing in [16] and [17].

Non-mixing noisy cellular automata for dimensions 2 and higher were constructed in [27]. These automata are also non-ergodic, an apparently stronger property (see the formal definition later). All our examples of non-mixing automata will also be non-ergodic. The paper [14] applies Toom's work [27] to design a simple three-dimensional fault-tolerant cellular automaton that simulates arbitrary one-dimensional arrays. Toom's original proof was simplified and adapted to strengthen these results in [5].

Remark 1.1. A three-dimensional fault-tolerant cellular automaton cannot be built to arbitrary size in physical space. Indeed, there will be an (inherently irreversible) error-correcting operation on the average in every constant number of steps in each cell.
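Returning to the example of the two initial configurations x_0 and x_1, the following sketch is again only illustrative: it runs on a finite ring rather than the infinite line, uses symmetric noise, and all parameter values are assumptions. It simulates the noisy five-neighbor majority rule and estimates p_1(x, t), the probability that cell x is in state 1 at time t when the initial configuration is x_1. On a finite array this estimate eventually drifts toward 1/2, in line with the observation that finite noisy automata are always mixing.

    import random

    # Illustrative sketch: majority vote over the five nearest neighbors
    # (two on each side plus the cell itself), followed by a symmetric fault
    # that flips the cell's new state with probability eps.
    def noisy_majority_step(config, eps):
        n = len(config)
        out = []
        for i in range(n):
            votes = sum(config[(i + d) % n] for d in (-2, -1, 0, 1, 2))
            s = 1 if votes >= 3 else 0
            if random.random() < eps:
                s = 1 - s  # transient fault: the cell takes the wrong state
            out.append(s)
        return out

    # Monte Carlo estimate of p_1(x, t) for x = 0, starting from the all-1 configuration x_1.
    def estimate_p1(n=200, t=500, eps=0.05, runs=100):
        hits = 0
        for _ in range(runs):
            config = [1] * n
            for _ in range(t):
                config = noisy_majority_step(config, eps)
            hits += config[0]
        return hits / runs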