IMPROVED PATTERN GROWTH AND RECONFIGURATION METHODS FOR A

FAULT-TOLERANT CELLULAR ARCHITECTURE

by

Bryan Arthur Brighton

Thesis submitted to the Faculty of the

Virginia Polytechnic Institute and State University

in partial fulfillment of the requirements for the degree of

Master of Science

in

Electrical Engineering

Approved:

Dr. J. C. McKeeman, Chairman Dr. F. G. Gray

Dr. J. R. Armstrong Dr. M. Nadler

November, 1987

Blacksburg, Virginia

DEDICATION

This thesis is dedicated to the memory of my wife, Dr. Marian Hahn Kim, who died of cancer on August 18, 1987, two months after completing her Ph.D. in Pharmacology at the University of Michigan.

IMPROVED PATTERN GROWTH AND RECONFIGURATION METHODS FOR A

FAULT-TOLERANT CELLULAR ARCHITECTURE

by

Bryan Arthur Brighton

Committee Chairman: Dr. John C. McKeeman

Electrical Engineering

(ABSTRACT)

The subject of three dissertations and a thesis written at Va. Tech under the direction of Dr. Gray has been the development of a fault-tolerant parallel architecture. The main thrust of this research has been distributing the control of the parallel architecture using the ideas behind cellular automata. The control has been distributed because a single control unit constitutes a "hard-core," in that a single failure in the control unit can bring down the whole system.

The parallel architectures for which these distributed control methods are relevant are described in Chapter I.

These architectures include systolic arrays and ensemble architectures, such as Snyder's CHiP computer. It may be possible to extend the same ideas to the distributed control of other architectures, but this has not been investigated here. Chapter II gives the mathematical background of cellular automata in which the control methods are couched.

Chapter III describes a new method for growing patterns of control states and analyzes the new method and previous methods. Chapter IV describes the corrections and improvements to Kumar's distributed reconfiguration methods that were found necessary after simulation. Chapter V documents the simulator used to verify the pattern growth and reconfiguration algorithms. The simulator may be easily extended to include solutions to the problems of distributed fault diagnosis and distributed I/O after the solutions to these problems are fully developed.

ACKNOWLEDGMENTS

I appreciate the time every member of the committee has taken to read and comment on this thesis. I would like to thank Dr. Gray for his invaluable course instruction and long-distance advice. Comments from both Dr. Gray and Dr. Armstrong motivated the addition of more sections analyzing and comparing my work to previous work. Dr. Nadler's valuable comments on the thesis are also greatly appreciated. Dr. John C. McKeeman advised and aided in the preparation of the thesis while serving as the local principal investigator during Dr. Gray's sabbatical year. Dr. Gray gave some excellent editorial advice on the pattern growth and reconfiguration chapters during Dr. McKeeman's year in industry.

I would like to thank my wife, Marian H. Kim, and my parents, Arthur and Helen Brighton, for their encouragement.

This research was financially supported, in part, by the Army Research Office under grant DAAG29-82-K-0102, and the author gratefully acknowledges this support.

TABLE OF CONTENTS

Abstract ...... ii
Acknowledgements ...... iv
Table of Contents ...... v
List of Figures ...... viii
List of Tables ...... ix

Chapter I. Introduction ...... 1
    Parallel Architectures ...... 1
        Systolic Arrays ...... 1
        Ensemble Architectures ...... 2
    Fault Tolerance in Parallel Architectures ...... 7

Chapter II. Mathematical Background ...... 10
    Motivation for Mathematical Abstraction ...... 10
    Definition of Tessellation Automata ...... 11
    History of Tessellation Automata ...... 13

Chapter III. Pattern Growth ...... 24
    Overview ...... 24
    Review and Analysis of the GPGM ...... 27
    Proposed Method for Pattern Growth ...... 43
    Analysis of the BPGM ...... 54
    Discussion of Pattern Growth Complexity ...... 59

Chapter IV. Changes in Kumar's Rules ...... 63
    Introduction ...... 63
    Test Result Interpretation ...... 66
    The Determination of Fault-Free Spaces ...... 69
    The Need for a Diagonal Space Value ...... 71
    Passing Diagonal Space Values ...... 76
    Computing Diagonal Space Values ...... 79
    Communication of Control Information ...... 79
    Reconfiguration Step Sequencing Mechanism ...... 87
    Neutralization of Superfluous Reconfiguration Sources ...... 90
    The Clearing of State Registers in an Array ...... 96
    Internal Seeding ...... 98
    Seed Migration ...... 102
    Collisions between Patterns and Quarantine Regions ...... 110
    External Seeding into Faulty Regions ...... 111

Chapter V. The Simulator ...... 114
    Overview ...... 114
    Choice of Pascal as a Simulation Language ...... 115
    The User Interface ...... 118
    Data Structures ...... 125
    Routine Descriptions ...... 133
    Example of Pattern Growth and Reconfiguration ...... 153

References ...... 171

Appendix A. ARRAYSIM ...... 179
Appendix B. Pattern Growth Parameters and Tables ...... 213
Appendix C. Maximum Dimension and Bloomtime ...... 215

Vita ...... 216

FIGURE LIST

Figure 1. CHiP Family Architectures ...... 4
Figure 2. Binary Tree Configuration ...... 5
Figure 3. Computation and Control Hyperplanes ...... 8
Figure 4. Von Neumann Neighborhood ...... 15
Figure 5. Moore Neighborhood ...... 16
Figure 6. Pattern Growth, Stability, and σ ...... 19
Figure 7. Embedding of Banyan Network ...... 28
Figure 8. Switch State Assignment ...... 29
Figure 9. First 3 Intermediate Patterns using GPGM ...... 30
Figure 10. Last Intermediate Pattern using GPGM ...... 31
Figure 11. Final Pattern ...... 32
Figure 12. Next State Mapping Table for GPGM ...... 34
Figure 13. Final State Table for the BPGM ...... 47
Figure 14. Pattern Growth using BPGM, t=1,...,3 ...... 48
Figure 15. Last Intermediate Pattern of Positions ...... 49
Figure 16. Space Values in a Fault-Free Array ...... 72
Figure 17. Space Values in an Array with Faults ...... 73
Figure 18. Array Space Wasted using GPGM ...... 75
Figure 19. Passing DSVs without Diagonal Connections ...... 77
Figure 20. Passing Information North and West ...... 81
Figure 21. Passing Information South and East ...... 82
Figure 22. Registers and Buffers ...... 113
Figure 23. Call Structure of Simulator ...... 134

TABLE LIST

TABLE 1: Number of Distinct Intermediate Neighborhoods to Grow a Small Banyan Network Using the GPGM ...... 37
TABLE 2: Number of Distinct Final Neighborhoods in Banyan Network Example ...... 38

CHAPTER I

INTRODUCTION

1.1 PARALLEL ARCHITECTURES

1.1.1 SYSTOLIC ARRAYS

Kung and Leiserson [23,24] have proposed a paradigm for parallel architectures that allows a high processor utilization and a minimum of time spent in I/O activities. The idea is to pump data through the array of processors much the same as the heart pumps blood through the body. To do this, specific algorithms are analyzed and the processors are then interconnected in a manner that limits the data movement to adjacent processing elements. The new systolic array version of the algorithm has a reduced time complexity due to the increased parallelism and reduced I/O. The cost of such parallel architectures can be minimal if the processors used are all identical and the interconnection network is regular: uniformity is a great source of leverage in VLSI design. Systolic arrays might be included in a broader class of computers called algorithmically specialized processors. Example applications of such computers include LU decomposition [23], Fast Fourier Transformations [24] (and other similar recurrence relations), and searching and sorting in tree-connected processors [6]. The only major drawback of such machines is that they are overly specialized. Different programs, or even separate portions of the same program, may require more than one such machine.

1.1.2 ENSEMBLE ARCHITECTURES

A number of parallel computers have been proposed that allow us to embed more than one algorithmically specialized processor in the same machine. Seitz [43] gives a good comparison of several of these "ensemble architectures." Probably the best known and best conceived of these architectures is Snyder's CHiP computer [46].

The configurable, highly parallel, or CHiP computer is a multiprocessor architecture that provides a programmable interconnection structure integrated with the processing elements. Its objective is to provide the flexibility needed to compose general solutions while retaining the benefits of uniformity prevalent in algorithmically specialized processors.

The CHiP computer is actually a family of architectures, each constructed from three components: a collection of homogeneous microprocessors, a switch lattice, and a controller. Connections between the microprocessors (PEs) are created through the switch lattice. The switch lattice is a regular structure, formed from programmable switches connected by data paths. Each switch in the lattice contains local memory capable of storing several configuration settings. A configuration setting enables the switch to establish a direct circuit connection between two or more of its incident data paths. Figure 1 provides examples of possible switch lattice structures and Figure 2 shows a possible configuration of one of these structures.

Dr. Lawrence Snyder has been researching the potential advantages of, and trade-offs between, various lattice structures for the CHiP computer family of architectures [46,47,48]. A parallel programming environment called Poker (running on a VAX 11/780) was developed for writing and running parallel programs meant to be executed on the CHiP computer. A 64-processor MIMD computer called The Pringle has been built to run programs written for the CHiP computer. "The Pringle is not a CHiP," but it gives the illusion of the CHiP machine's conflict-free,

Figure 1. CHiP Family Architectures

Figure 2. Binary Tree Configuration of Lattice Structure.

point-to-point communication using a 64-Mbit internal polled bus. The Pringle has two main advantages: it can run programs written for the CHiP computer without the time and expense of software emulation; and it can emulate various members of the CHiP family of architectures simply by changing the table by which the fast polled bus knows which processors are connected. This latter feature allows researchers the chance to gain insight into the correct design choices for CHiP computers without committing the time and expense of experimental implementations.

Gollakota [14] wrote a chapter in his thesis on the various interconnection networks embeddable in a processor-switch lattice, with some focus on the fault tolerance of such interconnection networks. Gollakota also discussed the choice of switch complexities, exploring the trade-offs between the capabilities of more complex switches versus the memory required to contain the states corresponding to switch settings and the pin limitations in passing these states. He concluded that for a corridor width of 2, a switch crossover capability of g=2 was a good design choice. I have kept my methods general enough to accommodate any corridor width or crossover capability.

1.2 FAULT-TOLERANCE IN PARALLEL ARCHITECTURES

An external controller for a CHiP computer or a systolic array constitutes a potential hard-core. A single failure in the controller could mean the failure of the entire system. Common approaches to avoid such a failure are to build the controller to be ultra-reliable, or use triple modular redundancy for the controller and vote on the proper controller output.

A novel approach to eliminate the hard-core controller is to distribute the control throughout the entire array so that there is no longer a single overlooking controller, but an array of controllers, each communicating with its neighbors to determine the proper common action. In the architecture proposed by Dr. Gray, with each processing element (computation cell) we associate a control unit (control cell). Figure 3 shows the conceptual picture of the relationship between the distributed control units and the processing elements in the parallel architecture.

Distributed methods have been developed that allow the control cells to perform all the functions of the single, hard-core controller, including configuring switches and assigning functions to processors. The distributed

Figure 3. Computation and Control Hyperplanes

method by which the switches are configured and the processors are assigned functions is termed pattern growth and is fully explained in Chapters II and III.

Another important aspect of fault tolerance in parallel architectures is the switching out of failed or intermittently failing processors. Kumar [22] has developed a distributed control scheme that allows the array to quarantine bad processors and to move the configuration to a fault-free portion of the array, in a distributed manner. In Chapter IV, I give the corrections, improvements, and clarifications that I found necessary in Kumar's Rules governing reconfiguration in two-dimensional arrays.

Lastly, in Chapter V, I present the simulator that I developed and used to debug and improve the pattern growth and reconfiguration algorithms. The simulator may be used in future research and development of a distributed I/O method and for further improving the reconfiguration and pattern growth algorithms.

CHAPTER II

MATHEMATICAL BACKGROUND

2.1 MOTIVATION FOR MATHEMATICAL ABSTRACTION

Leverage in VLSI (Very Large Scale Integrated) circuit and system design is achieved by designing compact cells and then replicating these cells to form the desired system or system component. For example, a properly designed dual-port register cell can be repeated in a row to form a register, and these rows can in turn be stacked to form a register file. This aspect of modularity is discussed by Mead and Conway in their text on VLSI circuit and system design [33]. Another example of leverage is the economy of scale achieved by mass-producing a well-designed microprocessor chip.

If a compact and efficient design for the control and computation cells can be found, then these designs can be replicated several times to form larger computer systems. This and their speed are two reasons why systolic arrays and other microprocessor systems are gaining in popularity. It is highly desirable, then, to produce a design for the control cell that allows fabrication of just one design for every chip in the system. In other words, the control cells must be identical to take advantage of the leverage of replication in VLSI circuit and system design.

Other researchers have applied tessellation automata theory to the design and study of such systems as biological reproduction [7,16,37,55], language recognition [19,45], image processing [41], numerical computations [7], and information retrieval systems [7]. Research under Dr. Gray at Virginia Tech [14,22,30,57] has focused on developing the theory of tessellation automata toward the design and modeling of the distributed control of parallel processing systems. Tessellation automata form a mathematical basis for the design of the identical control cells.

2.2 DEFINITION OF TESSELLATION AUTOMATA

A tessellation automaton can be defined as an array of identical finite-state machines. The inputs and outputs of each machine are directly connected to only a finite number of neighboring machines, and each machine is connected to its neighbors in a uniform way throughout the array. Each machine can change state only at discrete time steps and the state transitions are a function of the states of the machines in the uniformly defined, finite set of neighboring machines.

A Tessellation Automaton can be formally defined as a quadruple

    (A, E^d, X, σ)

where

1. A is a finite nonempty set called the state alphabet. Every machine has A as its state set.

2. E^d is the tessellation space for the array. We refer to d as our tessellation dimension. The position of each cell in the tessellation space E^d can be specified as a d-tuple of integers (i_1, ..., i_d), which is abbreviated as i. As an example, if d=2 every cell lies in a common plane and can be specified by a pair of coordinates (i_1, i_2).

3. X, the neighborhood index, is an n-tuple of distinct d-tuples of integers. The neighborhood index specifies to which neighbors a machine's inputs and outputs are connected. For n=3 and d=1, X=(-1,0,1) would be the neighborhood index of a cell connected to its left neighbor, itself, and its right neighbor. The neighborhood of cell i, N(X,i), is the set of cells near cell i at the points indicated by X.

4. σ, the local transformation, is a mapping from the set of all possible neighborhood configurations, A^n, to A, the state alphabet for a cell (i.e., σ : A^n -> A). The neighborhood configuration, c(N(X,i)), describes the states of the cells at every point in the cell's neighborhood. The next content of cell i, c'(i), is the result of applying σ to a particular configuration c(N(X,i)), i.e., σ(c(N(X,i))) = c'(i). The present and next configurations of the entire array are abbreviated as c and c', respectively.
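The definition above translates directly into a simultaneous-update rule. The following Python fragment is an illustrative sketch of my own (not part of the thesis); it implements one step of a one-dimensional (d=1) tessellation automaton, treating cells outside the finite window as quiescent.

```python
# Sketch of one synchronous step of a tessellation automaton for d=1.
#   config : list of cell states (the configuration c)
#   X      : neighborhood index, an n-tuple of integer offsets
#   sigma  : local transformation, mapping an n-tuple of neighbor
#            states to the cell's next state
# Cells outside the finite window are treated as quiescent ('0').

def step(config, X, sigma, quiescent='0'):
    """Apply sigma simultaneously to every cell: c'(i) = sigma(c(N(X,i)))."""
    def state(i):
        return config[i] if 0 <= i < len(config) else quiescent

    return [sigma(tuple(state(i + dx) for dx in X)) for i in range(len(config))]

# Example sigma: a 3-neighbor majority rule over the alphabet {'0', '1'}.
# (The rule is my own toy example, chosen because it is stable: c' = c here.)
X = (-1, 0, 1)
sigma = lambda nbhd: '1' if nbhd.count('1') >= 2 else '0'

c = list('0011100')
c_next = step(c, X, sigma)   # majority rule leaves this pattern unchanged
```

A block of three 1s is an equilibrium configuration for this σ, which previews the notion of stability discussed later in this chapter.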

2.3 HISTORY OF TESSELLATION AUTOMATA

Von Neumann was the first person known to have worked on cellular automata, and his work was published posthumously by Burks [55]. Von Neumann's model used 29 states and was developed as a model of biological processes as well as a potential model for a computer. He used an index of X = ((0,1), (-1,0), (0,0), (1,0), (0,-1)), which we now call the "von Neumann neighborhood." The relationship between cells in the von Neumann neighborhood is shown in Figure 4. His construction was left unfinished (although completed by Burks) and somewhat impractical, but it sparked the imaginations of several other researchers.

Moore [34,35] was the first person to use the term tessellation and developed some of the terminology of tessellation automata. His work was based on a neighborhood that includes the N, S, E, and W cells as von Neumann's does, and also includes the diagonals. Figure 5 shows the Moore neighborhood. Moore proved the existence of certain configurations that could not be reached unless the automaton started out in them. These were termed "Garden of Eden" configurations. In work by Kumar and Gollakota, methods were developed to clear the array so that the initial situation of a seed in a quiescent environment is always reachable.

Yamada and Amoroso [60] were the first to present a formal definition of tessellation automata. Some fundamental proofs on various aspects of tessellation automata were also developed by them.

             N
           i-1,j
           x,y+1

      W      M      E
    i,j-1   i,j   i,j+1
    x-1,y   x,y   x+1,y

             S
           i+1,j
           x,y-1

Figure 4. Von Neumann Neighborhood. At least 3 different notations are used to specify the position of cells in the von Neumann neighborhood.

      NW        N        NE
    i-1,j-1   i-1,j   i-1,j+1
    x-1,y+1   x,y+1   x+1,y+1

       W                  E
     i,j-1     i,j     i,j+1
     x-1,y     x,y     x+1,y

      SW        S        SE
    i+1,j-1   i+1,j   i+1,j+1
    x-1,y-1   x,y-1   x+1,y-1

Figure 5. Moore Neighborhood. At least 3 coordinate notations are used throughout this document to describe the relative position of neighbors in the Moore neighborhood.
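The two classical neighborhood indices for d=2 can be written out explicitly as offset tuples. The short Python check below is my own illustration (not from the thesis); the von Neumann tuple is the index X quoted above, and the Moore index adds the four diagonals.

```python
# Von Neumann neighborhood index: the cell itself plus N, S, E, W,
# written exactly as the offset tuple X given in the text.
von_neumann = ((0, 1), (-1, 0), (0, 0), (1, 0), (0, -1))

# Moore neighborhood index: all offsets with each coordinate in {-1, 0, 1},
# i.e. the von Neumann cells plus the four diagonal neighbors.
moore = tuple((dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1))

# The Moore index contains the von Neumann index as a subset.
assert set(von_neumann) <= set(moore)
assert len(von_neumann) == 5 and len(moore) == 9
```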

Several early researchers were concerned with the ability of the automaton to "reproduce" itself [1,2,34,35,37,55]. Reproduction can be loosely defined as the ability of the automaton to recreate patterns identical to the original pattern without destroying the original pattern. While we would like our computer to be able to recreate its pattern, or "reconfigure," we do not want the global control pattern in the array to produce multiple copies of itself. However, reproduction may be applicable to pattern growth when we wish to repeat a subpattern that forms the basic building block of the global pattern. (A method to synchronize the transition to the final pattern would also be necessary.) Theorems about pattern reproduction should not be confused with theorems about pattern reconfiguration. Reconfiguration may be loosely defined as moving the global pattern to a fault-free portion of the array.

Thatcher [49,50] proved that for every tessellation automaton there exists a behaviorally equivalent Turing Machine. While inefficient, Turing Machines are considered to be the ultimate in computational power. Thus, a Turing Machine is at least capable of solving any problem that can be programmed on a tessellation automaton.

Walters [57] showed that, just as with Turing Machines, while stability for an arbitrary TA is unsolvable, if we properly choose σ then the TA will be "stable." When we talk about stability, we mean that the pattern in the array eventually reaches an equilibrium where c' = c (i.e., the next configuration equals the present configuration) for all cells at all future points in time. For control patterns it is desirable to have our final configuration, once reached, remain stable. According to Walters' work, then, our final configuration c_f will be stable if we define our local transformation so that σ(c_f) = c_f for all points in time after achieving c_f. Figure 6 illustrates the idea of stability in pattern growth. Note that the pattern MARIAN is mapped to itself by the local state mapping σ.

Local State Mapping σ : c(N(X,i)) -> c'(i) with Neighborhood Index X = (-1,0,1)

    00a -> b    0a0 -> c    a00 -> b
    00b -> d    0bc -> e    bcb -> d    cb0 -> f    b00 -> d
    00d -> M    0de -> A    ded -> R    edf -> I    dfd -> A    fd0 -> N    d00 -> 0
    00M -> 0    0MA -> M    MAR -> A    ARI -> R    RIA -> I    IAN -> A    AN0 -> N    N00 -> 0

    Time Step    Pattern
    t=1          0 0 0 0 0 a 0 0 0 0
    t=2          0 0 0 0 b c b 0 0 0
    t=3          0 0 0 d e d f d 0 0
    t=4          0 0 M A R I A N 0 0
    t=5          0 0 M A R I A N 0 0

Figure 6. Pattern Growth, Stability, and σ. This example shows the growth of a pattern in a one-dimensional array. A cell looks at its left neighbor's state, its own state, and its right neighbor's state to determine its own next state.
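The growth in Figure 6 is small enough to check mechanically. The Python sketch below is my own verification aid (the rule table is transcribed from the figure); it applies σ with X=(-1,0,1), with unlisted neighborhoods mapping to the quiescent state 0, and reproduces the time steps shown.

```python
# Local state mapping sigma from Figure 6, with neighborhood index
# X = (-1, 0, 1).  Any neighborhood not listed maps to quiescent '0'.
RULES = {
    '00a': 'b', '0a0': 'c', 'a00': 'b',
    '00b': 'd', '0bc': 'e', 'bcb': 'd', 'cb0': 'f', 'b00': 'd',
    '00d': 'M', '0de': 'A', 'ded': 'R', 'edf': 'I',
    'dfd': 'A', 'fd0': 'N', 'd00': '0',
    '00M': '0', '0MA': 'M', 'MAR': 'A', 'ARI': 'R',
    'RIA': 'I', 'IAN': 'A', 'AN0': 'N', 'N00': '0',
}

def step(c):
    """One synchronous update of the one-dimensional array c."""
    padded = '0' + c + '0'   # cells beyond the window are quiescent
    return ''.join(RULES.get(padded[i:i + 3], '0') for i in range(len(c)))

c = '00000a0000'             # t=1: a single seed in a quiescent array
for t in range(2, 6):        # advance to t=5
    c = step(c)
# By t=4 the pattern MARIAN has appeared, and sigma maps it to itself,
# so the configuration at t=5 (and ever after) is unchanged.
```

Running the sketch confirms the stability property: the final configuration satisfies c' = c.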

Walters proposed that a tessellation automaton govern the global function of an array of computational elements. Each cell in the controlling array (tessellation automaton) determines the function of the associated cell in the computing array. For some given global configuration, c, of the TA, there corresponds an arrangement of functional blocks in the computing array. This arrangement defines the global function of the computing array. Walters suggested partitioning the state alphabet, A, of the tessellation automaton into a set, A_f, of states corresponding in a 1-1 fashion with functions, and a set, A_s, of states used in the "synthesis" of the final global configuration. It is very desirable to have a stable global configuration if we wish to maintain our global function for more than one time step. This desired stability will be achieved by defining our σ so that the final configuration will map to itself. This becomes important later when we define the methods by which the array of control cells achieves and maintains its final configuration.

Martin [30] investigated several aspects of the fault tolerance of tessellation automata. The idea of unique subpatterns played an important role in both pattern growth and in the detection of faulty cells. He defined an (l,n)_d pattern to be a d-dimensional pattern of size l in each dimension having all subpatterns of size n in each dimension unique. With unique subpatterns, a cell knows exactly where it is in the pattern and can, therefore, uniquely choose its next state. If a cell finds itself in an "illegal neighborhood," it can move to a "quarantine" state at the next time step, in an attempt to isolate the faulty cell in its neighborhood. The array could then clear and "reconfigure." Reconfiguration here means moving the pattern to a fault-free portion of the array.

A straightforward implementation of such a control cell would involve a register to hold the cell's state, connections to all neighbors of the cell, and a large ROM which uses the states in the cell's neighborhood as an address to locate the cell's next state. Martin used a neighborhood in which each cell was connected to 14 other cells besides itself. This means there are potentially (ns)^14 entries in the ROM, where ns is the number of states in the cell's state alphabet. For 5 states we could have over 6 billion entries, which is a rather large ROM. Many configurations map to the same state. For example, illegal neighborhoods map to a "Quarantine" state. Thus, there must be ways of reducing this huge number of entries. Having a large neighborhood also increases the number of connections that a cell must have to communicate with other cells. With 14 other cells in the neighborhood, even with only 3 bits to represent a state, there would be 42 I/O lines for a cell.
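The ROM-size and pin-count estimates above are easy to reproduce. The arithmetic below is my own quick check, matching the figures quoted in the text:

```python
ns = 5                # states in the cell's state alphabet
neighbors = 14        # other cells in Martin's neighborhood
bits_per_state = 3    # 3 bits suffice to encode 5 states

# One ROM entry per possible neighborhood configuration of the 14 neighbors.
rom_entries = ns ** neighbors          # 5^14 = 6,103,515,625 ("over 6 billion")

# One state's worth of lines to or from each neighboring cell.
io_lines = neighbors * bits_per_state  # 14 * 3 = 42 I/O lines
```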

There were other problems with Martin's methods aside from the difficulty in computing the next state and the large number of I/O connections. Martin's methods of reconfiguration wasted a large number of cells. All cells in the previous pattern were mapped to a Q (quarantine) state, not just the cells surrounding the faulty cell(s).

A great accomplishment of Martin's was showing that it is possible for a tessellation automaton to switch out bad cells and regrow an identical pattern in a fault-free region of the array. Even though some of his methods did not lend themselves to a practical implementation, they showed that methods do exist for parallel architectures to be self-testing and self-repairing. (This is not to be confused with Avizienis's STAR computer [3], which provides a good example of the concepts of self test and repair in non-parallel architectures through its use of N-Modular Redundancy.)

Another interesting problem in tessellation automata is the firing squad synchronization problem (FSSP) [36]. In mathematical terms, the idea is to get the array of cells to simultaneously reach the same final state at the same time. This process is begun when a cell on one end is told to "fire when ready." The tricky part is to use a constant number of states. It has been shown by Balzer [4] that the problem can be solved, for a one-dimensional firing squad, with 8 states in time 2(n-1), where n is the number of soldiers. Other solutions have been presented for the 2- and 3-dimensional cases [44].

A solution to this problem does not directly apply to the growth of arbitrary patterns, since all cells are mapped to the same final state. However, it may be of use in some pattern growth methods that do not have a built-in method of synchronizing the time at which the final pattern appears. Having all control cells reach their final state synchronously is important if we desire the corresponding cells in the computation plane to begin processing simultaneously. A pattern as large as the final pattern would have to be grown before the FSSP solution is applied, so that the boundaries within which the messages are passed are fixed. In some respects, it may be just as easy to count off the time, because counters are fairly cheap, even though the size of the counter may have to be increased as new, larger patterns are added.

CHAPTER III

PATTERN GROWTH

3.1 OVERVIEW

Our objective is to assign tasks to individual computation plane cells by growing patterns of states in the control plane. To each cell performing a distinct task or distinct set of tasks in the final pattern, we associate a distinct local state. For a processor, the word "task" refers to a computational step such as multiplying two values received from two neighboring cells, adding the result to a previously stored result, and outputting this value to a third neighboring cell. A task could also correspond to a series of sorting or data manipulation steps, as might be the case in database applications. If control cells, and thus local states, are also associated with switches, then a task may mean connecting the data path from the south to the east and a data path from the north to the west.

If one takes the perspective of looking down on the array from above, then the arrangement of nonzero states in the array forms a "pattern." As state information spreads from cell to cell, the pattern of states appears to grow. For this reason, the collection of local transformations by which we reach the point where all control cells contain their proper local state is termed a pattern growth method.

The Gollakota Pattern Growth Method (GPGM) was proposed by Mr. Gollakota in his Master's thesis at Virginia Tech, under the direction of Dr. Gray. Since Gollakota did not analyze his method, I will present an analysis of his method in section 3.2. A short review of his method is also given in section 3.2. In section 3.3, I propose a new pattern growth method, hereafter abbreviated as the BPGM for "Brighton Pattern Growth Method." In section 3.4, I present an analysis of the BPGM. The analysis presented shows that the BPGM is better than the GPGM in terms of both memory and time complexity. I will also show that both methods have the same intercellular I/O requirements. As shown in sections 3.2 and 3.4, neither method would eliminate the dependence of the implementation on pattern size. In section 3.5, I will explain why all positional pattern growth methods, including the GPGM and the BPGM, must depend, to some degree of complexity, on the size of the final pattern.

Notation used to abbreviate the presentation is listed below. Some of the notation is unique to this presentation.

Notation:

    L(x) = ⌈log2(x)⌉ = "the smallest integer greater than or equal to the log base 2 of x."
    lg(x) = log2(x) = "the log base 2 of x."
    B = Bloomtime = "time at which final pattern appears."
    X = "maximum pattern dimension along X-axis."
    Y = "maximum pattern dimension along Y-axis."
    D = max(X, Y) = "maximum pattern dimension."
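The ceiling-of-log notation can be stated as a one-line helper. The Python fragment below is a sketch of my own that mirrors the definition of L(x):

```python
import math

def L(x):
    """L(x) = ceil(log2(x)): the smallest integer >= the log base 2 of x."""
    return math.ceil(math.log2(x))

# For example, L(8) = 3 exactly, while L(9) rounds up to 4.
```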

We would like to perform an analysis of a PGM in terms of its growth parameters, but should this analysis be in terms of B, X, Y, or D? The order of the worst case turns out to be the same whether we use the maximum pattern dimension D or the Bloomtime B. It also turns out that a memory or time complexity is most easily expressed in terms of B, because more than one pattern shape can have the same Bloomtime. The relationship between B and D is shown in an appendix.

3.2 REVIEW AND ANALYSIS OF THE GPGM

In order to compare the BPGM with the GPGM, I must first analyze the GPGM. By analysis, I am referring to the time and memory complexity of an implementation of the method. The term memory is used instead of space to differentiate the concept of memory space from that of cellular array space.

To help us analyze how much memory is needed to grow a pattern we need to make a couple of definitions. Let us define an intermediate state as a state that appears only in the growth of the pattern but not in the final pattern. Let us also define an intermediate pattern as a pattern consisting solely of intermediate states. We can separate the analysis of the resources needed to map the intermediate patterns to the next pattern, from the analysis of those resources needed to map from the last intermediate pattern to the final pattern, and from the final pattern to itself. The following example should help to explain these terms, and to make the concept of pattern growth more concrete.

Figure 7 shows the embedding of a small banyan network in the processor-switch lattice, and figure 8 shows the switch state assignments, some of which correspond to final local states for switches in the array. In the example of the

Figure 7. Embedding of Banyan Network

Figure 8. Switch State Assignment Procedure

Integers outside parentheses indicate direction (i.e. 1=N, 2=NE, 3=E, 4=SE, 5=S, 6=SW, 7=W, 8=NW). Integers inside parentheses indicate state.

t = 1:

0 0 0
0 a5 0
0 0 0

t = 2:

0 0 0 0 0
0 0 a1 0 0
0 a2 a3 a2 0
0 0 a1 0 0
0 0 0 0 0

t = 3:

0 0 0 0 0 0 0
0 0 0 a4 0 0 0
0 0 a5 a6 a5 0 0
0 a5 a4 a6 a7 a6 0
0 0 a5 a6 a5 0 0
0 0 0 a4 0 0 0
0 0 0 0 0 0 0

Figure 9. First 3 Intermediate Patterns using GPGM

0 0 0 0 0 0 0 0 a43 0 0 0 0 0 0 0
0 0 0 0 0 0 0 a44 a45 a44 0 0 0 0 0 0
0 0 0 0 0 0 a44 a43 a45 a46 a45 0 0 0 0 0
0 0 0 0 0 a45 a43 a46 a44 a46 a47 a46 0 0 0 0
0 0 0 0 a46 a43 a47 a44 a47 a45 a47 a48 a47 0 0 0
0 0 0 a47 a43 a48 a44 a48 a45 a48 a46 a48 a49 a48 0 0
0 0 a48 a43 a49 a44 a49 a45 a49 a46 a49 a47 a49 a50 a49 0
0 a49 a43 a50 a44 a50 a45 a50 a46 a50 a47 a50 a48 a50 a51 a50
a50 a43 a51 a44 a51 a45 a51 a46 a51 a47 a51 a48 a51 a49 a51 a52 a51
0 a49 a43 a50 a44 a50 a45 a50 a46 a50 a47 a50 a48 a50 a51 a50
0 0 a48 a43 a49 a44 a49 a45 a49 a46 a49 a47 a49 a50 a49 0
0 0 0 a47 a43 a48 a44 a48 a45 a48 a46 a48 a49 a48 0 0
0 0 0 0 a46 a43 a47 a44 a47 a45 a47 a48 a47 0 0 0
0 0 0 0 0 a45 a43 a46 a44 a46 a47 a46 0 0 0 0
0 0 0 0 0 0 a44 a43 a45 a46 a45 0 0 0 0 0
0 0 0 0 0 0 0 a44 a45 a44 0 0 0 0 0 0
0 0 0 0 0 0 0 0 a43 0 0 0 0 0 0 0

Figure 10. Last Intermediate Pattern using GPGM, t=9.

0 0 0 0 0 0 0 0 0 0 0 0
0 15 8 11 15 11 7 15 7 8 15 0
0 2 0 0 14 3 12 13 0 0 2 0
0 2 0 0 13 12 3 14 0 0 2 0
0 15 8 4 15 4 9 15 9 8 15 0
0 2 12 3 2 0 0 2 12 3 2 0
0 2 3 12 2 0 0 2 3 12 2 0
0 15 0 0 15 0 0 15 0 0 15 0
0 0 0 0 0 0 0 0 0 0 0 0

Figure 11. Final pattern for Banyan Network Example, t=10.

growth of the Banyan Network, the patterns appearing between t=1 and t=9 are intermediate, and the final pattern appears at t=10. In figure 9 we see the first 3 intermediate patterns. The last intermediate pattern in figure 10 is mapped to the pattern of final local states which is shown in figure 11. For t≥10, the final pattern is mapped to itself.

For the GPGM, the easiest way to find the next state is to use the neighborhood N(X,i) as a key and search for the next state using this key. (We could also use the neighborhood as an address, but this would result in an exponential dependence on the pattern size.)
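The neighborhood-keyed lookup can be modeled directly as a table keyed on the five-cell von Neumann neighborhood. The sketch below uses illustrative placeholder entries, not Gollakota's actual state assignments:

```python
# GPGM next-state mapping sketched as a table keyed on the von Neumann
# neighborhood (NLS, ELS, LS, WLS, SLS); the entries are illustrative
# placeholders, not Gollakota's actual state assignments.
next_state = {
    (0, 0, 0, 0, 0): 0,      # all-quiescent neighborhood stays quiescent
    (0, 0, 5, 0, 0): 5,      # hypothetical: isolated seed state persists
    (0, 5, 0, 0, 0): 2,      # hypothetical: quiescent cell just west of the seed
}

def gpgm_step(grid):
    """One synchronous update: every cell looks up its neighborhood key."""
    H, W = len(grid), len(grid[0])
    out = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            key = (grid[y - 1][x] if y > 0 else 0,        # N
                   grid[y][x + 1] if x < W - 1 else 0,    # E
                   grid[y][x],                            # center
                   grid[y][x - 1] if x > 0 else 0,        # W
                   grid[y + 1][x] if y < H - 1 else 0)    # S
            out[y][x] = next_state.get(key, 0)
    return out
```

A Python dictionary hides the search cost behind hashing; the hardware analysis below must instead account for every comparison in the table.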

As illustrated in Figure 12, the amount of memory used to implement the next state mapping for the GPGM is the product of the number of entries in the next state mapping table with the length of these entries. The number of entries is the sum of the number of distinct intermediate neighborhoods (noIN) and the number of distinct final neighborhoods (noFN). The length of each entry is 6*⌈log2(ni + nf)⌉ bits, where ni + nf = (the number of local states) = (the number of intermediate states plus the number of final states). The five local states of the von Neumann neighborhood, plus the next state, must be stored at each entry.

Entry Format:

NLS  ELS  LS  WLS  SLS  Next State
0    7    14  21   28   35          (bit positions; 42 bits/entry)

The five neighborhood fields form the KEY. LS = Local State; N = North; E = East; W = West; S = South.

720 entries * 42 bits/entry = 30,240 bits

Figure 12. Next state mapping table for GPGM for a 10x7 cell Banyan network.

Now that all of the components have been defined, the equation for the memory complexity of the GPGM can be given.

M = ( noIN + noFN ) * 6 * ⌈log2(ni + nf)⌉    Eqn. 1a

The above equation is in units of bits. If each state is placed in a separate memory location, such as a word in a RAM, then in terms of words,

M = ( noIN + noFN ) * 6    Eqn. 1b

If we make the GPGM a little more like the BPGM by adding a Bloomtime parameter B, we could save a lot of memory when the array is capable of growing more than one pattern. For all global functions, a cell could use the same table for mapping to next intermediate states and then switch over at t=B-1 to the correct table, based on global state, to look up its final state. For t≥10, still another table would be used to map the final pattern to itself. As the GPGM stands, a separate table containing intermediate and final state mappings must be used for each global function.

In the next few paragraphs, the components of the memory complexity equation are derived. Since the time required to perform the next state mapping is the time required to look up the next state in the memory, once the memory complexity is found, the time complexity follows directly.

Table 1 shows the number of distinct neighborhoods for the running example of the growth of a small Banyan Network using the GPGM. Gollakota incorrectly listed a number that is about 150 higher than the number computed here, and he did not derive a formula for the number of distinct neighborhoods. In general, for the intermediate patterns, the number of distinct neighborhoods at a particular time step is given by the formula t^2 + (t+1)^2. Therefore,

noIN = Σ (from t=1 to B-1) of ( t^2 + (t+1)^2 )

     = (2*B^3 + B - 3)/3    Eqn. 2
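Eqn. 2 can be checked numerically against the per-step sum; the snippet below is a sanity check, not part of the original derivation:

```python
def noIN(B):
    # distinct intermediate neighborhoods: t^2 + (t+1)^2 summed over t = 1 .. B-1
    return sum(t * t + (t + 1) * (t + 1) for t in range(1, B))

def noIN_closed(B):
    # the closed form of Eqn. 2
    return (2 * B**3 + B - 3) // 3

assert all(noIN(B) == noIN_closed(B) for B in range(2, 50))
print(noIN(10))   # 669, matching the sum of the per-step column of Table 1 for B = 10
```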

Table 2 shows the number of distinct neighborhoods for the final pattern. This number is dependent on the number of final states and their arrangement. The examples below illustrate the maximum and minimum number of distinct neighborhoods.

Example of a (3,6) pattern where the number of distinct neighborhoods = 36:

A B C D E F
G H I J K L
M N O P Q R

In general, the Maximum Number of Distinct Final Neighborhoods = X*Y + 2*(X + Y).

TABLE 1

Number of Distinct Intermediate Neighborhoods to Grow a Small Banyan Network using the GPGM.

Time   States      Number of   Number of        Number of Distinct
Step   Present     States      Cells w/States   Neighborhoods

 1     a5           1             1                5
 2     a(1..3)      3             5               13
 3     a(4..7)      4            13               25
 4     a(8..12)     5            25               41
 5     a(13..18)    6            41               61
 6     a(19..25)    7            61               85
 7     a(26..33)    8            85              113
 8     a(34..42)    9           113              145
 9     a(43..52)   10           145              181
10     1..15       15            52               50

Total Number of States = 68
Total Number of Distinct Neighborhoods = 720

TABLE 2

Number of Distinct Final Neighborhoods in Banyan Network Example

Final   Number in   Number with Distinct
State   Pattern     Neighborhoods

  0       52           9
  1        0           0
  2       12           6
  3        6           5
  4        2           2
  5        0           0
  6        0           0
  7        2           2
  8        4           4
  9        2           2
 10        0           0
 11        2           2
 12        6           5
 13        2           2
 14        2           2
 15       12           9

Total Number of Final Neighborhoods in Pattern = 104
Total Number of Distinct Final Neighborhoods = 50

Example of a (3,6) pattern where the number of distinct neighborhoods = 13:

A A A A A A
A A A A A A
A A A A A A

In general, the Minimum Number of Distinct Final Neighborhoods = 13.

Although not shown, both of the above patterns are assumed to be embedded in an array of cells with quiescent states. The distinct neighborhoods of quiescent cells next to the pattern are also accounted for in the formulas. The X and Y refer to the maximum X and Y dimensions of the pattern. Formulas for the number of distinct neighborhoods of final pattern shapes other than rectangular can be derived and are of order between O(1) and O(m^2), where m = max(X, Y). The most interesting final patterns will be fairly "regular", that is, they will consist mainly of various subpatterns repeated in an orderly manner. A regular final pattern may have several distinct neighborhoods, but the number will remain constant as the size increases. We will see that the order of the memory needed to map a final pattern to itself does not affect the overall memory complexity of the GPGM.
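The two (3,6) examples can be checked by brute force. The sketch below counts distinct von Neumann neighborhoods (in the NLS, ELS, LS, WLS, SLS order of figure 12) for a pattern embedded in quiescent cells, excluding the all-quiescent neighborhood; the helper names are mine, not the thesis's:

```python
# Brute-force count of distinct von Neumann neighborhoods (N, E, center, W, S)
# for a pattern embedded in quiescent (0) cells.  Neighborhoods that are
# entirely quiescent are excluded, matching the formulas in the text.
def distinct_neighborhoods(pattern_rows):
    Y = len(pattern_rows)              # pattern Y dimension (rows)
    X = len(pattern_rows[0])           # pattern X dimension (columns)
    H, W = Y + 4, X + 4                # 2-cell quiescent border on every side
    grid = [[0] * W for _ in range(H)]
    for y in range(Y):
        for x in range(X):
            grid[y + 2][x + 2] = pattern_rows[y][x]
    seen = set()
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            hood = (grid[y - 1][x], grid[y][x + 1], grid[y][x],
                    grid[y][x - 1], grid[y + 1][x])
            if any(hood):              # skip the all-quiescent neighborhood
                seen.add(hood)
    return len(seen)

# The two (3,6) examples: A..R as distinct states 1..18, and the uniform pattern.
distinct_pattern = [[1 + 6 * r + c for c in range(6)] for r in range(3)]
uniform_pattern = [[1] * 6 for _ in range(3)]
print(distinct_neighborhoods(distinct_pattern))   # 36 = X*Y + 2*(X+Y)
print(distinct_neighborhoods(uniform_pattern))    # 13, the minimum
```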

As exemplified in table 1, the number of intermediate states, in terms of the time, is

ni = ( Σ (from t=1 to B) of t ) - 2

   = B(B + 1)/2 - 2    Eqn. 3

This equation is valid for any pattern being grown using the GPGM.
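Eqn. 3 can likewise be checked against Table 1, which introduces 1, 3, 4, ..., 10 new intermediate states at steps 1 through 9, or 53 in all for B = 10:

```python
def ni(B):
    # Eqn. 3: intermediate states = (1 + 2 + ... + B) - 2 = B(B+1)/2 - 2
    return sum(range(1, B + 1)) - 2

assert all(ni(B) == B * (B + 1) // 2 - 2 for B in range(2, 30))
print(ni(10))   # 53 intermediate states; Table 1 lists 68 states total, 15 of them final
```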

Now that all of the components for Eqn. 1 have been derived, we see that

M = O( B^3 * lg(B) ) bits    Eqn. 4a

Or if each state is placed in a separate word,

M = O( B^3 ) words    Eqn. 4b

We could substitute D for B in the above equations, since the Bloomtime B is a linear function of the maximum pattern dimension D = max(X,Y).

The time complexity for the GPGM is O(L(noIN + noFN)), since the entire table may have to be searched to find the correct next state. All cells must find their next state before the next time step; therefore the time step must be long enough for a cell to search through the entire next state table. If each comparison involves 1 state, and the table is lexicographically sorted, the number of comparisons will be no more than 5*⌈log2(noIN + noFN)⌉ in the worst case. Since noIN is O(B^3), and since log(x^3) = 3*log(x), we can express T in O-notation as

T = O( lg(B)*lg(B) ) bit comparisons    Eqn. 5a
T = O( lg(B) ) word comparisons    Eqn. 5b

where a comparison is assumed to take a fixed length of time.

We need to be concerned here with the maximum amount of time a search will take, since all cells must finish finding their next state before the next pattern growth time step. This maximum amount of time will be, in word comparisons,

Tmax = 5*⌈log2( noIN + noFN )⌉

     = 5*⌈log2( (2*B^3 + B - 3)/3 + B^2 + 2*B )⌉    Eqn. 6

The B used may have to be the maximum B of all patterns.
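For the running example (B = 10, with noFN bounded by B^2 + 2B), Eqn. 6 works out as follows; the function is a restatement of the equation, not code from the thesis:

```python
from math import ceil, log2

def tmax_word_comparisons(B):
    noIN = (2 * B**3 + B - 3) // 3    # Eqn. 2
    noFN = B * B + 2 * B              # bound used in Eqn. 6
    return 5 * ceil(log2(noIN + noFN))

print(tmax_word_comparisons(10))      # 5 * ceil(log2(669 + 120)) = 5 * 10 = 50
```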

The time needed to map to the next state may be reduced for the GPGM if an associative memory is used. Since associative memories have matching circuitry for each bit, they are necessarily more expensive than random access memories. The more expensive memory technology only aggravates the problem that the GPGM has with the large amount of memory necessary to perform the next state mapping. For this reason, and to give the comparison of the methods common ground, the same memory technology (random access) is assumed for both.

Many heuristics might be applied to improve the time and memory complexity of the GPGM; however, this is not being advocated. The basic flaw in the GPGM is that in trying to reduce the number of states while achieving unique neighborhoods, it totally ignores the increase in the complexity of determining the next state. Ideally, we would like to determine the next state either via a simple computation or a single memory reference, or possibly a combination of the two. The BPGM pays greater attention to the process of determining the next state.

3.3 PROPOSED METHOD FOR PATTERN GROWTH

The three distinct pieces of information that a cell must know during pattern growth are:

1. Position in pattern,

2. Time during pattern growth,

3. Global Function to be implemented.

From these three pieces of information, a cell that will participate in the final pattern can determine what task(s) it will perform and at what time it should begin performing the task(s). As will be explained, a cell along the edge but outside the final pattern can recognize, from its neighbor's position, global function and time, that it should not be a part of the pattern and will remain in the quiescent state.

To avoid confusion, we should keep the set of states corresponding to special reconfiguration conditions (such as Q, S, Z, and 0) distinct from the set of states corresponding to local tasks in the final configuration. We reserve the lower 20 values of the Local State Register for these special reconfiguration states, leaving all other higher values to correspond to local tasks. So far, only 10 of these special state values have been used.

The Pattern Growth Registers are XR, YR, and TR. Registers XR and YR hold the cell's X and Y coordinates that specify the cell's position within the pattern during the process of pattern growth. The Time Register TR holds the current time during pattern growth while a cell is participating in pattern growth. The XR and YR contents are computed from the positions passed by neighboring cells. The appropriate TR contents can be derived from the position when it is first computed.

Let us associate a look-up table with each global function. Since the cell within the pattern knows the global state of the pattern, it knows which global function will be performed and in which table to look up its local state. If we store the Local States in the table according to a positional notation, a cell can use the knowledge of its position, with respect to the seed, to address its local state in the table.
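A minimal sketch of this direct lookup, with hypothetical table contents and 1-based coordinates:

```python
# BPGM final-state lookup: the (XR, YR) position indexes the Table of Final
# States for the global function in GSR.  Table contents here are hypothetical.
def final_state(tables, gsr, xr, yr, X):
    # tables[gsr] is stored row-major; xr, yr are 1-based pattern coordinates,
    # X is the pattern's X dimension.
    return tables[gsr][(yr - 1) * X + (xr - 1)]

tables = {1: list(range(12))}                     # one global function, a 4x3 pattern
assert final_state(tables, 1, 1, 1, X=4) == 0     # first entry
assert final_state(tables, 1, 4, 3, X=4) == 11    # opposite corner
```

No search is involved: the position is a direct address into the table, which is the key difference from the GPGM's neighborhood search.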

Also associated with each global state is a set of parameters that govern how large the pattern will grow. Maxx, maxy, minx and miny are compared with the X and Y coordinates passed from the neighboring cells. If the coordinates are within the maximum and minimum x and y bounds, the cell's XR and YR registers are loaded with the appropriate neighbor's value and incremented. If a cell determines that it is outside the bounds of the final pattern's dimensions, it will remain in the quiescent state.

We also associate parameters cenx and ceny with each pattern. These hold the X and Y coordinates of the center of the pattern. The cell's TR can be set by calculating the distance between the cell and the center of the pattern. This is possible because the positional information only travels one cell from the center per time step.

When patterns with shapes somewhere between a diamond and a square are desired, we need an additional parameter. Let us associate a time, with every global state, at which the final pattern of local states is to appear. This time step might appropriately be called the "Bloom Time." For example, if the pattern can be grown in 5 steps, the Bloom Time for the pattern would be 5. At step 5, the cell looks up its final local state in the table. The final configuration appears at the next time step, and the pattern of cells will begin to perform useful work during the next computation phase.

Figure 13 shows the pattern growth parameters and the Table of Final States for the global function corresponding to the same 70 cell Banyan Network used in section 3.2. Figure 14 shows the contents of the position registers and LSR during the first 3 time steps, and figure 15 shows the contents of the position registers and LSR in the last intermediate pattern, i.e. at t=(Bloomtime - 1). Using the position as an index into the table specified by the global function, the cell will determine the final local state it should assume at t=Bloomtime. The final pattern will be the same as shown in figure 11.

To map the final pattern to itself the cells simply stop participating in pattern growth (i.e. they set their Pattern Growth Flag to False). In a functionally equivalent, but less efficient method, the cells could keep looking up their final states at every time step after the pattern has bloomed.

Note, we are not limited to growing rectangular patterns. By reducing the Bloomtime, shapes between a diamond and a rectangle may be achieved.

Entry Format:

Final State
0         3    (bit positions; 4 bits/entry)

7 parameters: cenx, ceny, maxx, maxy, minx, miny, Bloomtime

70 entries in Table; 308 bits total

Figure 13. Pattern Growth Parameters & Table of Final States for BPGM for a 10x7 cell pattern.

XR,YR and LSR contents:

t = 1:

        5,4
        Ge

t = 2:

        5,5                 Gy
   4,4  5,4  6,4       Gx   Ge   Gx
        5,3                 Gy

t = 3:

             5,6                      Gy
        4,5  5,5  6,5            G    Gy   G
   3,4  4,4  5,4  6,4  7,4  Gx   Gx   Ge   Gx   Gx
        4,3  5,3  6,3            G    Gy   G
             5,2                      Gy

Figure 14. BPGM XR,YR, and LSR Contents, t=1,2,3.

XR,YR:

1,7 2,7 3,7 4,7 5,7 6,7 7,7 8,7 9,7 10,7
1,6 2,6 3,6 4,6 5,6 6,6 7,6 8,6 9,6 10,6
1,5 2,5 3,5 4,5 5,5 6,5 7,5 8,5 9,5 10,5
1,4 2,4 3,4 4,4 5,4 6,4 7,4 8,4 9,4 10,4
1,3 2,3 3,3 4,3 5,3 6,3 7,3 8,3 9,3 10,3
1,2 2,2 3,2 4,2 5,2 6,2 7,2 8,2 9,2 10,2
1,1 2,1 3,1 4,1 5,1 6,1 7,1 8,1 9,1 10,1

LSR:

G  G  G  G  Gy G  G  G  G  G
G  G  G  G  Gy G  G  G  G  G
G  G  G  G  Gy G  G  G  G  G
Gx Gx Gx Gx Ge Gx Gx Gx Gx Gx
G  G  G  G  Gy G  G  G  G  G
G  G  G  G  Gy G  G  G  G  G
G  G  G  G  Gy G  G  G  G  G

Figure 15. BPGM XR,YR, and LSR Contents, t=9.

We can state the pattern growth algorithm in terms of the following three rules.

Pattern Growth Rule 3.3.1: When a cell finds itself in the seed at rest state (as indicated by LSR = R and 0 < GSR ≤ maxf), the cell initiates pattern growth. To initiate pattern growth the cell performs the following actions: XR := cenx[GSR]; YR := ceny[GSR]; TR := 1; PGF := true; OKPG := false; LSR := Ge. The position registers are assigned a position at the approximate center of the pattern. The time register is reset to begin the countdown to Bloomtime. OKPG is reset so that the pattern is not perpetually planted. LSR is assigned the growth state to indicate to neighbors that the cell is in pattern growth mode.

Pattern Growth Rule 3.3.2: At each time step after the cell begins participating in pattern growth, the cell checks to see if it has reached the time to map to its final state in the final pattern by checking whether TR = Bloomtime. At Bloomtime, the cell looks into the Table of Final States specified by its GSR and, using the contents of its position registers as an index, locates its final local state and assigns this state to its LSR. At Bloomtime the cell also resets the Pattern Growth Flag and Registers (after locating its final state). Until Bloomtime is reached, the cell simply increments its Time Register at every time step.

Pattern Growth Rule 3.3.3: If a cell is not yet participating in pattern growth (PGF = false), and the cell is in the quiescent state, and no neighbors are faulty, then the cell checks its buffers to see if the information passed from its neighbors indicates that the cell should be participating in pattern growth. If a neighboring cell is in a local growth state G, Ge, Gx, or Gy, and is in a global state i that corresponds to a valid global function, then the cell checks the position passed by the neighbor. (As described in the communication sequence, only half the position is actually passed. The XR contents are passed East and West and the YR contents are passed North and South.) To see if the cell is within the boundaries of the final pattern, the cell checks the neighbor's position against the min and max position parameters associated with the neighbor's global state. In pseudo-Pascal, this part of the rule can be stated as:

if (WLSB in Γ) and (0 < WGSB ≤ maxf) and (WPB < maxx[WGSB])
  then begin XR := WPB + 1; GSR := WGSB; PGF := true; end;
if (ELSB in Γ) and (0 < EGSB ≤ maxf) and (EPB > minx[EGSB])
  then begin XR := EPB - 1; GSR := EGSB; PGF := true; end;
if (NLSB in Γ) and (0 < NGSB ≤ maxf) and (NPB > miny[NGSB])
  then begin YR := NPB - 1; GSR := NGSB; PGF := true; end;
if (SLSB in Γ) and (0 < SGSB ≤ maxf) and (SPB < maxy[SGSB])
  then begin YR := SPB + 1; GSR := SGSB; PGF := true; end;

where N, E, W, and S are directions, Γ = {Gx, Ge, G, Gy}, R stands for Register, B stands for Buffer, P stands for Position, and PGF is the Pattern Growth Flag.

Cells along the coordinate axes will have only one neighbor from which to receive the position. In this case the cell will only have loaded in half the position at this point. The other half of the position must be obtained from cenx or ceny, depending on whether the cell is missing the X or Y coordinate. An exception to this fix occurs when the cell did not receive a coordinate because it was blocked by a quarantine cell. When this exception occurs, the cell clears its GSR, LSR, and Pattern Growth Flag and Registers, and exits the pattern growth mode. When the cell successfully computes its position, the Time Register is initialized with the distance between the cell and the center of the pattern, i.e., TR := abs(XR - cenx[GSR]) + abs(YR - ceny[GSR]). The last condition the cell must check is whether the current pattern growth time step is beyond Bloomtime. This may occur if the cell is just outside a pattern with a shape between a diamond and a rectangle. In this case the cell must clear LSR, GSR, XR, YR, TR, and PGF. Otherwise the cell assumes the appropriate growth state G, Gx, or Gy at the next time step.

Note: while this rule has been described with registers immediately receiving values, this may cause timing problems in some implementations. This problem can be eliminated by using latches on the output lines of the logic used to compute portions of the algorithm. The simulator saves the position and global state in temporary variables until it is certain that these should be loaded into the registers. This was done so that the reader of the simulator would have an easier time visualizing the flow of control in a hardware implementation.

The pattern growth method was originally designed with only one growth state G, and with both position registers XR and YR being passed at every time step. We can reduce the intercell bandwidth by passing only the XR East and West, and only the YR North and South, but we now need growth states Ge, Gx, and Gy. Passing both XR and YR gave the cell some fault tolerance in the case that a fault occurred during pattern growth. The pattern could essentially grow "around" a quarantine region. By using the four growth states G, Ge, Gx, and Gy, we can add back some of the fault tolerance lost by not passing both XR and YR. The cell can still differentiate the case that it was not passed an X (Y) coordinate because it is along the Y (X) axis from the case that it did not receive an X (Y) coordinate because it is next to either a faulty or a quarantine cell.

3.4 ANALYSIS OF THE BPGM

In this section the time and memory complexity of the BPGM are analyzed. Both the time and memory complexity of the BPGM will be shown to be better than those of the GPGM.

The registers and buffers used during pattern growth can be used in the growth of every pattern. Since their cost is shared, we can discount their memory cost in the growth of a pattern. It will turn out that including their cost would not affect the order of the memory complexity.

The memory not shared between patterns includes the 7 parameters and the Table of Final States associated with the pattern. For the moment, let us assume rectangular patterns. Then the size of the Table will be X*Y. For a square pattern, X = Y = D = B; thus the Table has D^2 = B^2 entries. For rectangles where X ≠ Y, the Table has E = X*Y entries, and E < B^2. For diamond shaped patterns, or patterns between a diamond and a square, there will be wasted entries in the Table. For a diamond shape there are B^2 cells, and the circumscribing square will have (2B-1)^2 cells. In every case, we see that the memory usage for the Table of Final States is O(B^2) words. The word size used here can be as small as L(nf). Thus the Table size is O(B^2 * L(nf)) bits.

Since the number of registers, buffers, and parameters remains constant with pattern size, the memory complexity is determined by the Table size.

M = O(B^2) words    Eqn. 7a

In terms of bits, the registers et al. grow as L(B) bits. Thus, the memory complexity in terms of bits is determined by the Table of Final States.

M = O(B^2 * L(nf)) bits    Eqn. 7b

Comparing Eqn. 7a and 7b to Eqn. 4a and 4b, we observe that the memory complexity of the BPGM grows much less rapidly with pattern size than the memory complexity of the GPGM.

The time complexity, in terms of word operations, is O(1). No matter what size the patterns may be, the BPGM takes the same amount of time to compute position. Determining the final state involves a single memory reference to the Table. The same number of words is passed between cells at each time step. In terms of bit operations, the time complexity grows as O(L(B)), since this is the length of the words that are passed between cells and used in calculations and comparisons. In summary,

T = O(1) word operations    Eqn. 8a
T = O(L(B)) bit operations    Eqn. 8b

where an operation is assumed to take a fixed length of time.

The above analysis of complexity provides a rough idea of the resources required by the BPGM and how these resource requirements are affected by pattern size. I now present a more exact analysis to give a more complete picture of the resources required by a cell using the BPGM.

The memory requirements are the storage locations designated as registers and buffers, and the parameters and Table associated with the pattern. There are 27 registers and buffers, all of bit width less than 2*⌈log2(B)⌉. They may be used for the growth of more than one pattern, as long as they are large enough for the largest pattern. Each global pattern requires its own parameters and its own Table of Final States. The memory size is still dependent on the pattern size, but the complexity is of a smaller order than the GPGM's, and we no longer need to search the Table, since position provides a direct index into the Table.

Let us take a look at how much time is spent passing information between cells during pattern growth. The pattern growth information is the position, time step, Global State, and Local State. The registers for holding this information are the XR, YR, TR, GSR, and LSR. Since the intermediate states of the GPGM have essentially been moved from the LSR to the XR, YR, and TR, the BPGM has fewer local states and thus a smaller LSR. The YR contents are passed North and South, and the XR contents are passed East and West so, in essence, a cell only passes half its position to a neighbor. The GSR and LSR contents are passed in every direction. TR is not passed since it can be derived from the passed position.

Let X and Y be the maximum X and Y dimensions of the pattern, D = max(X,Y), B the Bloomtime, ni the number of "intermediate states" (see analysis of GPGM), nf the number of final states, and ng the number of global states. Then the number of bits now being passed between cells using the BPGM is

W = L(D) + L(nf) + L(ng)    Eqn. 9a

  = L(2*B - 1) + L(nf) + L(ng)    Eqn. 9b

  = L(B) + 1 + L(nf) + L(ng)    Eqn. 9c

The number of bits passed by the GPGM is

W = L( ni + nf ) + L(ng)    Eqn. 10a

  = L( B(B+1)/2 - 2 + nf ) + L(ng)    Eqn. 10b

  ≈ 2*L(B) - 1 + L(ng)    Eqn. 10c

Comparing Eqn. 9c and 10c, we see that the number of bits passed in the two methods differs by about L(B) - L(nf).

Example: Suppose the BPGM uses a nominal bit width of 8 for all registers (thus L(B) = L(nf)). For the GPGM to have the same capabilities, its LSR and LSBs must be of length 16.

GPGM             BPGM

NGSB   8         NGSB   8
NLSB  16         NLSB   8
                 NPB    8

Total 24 bits    24 bits passed

Thus the number of bits passed differs by 0 bits. The BPGM passes no more words than the GPGM. (Note: the size of the

GSR and GSBs is determined by reconfiguration considerations and will be slightly larger.)

3.5 DISCUSSION OF PATTERN GROWTH COMPLEXITY

Minimizing the hardware necessary to store, transfer and compute the intermediate states is important in minimizing the cost of the system, but is overshadowed by the potentially larger cost of mapping the last intermediate configuration to the final configuration.

As shown in the previous section, the complexity of the BPGM in performing the mappings of one intermediate pattern to another only grows as L(D). With positions specified by an 8-bit X-register and an 8-bit Y-register, patterns with up to 256x256 = 65,536 cells can be specified. This is probably sufficient for most present applications of systolic arrays or CHiP computers. Larger patterns require a larger position register. With 16-bit X and Y registers, we can specify patterns with up to 65,536 x 65,536 = 4,294,967,296 cells. Since the X and Y registers grow as log2(x-dimension) and log2(y-dimension) respectively, it is not envisioned that the size of the X and Y registers should create any large problems in patterns of finite dimension.
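The register-width arithmetic quoted above can be checked directly:

```python
# Cells addressable with w-bit X and Y position registers: (2^w)^2.
widths = {w: (2 ** w) ** 2 for w in (8, 16)}
print(widths[8], widths[16])   # 65536 4294967296
```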

There are, of course, other considerations beyond register size. The circuitry used to compute the position and time also grows as L(D), as does the amount of information passed between cells.

The problem comes with the mapping to the final state. Any pattern growth method which achieves a unique neighborhood around each cell is essentially a "positional pattern growth method". Pattern growth methods proposed by Walters, Martin, Gollakota, and myself are all positional methods. The GPGM may use a very convoluted way of determining its position, and thus pay a heavy price in the intermediate mappings, but in the end, the final state of a cell is found using its unique neighborhood in the last intermediate pattern. It is this mapping from the last intermediate pattern to the final pattern where any positional pattern growth method will have at least an O(D^2) memory complexity, irrespective of its complexity in mapping the intermediate patterns to the next intermediate pattern.

To emphasize the potential problems of this O(D^2) memory complexity, let's take a look at an example. A separate table and set of growth parameters is required for each distinct global configuration, and thus for each distinct global function. A pattern of 256x256 = 65,536 cells, for example, requires a memory with 65,536 storage locations. If there are 256 or fewer distinct local functions, then a 64 kilobyte memory is required for each global configuration. A reasonably small number of configurations, such as 16 patterns of 256x256 elements each, requires 1 Mbyte of memory, a tenth of the capacity of a PC/XT hard-disk. This is an unreasonable expense for a small systolic chip PE.

Allowing arbitrary final configurations thus has its price, and minimization for the most general case is very difficult. If dealing with a specific case, however, it is possible to arrive at some simpler hardware, even if by heuristic means. The BPGM lends itself to several heuristics that may be applied to reduce the size of the table of final states.

Some savings may be possible if we know something about the final pattern. For example, if the pattern is very regular, the position indices could be adjusted by a small and simple routine to point into a table for the repeated subpattern. Patterns such as binary trees might fall into this category. Another example would be to map cells to states depending on the region of the position. This second heuristic applies to many systolic algorithms where most cells in the middle are performing one function while cells near the edge are performing another.
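The second heuristic can be sketched as a constant-size rule in place of a full table; EDGE and MIDDLE are hypothetical local-task states chosen above the reserved range of 20 special values, not states from the thesis:

```python
def final_state(x, y, X, Y):
    # Map border cells to one local task and interior cells to another,
    # replacing an X*Y-entry Table of Final States with a fixed rule.
    EDGE, MIDDLE = 21, 22          # hypothetical states above the reserved range
    if x in (1, X) or y in (1, Y):
        return EDGE
    return MIDDLE

assert final_state(1, 1, 256, 256) == 21      # corner cell performs the edge task
assert final_state(100, 100, 256, 256) == 22  # interior cell performs the middle task
```

The memory cost of this rule is independent of pattern size, at the cost of restricting which final patterns can be expressed.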

What is common to these heuristics is that we are augmenting the BPGM with a table look-up routine that takes advantage of some unique characteristic of the pattern, which the designer can use to reduce the size of the Table of Final States. Most involve a trade-off with time. The routine performing the mapping must be kept short to avoid exceeding the amount of time within a time step allotted to pattern growth and reconfiguration. In both of the examples, the amount of time for the mapping does not depend on the pattern size, so they may be especially useful for large patterns.
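The second heuristic above, mapping cells to states by region of position, might look like the following sketch. The function name, the border parameter, and the 'EDGE'/'MIDDLE' state labels are illustrative assumptions, not states from the thesis.

```python
# Region-based final-state heuristic: cells near the edge get one
# function, interior cells another, so no O(D^2) table is needed.

def final_state(x, y, width, height, border=1):
    """Map a cell's position to a final state by region of the array."""
    on_edge = (x < border or y < border or
               x >= width - border or y >= height - border)
    return "EDGE" if on_edge else "MIDDLE"

# The lookup cost is constant in the pattern size.
print(final_state(0, 3, 8, 8), final_state(4, 4, 8, 8))
```

This matches the systolic case described above, where most interior cells perform one function while cells near the edge perform another.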

It is still an open research problem to find a PGM whose time and memory requirements do not depend on the pattern dimensions or Bloomtime for any pattern grown using the method.

CHAPTER IV

CHANGES TO KUMAR'S RULES

4.1 INTRODUCTION

During the development of a simulator for Kumar's distributed control algorithms it became apparent that not all of his rules were correct or complete. It also became apparent that some of the rules could be improved upon to make more efficient use of time, memory, or cells. Nothing has been changed without reason. In fact, one goal was to keep the "system" as much the same as possible. Changes were only made to improve or correct.

One advantage of building on the previous system rather than starting anew is that most of the proofs in Kumar's dissertation do not need to be replaced or re-proven for a different system. The system goes through the same phases of reconfiguration, namely, quarantining faults, neutralization of superfluous reconfiguration sources, clearing, seed migration, and pattern regrowth. The rules used to sequence through the steps have been changed. In his dissertation, Kumar proved theorems about each phase to the effect that they would take a certain amount of time or

that the phase, if implemented properly, would have the desired effect. Since the phases of reconfiguration have not changed, such theorems do not have to be re-proven.

Kumar did not prove that his algorithms worked. It is the job of the simulator to aid in verifying the algorithms. It was with the help of the simulator that the control methods were debugged. The simulator has not yet proven the system flawless, but the stage has been reached at which an exhaustive proof by example can be attempted. During the course of my thesis I had hoped to quickly write a program to simulate an array of cells running Kumar's reconfiguration algorithms. After writing and debugging, I hoped to break down the possible events into classes and let the simulator run an example of each class to verify the correctness of the system. Unfortunately, as I was debugging the simulator I began to find mistakes and omissions in Kumar's rules. It became apparent that an exhaustive proof would be quite expensive in terms of human and machine time.

The simulator has been documented and refined to the point that it can be passed on for someone else to take on the task of further verification, or the development and integration of the reconfiguration and pattern growth algorithms with testing and I/O algorithms. The next step

beyond software simulation would be a hardware implementation using existing microprocessor and memory chips. At some point it may be possible to lay out an array of cells on a single chip or wafer. By that time, it is hoped that the distributed control algorithms will be ready to manage these array processors.

The revised rules are stated towards the beginning of each section. The corresponding numbering from Kumar's dissertation is given in parentheses, when a corresponding rule exists. Instead of explaining the whole system, only the changes to the system are explained. This has the disadvantage of forcing the reader to have a copy of Kumar's dissertation. However, it does help keep the presentation of the changes as concise as possible, and since it is the changes that constitute the research contribution, it is most desirable to focus on them. The documentation of the simulator's routines also provides a description of the final product.

4.2 ADDITIONS TO CHAPTER IV TEST RESULT INTERPRETATION

Rule 4.2.1: (4.1) In the diagnostic mode, cells conduct tests on the cells to which they are directly connected.

The testing consists of re-iterations of the same finite test sequence. After each pass of the diagnostic sequence, a cell makes a transition to a quarantine (local) state 'Q' if it determines that one of the cells directly connected to it is faulty. A quarantine cell disconnects itself, via local switching mechanisms, from the neighboring cells it judges to be faulty.

Rule 4.1 is not sufficient. It is not enough to "disconnect" the faulty neighbor; the quarantine cell must also keep a record, for future reference, of which neighbors are faulty. This becomes important when the reconfiguration algorithms must make decisions based on whether or not a neighbor is known to be faulty. For example, a cell with a seed state must "know" which cells are "faulty", i.e. have been disconnected, when choosing the non-faulty neighbor to which to pass a seed.

There are at least two ways of keeping track of which cells are faulty. The first method is to load a special state 'X' into the LSB in the direction of the faulty cell.

For example, if the north neighbor is faulty, an X is loaded into the NLSB. As long as we then disconnect this register, as in Kumar's rule 4.1, the X will remain in the buffer.

The cell may later use the fact that an X resides in the buffer to know whether the neighbor in the corresponding direction is faulty.

A second method of keeping track of which cells are faulty is to set flags corresponding to which cells are faulty. The Fault Status Register forms the interface by which Testing communicates the fault status of the cell's neighbors to the reconfiguration and computation portions of the cell. Each bit of the FSR corresponds to a direction to which the cell is connected.

                     FSR
    +---+---+---+---+---+---+---+---+
    | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
    +---+---+---+---+---+---+---+---+

        8 1 2       NW N  NE
        7 C 3   =   W  C  E
        6 5 4       SW S  SE

Bits 1, 3, 5, and 7 correspond to the North, East, South, and West neighbors respectively. These bits are dubbed the North Status Flag (NSF), East Status Flag (ESF), South Status Flag (SSF), and West Status Flag (WSF). Note: if we used just 3 bits we would not be able to represent all the possible combinations of simultaneously faulty neighbors.
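One possible encoding of the FSR as an 8-bit flag word follows, using the bit numbering in the figure above; the helper names are my own.

```python
# Fault Status Register as an 8-bit flag word. Bits 1..8 run clockwise
# from North, so bits 1, 3, 5, 7 are the N, E, S, W status flags.

NSF, NESF, ESF, SESF, SSF, SWSF, WSF, NWSF = (1 << i for i in range(8))

def set_fault(fsr, flag):
    """Record a faulty neighbor in the given direction."""
    return fsr | flag

def is_faulty(fsr, flag):
    return bool(fsr & flag)

fsr = 0
fsr = set_fault(fsr, NSF)    # north neighbor failed its test
fsr = set_fault(fsr, SESF)   # southeast neighbor failed too
print(is_faulty(fsr, NSF), is_faulty(fsr, WSF))
```

Eight independent bits represent every combination of simultaneously faulty neighbors, which is the point made in the note above.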

New Quarantine Rule 4.2.2: In the diagnostic phase, cells set status flags in the direction of the faulty neighbors. If any of the North, South, East, or West Status Flags are true and if the Reconfiguration Finite State Machine is in state 1, then LSR becomes Q and XR, YR, TR are cleared, and if GSR > maxf then the GSR is cleared.

RFSM represents the state of the Reconfiguration Finite State Machine, and maxf is the maximum number of valid functions. The condition RFSM = 1 is checked to avoid altering the register values once reconfiguration has begun. If we were to place another Q into the LSR, a Y (clearing) state might be lost. Kumar did not clear XR, YR, or TR because these registers belong to the BPGM, which did not exist when he wrote his dissertation.

Since a fault may occur in a neighbor during the neutralization of excess reconfiguration sources, the new rule clears the GSR if the GSR contains the top half of a priority value. Kumar failed to clear the GSR in this case. If the Quarantine cell with the highest priority value happened also to contain a priority value in the top half of its GSR, because it became a Quarantine cell during a neutralization phase, then the cells in the array would think that this cell was to be the next reconfiguration source when in reality it has no seed to pass. The cells with global states corresponding to valid functions would neutralize themselves, the real seed would be lost, and reconfiguration would fail. Note that this error could also be avoided by checking that GSR <= maxf before leaving RFSM state 1.
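A minimal sketch of New Quarantine Rule 4.2.2 for a single cell follows. The dict representation of the registers is an assumption made for illustration; only the rule's conditions and assignments come from the text.

```python
# Sketch of New Quarantine Rule 4.2.2 for one cell. The dict holds the
# registers named in the text (LSR, GSR, XR, YR, TR) plus the orthogonal
# status flags.

def quarantine_rule(cell, maxf):
    """Quarantine on any orthogonal fault while the RFSM is in state 1."""
    orthogonal_fault = any(cell["flags"][d] for d in "NESW")
    if orthogonal_fault and cell["RFSM"] == 1:
        cell["LSR"] = "Q"
        cell["XR"] = cell["YR"] = cell["TR"] = 0
        if cell["GSR"] > maxf:     # GSR held the top half of a priority value
            cell["GSR"] = 0
    return cell

cell = {"flags": {"N": True, "E": False, "S": False, "W": False},
        "RFSM": 1, "LSR": "O", "GSR": 9, "XR": 3, "YR": 2, "TR": 5}
quarantine_rule(cell, maxf=4)
print(cell["LSR"], cell["GSR"])
```

With GSR = 9 and maxf = 4, the GSR holds a priority fragment and is cleared, avoiding the lost-seed failure described above.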

4.3 CHANGES IN THE DETERMINATION OF FAULT-FREE SPACES IN A CELLULAR ARRAY

Rule 4.3.1: (5.1) Irrespective of its location, each cell in a quarantine envelope has space-value = -1.

Rule 4.3.2: (5.2) Assuming no wrap-around, each boundary cell that is not part of a quarantine envelope has space-value = 0.

Rule 4.3.3: (5.3) Space-values are updated synchronously at times t1, t2, t3, ..., ti, .... Let the k cells directly connected to cell Cj have space-values s1, s2, ..., sk at time ti. Then, at t(i+1) the space-value of Cj becomes

    min(s1, s2, ..., sk) + 1
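The synchronous update of Rule 5.3, in which each non-boundary, non-envelope cell takes one plus the minimum space-value of its directly connected neighbors, with envelope cells pinned at -1 and boundary cells at 0, can be sketched as follows (function and variable names are illustrative).

```python
# Synchronous space-value update over a Von Neumann neighborhood.
# Envelope cells are pinned at -1 (Rule 5.1), boundary cells at 0
# (Rule 5.2), and every other cell takes one plus the minimum of its
# four neighbors (Rule 5.3).

def update_space_values(sv, envelope):
    rows, cols = len(sv), len(sv[0])
    nxt = [row[:] for row in sv]
    for r in range(rows):
        for c in range(cols):
            if (r, c) in envelope:
                nxt[r][c] = -1
            elif r in (0, rows - 1) or c in (0, cols - 1):
                nxt[r][c] = 0
            else:
                nbrs = [sv[r-1][c], sv[r+1][c], sv[r][c-1], sv[r][c+1]]
                nxt[r][c] = min(nbrs) + 1
    return nxt

sv = [[0] * 5 for _ in range(5)]
for _ in range(5):                 # iterate to a fixed point
    sv = update_space_values(sv, envelope=set())
print(sv[2][2])                    # center of a fault-free 5x5 array

sv2 = [[0] * 5 for _ in range(5)]
for _ in range(5):
    sv2 = update_space_values(sv2, envelope={(2, 2)})
print(sv2[1][2])                   # a cell bordering the envelope
```

At the fixed point the values form the diamond-shaped distance field shown in the figures below; a cell next to an envelope cell settles at 0.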

Rule 5.1 may need to be changed. If we do not want patterns to reside next to quarantine regions, then the Q cell's space-value should be -2. If the pattern abuts a quarantine region, its communication with the outside world may be hampered.

Rule 5.2 is still valid. Each boundary cell that is not part of a quarantine envelope has a space-value = 0.

Rule 5.3 was a bit vague. It did not specify the number of cells k to which a cell is connected. From what followed in the dissertation, it was apparently assumed that the connections to the 4 other cells in the Von Neumann neighborhood were sufficient. In the following subsection I will show that this is not the case.

4.3.1 THE NEED FOR A DIAGONAL SPACE VALUE

In addition to the horizontal and vertical (Von Neumann neighborhood) space-value, a 'diagonal' space value must be used if rectangular patterns are to be grown without a lot of wasted array space. The diagonal space value ensures that sufficient room exists for the pattern in the diagonal direction. The GPGM could not take advantage of a diagonal space-value. The BPGM can take advantage of the new space value by adding the growth parameter dspace, which is checked before setting OK_PG just as the space growth parameter is now checked before setting OK_PG.

Why do we need to know how much space exists in the diagonal direction? Why can't this space be derived from the horizontal and vertical space? Consider the example, represented in the following figures, of an array without a faulty region and the same array with a faulty region. The t represents a -1 space value. Note that the 'regular' space value does not accurately predict the amount of room in the northwest diagonal direction when there are faults in the array.

  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0
  0 1 2 2 2 2 2 2 2 2 2 2 2 2 1 0
  0 1 2 3 3 3 3 3 3 3 3 3 3 2 1 0
  0 1 2 3 4 4 4 4 4 4 4 4 3 2 1 0
  0 1 2 3 4 5 5 5 5 5 5 4 3 2 1 0
  0 1 2 3 4 5 6 6 6 6 5 4 3 2 1 0
  0 1 2 3 4 5 6 7 7 6 5 4 3 2 1 0
  0 1 2 3 4 5 6 7 7 6 5 4 3 2 1 0
  0 1 2 3 4 5 6 6 6 6 5 4 3 2 1 0
  0 1 2 3 4 5 5 5 5 5 5 4 3 2 1 0
  0 1 2 3 4 4 4 4 4 4 4 4 3 2 1 0
  0 1 2 3 3 3 3 3 3 3 3 3 3 2 1 0
  0 1 2 2 2 2 2 2 2 2 2 2 2 2 1 0
  0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Figure 16: Space Values in a Fault-Free Array.

                  9 10
  0 0 0 0 0 0 t 0 0 0 0 0 0 0 0 0
  0 1 1 0 t t X t 0 1 1 1 1 1 1 0
  0 1 0 t X X t 0 1 2 2 2 2 2 1 0
  0 0 t X X t 0 1 2 3 3 3 3 2 1 0
  0 0 t X t 0 1 2 3 4 4 4 3 2 1 0
  0 t X t 0 1 2 3 4 5 5 4 3 2 1 0
7 t X t 0 1 2 3 4 5 6 5 4 3 2 1 0
  0 t 0 1 2 3 4 5 6 6 5 4 3 2 1 0
  0 0 1 2 3 4 5 6 7 6 5 4 3 2 1 0
  0 1 2 3 4 5 6 6 6 6 5 4 3 2 1 0
  0 1 2 3 4 5 5 5 5 5 5 4 3 2 1 0
  0 1 2 3 4 4 4 4 4 4 4 4 3 2 1 0
  0 1 2 3 3 3 3 3 3 3 3 3 3 2 1 0
  0 1 2 2 2 2 2 2 2 2 2 2 2 2 1 0
  0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Figure 17: Space Values in an Array with Faults.
(t = -1, X = faulty cell; row 7 and columns 9 and 10 are marked)

Consider the cells in row 7, columns 9 and 10. The cell at (7,9) is assured of 5 useful cells in both the north and east directions, but it has only 2 useful cells in the northwest diagonal direction. The cell at (7,10) computes a space-value that accurately predicts at least 6 useful cells in the horizontal and vertical directions, but it has only 3 useful cells in the northwest diagonal direction. In general, dsv = floor(sv/2), where again dsv is the diagonal space value and sv is the horizontal and vertical space value. This is because, when a Von Neumann neighborhood is used to compute a space value, we know that a cell is at the center of a hyperdiamond of useful cells with girth 2*sv+1, as correctly proven in theorem 5.3 of Kumar's dissertation.

The GPGM did not require a dsv, because it used a space-value approximately twice as large as the BPGM's space-value. The BPGM could also get by without a dsv if it used as large an sv as the GPGM, but then the BPGM would waste as many cells as the GPGM. The GPGM lacks a rule that keeps cells that are not part of the final pattern from becoming part of an intermediate pattern. For example, a 7x7 square final pattern is circumscribed by the last intermediate pattern, which is a hyperdiamond of girth = 13.

It is interesting to note that Kumar's theorem 7.1 is valid for the BPGM but not the GPGM. Theorem 7.1 states that an axb pattern in an AxB array can reconfigure at most (A-a)*(B-b) times. The maximum number of reconfigurations for the GPGM is actually (A - 2*a + 1)*(B - 2*b + 1). This is because the GPGM will not allow us to place a final pattern along the edge. The BPGM will, but we must explicitly check that enough fault-free space exists in the diagonal direction. The cell can no longer rely on an inflated sv to guarantee an adequate dsv.

Figure 18: Array Space Wasted using GPGM. A 7x7 final pattern circumscribed by a hyperdiamond of girth 13; only 49 of the 169 cells are available for computation.
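The two bounds are easy to compare numerically (a sketch; the 16x16 array size is an arbitrary example, not one from the thesis).

```python
# Upper bounds on reconfigurations of an a x b pattern in an A x B array.

def kumar_bound(A, B, a, b):
    """Theorem 7.1 bound, valid for the BPGM."""
    return (A - a) * (B - b)

def gpgm_bound(A, B, a, b):
    """Corrected GPGM bound: no final pattern may touch the array edge."""
    return (A - 2 * a + 1) * (B - 2 * b + 1)

print(kumar_bound(16, 16, 7, 7), gpgm_bound(16, 16, 7, 7))
```

For a 7x7 pattern in a 16x16 array the GPGM supports far fewer placements than the BPGM, which quantifies the wasted-space argument above.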

The same effect as a diagonal space value can be achieved by using a Moore neighborhood, instead of a Von Neumann neighborhood, when passing and computing space values. I will show that a diagonal space value can also be passed in a two-step process using a Von Neumann neighborhood if diagonal connections do not exist between cells. If diagonal connections do exist, it would be faster and less complex to employ these connections when passing diagonal space values.

4.3.2 PASSING DIAGONAL SPACE VALUES

If connections between cells in the diagonal directions do not exist, then we need to use a 2-step process to pass the diagonal space values. The following figure illustrates this process, which emulates the use of diagonal connections.

Figure 19: Passing DSVs without diagonal connections.

For a cell to receive a diagonal-space-value (dsv) from its NW neighbor, the value is first passed East and then passed South. The alternative choice of South first and East second could also have been made; a convention is set to avoid conflicts.

Algorithm 4.3.2.1 Passing Diagonal Space Values Without Diagonal Connections: On the first cycle, pass dsv as one would pass sv, i.e., trade dsv with North, South, East, and West neighbors. On the second cycle: the dsv from the West Diagonal Space Value Buffer is sent South, the dsv from the North Diagonal Space Value Buffer is sent West, the dsv from the East Diagonal Space Value Buffer is sent North, and the dsv from the South Diagonal Space Value Buffer is sent East.
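Algorithm 4.3.2.1 can be sketched as a two-cycle buffer shuffle. The grid-of-dicts representation is an assumption for illustration; the forwarding pattern is the one stated above.

```python
# Two-cycle emulation of diagonal links over a Von Neumann neighborhood.
# Cycle 1: every cell trades its dsv with its N, S, E, W neighbors.
# Cycle 2: each cell forwards its West buffer South, North buffer West,
#          East buffer North, and South buffer East.

def pass_dsv(dsv):
    rows, cols = len(dsv), len(dsv[0])
    def at(r, c):
        return dsv[r][c] if 0 <= r < rows and 0 <= c < cols else None
    # Cycle 1: buf[r][c][d] holds the dsv received from direction d.
    buf = [[{"N": at(r - 1, c), "S": at(r + 1, c),
             "W": at(r, c - 1), "E": at(r, c + 1)}
            for c in range(cols)] for r in range(rows)]
    def fwd(r, c, d):
        return buf[r][c][d] if 0 <= r < rows and 0 <= c < cols else None
    # Cycle 2: one more orthogonal hop delivers the diagonal values.
    return [[{"NW": fwd(r - 1, c, "W"),   # North neighbor's W buffer, sent South
              "NE": fwd(r, c + 1, "N"),   # East neighbor's N buffer, sent West
              "SE": fwd(r + 1, c, "E"),   # South neighbor's E buffer, sent North
              "SW": fwd(r, c - 1, "S")}   # West neighbor's S buffer, sent East
             for c in range(cols)] for r in range(rows)]

dsv = [[10 * r + c for c in range(3)] for r in range(3)]  # distinct labels
diag = pass_dsv(dsv)
print(diag[1][1])   # the center cell sees all four diagonal neighbors
```

After the two cycles, the center cell of a 3x3 grid holds exactly the values of its four diagonal neighbors, confirming that the shuffle emulates diagonal links.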

When connections exist only in the horizontal and vertical directions, it takes 2 intercellular I/O cycles within the pass-space-values phase of the system clock to transfer the dsvs. If diagonal connections exist, the dsv can be passed in 1 intercellular I/O cycle during the pass-space-values phase of the system clock. Thus, the redistribution of space values will be twice as fast if the diagonal connections are employed.

4.3.3 COMPUTING DIAGONAL SPACE VALUES

The diagonal space values are computed in the same manner as the space values, using rules 5.1, 5.2, and 5.3. An additional rule is also required, since a cell may have a faulty neighbor in a diagonal direction without itself being in a quarantine state.

New Space Rule 4.3.3.1: If the NWSF, SWSF, NESF, or SESF is set, and the cell is not in the quarantine state, then the cell assumes a 0 diagonal space value.

4.4 CHANGES TO 6.2 COMMUNICATION OF CONTROL INFORMATION

Kumar defined sequences that the cells followed when passing control information. A new sequence for passing control information was developed by Dr. McKeeman and Mr. Brighton. The new sequence has some advantages over Kumar's communication sequences. First of all, only one type of cell is required, instead of the checkerboard pattern of two types of cells in Kumar's method. This adds to the uniformity of the cells. Secondly, the new sequence can be used with either the Von Neumann or the Moore neighborhood, whereas Kumar's communication sequence could only be used with the Von Neumann neighborhood.

Communication Sequence 4.4.1:

1. Send space-value to North and West neighbors, while receiving space-values from South and East neighbors.

2. Send space-values to South and East neighbors, while receiving space-values from North and West neighbors.

3. Send diagonal space value to Northwest and Southwest neighbors, while receiving diagonal space values from Southeast and Northeast neighbors.

4. Send diagonal space value to Southeast and Northeast neighbors, while receiving diagonal space values from Northwest and Southwest neighbors.

5. Compute new space value and new diagonal space value.

6. Send Y-coordinate North and X-coordinate West, while receiving positions from the South and East.

7. Send Y-coordinate South and X-coordinate East, while receiving positions from the North and West.

8. Send state to North and West neighbors and receive state from South and East neighbors.

9. Send state to South and East neighbors and receive state from North and West neighbors.

10. Compute new state.
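The ten steps above can be written down as a phase table, which also makes the I/O-cycle accounting explicit; the tuple representation and function name are mine.

```python
# Communication Sequence 4.4.1 as a phase table: (kind, payload, targets).
# A schematic driver for one cell, not the cell hardware. The position
# pass follows the same N/W-then-S/E pattern as the space-value pass.

SEQUENCE = [
    ("send", "space-value", ("N", "W")),
    ("send", "space-value", ("S", "E")),
    ("send", "diagonal-space-value", ("NW", "SW")),
    ("send", "diagonal-space-value", ("SE", "NE")),
    ("compute", "space values", ()),
    ("send", "position", ("N", "W")),
    ("send", "position", ("S", "E")),
    ("send", "state", ("N", "W")),
    ("send", "state", ("S", "E")),
    ("compute", "state", ()),
]

def io_cycles(sequence):
    """Two complementary send steps constitute one intercellular I/O cycle."""
    sends = sum(1 for kind, _, _ in sequence if kind == "send")
    return sends // 2

print(io_cycles(SEQUENCE))
```

Every cell runs the same table, which is the one-type-of-cell uniformity claimed above.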

Figure 20: Passing Information North and West.

Figure 21: Passing Information South and East.

Steps 1 and 2 together constitute an intercellular I/O cycle. To pass a value, the contents of the appropriate register are written onto the appropriate directional bus. When receiving a value, the contents of the bus are written into the appropriate directional buffer. The flow of information over any intercell bus travels in only one direction at a time; i.e., no two cells put information on the same inter-cell bus at the same time. Thus, there is no bus contention.

By "compute state" it is meant that the next values of all internal flags and registers, other than the space value registers, are determined for the next reconfiguration time step. Step 10 may actually take longer than the other steps combined.

Another change in the communication of control information involves the buffering of the control information. Kumar used the same set of buffers for both the space values and states. This is not possible. The next space value can be computed without knowing the current states of the neighbors, but the next state must be computed using the neighbors' space values. A cell cannot decide to which neighbor to pass the seed unless it knows both the space values of its neighbors and the states of its neighbors. The cell cannot remember the space values of its

neighbors unless it saves these values. Thus the cell needs both an SVB (Space Value Buffer) and an SB (State Buffer) for each of the neighbors (excluding itself).

Recall that Kumar's communication sequence was not capable of passing information in a Moore neighborhood. If cells are connected in a Moore neighborhood, then the wave method may still be used to pass control information. At any one time step, half the busses are used to send information while half the busses are used to receive information. As mentioned in the previous section, the computation of the diagonal space values is aided by communication in the diagonal direction if those connections exist. At present, this is the only case where communication of control information in the diagonal direction would be useful.

The matter of choosing the von Neumann neighborhood over the Moore neighborhood is a matter of some historical significance and perhaps deserves a closer examination. Aside from the lack of a communication protocol that accommodated the Moore neighborhood, were there other reasons for not employing the Moore neighborhood? In the days of Walters and Martin, the next state mapping was done solely by one huge state table. Kumar wisely decided that reconfiguration was too complex for such a simplistic,

memory-consuming method. His reconfiguration algorithms look more like a sequential program, with multiple steps in decisions. These steps might correspond to sequential statements in a program, or to multiple levels of logic in a hardware implementation. The entire system might be likened to a very regular and homogeneous Local Area Network with each cell running the same protocol. When the next-state mapping was implemented using a next-state table rather than a sequential program, the memory size grew exponentially with the number of neighbors. (Recall from chapter 2 that this was the major downfall of Martin's method.) The length of a program, on the other hand, grows in an approximately linear manner with the size of the neighborhood. Therefore, the memory complexity of the next-state mapping is no longer exponential in the number of neighbors.

But does using the Moore neighborhood buy us anything?

With the Moore neighborhood, the number of time steps needed to perform pattern growth and most phases of reconfiguration, could be cut in half. The information travels through the array twice as fast diagonally from the source when diagonal connections are used. However, each time step may now be twice as long. With twice as many neighbors the program would be twice as long and would take almost twice as long to execute. For example, to look for a

particular state in the neighborhood, the cell must examine twice as many buffers. Thus using the Moore neighborhood for state computation does not buy us much, and would make the pattern growth and reconfiguration methods inapplicable to those cases where only horizontal and vertical connections exist.

4.5 CHANGES TO 6.3 MECHANISM FOR SEQUENCING THROUGH THE RECONFIGURATION STEPS

The revised RFSM (Reconfiguration Finite State Machine) can be behaviorally characterized by the following algorithm.

Algorithm 4.5.1 (A revision of Kumar's Algorithm 6.3.1):

1. If LSR = Q and GSR is a valid function then check the waitclock. If the waitclock <= 0 then reset KF and go to state (2), else decrement the waitclock.

2. Neutralization Phase: Swap SR and PRR. Wait for v clock periods. Go to (3).

3. Clearing Phase: Un-swap SR and PRR. If NF = true then GSR := 0. LSR := Y. Wait for µ clock periods. LSR := Q. Go to (4).

4. If GSR = 0 (the cell has been neutralized) then go to (6). Otherwise go to (5).

5. Eject seed, i.e., transfer the seed information to a fault-free neighbor. Reset NF and GSR. Go to (6).

6. If GSR contains a valid function (a neutralized quarantine cell has been reactivated) and KF (contact flag) = true then go to (2). If GSR contains a valid function and KF = false then go to (5). Otherwise go to (6).

Here v is the time necessary for priority neutralization, and µ is the worst-case time delay between the beginning and end of the clearing process.
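Algorithm 4.5.1 can be sketched as a step function on a cell record. The dict fields mirror the registers named in the text; the v and µ waits are collapsed into single transitions for brevity, so this is a schematic, not the real timing.

```python
# Behavioral sketch of the revised RFSM (Algorithm 4.5.1) for one cell.

def rfsm_step(cell, maxf):
    s = cell["state"]
    if s == 1 and cell["LSR"] == "Q" and 1 <= cell["GSR"] <= maxf:
        if cell["waitclock"] <= 0:
            cell["KF"] = False
            cell["state"] = 2
        else:
            cell["waitclock"] -= 1
    elif s == 2:                      # neutralization: pass PRR in place of SR
        cell["RSF"] = True
        cell["state"] = 3
    elif s == 3:                      # clearing: un-swap; neutralize if NF set
        cell["RSF"] = False
        if cell["NF"]:
            cell["GSR"] = 0
            cell["NF"] = False
        cell["LSR"] = "Q"             # returns to Q after the Y period
        cell["state"] = 4
    elif s == 4:
        cell["state"] = 6 if cell["GSR"] == 0 else 5
    elif s == 5:                      # eject seed to a fault-free neighbor
        cell["NF"] = False
        cell["GSR"] = 0
        cell["state"] = 6
    elif s == 6 and 1 <= cell["GSR"] <= maxf:
        if cell["KF"]:                # reactivated by a collision
            cell["KF"] = False
            cell["state"] = 2
        else:                         # reactivated by a passed seed
            cell["state"] = 5
    return cell

cell = {"state": 1, "LSR": "Q", "GSR": 3, "waitclock": 0,
        "KF": True, "NF": True, "RSF": False}
for _ in range(4):
    rfsm_step(cell, maxf=4)
print(cell["state"], cell["GSR"])
```

A quarantine cell that is neutralized during clearing ends up parked in state 6 with GSR = 0, waiting to be reactivated by a seed or a collision.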

The presentation of the changes made to the Reconfiguration Finite State Machine is organized by the changes made to the actions in, and transitions out of, each state.

(1) Must check that GSR contains a value corresponding to a valid function, and not a priority value, before making a transition to state 2. Values in GSR corresponding to valid functions range from 1 through maxf.

Should also reset KF, if KF = true, because a Q cell that did not previously have a global state has experienced a collision with a growing pattern, and not resetting KF will cause another transition to state 2 from state 6. The waitclock is given (Bloomtime[GSR] - TR) by the quarantine procedure if the cell was participating in pattern growth just before assuming the Quarantine state. The waitclock is used to count down to Bloomtime so that reconfiguration will be initiated synchronously.

(2) This step needs clarification. SR and PRR are not really "swapped"; this is a misnomer by Kumar. If the values actually were to trade places between the two registers, then several of the later rules would be incorrect. What Kumar does is to set the RSF (Register Swap Flag); as long as the RSF is set, the PRR contents are passed during the pass-state phase of the system clock, in place of the SR contents which would normally be passed.

(3) The following additions were made in state 3. Upon entering the clearing phase the LSR must assume a Y local state to indicate to itself and surrounding cells that it is a quarantine cell in clearing mode. If the NF (Neutralization Flag) is set, the GSR is assigned 0 in order to neutralize the superfluous reconfiguration source. The quarantine cell returns to a Q state at the end of the clearing phase.

(4) There are no changes needed in RFSM state 4. There is still a transition to 6 when GSR = 0 and to 5 otherwise.

(5) No changes. The simulator waits an extra time step before clearing the GSR and resetting the handshake register and NF.

(6) This step has been modified to deal with collisions.

Kumar's method of handling collisions wasted cells and presented problems with cells being in a quarantine state when none of their fault status flags were set. I devised a new method of handling collisions. In the new method, after the pattern blooms, the quarantine cells with which the pattern has collided make transitions to the neutralization mode. More is said about this later when the Collision procedure is described. Now, when GSR contains a valid function and KF (Contact Flag) = true, the cell makes a transition to state 2 and resets KF. We still make a transition to state 5 when GSR is a valid function and KF is false. A cell remains in state 6 as long as the Quarantine cell is not reactivated by accepting a valid global function; this can only happen if a neighbor passes the Quarantine cell a seed or if a collision occurs between the Quarantine cell and a growing pattern.

4.6 CHANGES TO 6.4 NEUTRALIZATION OF SUPERFLUOUS RECONFIGURATION SOURCES

Before describing the revised rules governing the neutralization of superfluous reconfiguration sources, let us make two useful definitions.

σst  The next-state mapping; it performs neutralization as well as computing the cell's next state.

PRR  The Priority Value Register; it contains values between ns and na+ns-1, where na is the number of cells in the array and ns is the total number of states.

Rule 4.6.1 {revised Rule 6.4.2.1}: Since the same mechanism σst is used for both next-state assignment and neutralization, the set of codewords for states and the set of codewords for priorities are non-intersecting sets.

Priority values are indicated by GSR > maxf.

Rule 4.6.2 (revised Rule 6.4.2.2): For reasons that will be apparent later, the codeword for any member of the set of priorities has a larger binary value than the codeword for any member of the set of states.

Rule 4.6.3 (revised Rule 6.4.2.3): If one or more cells in the neighborhood of a cell has its SR (state register) containing a member of the set of priority codewords, except in the case defined by Rule 6.4.2.7, then σst computes the largest priority codeword among the priority codewords in the cell's neighborhood and stores this codeword in the cell's SR.

Rule 4.6.4 {revised Rule 6.4.2.4}: When a quarantine cell possessing the global state enters the neutralization mode {as defined by its RFSM}, it sends its PRR value instead of its SR value to its neighbors during the 'send state' phase of the system clock. To remember that it is taking this action, it internally sets its RSF (Register Swap Flag).

Rule 4.6.5 {revised Rule 6.4.2.5}: If a cell with RSF = 1 has one or more neighbors whose SR binary value is greater than the binary value of the contents of its own SR, then its 'neutralized flag' (NF) is set and, at the end of the neutralization mode, its register swap flag (RSF) is cleared. This cell is now "neutralized" and is no longer a reconfiguration source.

Rule 4.6.6 {revised Rule 6.4.2.6}: A cell makes a transition to a "clear mode" v periods {of the compute-state phase of the system clock} after it makes a transition to the "neutralization mode." When this happens, the following steps occur in sequence:

1. If NF = 1 then the GSR of the cell is set equal to 0 and NF is cleared.

2. The register swap is reversed and the SR is again passed during the "pass state phase" of the system clock.

Rule 4.6.7: This rule presents an exception to the working of Rule 6.4.2.3. Even if some of the neighbors of a Q cell have codewords representing priority values, the SR of the Q cell is not overwritten by the mechanism defined in Rule 6.4.2.3.

Kumar's Rule 6.4.2.1 has been further specified. A special local state is indicated by 0 < LSR < nss, where nss = number of special states. A valid global function is indicated by 1 <= GSR <= maxf, where maxf = maximum number of functions. If GSR > maxf then the GSR contains a priority value. GSR = 0 indicates neither a global function nor a priority value. It turns out that GSR is 0 only when the cell is either in the 0 (quiescent) or Qo (neutralized quarantine) state, or in the z (intermediate clearing) state.

Rules 6.4.2.2 and 6.4.2.3 are still valid.

Rule 6.4.2.4 has been clarified somewhat. As mentioned previously, the "swapping" or "substitution" really means that we are passing the PRR in place of the SR.

Kumar's Rule 6.4.2.5 has been changed, but we need to introduce a new rule before discussing the change.

My Rule 4.6.6 is almost identical to Kumar's Rule 6.4.2.6. The actions in the rules are also specified in state 3 of the Reconfiguration Finite State Machine.

Rule 6.4.2.7 required an addition/correction. Not just Qo cells but also Qi cells should not be overwritten with priority values, where i represents a valid global function.

New Neutralization Rule 4.6.8: This new rule requires a new register, HPRR, to hold the highest priority value seen by a quarantine cell. Priority values from faulty neighbors are ignored. The largest neighboring SB among non-faulty neighbors is loaded into HPRR. If RSF is set, the contents of HPRR are passed in place of the SR contents during the pass-state phase of the system clock.

Quarantine cells must also take part in passing priority values. If the array is cut into 2 parts by a quarantine region, and if Q cells do not pass priority values, 2 reconfiguration sources may survive the neutralization phase. The 2 patterns eventually grown may collide during growth, or they will conflict when they start to communicate with the outside world.

One problem with not having Q cells compute, store, and pass priority values is illustrated below.

0 0 0 0 0 0 0 0 0 0 0           0 0 0 0 0 0 Q  0 0 0 0
0 0 0 0 0 Q X Q 0 0 0   . . .   0 0 0 0 Q Q' X Q 0 0 0   . . .
0 0 0 Q X X X Q 0 0 0           0 0 0 0 Q Q Q" 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0           0 0 0 0 0 0 0  0 0 0 0

Q cells that do not have non-faulty non-quarantine neighbors would not know the largest priority value being passed in the array; they would only see the priority values of their neighboring Q cells. For example, if Q' has a higher priority value than its quarantine neighbors, and if Q" has the highest priority value of any of the quarantine cells, then both Q' and Q" will become reconfiguration sources under Kumar's rules. It is thus possible that for a particular distribution of faults and a particular distribution of priority values, multiple reconfiguration sources will remain after the neutralization phase.

Rule 6.4.2.5 required a correction/improvement. The register swap flag (RSF) should not be cleared until the end of neutralization. If RSF is cleared, the quarantine cell will not continue to pass priority values; thus, priority values higher than the one that neutralized the cell will not be passed to neighbors.

4.7 CHANGES TO 6.5 THE CLEARING OF STATE REGISTERS IN AN ARRAY

The mechanism for clearing is also implemented by crst.

Let Y and 'Y denote neutralized and non-neutralized reconfiguration sources respectively. We define the following rules.

Rule 4.7.1 (revised Rule 6.5.1.1): Any cell with a Y or 'Y neighbor, except Q, Y, 'Y, and z cells, makes a transition to the "z" state at the next time step.

Rule 4.7.2

Rule 4.7.3

The z state, which might be thought of as a sleep state, represents an intermediate clearing state.

Lemma 6.5.3: When one or more quarantine cells in the array enter the clear mode, the SRs of fault-free non-quarantine cells in the array are cleared in not more than µ periods of the state-compute phase of the system clock, where µ = na, na being the number of cells in the array.

Rule 6.5.1.1 was changed. Quiescent cells should not be mapped to the z state, as this would cause a new wave of z states to be emitted every other time step.

Rule 6.5.1.2 was corrected. Cells in Q, Y, and 'Y states with a neighbor in the z state should not make a transition to the z state at the next time step; doing so would cause a loss of quarantine.

Rule 6.5.1.3 required an addition. Cells with a z state should also clear pattern growth registers.

Lemma 6.5.3 is still valid. Note, however, that we don't really need to wait for the clearing of the entire array before ejecting the seed into the array, since the seed can travel no faster than the wave of z states.

4.8 CHANGES TO 6.6 INTERNAL SEEDING

In this section, the revised process of decision making used in internal seeding is described.

The hardware structure for internal seeding includes the NSB (North State Buffer), SSB, ESB, and WSB. We also require a small seed-direction-handshake register and buffers to hold the machine state indicating to whom the seed is being passed.

There are two possible cases that a quarantine cell may be faced with when choosing a cell to pass its seed to.

Case 1: The reconfiguration source in the eject-seed mode possesses at least one quiescent fault-free neighbor.

Case 2: All the fault-free neighbors of the reconfiguration source in the eject-seed mode are in the Qo state.

When it is in the eject-seed mode (decided by its RFSM), a reconfiguration source examines the states of its neighbors that are not disconnected from it. These states are available in its buffers. If case 1 is true, then it chooses (by means of combinational logic) one of its neighbors in the quiescent state with the fewest marks. It then:

1. Stores the binary value corresponding to the chosen direction in its seed-handshake register. The value of this register is passed as part of the state information. The register is separate from the SR, which is actually a concatenation of the GSR and LSR. Binary codewords are required to indicate that the north, south, east, or west neighbor has been chosen to receive the seed. A codeword (middle) is also needed to indicate that no seed is available to be passed.

2. At the next compute-state phase the Q cell clears its handshake register (sets it to middle) and clears its GSR. It is very simple for the Q cell to remember to perform the clearing, since the contents of the seed-handshake register will be non-zero only after a seed has been passed.

A neighboring cell to the reconfiguration source will see the value of the source's seed-handshake-direction register and decide whether it has been chosen to receive the seed.

As an example, if a cell finds that the cell to its north has a south codeword in its handshake register, then it will receive the seed. For a quiescent cell, receiving the seed involves loading the global state of the reconfiguration source into its GSR and placing an S (seed state) or R (seed-at-rest state) in its LSR. If case 2 is true, the reconfiguration source chooses (by means of combinational logic) one of its neighbors in the Qo state with the fewest "marks". It then:

1. Places the chosen direction codeword in its seed-handshake register;

2. Clears its GSR during the next compute-state phase of the system clock.

The receiving Qo cell will load the global state of the source into its GSR. Unlike the quiescent cell, it will not place a seed state into its LSR. A Q cell knows it is a reconfiguration source when it contains a valid global state and is in state 5 or 6 of the RFSM.
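The two-case eject-seed decision can be sketched as follows. The real cell uses combinational logic; the state labels "0" and "Qo" and the (direction, state, marks) tuples are assumptions of this illustrative Python fragment.

```python
def choose_seed_target(neighbors):
    """Return the direction codeword for the neighbor chosen to receive
    the seed, following cases 1 and 2 above.
    neighbors: list of (direction, state, marks) for connected,
    fault-free neighbors."""
    quiescent = [n for n in neighbors if n[1] == "0"]
    if quiescent:
        pool = quiescent            # Case 1: a quiescent neighbor exists
    else:
        pool = [n for n in neighbors if n[1] == "Qo"]   # Case 2: all Qo
    if not pool:
        return "middle"             # no seed can be passed
    return min(pool, key=lambda n: n[2])[0]   # fewest marks wins
```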

The entire process takes only two time periods of the compute-state phase of the system clock. The procedure is a slight departure from Kumar's procedure: Kumar used actual handshake lines instead of registers with states. An asynchronous communication protocol would be a departure from the mathematical foundations of tessellation automata and would also have been difficult to simulate, but it would be equally effective in communicating the information.

Another change is that the "mark" used with quiescent cells in seed migration must also be used here with internal seeding.

By checking the neighbor's mark, the reconfiguration source Q cell will know whether it has already tried passing the seed to that neighbor. Otherwise, two Q cells may keep passing the seed between them forever.

4.9 CHANGES IN 6.7 SEED MIGRATION

A seed state is passed between cells until it comes to rest in a cell surrounded by enough fault-free space to allow the growth of the pattern. In a sense, the seed is "migrating" through the array. The revised decision-making process involved in seed migration is explained in this section. The list of improvements, corrections, and additions to the process is then presented and defended.

If a cell has passed a seed to its neighbor at a previous time step, as indicated by seed-handshake-register <> m, then the cell resets its seed-handshake-register and its GSR, and if the cell does not have a quarantine state (Q or Y) in its LSR, then the cell resets its LSR.

A cell knows to accept a seed state from its neighbor when it finds a neighbor with a seed-handshake-register value that points in its direction. To accept the seed, the cell loads a seed state into its LSR, as long as it is not in the Q or Y (quarantine) state. It also loads the neighbor's previous global state into its own GSR, increments its mark, and sets its Accept Seed Flag (ASF). ASF is reset after the rest of the decision-making process described below.

There are two distinct types of seed states. Si represents a seed, corresponding to global state i, that is still migrating. Ri represents a seed, corresponding to global state i, that is at rest (has found a good cell in which to plant itself). The cell must decide which of these two states to assume.

Let each cell have a combinational logic block whose purpose is to determine whether the migrating seed should be permitted to come to rest in that cell. The output of this block is called OK_PG. The following three functions are performed by it.

1. Derives the size of the available fault-free space around the cell from the space-value and diagonal-space-value of the cell.

2. Derives the size of the required fault-free space around the cell from the state of the cell.

3. Compares the two, and sets OK_PG to 1 if the size of the available space is greater than or equal to the size of the required space in each dimension.

This combinational block is enabled by the Accept Seed Flag.
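The OK_PG block just described can be sketched as follows. The per-dimension encoding of space values and the function signature are assumptions of this illustrative Python fragment; the real block is combinational logic.

```python
def ok_pg(space_value, diag_space_value, required, required_diag, asf):
    """Set OK_PG to True iff the available fault-free space is at least
    the required space in each dimension. The block is enabled only
    when the Accept Seed Flag (ASF) is set.
    space_value / required: (x, y) sizes; diag values are scalars."""
    if not asf:
        return False          # block disabled without ASF
    return (space_value[0] >= required[0] and
            space_value[1] >= required[1] and
            diag_space_value >= required_diag)
```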

There are three flavors of neighboring quiescent and quarantine states that the cell will consider.

0, Q - unmarked quiescent and quarantine states

0', Q' - "single marked" quiescent and quarantine states

0'', Q'' - "double marked" quiescent and quarantine states

The marks are stored as a separate tag and passed as part of the state information.

If a cell determines OK_PG should be false, then the cell makes a decision about the direction of seed migration according to the following order of preference:

1. An unmarked quiescent or quarantine neighbor with a higher space-value;

2. An unmarked quiescent or quarantine neighbor with the same space-value as that of the cell;

3. An unmarked quiescent or quarantine neighbor with a lower space-value;

4. A single-marked quiescent or quarantine neighbor with a higher space-value;

5. A single-marked quiescent or quarantine neighbor with the same space-value as that of the cell;

6. A single-marked quiescent or quarantine neighbor with a lower space-value.

Double marked cells are never chosen. After passing the seed, the cell increases the value of its own mark by 1.
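The six-level preference order can be sketched as a single ranking function. The (direction, marks, space_value) tuples are an assumption of this illustrative Python fragment.

```python
def pick_direction(neighbors, own_space):
    """Choose the neighbor to receive the migrating seed.
    neighbors: list of (direction, marks, space_value) for eligible
    quiescent/quarantine neighbors; returns None if no choice exists."""
    def rank(n):
        _, marks, space = n
        if space > own_space:
            s = 0            # higher space-value preferred
        elif space == own_space:
            s = 1
        else:
            s = 2
        return marks * 3 + s  # unmarked -> levels 1-3, single-marked -> 4-6

    eligible = [n for n in neighbors if n[1] < 2]   # double-marked never chosen
    if not eligible:
        return None
    return min(eligible, key=rank)[0]
```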

In addition to using the marks and space values in its decision, a cell also uses its pstype. When there exists more than enough space to plant the seed, but the cell has an improper pstype, the cell should prefer a neighbor with a pstype that points the seed in the right direction.

This preference factor should be weighted less than the space preference.

During the compute-state phase, the outcome of the directional decision is placed in the seed-handshake register. At the next compute-state phase of the system clock, the cell will set its SR to the quiescent state.

When a cell finds that another cell has pointed to it in the corresponding directional buffer, the cell loads the global state into its GSR and loads a seed state into its LSR, as long as it is not a quarantine cell. This is the same procedure as was followed in the case that a cell was chosen to accept the seed from a reconfiguration source.

An assumption being made throughout this discussion is that there exists a von Neumann path between any two cells in the fault-free region. In other words, the fault-free region is connected. If the fault-free region were not connected, then two sources of reconfiguration would emerge at the end of neutralization. It is possible then that the array will contain two separate patterns, each contending for the attention of the I/O ports along the edge of the array. If the outside world is prepared for such a potential conflict, then this should not be a problem. The extra pattern is simply ignored.

The following list contains the changes made to Kumar's Seed Migration procedure and the reasons for the changes.

1. The flag O.K. has been renamed OK_PG for clarity. The PG stands for Pattern Growth.

2. A cell no longer changes its mark based on the mark of the cell to which it passes the seed. Cells now increment the mark when accepting a seed. Both methods have essentially the same effect: the mark is incremented in accordance with the number of times the cell has been visited.

3. Computation of OK_PG has been changed.

3.a. The diagonal space value must be checked to determine if there is enough space in the diagonal direction to grow the pattern before setting OK_PG. Without a diagonal space value, a lot of array space would be wasted: over 3 times as many cells would be needed to grow the pattern as what would be used. For example, a 7x7 pattern would have to be grown in a (2*7-1)x(2*7-1) = 13x13 square of fault-free cells. Note that of the BPGM and the GPGM, only the BPGM can take advantage of a diagonal space value; the GPGM cannot.

3.b. If there are both processor and switch cells in the computation plane, then the cell's processor-switch type (pstype) should be checked before setting OK_PG. The pstypes are enumerated as

1 2 3      s s s
4 5 6  =   s p s
7 8 9      s s s

When there exists enough room to grow the pattern but the cell with the seed contains the wrong pstype, then the cell should pass the seed to a neighbor that is in the direction of a cell with the correct pstype.

I am not advocating associating control hyperplane cells with switches, since the complexity of pattern growth and reconfiguration would require a processor and memory in each switch. However, if this is done, then the above is a better solution than Mr. Gollakota's solution to the problem of aligning the grown pattern to the correct processor and switch types. Gollakota suggested, in his thesis, shifting the final pattern until the states in the control plane were above the proper types of cells in the computation plane.

This risks shifting the pattern into a quarantine region -- thus causing reconfiguration to be reinitiated. This same series of events may then repeat itself without end. It makes more sense to plant the seed in the correct type of cell to begin with; then the final pattern will not need to be shifted.

5. A cell must be careful not to pass the seed off the edge of the array. The first solution is to have the "outside world" pass a B (Boundary) state to the cell as a local state. Then the condition LSB <> B can be checked before passing the seed in that direction. A second solution is to have the "outside world" pass edge cells a mark = 2. Since neighbors with a mark = 2 are never chosen, the seed would not be passed off the edge. This second method is the method I have chosen to use in the simulator. A third method is to replace the present Edge flag with directional edge flags. A cell would then check that it is not passing the seed in a direction on which it is an edge cell.

6. A new rule has been invented to handle the case where no choice can be made. Via these rules, the outside world will eventually learn of the death of the array -- as long as there exists a von Neumann path from the cell that last had the seed to an edge of the array.

New Seed Migration Rule 4.9.1: If all neighbors of a Seed cell have been visited twice (mark = 2), the cell makes a transition to the D (Dead) state at the next time step.

New Seed Migration Rule 4.9.2: Any cell with a neighbor in the D state makes a transition to the D state itself.
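The flooding behavior of the D state can be sketched on a small grid. The list-of-lists grid representation is an assumption of this illustrative Python fragment; the point is that the D state propagates one von Neumann neighborhood per time step until it reaches an edge.

```python
def step_dead(grid):
    """One time step of Rule 4.9.2: any cell with a D neighbor
    becomes D at the next time step (synchronous update)."""
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]          # next-state copy
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # von Neumann
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols and grid[ni][nj] == "D":
                    nxt[i][j] = "D"
    return nxt
```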

7. I have placed the process of accepting a seed from either a Qi cell or an Si cell in the Seed_Migration procedure.

A cell knows it should accept a seed when a non-faulty neighbor's handshake line is set in the cell's direction. Since the cell only looks at the neighbor's handshake line and global state i, it does not matter whether the neighbor's local state is a quarantine or a seed state.

When accepting a seed, a cell makes a transition to a Qi or Si state depending on whether it was in the Qo or 0 state respectively. A cell increments its mark when accepting a seed.

4.10 REPLACEMENT TO 6.9 COLLISIONS BETWEEN GROWING PATTERNS AND QUARANTINED REGIONS

The main problem with Dr. Kumar's method of handling collisions is that it creates a second layer of Quarantine cells along the faulty regions with which a growing pattern has collided.

0 0 0 0 0 Q 0 0
0 0 0 0 Q X Q 0
0 0 0 Q X Q Q 0
0 0 Q X Q Q - -
0 0 Q Q Q - - -
0 0 Q Q - - - -
0 0 - - - - - -

First of all, this wastes good cells by keeping them in Q states when they do not have faulty neighbors. Another problem arises because these cells are not actually quarantining any faulty cells, and yet they are in the quarantine state; there is the potential for some confusion in the design of the control algorithms. Upon considering various ways of circumventing these problems, a better method suggests itself.

New Collision Rule 4.10.1: If a quarantine cell finds itself next to a neighbor with a final local state in its LSR and a valid global function in its GSR, then the quarantine cell accepts this neighbor's global state and sets its KF (Contact Flag).

Waiting until Bloomtime synchronizes the reconfiguration process of the quarantine cells with which the pattern has collided.

The Reconfiguration Finite State Machine has been appropriately modified so that if, in state 6, a cell finds KF = True and 0 < GSR <= maxf, it makes a transition to state 2 and reconfiguration commences. The KF is reset if the cell is in state 1, because a previously inactive Q cell has been activated, not re-activated, by the collision.

Note that this rule does not allow patterns to abut quarantine regions. If this is allowed, then the quarantine cell should also check that both minx < XR < maxx and miny < YR < maxy.

4.11 COMMENTS ON 6.10 EXTERNAL SEEDING INTO FAULTY REGIONS

There are no improvements or corrections to rules on external seeding, since there weren't any such rules for 2-dimensional arrays. The simulator does not handle external seeding into faulty regions. I do have some suggestions to make, however.

If cells are capable of testing each other, then it should be possible for the "outside world" to test cells along the edge of the array. It should be possible then to know which cells along the edge are faulty.

Rule 4.11.1: The outside world should not pass a seed to a faulty cell along the edge of the array.

This rule by itself does not guarantee that a pattern will be grown after a seed is ejected. There are new rules in the section on seed migration, however, which take care of informing the outside world when there is a lack of room to grow the pattern. Recall that when the Seed cell has neighbors whose marks are all greater than 2, then the cell maps to a D (Dead) state and all neighbors of a D cell map to the D state. Thus, when a D state finally appears along the edge, it is apparent that there was not sufficient fault-free space to grow the pattern.

Figure 22. Registers and Buffers

(Each cell contains the registers SVR, dSVR, FSR, LSR, GSR, PRR, HPRR, XR, YR, and TR, together with a set of buffers for each direction; for example, NSVB, NdSVB, NLSB, NGSB, and NPB on the north side, and SSVB, SdSVB, SLSB, SGSB, and SPB on the south side.)

This figure illustrates the registers and buffers present in each cell.

CHAPTER V

THE SIMULATOR

5.1 OVERVIEW

The simulator aids a sequential computer in emulating an array of control cells. At each time step, the simulator asks whether the user would like to quit, inject a fault, take a "snapshot" of the control states in the array and continue, or simply continue. Snapshots of the states in the array are sent both to the terminal screen and to a text file. The prompts for user input and the user's responses are displayed only on the screen in order to hold down the size of the output text file. All labels used to represent states have been kept as close as possible to the labels used by Kumar and Gollakota. In section 5.2 the choice of Pascal as a simulation language is defended. In section 5.3 the meaning of the simulator output is explained. For those wishing to maintain or make additions to the simulator, a good deal of documentation is also provided in section 5.4 (data structures) and section 5.5 (routine descriptions). The pattern growth and reconfiguration algorithms have been described in the previous two chapters, and it would be redundant to describe them here. The other algorithms used to run the simulator are very simple and can most easily be gathered from reading the routine descriptions of Init, First_Prompt, Prompt, Snapshot, and Main. Lastly, a sample run of the simulator is presented in section 5.6.

5.2 CHOICE OF PASCAL AS SIMULATION LANGUAGE

Since Pascal is not a standard HDL (Hardware Description Language), I shall explain my reasoning behind using it as a simulation language. This section may be skipped at first reading without loss of continuity.

With existing simulators (such as TILADS, TEGAS, N.2, ISPS, and GSP), the style of the output has been pre-defined and is already set up for the user. The only additional specification required of the user is which variables are to be listed in the output. Having these output routines already set up is normally an advantage. However, to create an output where the results are displayed as an array of states, it is much more convenient if the values to be displayed are more readily available. If a different output format is desired, ready access to the storage locations holding the values to be displayed greatly simplifies the task of displaying the results. With existing simulators, a companion program would have to be created to read and pick out the desired values from the simulator's output file, and then display these values on the screen and to a new output file. Several of the simulation parameters would have to be passed to this "output-reformat" program, for it to know where to find values of states and for it to know how many values to look for in the simulator's standard output format files. Having the display routines within the same set of modules as the simulator also simplifies the conversion of the integer values of states within the simulator into a more readable alphanumeric format.

It was determined that the "register transfer level" was the most appropriate level for simulation of the system. A circuit-level simulation would certainly have been overly detailed and would require a large amount of computer time to execute. A logic-level simulation would also have been possible, but what we really need to know first is whether the system is functionally correct. At the register transfer level, we assume that there exist simple building blocks such as registers and counters for storage, comparators to aid in simple decisions, and simple adders or ALUs to perform simple operations such as addition. Tri-state I/O ports are also assumed. With these simple building blocks we can simulate the system in a fairly realistic manner at the functional level. These building blocks are fairly standard components in most VLSI design "libraries" and in "off-the-shelf" TTL logic, which further justifies their use. A register transfer level description is possible with existing Hardware Description Languages such as ISP'.

However, PASCAL is as powerful as these other languages in terms of constructs, and it is about as easy to gain an understanding of the system description from reading the program.

It is not as easy to specify timing characteristics of cells in PASCAL as it would be in other simulation languages such as TEGAS, TILADS, or GSP. Fortunately, since the system uses the mathematical model of a tessellation automaton, events occur only at discrete points in time. The various timing concerns, such as the amount of time necessary for state computation, can be better determined after a logic-level simulation with a language such as SPLICE. At this point in the system's definition, all timing is synchronous, a master fault-tolerant clock [15] is assumed, and state computations and transfers occur during their appropriate phase of the system clock.
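The synchronous two-phase clocking assumed here can be sketched as a minimal tessellation-automaton loop: all cells compute their next states from the same current states, and only then do the new states become visible. This Python fragment is illustrative only (the simulator itself is Pascal), and the update function is a stand-in for the actual state-transition rules.

```python
def simulate(states, next_state, steps):
    """Run a one-dimensional tessellation automaton for `steps` periods.
    states: list of per-cell states.
    next_state(i, states) -> new state of cell i (may read neighbors)."""
    for _ in range(steps):
        # compute-state phase: every cell reads the same current states
        computed = [next_state(i, states) for i in range(len(states))]
        # pass-state phase: all new states become visible simultaneously
        states = computed
    return states
```

The two-phase separation is what makes the update synchronous: no cell ever sees a neighbor's half-updated state.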

5.3 THE USER INTERFACE

At start-up the user is first prompted for the dimensions of the cellular array, the desired global function, and the seed location.

The dimensions are prompted for as follows,

Array Simulator Version 1.0

Please Enter I - dimension of array

20

Please Enter J - dimension of array

18

If the user had responded with a dimension larger than the simulator is capable of, the user would have been re-prompted and informed of the maximum size of the array.

Due to the limitations of the line length on the display screen, this maximum size is currently 20.

The user is next prompted for the first global function and the location of the seed. For example,

Please enter initial function

1

Please give I coordinate of seed

7

Please give J coordinate of seed

7

Beginning Simulation

Simtime = 0

If the user had given a coordinate for the seed that was outside the dimension that the user chose for the array, then the user is reprompted for the coordinate. For example, if the I dimension had been chosen by the user as 11, then the following may occur:

Please give I coordinate of seed

432

Please stay within bounds 1 to 11.

At every time step, the simulator asks the user whether he or she would like to quit, inject a fault, see and record a 'snapshot' of the array and continue, or simply continue. This prompt is given below.

q = quit, f = fault injection, s = snapshot and continue

any other character = continue, r = register display

If the user responds with q, then the simulator halts execution and returns control to the local operating system environment. If the user responds with f, then Prompt asks the user where the fault should be placed. For example,

f

Please enter I coordinate of fault

5

Please enter J coordinate of fault

6

q-quit, f-inject another fault, *-continue.

If the user were to give a coordinate outside the i or j dimensions of the cellular array, then he/she would be prompted for a new coordinate and reminded of the chosen i and j dimensions. For example, assuming the i-dimension was chosen to be 20 by the user during initial prompting, the following dialogue would occur.

Please enter I coordinate of fault.

234

Please stay within bounds 1 to 20.

When a fault is injected, the simulator sets the status flags of all cells surrounding the faulty cell, in the direction of the faulty cell. For example, NSF[ifault+1, jfault] := true. The next local state of the faulty cell is set to X for display purposes. After this point the cell will compute its state as if it were not faulty, and the surrounding cells will ignore the state of the faulty cell.
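The fault-injection step can be sketched as follows. The actual simulator stores NSF, SSF, ESF, and WSF as Pascal boolean matrices (section 5.4); the Python dictionary of matrices here is an assumption of the sketch. The convention follows the example above: the cell at (ifault+1, jfault) sets its north fault flag.

```python
def inject_fault(flags, ifault, jfault, imax, jmax):
    """Mark cell (ifault, jfault) faulty by setting the directional
    fault status flag of each neighbor that faces the faulty cell.
    flags maps "NSF"/"SSF"/"ESF"/"WSF" to 2-dimensional boolean matrices."""
    if ifault + 1 <= imax:
        flags["NSF"][ifault + 1][jfault] = True  # cell below: north neighbor faulty
    if ifault - 1 >= 1:
        flags["SSF"][ifault - 1][jfault] = True  # cell above: south neighbor faulty
    if jfault + 1 <= jmax:
        flags["WSF"][ifault][jfault + 1] = True  # cell to the east: west neighbor faulty
    if jfault - 1 >= 1:
        flags["ESF"][ifault][jfault - 1] = True  # cell to the west: east neighbor faulty
```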

After injecting a fault the user has the choice of stopping the simulation immediately, injecting another fault, or simply continuing with the simulation. Thus any number of faults may be injected at every time step.

The array of states is only displayed when the user responds with 's' to the prompt. This feature helps reduce the large amount of output that would otherwise occur at each time step.

The simulator snapshot has a fairly basic format. The contents of each cell's State Register is displayed, and these states are arranged as an array just as the processors and switches are arranged as an array in the computation hyperplane. The state is coded for output as either an upper or lower case letter or as a number. Inside each cell, all distinct states have a distinct numerical value. The simulator output routine converts states to characters before output, so that it is easier for a human viewer to "see" what is going on. If a state register contains a priority value, the last three digits of the numerical value in the SR are displayed. Recall that priority values are all greater than ns and are used as a part of the process of eliminating superfluous reconfiguration sources. Since the quarantine cell with the highest priority value "wins," it seems appropriate to output priorities as numerical values.

Keeping the value displayed to 3 digits holds down the length of the lines.
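The display coding just described can be sketched as follows. Python is used only as an illustration of the output rule; ns = 256*(maxf+1) with maxf = 4 follows section 5.4, and the letter table passed in is an assumption (the real table follows the state list below).

```python
NS = 256 * (4 + 1)    # ns = 256*(maxf+1) with maxf = 4, per section 5.4

def display_code(sr, letter_for_state):
    """Return the character(s) printed for one cell's State Register.
    Priority values (SR >= ns) show their last three digits; other
    states map through the letter table, defaulting to quiescent '0'."""
    if sr >= NS:
        return "%03d" % (sr % 1000)        # last three digits of priority
    return letter_for_state.get(sr, "0")   # letter code for ordinary states
```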

The labels used to correspond to special states, such as Q for quarantine and S for seed, have been kept as close as possible to the labels used in Kumar's dissertation and Gollakota's thesis. Thus there is a direct correspondence between the meaning of figures in their work and the meaning of the simulator's output display. The following list provides a compilation of the letters used for special states and the meanings of these states.

0 - Quiescent states indicate unused cells. The LSR and GSR are both zero for these cells. Available cells are what gives the array its redundancy which, in turn, gives us fault tolerance.

S - The Seed state contains the information necessary to initiate the growth of a control pattern (i.e., a global state corresponding to a valid global function). A cell containing the seed state makes a decision of whether the seed should come to rest in it and begin the process of pattern growth, or to choose which of its neighbors to pass the seed.

R - The seed-at-rest state is assumed by the cell about to initiate pattern growth.

G - The Growth state indicates that a cell is participating in pattern growth. A cell participating in pattern growth has non-zero X, Y, and T Registers, and its PGF is set.

X - Faulty cell. Cells periodically test their neighbors and set the corresponding directional flag in their FSR (fault status register) when the neighbor fails a test.

Q - Quarantine cells isolate faulty cells and participate in the process of reconfiguration if they contain a valid global state.

Y - Quarantine cells participating in clearing have Y local states. However, since there are two types of quarantine cells participating in clearing mode, Y local states are not displayed.

W, V - Neutralized and non-neutralized quarantine cells participating in reconfiguration are denoted by W and V respectively. Cells with Y local states and zero global states are represented by W, while cells with Y local states and non-zero global states are represented on the display as V. After clearing mode, these states move to Qo and Qi states respectively.

z - The clearing state is the intermediate state of non-quarantine cells participating in clearing.

For a more in-depth description of these states and the rules governing reconfiguration, see Kumar's dissertation [22], and Chapters III and IV of this thesis.

Another feature of the user interface, useful mainly for debugging, is the 'r' command. Typing r puts the user in a mode that allows him or her to examine the values of registers in the array. The user is prompted for which register is to be displayed, and then the contents of that register are displayed for all cells in the array. The user then has the option of examining another register or going back to the regular simulation mode. The values are not interpreted for the user as with a snapshot; the values displayed are the actual integers in the registers.

5.4 DATA STRUCTURES

The data structures within the simulator are very simple, nothing more complex than a matrix. I will explain why and how matrices are used to hold register, flag, and buffer values for the array of cells. The meanings of all constants, types, and vars are explained. Both the data structures used for holding cell information and the data structures used to control the flow of the simulator are described.

CONSTS

The constants declared in the simulator are maxsize, maxf, nu, mu, ns, and nss. Maxsize is the maximum size of the X and Y dimensions of cellular arrays that may be simulated. This constant is used to initialize the size of matrices in the simulator. Pascal does not allow variable array dimensions, so the next best thing is to localize the size as a const at the beginning of the program. If arrays larger than 20x20 are desired, then maxsize must be increased beyond 21.

Constant maxf is the maximum number of global functions that the array is capable of executing. Currently there are 4 possible patterns, and these are read in during initialization. When adding more functions, increase maxf.

Constants nu and mu define the time required for neutralization and the time required for clearing, respectively. Both nu and mu are currently 15.

Constant ns is the number of states that a cell may assume. Since the highest global state must be below maxf, and since the simulator assumes both the GSR and the LSR are 8 bits wide, ns = 256*(maxf+1). A value in the SR greater than or equal to ns must be a priority value.
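The packed state encoding can be illustrated with a short sketch (Python here rather than the simulator's Pascal; the function names are illustrative, not identifiers from the program):

```python
MAXF = 4                 # number of global functions, as in the text
NS = 256 * (MAXF + 1)    # ns: number of ordinary (non-priority) states

def pack_sr(gsr, lsr):
    """Combine the 8-bit GSR and LSR halves into the State Register value."""
    return 256 * gsr + lsr

def unpack_sr(sr):
    """Split a State Register value back into its (GSR, LSR) halves."""
    return sr // 256, sr % 256

def is_priority_value(sr):
    """Any SR value at or above ns is interpreted as a priority bid."""
    return sr >= NS
```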

Constant nss is the number of "special states", i.e., the number of reserved local state values. The special local states presently are 0, G, Q, S, R, Y, X, Z, D, and K. There is room for another 10 special local states to be added. Each special state is a declared constant and is given a value less than or equal to 19.

TYPES

Types matrix and matrixb are each 2-dimensional matrices whose element types are integer and boolean respectively. The indices of matrix and matrixb range between 0 and maxsize. These types are used in the declarations of the registers and the flags.

Type directions is an enumerated type of constants (n, so, e, w, m). The directions are used as the element type of the 2-dimensional array type matrixd, which is the type of the handshake register. We type the directions so that they may be printed out as characters.

Type globaltype is an array of integers ranging from 1 to maxf. The pattern growth parameters are declared using this type. Type tabletype = array[1..maxf, 0..maxsize, 0..maxsize] is used to type the Table of final states.

FILE VARS

The simulator only uses a few text files. Text files mi and mo are used for terminal input and output and should only be declared when the simulator is running on a computer system that requires declaration of interactive I/O. File textin holds the pattern growth parameters and final state tables. There are currently 4 patterns in this file. File stextin holds an alphabet of characters used to display the final states of cells in patterns. File textout holds snapshots of the array taken during simulation.

SIMULATOR VARS

Some variables are only used by the simulator for the purposes of keeping track of the simulation and are never used by the cells in making decisions. The simulator vars include idim, jdim, simtime, done, state, h, and snap. The integer vars idim and jdim hold the number of cells in the i-dimension and j-dimension respectively. The user is allowed to choose these during the first series of prompts. They must be chosen between 0 and maxsize, which is currently 20. The current simulation time is held in simtime. The flag done is used by Prompt to communicate to the main program whether the user wishes to end the simulation. Array state holds the final state character alphabet, and is used to convert the integer final states to characters for display purposes. Flag snap is set by Prompt to tell the main program when the user requests a snapshot of the current contents of the array. Integer variable h is used as a loop index in the main program.

RECONFIGURATION AND PATTERN GROWTH VARS

For every register in the cell there is a corresponding pair of matrices in the simulator. One matrix is used to hold the register values computed at the last time step for all the cells in the array. The second matrix is used to hold the values being computed at the present time step. Essentially, one matrix holds the values on the output lines of the register while the other holds the values on the input lines to the register. Matrices holding the old values are prefaced with an "o" and matrices holding the new values are prefaced with an "n". The values of all registers in cell (i,j) are held in position [i,j] of the matrices. The dimensions are 0..maxsize. The register matrices are listed below.

Matrices oLSR and nLSR contain the old and new values of the Local State Register, oGSR and nGSR contain the old and new values of the Global State Register, and oSR and nSR contain the old and new values of the State Register. As explained by Kumar, the State Register is divided into the Local State Register and the Global State Register. In the simulator, it makes referencing these halves easier if we maintain separate matrices for the SR, GSR, and LSR. At the beginning of every time step, oSR contains 256*oGSR + oLSR, assuming an 8 bit register length for the LSR.
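The old/new matrix pairing amounts to double buffering a synchronous update. A minimal sketch of the idea (Python rather than Pascal, with a placeholder rule standing in for the actual next-state computation):

```python
SIZE = 4  # a small array; the simulator's matrices run 0..maxsize

# one "old" and one "new" matrix per register, as described above
oLSR = [[0] * SIZE for _ in range(SIZE)]
nLSR = [[0] * SIZE for _ in range(SIZE)]

def compute_next_state():
    # every cell reads only the o-matrix and writes only the n-matrix,
    # so the order in which cells are visited cannot affect the result
    for i in range(SIZE):
        for j in range(SIZE):
            nLSR[i][j] = oLSR[i][j] + 1  # placeholder local rule

def pass_state():
    # at the start of the next time step, new values become the old ones
    for i in range(SIZE):
        for j in range(SIZE):
            oLSR[i][j] = nLSR[i][j]
```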

The old and new SVR and the old and new DSVR are held in matrices oSVR, nSVR, odSVR, and ndSVR, respectively.

Since register PRR is read-only, we only need one matrix, PRR, to hold its value. HPRR is used to keep track of the highest priority value in a cell's neighborhood during a time step.

PSR holds 9 for processors and 1..8 for switches, depending on their relative position to the nearest processor.

The contents of the position and time registers used in pattern growth are held in matrices oXR, nXR, oYR, nYR, oTR, and nTR.

In order to keep down the cost of simulation, not all of the buffers are given matrices in the simulator. The values that a cell would send to the State Buffers of its neighbors are held in SB, GSB, and LSB. Rather than examining the appropriate directional Local and Global State Buffers, the cell examines the buffer value of its neighbor. For example, instead of examining NGSB[i,j], EGSB[i,j], WGSB[i,j], and SGSB[i,j], the cell would examine GSB[i-1,j], GSB[i,j+1], GSB[i,j-1], and GSB[i+1,j]. This may not be possible in a hardware implementation, but it is an easy way to save space in the simulator. Since the Position Buffer always passes the XR east and west and the YR north and south, the neighbor's registers can be examined instead of examining the neighbor's Position Buffer. The same register contents are not always passed in the State Buffer, so this trick is not possible in that case. For more realism, we may wish to add the matrices for the directional buffers in each cell, but this will cost more storage on the sequential computer executing the simulator.

The last pieces of information shared between cells are the mark and the handshake. The mark is held in omark and nmark, and the handshake line value is held in ohandshake and nhandshake.
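The space-saving trick can be sketched as a single shared buffer matrix indexed by the neighbor's position (a Python sketch with illustrative names; the directional example follows the GSB case above):

```python
SIZE = 5
GSB = [[0] * SIZE for _ in range(SIZE)]  # one Global State Buffer per cell

# offsets to the von Neumann neighbors: north, east, west, south
NEIGHBOR_OFFSETS = {"n": (-1, 0), "e": (0, 1), "w": (0, -1), "so": (1, 0)}

def neighbor_gsb(i, j, direction):
    """Read the value a neighbor is offering, in place of a directional
    buffer such as NGSB[i][j]; cells off the edge read as 0 here."""
    di, dj = NEIGHBOR_OFFSETS[direction]
    ni, nj = i + di, j + dj
    if 0 <= ni < SIZE and 0 <= nj < SIZE:
        return GSB[ni][nj]
    return 0
```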

The arrays used to hold pattern growth parameters are all of type globaltype, which means they have as many entries as there are valid global functions, i.e., maxf. The arrays holding pattern growth parameters are: cenx, ceny, maxx, maxy, minx, miny, Bloomtime, hvspace, dspace, and pstype. These parameters allow for the most general positional PGM.

The Tables of Final Local States for all functions are held in Table. All tables are presently as large as the entire array, instead of only being as large as their pattern. This is because Pascal does not allow variable dimensions in arrays. In an actual implementation we would probably store the starting address for each table instead.

Since all flags are updated within each cell, we can dispense with having an "old" and "new" version of each flag. We must, of course, be careful to assign only one value to a flag within any time step. The boolean flag matrices include: PGF (Pattern Growth Flag), CM (Clear Mode), Edge (Edge Flag), NF (Neutralization Flag), NM (Neutralization Mode), RSF (Register Swap Flag), OK_PG (OK for Pattern Growth), and KF (Contact Flag).

Instead of having one FSR (Fault Status Register), I have a separate boolean matrix for each of the 8 status flags. This is somewhat wasteful in terms of storage, but it makes the usage a little easier. The Status Flags are: NSF, SSF, WSF, ESF, NWSF, NESF, SWSF, and SESF.

The state of the Reconfiguration Finite State Machine is held in matrix RFSM. The matrices representing the clocks used in reconfiguration are neutclock, clearclock, and waitclock.

PSM is used to hold the processor switch mapping that determines which direction a seed should head to reach the nearest cell of the correct pstype. The factors that nudge the seed in the right direction are held in Nfact, Efact, Wfact, and Sfact.

5.5 ROUTINE DESCRIPTIONS

This section describes the routines within the simulator. For each routine, a general description of its purpose and how it works is given. For a more detailed description of the data structures or algorithms, please see the previous sections, Chapters III and IV of this thesis, and Kumar's dissertation. This section should not be considered a definition of the pattern growth and reconfiguration algorithms, but an aid in understanding and maintaining this simulator for these algorithms. However, it is also hoped that after reading this section one will have a better idea of how the rules can be implemented. The interface for each routine is also described. Most routines are only accessible within the simulator module, while others, such as Prompt, interact with the terminal or with data files. Most interfaces between procedures are via "side effects"; that is, the change of a register value or flag in one routine may have an effect on another routine. This is a reflection of the organization of Kumar's Rules. The call structure is shown in the next figure, and is further explained in the routine descriptions.

CALL STRUCTURE CHART

Main
    Init
    First_Prompt
    Prompt
    Pass_Space_Values
    Compute_Space_Values
        minneighbor
        dminneighbor
    Pass_State
    Compute_Next_State
        Reconfig_FSM
            Eject_Seed
        Neutralize
        Clear_State_Registers
        Seed_Migration
        Pattern_Growth
        Collision
        Quarantine
    Snapshot

Figure 23: Call Structure of Simulator.

5.5.1 RECONFIGURATION AND PATTERN GROWTH ROUTINES

Pass_Space_Values updates the space value registers SVR and DSVR with the new values computed at the last time step.

In a more realistic implementation, this routine would also handle the passing of space values between cells. The space value is first passed north and east, while receiving values from the south and west. The SVR contents are then passed south and west, while values are received from the north and east. The details of the intercell I/O cycles are described in the changes to Kumar's algorithms. Called by Main.

Interface:

Arrays Examined: nSVR, nDSVR.

Arrays Altered: oSVR, oDSVR.

minneighbor computes the minimum space value among a cell's north, south, east, and west neighbors and itself. Function minneighbor is called by Compute_Space_Values.

Interface:

Input Parameters: i,j -- indices of cell whose space value is being computed.

Output Return Value: minneighbor equals minimum space value in von Neumann neighborhood.

Arrays Examined: oSVR -- current contents of Space Value Register.
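Under the two-matrix scheme of Section 5.4, minneighbor can be sketched as follows (Python, not the thesis's Pascal; the edge handling shown is an assumption):

```python
SIZE = 4
oSVR = [[0] * SIZE for _ in range(SIZE)]  # old Space Value Register matrix

def minneighbor(i, j):
    """Minimum space value over cell (i, j) and its N, S, E, W neighbors."""
    values = [oSVR[i][j]]
    for di, dj in ((-1, 0), (1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < SIZE and 0 <= nj < SIZE:  # skip neighbors off the edge
            values.append(oSVR[ni][nj])
    return min(values)
```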

dminneighbor computes the minimum diagonal space value among a cell's NW, NE, SW, and SE neighbors and itself. Called by Compute_Space_Values.

Interface:

Input Parameters: i,j -- indices of cell whose space value is being computed.

Output Return Value: dminneighbor equals minimum space value of diagonal neighbors and cell.

Arrays Examined: odSVR -- current contents of Diagonal Space Value Register.

Compute_Space_Values computes the next space value and diagonal space value for each cell in the array. This procedure calls functions minneighbor and dminneighbor, and is called by the main program.

Interface:

Arrays Examined: oLSR, oGSR, oSVR, oDSVR, NWSF, NESF, SWSF, SESF.

Arrays Altered: nSVR, ndSVR.

Pass_State updates the state register values. In a more realistic implementation, Pass_State would also contain the protocol to handle the passing of states between cells. Pass_State calls no other routines, and is called by the main program.

Interface:

Arrays Examined: nLSR, nGSR, nPRR, nhandshake, nmark, nXR, nYR, nTR, RSF, LSB.

Arrays Altered: oLSR, oGSR, oSR, oPRR, ohandshake, omark, oXR, oYR, oTR, SB, GSB, LSB.

Quarantine checks the N, S, E, and W Status Flags and places the cell in the quarantine state if any of these flags have been set and the cell is in RFSM state 1. In an actual implementation, these flags will be set according to the results of testing procedures. In the simulator, the status flags are set when a user injects a fault in a neighboring cell. The RFSM states are governed by Reconfig_FSM. RFSM = 1 indicates that the cell has not entered the neutralization or clearing modes yet. The cell would not want to set the LSR to Q during a clearing mode, since its state should be Y during the clearing mode. If a cell that was participating in pattern growth makes a transition to the Quarantine state, then the waitclock must be initialized for a countdown to Bloomtime in RFSM state 1, and the pattern growth flag and registers are cleared.

Procedure Quarantine is called by procedure Compute_Next_State.

Interface:

Arrays Examined: NSF, SSF, WSF, ESF, RFSM, oGSR, PGF, Bloomtime, oTR.

Arrays that may be Altered: nLSR, nXR, nYR, nTR, nGSR, PGF, waitclock.

Eject_Seed transfers a seed into the array from the Quarantine cell acting as the reconfiguration source. This procedure divides the possible situations into two cases. Case 1 indicates that there exists a non-faulty neighbor in the quiescent state. Case 2 indicates that the only non-faulty neighbors are in the Quarantine state. If there are no non-faulty neighbors, the cell would be cut off from the rest of the array, and it does not matter what happens if it ever executes this procedure.

If case 1 holds, Eject_Seed chooses a neighbor in the quiescent state to which to pass the seed. A neighbor with no mark is chosen over a neighbor with a single mark, and no cell is chosen if it has more than 1 mark. (The number of marks indicates the number of times the cell has been visited.) The handshake is set to a value corresponding to the direction of the neighbor being chosen. To break ties, the order of directional preference is E, W, S, N. The directional preference is somewhat arbitrary, but it does have the effect of moving the seed either across or down before moving it up.

The decisions in case 2 are similar to those in case 1 except a quarantine cell is being chosen.

Eject_Seed does not check the space value of the neighbor, as is done in Seed_Migration, since a quiescent neighbor will always have an SVR = 0, and a Quarantine neighbor will always have an SVR = -1. Called by Reconfig_FSM; calls no routines.

Interface:

Input Parameters: i,j -- index of Q cell ejecting seed.

Arrays Examined: LSB, NSF, SSF, WSF, ESF, omark.

Arrays Altered: nhandshake.
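The case 1 selection rule can be sketched as follows (Python with illustrative names; the tie-break order E, W, S, N from the text is applied within each mark class):

```python
# directions in tie-break order, each with its (di, dj) offset
PREFERENCE = [("e", (0, 1)), ("w", (0, -1)), ("so", (1, 0)), ("n", (-1, 0))]

def choose_neighbor(marks, quiescent, i, j):
    """Pick the direction of a quiescent neighbor of cell (i, j):
    unmarked neighbors first, then singly marked ones; neighbors with
    2 or more marks are never chosen. Returns a direction or None."""
    for allowed_mark in (0, 1):
        for name, (di, dj) in PREFERENCE:
            ni, nj = i + di, j + dj
            if quiescent.get((ni, nj)) and marks.get((ni, nj), 2) == allowed_mark:
                return name
    return None
```

Case 2 would use the same skeleton with a quarantine predicate in place of the quiescent one.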

Reconfig_FSM implements the Reconfiguration Finite State Machine. In the simulator, the current state of cell (i,j) is saved in RFSM[i,j]. The 6 states of the machine correspond to 6 labels of the case statement which forms the body of the procedure. The actions taken in each state and the transitions out of each state are embodied in the Pascal statements within the begin-end block of each case statement label.

State 1 is the initial state of all cells. A transition to state 2 is made when a cell with a global function in its GSR quarantines a fault. If it was participating in pattern growth before assuming the Q state, a Q cell will wait until Bloomtime, via a countdown using the waitclock, before making a transition to state 2. After the neutralization phase in state 2, the cell moves to the clearing phase in state 3. Cells move from state 3 to state 4 at the end of the clearing phase. In state 4, neutralized cells make a transition to state 6 and the one cell that is not neutralized will make a transition to state 5. In state 5, the reconfiguration source will transmit the seed information to a fault-free neighbor, and then transition to state 6. Cells remain dormant in state 6 until they are either passed a seed by a neighbor, or a collision with a final pattern occurs. A transition is made to state 5 on the former and to state 2 on the latter. Reconfig_FSM is called by Compute_Next_State, and calls Eject_Seed when in state 5.

Interface:

Arrays Examined: oLSR, oGSR, waitclock, RFSM, KF, NM, neutclock, RSF, CM, NF, clearclock.

Arrays Altered: RFSM, KF, waitclock, neutclock, clearclock, NM, RSF, nLSR, nGSR.
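The state transitions just described can be summarized in a small table (a Python sketch; the event names are hypothetical labels for the conditions in the text, not identifiers from the program):

```python
# (current state, event) -> next state, following the description above
TRANSITIONS = {
    (1, "fault_quarantined"): 2,   # Q cell with a global function
    (2, "neutralization_done"): 3,
    (3, "clearing_done"): 4,
    (4, "neutralized"): 6,
    (4, "not_neutralized"): 5,     # the surviving reconfiguration source
    (5, "seed_ejected"): 6,
    (6, "seed_received"): 5,
    (6, "pattern_collision"): 2,
}

def next_state(state, event):
    """Return the next RFSM state, or stay put if the event does not apply."""
    return TRANSITIONS.get((state, event), state)
```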

Neutralize neutralizes superfluous reconfiguration sources. Quarantine cells which have a valid function in their GSR essentially bid on who will become the Qi cell to eject a seed into the array. These bids were termed "priority values" by Kumar, and are loaded on reset into the PRR (Priority Register). The bids are passed between cells until the highest bid permeates the array.

The State Buffers of each of the neighbors are checked to see if any contain a priority value. If a neighbor is passing a priority value, as indicated by a neighbor's SB > ns (the number of states), then the cell determines the largest priority value in its neighborhood. If the cell is a Qi or Qo cell, then the highest priority value is saved in the HPRR. All other cells, except Y and 'Y cells, save the highest priority value in the State Register. (The Y and 'Y quarantine cells initiate the clearing phase.) If the Register Swap Flag RSF[i,j] is set and if any neighbor contains a higher priority value than the Quarantine cell's PRR, the Quarantine cell "neutralizes" itself by setting NF[i,j] to false.

Procedure Neutralize is called at every time step by Compute_Next_State. No routines are called by Neutralize.

Interface:

Arrays Examined: SB, oLSR, NSF, SSF, WSF, ESF, oPRR.

Arrays Altered: HPRR (if a quarantine cell), SR (if not a quarantine cell and not a Y or 'Y cell).
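The bidding amounts to a distributed maximum computation: each cell repeatedly adopts the largest priority value in its neighborhood, so after enough steps the highest bid permeates a connected region. One synchronous step can be sketched as (Python, illustrative names):

```python
SIZE = 3

def neutralize_step(old):
    """One step of priority propagation: every cell takes the maximum
    of its own value and its N, S, E, W neighbors' values."""
    new = [[0] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            best = old[i][j]
            for di, dj in ((-1, 0), (1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < SIZE and 0 <= nj < SIZE:
                    best = max(best, old[ni][nj])
            new[i][j] = best
    return new
```

After a number of steps equal to the distance between the two farthest cells involved, every cell holds the maximum bid, which is why a fixed clock value can time the phase.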

Clear_State_Registers clears the Local State, Global State, and Pattern Growth Registers under conditions in rules set down by Kumar and corrected by Brighton. The clearing phase follows the neutralization phase, so the SRs of cells other than Quarantine cells will contain priority values just before clearing. A wave of Z states spreads out across the array, leaving quiescent states in its wake. Cells with Quarantine states remain in the Quarantine state.

As long as the cell is not in the Q, Y, Z, or 0 state, if a neighbor is in the Y state, the cell sets nLSR[i,j] to Z and nGSR[i,j] to 0; else, if a neighbor is in the Z state, it sets nLSR to Z and nGSR to 0. (The Y local state is assumed by quarantine cells in the clearing mode in procedure Reconfig_FSM.) If oLSR[i,j] = Z, then cell (i,j) clears its State Register and Pattern Growth Registers.

Clear_State_Registers is called by Compute_Next_State; no routines are called by Clear_State_Registers.

Interface:

Arrays Examined: oLSR, LSB.

Arrays Altered: nLSR, nGSR, nXR, nYR, nTR.
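The wave rule can be sketched as a one-dimensional toy (Python, illustrative names): a cell next to a Y or Z cell becomes Z, and a cell that was Z clears to quiescent ("0"), so the Z front moves outward leaving cleared cells behind.

```python
def clear_step(row):
    """One synchronous step of the clearing wave on a row of local states.
    'Q' cells are quarantine cells, 'Y' starts the wave, '0' is quiescent;
    any other symbol stands for a priority value awaiting clearing."""
    new = list(row)
    for i, s in enumerate(row):
        if s in ("Q", "Y", "Z", "0"):
            if s == "Z":
                new[i] = "0"  # a Z cell clears its registers
            continue
        neighbors = row[max(i - 1, 0):i] + row[i + 1:i + 2]
        if "Y" in neighbors or "Z" in neighbors:
            new[i] = "Z"  # the wave reaches this cell
    return new
```

(The toy leaves the Y cell fixed; in the simulator the Y cells later leave the clearing mode through Reconfig_FSM.)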

Seed_Migration is used by a cell to accept a Seed state, and to decide what to do with the seed. If a cell passed a seed at the last time step, as indicated by the handshake, then the cell returns to either the quiescent or quarantine state. A cell marks itself every time it accepts a seed. If the cell decides to keep the seed and begin pattern growth, it sets OK_PG. If the cell does not set OK_PG, it must decide to which neighbor to pass the seed. Unvisited neighbors (mark = 0) are preferred over neighbors that have possessed a seed once (mark = 1). No cell with a mark = 2 is ever chosen. In fact, if all neighbors have mark = 2, the cell takes on a D (Dead) state to indicate that reconfiguration has failed. The "outside world" should pass a mark = 2 to edge cells so that the seed is not passed off the edge of the array. This is achieved in the simulator by a loop in First_Prompt that initializes the marks just outside the edge to 2.

Of those cells with the preferred mark, a cell with the highest space value plus diagonal space value sum is chosen. When space is not a problem, the "nudge factors" are added to this sum, to direct the seed towards the nearest cell of the correct pstype. When a tie remains, the priority is east, west, south, north.

Seed_Migration also returns the cell to a non-seed state the time step after the seed has been passed. Quarantine cells become Qo and other cells go into the quiescent state. Called by Compute_Next_State; calls no routines.

Interface:

Arrays Examined: ohandshake, oLSR, oGSR, oSVR, odSVR, hvspace, dspace, PSR, pstype, OK_PG, NSF, SSF, ESF, WSF.

Arrays Altered: nhandshake, nmark, nGSR, nLSR, OK_PG.
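The selection among preferred neighbors can be sketched as a scoring rule (Python; the names are illustrative, and the tie-break follows the order east, west, south, north given above):

```python
# tie-break order: earlier entries win ties
DIRECTIONS = ["e", "w", "so", "n"]

def pick_direction(space, dspace, nudge, add_nudge):
    """Choose the direction whose neighbor has the highest
    space value + diagonal space value (plus a nudge factor when
    space is not a problem). Ties fall to the earlier direction."""
    best, best_score = None, None
    for d in DIRECTIONS:
        score = space[d] + dspace[d]
        if add_nudge:
            score += nudge[d]
        if best_score is None or score > best_score:
            best, best_score = d, score
    return best
```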

Pattern_Growth handles the growth of final patterns of local states corresponding to the desired global function. When a seed is planted in a cell by Seed_Migration, Pattern_Growth sets nLSR to G and initializes XR and YR to the correct position near the center of the final pattern; clears the Time Register TR; sets its PGF (Pattern Growth Flag); and clears OK_PG. The PGF is set as long as a cell is participating in pattern growth. OK_PG is cleared by the seed cell so that the seed is only planted once.

When a cell realizes it has a non-faulty neighbor with a G local state and a valid global function, the cell checks to see if it is within the bounds of the pattern to be grown. If so, the cell will compute its position in the pattern using the position of its neighbor. It will also compute the correct time within pattern growth from its position, using the formula

TR := abs(XR - cenx[GSR] div 2) + abs(YR - ceny[GSR] div 2) + 1.

It then accepts the neighbor's global state, sets PGF, and assumes a local state of G.

At every time step after the setting of PGF, Pattern_Growth checks the Time Register to see if the value stored is greater than or equal to the Bloomtime. At Bloomtime, the final local state is found in array Table using the global state and position as indices. PGF and the pattern growth registers are reset after assuming the final local state. Until Bloomtime, the Time Register TR is incremented at each time step.

Pattern_Growth is called by Compute_Next_State, and does not call any routines.

Interface:

Arrays Examined: OK_PG, PGF, oXR, oYR, oTR, GSB, LSB, maxx, minx, maxy, miny, cenx, ceny.

Arrays Altered: nXR, nYR, nTR, OK_PG, nLSR, nGSR, PGF.
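The time-register initialization above can be checked with a direct transcription (Python; `//` stands for Pascal's div, which binds before the subtraction, and the parameter names mirror the registers):

```python
def initial_tr(xr, yr, cenx_g, ceny_g):
    """TR := abs(XR - cenx[GSR] div 2) + abs(YR - ceny[GSR] div 2) + 1,
    for a cell at pattern position (XR, YR) growing global function GSR."""
    return abs(xr - cenx_g // 2) + abs(yr - ceny_g // 2) + 1
```

A cell at the pattern center thus starts with TR = 1, and TR grows by 1 per unit of Manhattan distance from the center, matching the one-cell-per-step spread of the growing pattern.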

Collision checks for collisions of growing patterns with Qo cells. The Quarantine cell waits until the colliding pattern 'blooms' before loading the global state of the pattern into the Quarantine cell's global state register and setting the KF (contact flag). Procedure Reconfig_FSM takes care of the reinitiation of the reconfiguration process when it finds the Quarantine cell in RFSM state 6 with KF set and a valid global function in the GSR. Collision is called by Compute_Next_State, and calls no routines.

Interface:

Arrays Examined: oLSR, oGSR, NSF, ESF, WSF, SSF, LSB, GSB.

Arrays Altered: nGSR, KF.

Compute_Next_State implements the next state mapping transformation termed crst by Kumar. At every time step, the main program calls Compute_Next_State, and in turn, Compute_Next_State calls all the procedures that are used in determining the next state. The procedures called by Compute_Next_State are, in order, Reconfig_FSM, Neutralize, Clear_State_Registers, Seed_Migration, Pattern_Growth, Collision, and Quarantine. An attempt has been made to keep the ordering of the routines from affecting the outcome of the next state computation. This has been achieved by keeping the logic within each routine fairly well self-contained. An experimental hardware implementation may be necessary to eliminate any overlooked race conditions. The next state computed by Quarantine should take priority over states computed by other procedures.

Interface:

No variables are altered within Compute_Next_State itself, but nearly all registers and flags are examined, or altered, by the routines that are called by Compute_Next_State.

5.5.2 SIMULATOR SERVICE ROUTINES

First_Prompt prompts the user for the dimensions of the cellular array, the desired global function, and the seed location.

Procedure First_Prompt also performs a small amount of initialization for the cells. Edge[i,j] is set for all cells on the edge of the Cellular Array. A mark of 2 is given to all cells just outside the edge of the Cellular Array.

The Simulator will only compute next states for those cells in cellular array positions (1,1) through (idim,jdim). Thus, only the Pascal array positions within those bounds will be altered during simulation.

The cell chosen as the initial location of the seed is given a global state f corresponding to the requested global function f, and a local state of S.

Interface:

Terminal input and output.

Arrays Examined: none.

Arrays Altered: nLSR, nGSR, Edge, nmark, omark.

Variables Altered: idim, jdim, f.

Prompt is executed at every time step to ask the user whether he/she would like to quit, inject a fault, see and record a 'snapshot' of the array and continue, or simply continue. This prompt is given below.

q = quit, f = fault injection, s = snapshot and continue
any other character = continue, r = register display

The complete description of the prompts is given in the user interface section. If the user responds with q, then the simulator halts execution and returns control to the local operating system environment. If the user responds with f, then Prompt asks the user where the fault should be placed.

When a fault is injected, procedure Prompt sets the status flags of all cells surrounding the faulty cell, in the direction of the faulty cell. For example, NSF[ifault+1, jfault] := true. The next local state of the faulty cell is set to X for display purposes. After this point the cell will compute its state as if it were not faulty, and the surrounding cells are to ignore the state of the faulty cell. After injecting a fault the user has the choice of stopping the simulation immediately, injecting another fault, or simply continuing with the simulation. Thus any number of faults may be injected at every time step.
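The flag-setting on injection can be sketched as follows (Python with illustrative names; each neighbor's flag points back toward the faulty cell, as in the NSF example above):

```python
SIZE = 5
# one boolean matrix per direction, as in the simulator's status flags
flags = {d: [[False] * SIZE for _ in range(SIZE)] for d in
         ("NSF", "SSF", "ESF", "WSF", "NWSF", "NESF", "SWSF", "SESF")}

# neighbor offset -> flag that neighbor raises toward the faulty cell
TOWARD_FAULT = {(1, 0): "NSF", (-1, 0): "SSF", (0, -1): "ESF", (0, 1): "WSF",
                (1, 1): "NWSF", (1, -1): "NESF", (-1, 1): "SWSF", (-1, -1): "SESF"}

def inject_fault(fi, fj):
    """Set, in each cell around (fi, fj), the status flag that points
    at the faulty cell, e.g. NSF[fi+1][fj] for the cell to its south."""
    for (di, dj), flag in TOWARD_FAULT.items():
        ni, nj = fi + di, fj + dj
        if 0 <= ni < SIZE and 0 <= nj < SIZE:
            flags[flag][ni][nj] = True
```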

If the user responds with 's' to the initial prompt, Prompt sets flag snap. This is a signal to the main program to call procedure Snapshot to output the local states of the cells in the array. This feature helps reduce the large amount of output that would otherwise occur at each time step.

Interface:

Terminal I/O.

Arrays Examined: none.

Arrays Altered: nLSR, NSF, SSF, ESF, WSF, NWSF, NESF, SWSF, SESF.

Variable Altered: snap.

Init initializes all arrays in the simulator program. Arrays corresponding to LSR, GSR, SVR, DSVR, XR, YR, and TR are cleared. The PRR is assigned values in a linear fashion. The neutralization and clearing clocks are reset to 0. The handshake arrays take on directions of m (for middle). The marks of all cells are initially zero, since reconfiguration has not taken place yet. The RFSM state is initialized to 1 for all cells. All internal flags, including PGF, Edge, CM, NM, NF, RSF, KF, OK_PG, and all the fault status flags, are reset to false. The arrays containing the pattern growth parameters and the array Table containing the final patterns are initialized using the contents of the file textin. Only four patterns are currently available in textin. For explanations of the patterns, please refer to Gollakota's thesis; for our purposes here they are simply arbitrary control patterns.

The array state[0..26] is initialized using file stextin and is used to convert numerical values of states to semi-meaningful letter states in the procedure Snapshot.

The processor switch type values are assigned to the array PSR. The assignment is based on the cell's position in the array, in the manner of the pattern below, where the upper left hand corner cell (1,1) is a switch with pstype 1.

    S S S       1 2 3
    S P S   =   4 5 6
    S S S       7 8 9

Interface:

Arrays Altered: all.

Arrays Examined: none.

Variables Altered: none.
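If the 3x3 tile above repeats across the array, the pstype assignment can be sketched as (Python; the wrap-around tiling is an assumption drawn from the figure, not stated explicitly in the text):

```python
def pstype(i, j):
    """pstype of cell (i, j), 1-indexed, for a repeating 3x3 tile whose
    upper left cell (1,1) has pstype 1; every third cell in each
    dimension therefore shares a type."""
    return ((i - 1) % 3) * 3 + ((j - 1) % 3) + 1
```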

Snapshot is called by the main program to display the states in the array of cells. The state displayed is normally the Local State; during clearing, the local state is Y for quarantine cells, but a W is displayed for quarantine cells with a 0 GSR and a V for quarantine cells with a non-zero GSR. The states are declared as constants at the top of the program. The meaning of each state has been thoroughly described elsewhere. For all special local states, as indicated by 0 <= LSR <= 19, the state is written via a case statement. The numerical value of priority value local states is output, since priority is based on the magnitude of the numerical value. The priority value is reduced by 1000 before display to keep the displayed value to 3 digits, so a display of 20 by 20 states will still fit on one screen.

Final local states are converted to lower case letters via the array state that was initialized in Init. The value of the current simulation time step is also output.

Interface:

CRT display.

Text Output file: textout.

Arrays Altered: none.

Arrays Examined: oLSR, oGSR.

Print_Reg is called by Prompt when the user desires to look at the values stored in some of the registers within each cell. This procedure is most useful for debugging purposes.

Interface:

Arrays Examined: oXR, oYR, oLSR, oGSR, oSVR, odSVR, oTR, oPRR, HPRR, ohandshake, omark.

The Main Program is fairly short; it serves to coordinate the routines. Before simulation is begun, all text files are reset or rewritten. The flag done is used by Prompt to communicate to the main program whether the user would like to continue. Flag snap is set by Prompt to communicate to the main program whether the user would like Snapshot to be called during the current time step. Main calls Init to initialize all the arrays and variables. First_Prompt is then called to gather information on the size of the array and the location of the first seed. At every time step, Main calls Prompt, Pass_Space_Values, Compute_Space_Values, Pass_State, and Compute_Next_State. Procedure Snapshot is only called if the user responded with 's' to Prompt, which then sets flag snap, thus requesting a snapshot of the array's local states. The simulation time variable, simtime, is also incremented by Main at every time step. When the user responds with q to Prompt, Prompt sets done, and the main program will exit its while-loop and end.

Interface:

Variables Examined: done, snap.
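The main loop described above reduces to a few lines of structure (a Python sketch with stub callables standing in for the Pascal procedures of the same names):

```python
def run_simulator(init, first_prompt, prompt, pass_space_values,
                  compute_space_values, pass_state, compute_next_state,
                  snapshot, max_steps=1000):
    """Coordinate the per-time-step routines in the order the text gives.
    prompt() returns (done, snap); max_steps is a safety bound added for
    this sketch, not a feature of the thesis program."""
    init()
    first_prompt()
    simtime = 0
    while simtime < max_steps:
        done, snap = prompt()
        if done:                 # user answered q: exit the while-loop
            break
        pass_space_values()
        compute_space_values()
        pass_state()
        compute_next_state()
        if snap:                 # user answered s: record a snapshot
            snapshot()
        simtime += 1
    return simtime
```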

5.6 EXAMPLE OF PATTERN GROWTH AND RECONFIGURATION

In this section we present an example of pattern growth and reconfiguration. The example pattern is the Banyan network from Gollakota's thesis. The example was generated with the aid of the simulator. Although not shown in the figures, the simulator interactively requests commands. At any time step, the user may quit, inject faults, or simply continue. In order to reduce the size of the example, only selected time steps are shown. Also, more faults are injected in a short period of time than would normally occur.

A seed is planted at time step 0. During the following time steps the seed migrates to a large enough region to begin pattern growth. At time step 14 the seed comes to rest, and at time step 15 pattern growth is initiated. The seed did not plant itself in the upper corner, because of processor-switch type considerations, and because of the pull of the seed towards the large open space in the center of the array. The space values for this pattern are both 5.

A fault is injected at time step 18, and at time step 19 the fault is quarantined. The growing pattern "collides" with the quarantine region surrounding this fault. The states the faulty cells take on are of no consequence, since the surrounding quarantine cells ignore the outputs of the faulty cells. Within the simulator, faulty cells are initially represented as X states. Faulty cells residing next to other faulty cells go to the quarantine state Q just as good cells do. The faulty cells believe they are participating in reconfiguration, although in reality they are not, since their outputs are being ignored. A second fault is quarantined at time step 23. At time step 24 the pattern "blooms". The neutralization phase begins at step 28, at which time the Q cells begin to pass their priority values. Note that the two quarantine regions have entered this phase synchronously. It may also be noted that the priority values were initialized in a sequential manner at time step 0. These priority values could also have been assigned in a more random fashion to spread the probability of a cell becoming a reconfiguration source more evenly throughout the array. The neutralization phase lasts for 15 time steps, as timed by the neutralization clock within each Q cell. Steps 31 and 36 show the continuation of this phase.
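
The core of the neutralization phase is max-passing of priority values. The sketch below is a simplified, centralized model of that idea, not the thesis's cell-level protocol: every array cell repeatedly takes the maximum priority seen among itself and its four neighbors, and after the 15-step neutralization clock expires, a source survives only if no higher priority value has reached it. The grid size and source positions are invented for the example.

```python
# Hedged sketch of neutralization by priority passing. Each cell keeps the
# highest priority value it has seen; values spread one cell per time step.
# After `steps` steps, a reconfiguration source survives only if its own
# priority was never beaten by an arriving higher value.

def neutralize(width, height, sources, steps):
    """sources: dict (row, col) -> priority of each reconfiguration source."""
    seen = [[0] * width for _ in range(height)]
    for (r, c), p in sources.items():
        seen[r][c] = p
    for _ in range(steps):
        nxt = [row[:] for row in seen]
        for r in range(height):
            for c in range(width):
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < height and 0 <= cc < width:
                        nxt[r][c] = max(nxt[r][c], seen[rr][cc])
        seen = nxt
    # A source survives only if no higher priority value reached it.
    return [cell for cell, p in sources.items() if seen[cell[0]][cell[1]] == p]

# Two quarantine regions on a 20x20 array; priorities were assigned
# sequentially at time step 0, as in the running example.
sources = {(5, 5): 417, (14, 10): 433}
survivors = neutralize(20, 20, sources, steps=15)
print(survivors)  # the higher-priority source at (14, 10) remains
```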

At time step 43, the Q cells enter the clearing phase. The state registers of all other cells in the array are then cleared. Clearing also takes 15 time steps in this example.

At time step 58 the Q cells exit the clearing mode, and at time step 61 a seed is ejected into the array by the remaining source of reconfiguration. The seed then migrates to a large enough fault-free portion of the array to regrow the pattern, coming to rest at time step 70. The pattern is then regrown and blooms at time step 80. Time steps through 84 demonstrate the stability of the regrown pattern.

A third fault was injected at step 61 to show the response of Q cells to the neutralization and clearing phases when those Q cells are not participating in reconfiguration.

A fourth fault is quarantined in a fully grown, stable pattern at t = 85. The remaining time steps show another series of reconfiguration phases, which result in the pattern blooming a third time, along the edge of the array, at time step 145.

It is impossible to fully illustrate all possible fault scenarios through the examples presented in this thesis.

Users are invited to experiment with the injection of faults in various portions of the array and at various points during reconfiguration. The only times the array should fail to reconfigure are when there is not enough fault-free space left in the array to regrow the pattern, or when the seed is trapped by surrounding faulty regions. Neither of these scenarios should occur if the cells have a reasonable probability of surviving the mission time.
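
The two failure scenarios above can be checked mechanically. The following sketch is not an algorithm from the thesis; it simply flood-fills the fault-free cells reachable from the seed and asks whether that region contains an open block large enough for the pattern (5 by 5 in the running example). If not, the seed is either trapped or the array lacks sufficient fault-free space, and reconfiguration must fail.

```python
# Rough check for the two failure scenarios (not an algorithm from the
# thesis): flood-fill the fault-free cells reachable from the seed, then
# look for a need-by-need open block inside the reachable region.

from collections import deque

def reachable(grid, seed):
    """Fault-free cells reachable from the seed by 4-neighbor moves."""
    h, w = len(grid), len(grid[0])
    seen = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and grid[rr][cc] == 0 \
                    and (rr, cc) not in seen:
                seen.add((rr, cc))
                queue.append((rr, cc))
    return seen

def can_reconfigure(grid, seed, need=5):
    """True if the seed can reach some need-by-need fault-free block."""
    region = reachable(grid, seed)
    for r, c in region:
        block = [(r + i, c + j) for i in range(need) for j in range(need)]
        if all(cell in region for cell in block):
            return True
    return False

X = 'X'
open_grid = [[0] * 8 for _ in range(8)]
print(can_reconfigure(open_grid, (0, 0)))   # True: plenty of open space

# Seed walled into a 2x2 corner by faulty cells: trapped, so it must fail.
trapped = [[0] * 8 for _ in range(8)]
for k in range(3):
    trapped[2][k] = X
    trapped[k][2] = X
print(can_reconfigure(trapped, (0, 0)))     # False
```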

[Figures: simulator snapshots of the array's local states at simulation time steps 0, 2, 10, 14, 15, 16, 18, 19, 23, 24, 28, 30, 36, 43, 44, 45, 54, 58, 61, 64, 70, 75, 80, 85, 90, 105, 125, 135, and 145.]

REFERENCES

1. Amoroso, S., and G. Cooper, "Tessellation Structures for Reproduction of Arbitrary Patterns," J. Comput. Sys. Sci., Vol. 5, pp. 455-464, 1971.
2. Anderson, Peter G., "Another Proof of the Theorem on Pattern Reproduction in Tessellation Structures," J. Comput. Sys. Sci., Vol. 12, No. 3, pp. 394-398, June 1976.
3. Avizienis, A., et al., "The STAR (Self-Testing And Repairing) Computer: An Investigation of the Theory and Practice of Fault-Tolerant Computer Design," IEEE Transactions on Computers, Vol. C-20, pp. 1312-1321, November 1971.
4. Balzer, R., "An 8-State Minimal Time Solution to the Firing Squad Synchronization Problem," Information and Control, Vol. 10, No. 1, pp. 22-42, January 1967.
5. Batcher, K. E., "Design of a Massively Parallel Processor," IEEE Transactions on Computers, Vol. C-29, pp. 836-841, September 1980.
6. Bentley, Jon L., and H. T. Kung, "A Tree Machine for Searching Problems," Proc. Int'l Conf. on Parallel Processing, pp. 257-266, 1979.
7. Demongeot, J., E. Goles, and M. Tchuente, editors, Dynamical Systems and Cellular Automata, Academic Press, 1985.
8. Despain, A. M., and D. A. Patterson, "X-Tree: A Tree Structured Multiprocessor Computer Architecture," IEEE Transactions on Computers, 1978.
9. Finkel, R. A., and M. H. Solomon, "Processor Interconnection Strategies," IEEE Transactions on Computers, May 1980.
10. Finkel, R. A., and M. H. Solomon, "The Lens Interconnection Strategy," IEEE Transactions on Computers, December 1981.
11. Fisher, Allan L., H. T. Kung, Louis M. Monier, Hank Walker, and Yasunori Dohi, "Design of the PSC: A Programmable Systolic Chip," Third Caltech Conference on Very Large Scale Integration (ed. Bryant), March 21-23, 1983.
12. Goke, L. R., "Banyan Networks for Partitioning Multiprocessor Systems," Ph.D. Thesis, University of Florida, 1976.
13. Gollakota, N. S., and F. G. Gray, "Fault Tolerant Clocks in Arrays of Processors," IEEE Proceedings of the Southeast Conference, April 1984.
14. Gollakota, N. S., "Automatically Reconfigurable Highly Parallel Computer Systems," M.S. Thesis, VPI&SU, Blacksburg, Virginia, 1984.
15. Goodman, J. R., and C. H. Sequin, "Hypertree: A Multiprocessor Interconnection Topology," IEEE Transactions on Computers, Vol. C-30, No. 12, December 1981.
16. Harao, M., and S. Noguchi, "Fault Tolerant Cellular Automata," Proceedings of the 1974 Conference on Biologically Motivated Automata Theory, IEEE, New York, 1974.
17. Hillis, W. D., The Connection Machine, ACM Distinguished Dissertation 1985, MIT Press, 1985.
18. Hopkins, A. L., T. Basil Smith, and J. H. Lala, "FTMP - A Highly Reliable Fault-Tolerant Multiprocessor for Aircraft," Proceedings of the IEEE, Vol. 66, No. 10, October 1978.
19. Kosaraju, S. R., "Speed of Recognition of Context-Free Languages by Array Automata," SIAM Journal on Computing, Vol. 4, pp. 331-340, 1975.
20. Koren, I., "A Reconfigurable and Fault-Tolerant VLSI Multiprocessor Array," Proc. 8th Int. Symp. on Computer Architecture, 1981.
21. Kumar, R., and F. G. Gray, "Control Patterns in Cellular Arrays," IEEE Proceedings of the SoutheastCon '84, pp. 443-448, 1984.
22. Kumar, R., "A Fault-Tolerant Cellular Architecture," Ph.D. Dissertation, VPI&SU, Blacksburg, Virginia, 1984.
23. Kung, H. T., and C. E. Leiserson, "Systolic Arrays (for VLSI)," in I. S. Duff and G. W. Stewart, editors, Sparse Matrix Proceedings 1978, pp. 256-282, Society for Industrial and Applied Mathematics, 1979.
24. Kung, H. T., and C. E. Leiserson, "Algorithms for VLSI Processor Arrays," in C. Mead and L. Conway, Introduction to VLSI Systems, Addison-Wesley, Reading, Mass., pp. 271-292, 1980.
25. Kung, H. T., "Why Systolic Architectures?" Computer, pp. 37-46, January 1982.
26. Malek, M., and W. W. Myre, "Figure of Merit for Interconnection Networks," IEEE Transactions on Computers, 1982.
27. Mallela, S., and G. M. Masson, "Diagnosable Systems for Intermittent Faults," IEEE Transactions on Computers, Vol. C-27, p. 560, June 1978.
28. Manning, Frank B., "Automatic Test, Configuration and Repair of Cellular Arrays," MIT, Cambridge, Mass., June 1975.
29. Manning, Frank B., "An Approach to Highly Integrated, Computer-Maintained Cellular Arrays," IEEE Transactions on Computers, Vol. C-26, June 1977.
30. Martin, H. L., "A Self-Reconfigurable Cellular Structure," Ph.D. Dissertation, VPI&SU, Blacksburg, Virginia, 1980.
31. Martin, H. L., F. G. Gray, and J. R. Armstrong, "One-Dimensional Control in Self-Reconfigurable Systems," Proceedings of Southeastcon '79, pp. 212-214, April 1-3, 1979.
32. Martin, H. L., F. G. Gray, and J. R. Armstrong, "Multiple Faults in a One-Dimensional Self-Reconfigurable Control System," Proceedings of the 11th Annual Southeastern Symposium on System Theory, pp. 66-69, March 12-13, 1979.
33. Mead, C., and L. Conway, Introduction to VLSI Systems, Addison-Wesley, Reading, Mass., 1980.
34. Moore, E. F., "Machine Models of Self-Reproduction," Notices of the American Math. Society, 1959.
35. Moore, E. F., "Machine Models of Self-Reproduction," Proc. Symp. in Applied Math., Vol. 14, Amer. Math. Soc., Providence, RI, 1962.
36. Moore, E. F., "The Firing Squad Synchronization Problem," Sequential Machines - Selected Papers, pp. 213-214, Addison-Wesley, 1964.
37. Nishio, H., and Y. Kobuchi, "Fault Tolerant Cellular Spaces," Proceedings of the 1974 Conference on Biologically Motivated Automata Theory, IEEE, New York, 1974.
38. Ostrand, T. J., "Pattern Reproduction in Tessellation Automata of Arbitrary Dimension," J. Computer and System

Sciences, Vol. 5, pp. 623-628, 1971. 39.Pradhan, D. K., and Reddy, S. M., "A Fault Tolerant Communication Architecture for Distributed Systems" IEEE

Transactions on Computers, Sept. 1982.

References 175 40.Preparata, F. P., G. Metze, and R. T. Chien, "On the connection assignment problem on diagnosable systems,"

IEEE Trans. Electron. Comput., vol. EC-16, pp. 854, Dec.

1967.

41.Preston, K., and Michael J. B. Duff, "Modern Cellular Automata: Theory and Applications," Plenum Press, 1984.

42.Seigel, H.J., R. J. McMillen and P. T. Mueller, "A

Survey of Interconnection Methods for Reconfigurable

Parallel Processing Systems," National Computer

Conference, pp. 529-542, June 1979.

43.Seitz, C. L., "Ensemble Architectures for VLSI - A Survey

and Taxonomy," in P. Renfield (ed.), Proc. Conf. Advanced

Research in VLSI, Artech House, 1981, pp. 130-135. 44.Shinahr, I., "Two- and Three-Dimensional Firing-Squad Synchronization Problems," Information and Control, vol.24, 163-180, 1974.

45.Smith III, A. R., "Two Dimensional Formal Languages and

Pattern Recognition by Cellular Automata," Proceedings of

the 12th Annual Symposium on Switching and Automata

Theory, pp. 144-152, IEEE, New York, 1971. 46.Snyder, L, "Introduction to the Configurable, Highly

Parallel Computer," Computer, January 1982, pp. 47-56.

47. Snyder, Lawrence, "Parallel Programming and the Poker

Programming Environment," pp. 27-36, Computer, July

1984.

References 176 48.Snyder, L., A. Kapauan, J. T. Field, D. B. Gannon, "The

Pringle Parallel Computer," Computer, pp. 12-20, 1984.

49.Thatcher, J., "The Construction of a Self-Describing

Turing Machine, " Proc. of the Symp. on Math. Theory of

Automata, April 1962, pp. 165-171. SO.Thatcher, J. w., "Self-Describing Turing Machines and

Self-Reproducing Cellular Automata," in Burks ed., Essays

on Cellular Automata, University of Illinois Press, 1970,

pp. 103-131.

51.Thompson, C. D. and H. T. Kung, "Sorting on a

Mesh-Connected Parallel Computer," Communications of the

ACM, Vol. 20, pp. 263-271, April 1977.

52.Thompson, R. A., S. M. Walters, and F. G. Gray,

"Stability in a Class of Tesselation Automata," Proc. of

the Ninth Annual Southeastern Symposium on Systems

Theory, pp. 404-414, March 1977.

53.Thurber, K. J., "Interconnection Networks a Survey and

Assessment," National Computer Conference, May 1974.

54.von Neuman, J., "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components,"

Annuals for Mathematical Studies, Princeton University

Press, Vol. 54, pp. 43-98, 1956.

55. von Neuman, J., Theory of Self-Reproducing Automata, (edited and completed by A. w. Burks), Univ. of Illinois Press, 1966.

References 177 56.Walters, S. M., R. A. Thompson, and F. G. Gray, "Pattern Synthesis in One-Dimensional Tessellation Automata," Proc. Eighth Annual Southeastern Symposium on System Theory, pp. 11-12, 1976.

57.Walters, S. M., "Pattern Synthesis and Perturbation in

Tesselation Automata," Ph.D. Dissertation, Virginia

Tech., Jan. 1977.

58.Walters, S. M., F. G. Gray and R. A. Thompson, "Self-Diagnosing Cellular Implementations of Finite-State Machines," IEEE Transactions on Computers, vc-30,

pp.953-959, December 1982.

59.Wittie, L. D., "Efficient Message Routing in Mega-Micro

Computer Networks" Proceedings of the 3rd Symposium on

Computer Architecture, 1976, pp. 136-140.

60. Yamada, H. and s. Amoroso, "Tesselation Automata," Information and Control, Vol. 14, pp. 299-317, 1969.

61.Yamada, H. and S. Amoroso, "A Completeness Problem for

Pattern Generation in Tesselation Automata," J. Comp. Sys. Sci., 4, pp. 137-176, 1970. 62.Yamada, H. and S. Amoroso, "Structural and Behavioral Equivalences of Tesselation Automata," Information and

Control, 18, pp. 1-31, 1971.

Appendix A. ARRAYSIM

This appendix contains the program text for the simulator ARRAYSIM, which simulates the cells in the control hyperplane following the rules laid out in the body of the thesis.
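Throughout ARRAYSIM, every cell register is kept as an "old"/"new" pair (for example oSVR/nSVR), so that each cell computes its next values from the previous time step and the whole array then updates synchronously in a separate "pass" phase. The fragment below is a minimal Python sketch of that two-phase discipline (not part of the ARRAYSIM listing), using a simplified form of the space-value rule in which each non-edge cell takes the minimum of its four neighbors plus one:

```python
# Sketch of ARRAYSIM's two-phase update: every cell computes its next
# value from the OLD copies of its neighbors' registers, and only
# afterwards is the new grid swapped in (the "pass" phase).  The rule
# here (min of four neighbors + 1, edge cells fixed at 0) is a
# simplified stand-in for the space-value computation.

SIZE = 5  # illustrative array size

def step(old):
    """Compute the next space-value grid from the old grid."""
    new = [[0] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            if i in (0, SIZE - 1) or j in (0, SIZE - 1):
                new[i][j] = 0  # edge cells anchor the computation
            else:
                neighbors = [old[i - 1][j], old[i + 1][j],
                             old[i][j - 1], old[i][j + 1]]
                new[i][j] = min(neighbors) + 1
    return new  # caller swaps this in, completing the "pass" phase

grid = [[0] * SIZE for _ in range(SIZE)]
for _ in range(SIZE):  # iterate until the values stabilize
    grid = step(grid)
```

After enough steps each cell's value settles at its distance from the array edge, which is the kind of quantity a migrating seed inspects to judge whether there is enough room around a cell to grow a pattern.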

{***********************************************************
 * Module Name: Control Cells Simulation Module
 *
 * Authors: Bryan Arthur Brighton
 *          Rajesh Kumar
 *
 * Written:       3/7/85
 * Last Modified: 8/26/87
 *
 * OS:       MTS/UB
 * Language: PASCAL/JB
 ***********************************************************}
{
 This module (the only one in the simulator system) is
 intended to simulate the control cells for the Cellular
 Automata Hyperplane of the Fault Tolerant Parallel
 Processor. The system is described in Dr. Kumar's
 dissertation, Mr. Gollakota's thesis, and Mr. Brighton's
 thesis, all at Virginia Polytechnic Institute and State
 University.
   Once the simulator module is compiled using a Pascal
 compiler and the object module is executed, the simulator
 prompts the user for input parameters concerning the array
 of cells. These parameters include the size of the array
 and the location of the first seed.
   At each time step, the simulator prompts the user as to
 whether the simulation should continue and whether any
 faults are to be injected. The user may also request a
 "snapshot" of the states in the array.

 INTERFACE:
   Input Files:
     TEXTIN  -- Contains pattern growth parameters and
                final state tables for four example
                patterns.
     STEXTIN -- Contains the alphabet. Used to convert the
                integer internal states to characters for
                output display.
     MI      -- Terminal input.
     MO      -- Terminal output.

   Output Files:
     TEXTOUT -- Contains all "snapshots" of array states
                output during simulation.
***********************************************************}

Program ARRAYSIM(Input,Output);
{* Program to simulate an Array of Control Cells. *}

const
{* The following constant parameters are dependent on array
   size and are used to initialize the simulator *}

  maxsize = 21;  { maximum array size + 1 }
  maxf    = 4;   { maximum number of functions }
  nu      = 15;  { time required for neutralization }
  mu      = 15;  { time required for clearing }

  {* number of states *}
  ns  = 256*(maxf+1); { number of states }
  nss = 19;           { number of special (local) states }

  {* Special Local States *}
  Gx = 3;  { Growth State along x-axis }
  Gc = 4;  { Growth State at center }
  G  = 5;  { Growth State not on axis }
  Gy = 6;  { Growth State along y-axis }
  Q  = 7;  { Quarantine state }
  S  = 8;  { Seed state }
  R  = 9;  { Seed at Rest state }
  Y  = 10; { Quarantine state indicating clear mode }
  X  = 11; { Bad Processor }
  Z  = 12; { intermediate clearing state }
  D  = 13; { Dead state }
  K  = 14; { Contact state }
  B  = 16; { Boundary State }

type
  matrix  = array[0..maxsize,0..maxsize] of integer;
  matrixb = array[0..maxsize,0..maxsize] of boolean;
  directions   = (n,so,e,w,m);
  directionset = set of directions;
  matrixd   = array[0..maxsize,0..maxsize] of directions;
  tabletype = array[1..maxf,0..maxsize,0..maxsize] of integer;
  globaltype = array[1..maxf] of integer;
  index = 0..maxsize;

var
  mi      : text; { terminal input }
  mo      : text; { terminal output }
  textin  : text; { input text file }
  stextin : text; { input state text file }
  textout : text; { output text file }

  {* Simulator vars *}

Appendix A. ARRAYSIM 181 idim integer: { number of cells in i dimemsion} jdim integer: { number of cells in j dimension} simtime integer: { Simulation Time} done boolean: { Done with Simulation Flag} state array[0 •• 26] of char: { State Character Array} h integer: { loop index} Snap boolean: { flag to tell main to take Snapshot} {* Reconfiguration and Pattern Growth vars*} oLSR : matrix: { old value of Local State register} nLSR : matrix: { new value of Local State register} oGSR : matrix: { old value of Global State register} nGSR : matrix: { new value of Global State register} oSR : matrix: { old value of State Register} nSR : matrix: { new value of State Register} oSVR : matrix: { old value of Space Value Register} nSVR : matrix: { new value of Space Value Register} odSVR : matrix: { old value of diagonal Space Value Reg} ndSVR : matrix: { new value of diagonal Space Value Reg} oPRR : matrix: { old value of Priority Register} nPRR : matrix: { next value of Priority Register} hPRR : matrix: { highest Priority value received} PSR : matrix: { Processor Switch type Register} SB : matrix: { State Buffer } GSB : matrix: { Global State Buffer} LSB : matrix: { Local State Buffer} oXR : matrix: { old X-position register} nXR : matrix: { next X-position register} oYR : matrix: { old Y-position register} nYR : matrix: { next Y-position register} oTR : matrix: { old Time during pattern growth Reg} nTR : matrix: { new Time during pat~ern growth Reg} Table : tabletype: { Table of final local states for global functions } Bloomtime : globaltype:{ time at which pattern should Bloom into final local states} cenx global type: { x coordinate of center of pattern} ceny global type: { y c?ordinate of center of pattern} maxx global type: { maximum x dimension of pattern} maxy global type: { maximum y dimension of pattern} minx global type: { minimum x dimension of pattern} miny global type: { minimum y dimension of pattern} hvspace global type: { amount of horizontal/vertical space 
needed to grow pattern} dspace : globaltype: { amount of diagonal space needed to grow pattern} pstype : globaltype: {processor-switch type for seed cell} PGF : matrixb: { Pattern Growth Flag} ASF : matrixb: { Accept Seed Flag}

Appendix A. ARRAYSIM 182 CM : matrixb; { Clear Mode} Edge : matrixb; { Edge flag} NF : matrixb; { Neutralization Flag} NM : matrixb: { Neutralization Mode} RSF : matrixb; { Register Swap Flag} OK PG : matrixb; { OK for Pattern Growth flag} KF- : matrixb; { Contact Flag} NSF : matrixb; { North Status Flag} SSF : matrixb; { South Status Flag} WSF : matrixb; { West Status Flag} ESF : matrixb: { East Status Flag} NWSF : matrixb; { North West Status Flag } NESF : matrixb; { North East Status Flag } SWSF : matrixb; { South West Status Flag } SESF : matrixb; { South East Status Flag } RFSM : matrix: { Reconfiguration Finite State Machine state} neutclock matrix: { neutralization mode timer} clearclock matrix: { clear mode timer} waitclock matrix: { wait mode time} omark matrix: { old visitation mark} nmark matrix: { new visitation mark} ohandshake matrixd: { old value of handshake line} nhandshake matrixd: { next value of handshake line} PSM : array[1 •• 9,1 •• 9] of integer;{processor-switch map} Nfact: array[0 •• 8] of integer: { North nudge factor} Efact: array[0 •• 8] of integer; { East nudge factor} Wfact: array[0 •• 8] of integer; { West nudge factor} Sfact: array[0 •• 8] of integer; { South nudge factor}

{*********************************************************}

Procedure Pass_Space_Values;
{ Procedure to pass array space values and clock, i.e.
  update, the space value registers. (Actually, values are
  not traded between cells in the simulator.) }
var
  i,j : 0..maxsize; { array subscripts }
begin
  for i := 1 to idim do
    for j := 1 to jdim do
      begin
        oSVR[i,j]  := nSVR[i,j];
        odSVR[i,j] := ndSVR[i,j];
      end;
end; { Procedure Pass_Space_Values }

{***********************************************}

function minneighbor(i:integer; j:integer) : integer;
{ Function to compute the minimum space value in the
  cell's neighborhood. }
var
  minn : integer;
begin
  minn := oSVR[i-1,j];
  if oSVR[i+1,j] < minn then minn := oSVR[i+1,j];
  if oSVR[i,j-1] < minn then minn := oSVR[i,j-1];
  if oSVR[i,j+1] < minn then minn := oSVR[i,j+1];
  minneighbor := minn;
end; { function minneighbor }

{***********************************************}

function dminneighbor(i:integer; j:integer) : integer;
{ Function to compute the minimum diagonal space value }
var
  dminn : integer;
begin
  dminn := odSVR[i-1,j-1];
  if odSVR[i+1,j+1] < dminn then dminn := odSVR[i+1,j+1];
  if odSVR[i+1,j-1] < dminn then dminn := odSVR[i+1,j-1];
  if odSVR[i-1,j+1] < dminn then dminn := odSVR[i-1,j+1];
  dminneighbor := dminn;
end; { function dminneighbor }

{***********************************************}

procedure Compute_Space_Value;
{ Procedure to compute the next space value for each cell
  in the array }
var
  i,j : 0..maxsize; { array subscripts }
begin
  for i := 1 to idim do
    for j := 1 to jdim do
      begin
        if (oLSR[i,j] = Q) or (oLSR[i,j] = Y) then
          begin
            nSVR[i,j]  := -1;
            ndSVR[i,j] := -1
          end
        else if Edge[i,j] then
          begin
            nSVR[i,j]  := 0;
            ndSVR[i,j] := 0
          end
        else
          begin
            nSVR[i,j]  := minneighbor(i,j) + 1;
            ndSVR[i,j] := dminneighbor(i,j) + 1;
          end;
        if (NWSF[i,j] or NESF[i,j] or SWSF[i,j] or SESF[i,j]) and
           (oGSR[i,j] <> Q) and (oGSR[i,j] <> Y) then
          ndSVR[i,j] := 0;
      end;
end; { Compute_Space_Value }

{**********************************************}

procedure Pass_State;
{ Procedure to update state registers }
var
  i,j : 0..maxsize; { array subscripts }
begin
  for i := 1 to idim do
    for j := 1 to jdim do
      begin
        oLSR[i,j] := nLSR[i,j];
        oGSR[i,j] := nGSR[i,j];
        oSR[i,j]  := 256*nGSR[i,j] + nLSR[i,j];
        oPRR[i,j] := nPRR[i,j];
        ohandshake[i,j] := nhandshake[i,j];
        omark[i,j] := nmark[i,j];
        oXR[i,j] := nXR[i,j];
        oYR[i,j] := nYR[i,j];
        oTR[i,j] := nTR[i,j];
        if (not RSF[i,j]) and (LSB[i,j] <> X) then
          begin
            SB[i,j]  := 256*nGSR[i,j] + nLSR[i,j];
            GSB[i,j] := nGSR[i,j];
            LSB[i,j] := nLSR[i,j]
          end;
        if RSF[i,j] and (LSB[i,j] <> X) then
          begin
            SB[i,j]  := hPRR[i,j];
            GSB[i,j] := hPRR[i,j] div 256;
            LSB[i,j] := hPRR[i,j] mod 256
          end;

      end; { for }
end; { procedure Pass_State }

{**********************************************}

procedure Quarantine;
{ Checks to see if testing has set a status flag, i.e. a
  neighbor has become faulty. If RFSM = 1, this is the first
  time the status has been found true, so the cell enters
  the Quarantine state and the Pattern Growth registers are
  cleared. GSR is only cleared if it contains the upper
  portion of a priority value (this would occur if a fault
  has been found during neutralization). }
var
  i,j : 0..maxsize; { array indices }
begin
  for i := 1 to idim do
    for j := 1 to jdim do
      begin
        if (NSF[i,j] or SSF[i,j] or WSF[i,j] or ESF[i,j]) and
           (RFSM[i,j] = 1) then
          begin
            {* Assume Quarantine state *}
            nLSR[i,j] := Q;
            {* If fault occurred during neutralization
               then clear GSR *}
            if oGSR[i,j] > maxf then nGSR[i,j] := 0;
            {* Reset Pattern Growth registers and Flag in case
               fault occurred while participating in PG. *}
            if PGF[i,j] then
              begin
                waitclock[i,j] := Bloomtime[oGSR[i,j]] - oTR[i,j];
                nXR[i,j] := 0;
                nYR[i,j] := 0;
                nTR[i,j] := 0;
                PGF[i,j] := false;
              end; { if }
          end; { if }
      end; { for }
end; { Procedure Quarantine }

{**********************************************}

Procedure Eject_Seed(i,j : index);

{ Procedure to transfer the seed from a Q cell into the array }
var
  case1 : boolean;
  case2 : boolean;
begin
  { initialize }
  case1 := false;
  case2 := false;
  { Determine case }
  if ((LSB[i-1,j] = 0) and (not NSF[i-1,j])) or
     ((LSB[i+1,j] = 0) and (not SSF[i+1,j])) or
     ((LSB[i,j-1] = 0) and (not WSF[i,j-1])) or
     ((LSB[i,j+1] = 0) and (not ESF[i,j+1])) then
    case1 := true;
  if (NSF[i,j] or (LSB[i-1,j] = Q)) and
     (SSF[i,j] or (LSB[i+1,j] = Q)) and
     (WSF[i,j] or (LSB[i,j-1] = Q)) and
     (ESF[i,j] or (LSB[i,j+1] = Q)) then
    case2 := true;

  if case1 then
    begin
      { Transfer seed to a fault-free quiescent neighbor. }
      { Prefer an unvisited neighbor over a visited neighbor. }
      if (LSB[i,j+1] = 0) and (not ESF[i,j]) and
         (omark[i,j+1] = 0) then
        nhandshake[i,j] := e
      else if (LSB[i,j-1] = 0) and (not WSF[i,j]) and
              (omark[i,j-1] = 0) then
        nhandshake[i,j] := w
      else if (LSB[i+1,j] = 0) and (not SSF[i,j]) and
              (omark[i+1,j] = 0) then
        nhandshake[i,j] := so
      else if (LSB[i-1,j] = 0) and (not NSF[i,j]) and
              (omark[i-1,j] = 0) then
        nhandshake[i,j] := n
      else if (LSB[i,j+1] = 0) and (not ESF[i,j]) and
              (omark[i,j+1] = 1) then
        nhandshake[i,j] := e
      else if (LSB[i,j-1] = 0) and (not WSF[i,j]) and
              (omark[i,j-1] = 1) then
        nhandshake[i,j] := w
      else if (LSB[i+1,j] = 0) and (not SSF[i,j])

              and (omark[i+1,j] = 1) then
        nhandshake[i,j] := so
      else if (LSB[i-1,j] = 0) and (not NSF[i,j]) and
              (omark[i-1,j] = 1) then
        nhandshake[i,j] := n;
    end; { if }

  if case2 then
    begin
      { Transfer seed to a fault-free neighbor in the
        Quarantine state. }
      { Prefer an unvisited Quarantine neighbor. }
      if (LSB[i,j+1] = Q) and (not ESF[i,j]) and
         (omark[i,j+1] = 0) then
        nhandshake[i,j] := e
      else if (LSB[i,j-1] = Q) and (not WSF[i,j]) and
              (omark[i,j-1] = 0) then
        nhandshake[i,j] := w
      else if (LSB[i+1,j] = Q) and (not SSF[i,j]) and
              (omark[i+1,j] = 0) then
        nhandshake[i,j] := so
      else if (LSB[i-1,j] = Q) and (not NSF[i,j]) and
              (omark[i-1,j] = 0) then
        nhandshake[i,j] := n
      else if (LSB[i,j+1] = Q) and (not ESF[i,j]) and
              (omark[i,j+1] = 1) then
        nhandshake[i,j] := e
      else if (LSB[i,j-1] = Q) and (not WSF[i,j]) and
              (omark[i,j-1] = 1) then
        nhandshake[i,j] := w
      else if (LSB[i+1,j] = Q) and (not SSF[i,j]) and
              (omark[i+1,j] = 1) then
        nhandshake[i,j] := so
      else if (LSB[i-1,j] = Q) and (not NSF[i,j]) and
              (omark[i-1,j] = 1) then
        nhandshake[i,j] := n;
    end; { if }
end; { Procedure Eject_Seed }

{****************************************************}

procedure Reconfig_FSM;
{ Procedure to implement the Reconfiguration Finite State
  Machine. The state transitions and the actions taken in
  each state are described in Dr. Kumar's dissertation, and
  the corrections are described in Mr. Brighton's thesis.
  In the simulator, RFSM holds the state. State 1 is the
  initial state of all cells. A transition to state 2 is
  made when a cell with a global function in its GSR
  quarantines a fault. After the neutralization phase in
  state 2, the cell moves to the clearing phase in state 3.
  State 4 follows state 3 and is simply for making the
  decision to transition to state 5 or state 6. In state 5,
  the single reconfiguration source will transmit the seed
  information to a fault-free neighbor. Cells remain dormant
  in state 6 until they are either passed a seed by a
  neighbor, or a collision with a growing pattern occurs. A
  transition is made to state 5 on the former and to state 2
  on the latter. }

var
  i,j : 0..maxsize; { array indices }
begin
  for i := 1 to idim do
    for j := 1 to jdim do
      begin
        case RFSM[i,j] of
          1: begin
               { Enter reconfiguration mode if a Quarantine cell
                 with a global state corresponding to a function }
               if (oLSR[i,j] = Q) and (0 < oGSR[i,j]) and
                  (oGSR[i,j] <= maxf) then
                 begin
                   if ASF[i,j] then
                     begin
                       RFSM[i,j] := 5;
                       ASF[i,j] := false;
                     end
                   else if waitclock[i,j] <= 0 then
                     begin
                       RFSM[i,j] := 2;
                       if KF[i,j] then KF[i,j] := false;
                     end
                   else
                     waitclock[i,j] := waitclock[i,j] - 1;
                 end;
             end;
          2: begin
               if not NM[i,j] then
                 begin
                   NM[i,j] := true;
                   neutclock[i,j] := nu;
                   RSF[i,j] := true;
                   hPRR[i,j] := oPRR[i,j]
                 end
               else { NM is true }
                 begin
                   neutclock[i,j] := neutclock[i,j] - 1;
                   if neutclock[i,j] = 0 then
                     begin
                       RFSM[i,j] := 3;
                       NM[i,j] := false;
                     end;
                 end; { else }
             end; { state 2 }
          3: begin
               if not CM[i,j] then
                 begin
                   RSF[i,j] := false;
                   if NF[i,j] then nGSR[i,j] := 0;
                   CM[i,j] := true;
                   nLSR[i,j] := Y;
                   clearclock[i,j] := mu;
                 end
               else { CM is true }
                 begin
                   clearclock[i,j] := clearclock[i,j] - 1;
                   if clearclock[i,j] = 0 then
                     begin
                       RFSM[i,j] := 4;
                       nLSR[i,j] := Q;
                       CM[i,j] := false;
                     end;
                 end; { if-then-else }
             end;
          4: begin
               if oGSR[i,j] = 0 then
                 RFSM[i,j] := 6
               else
                 RFSM[i,j] := 5;
               hPRR[i,j] := oPRR[i,j];
             end;
          5: begin
               if (not NF[i,j]) and (oGSR[i,j] > 0) and
                  (oGSR[i,j] <= maxf) then
                 begin
                   Eject_Seed(i,j);
                   {* neutralize after ejecting seed *}
                   NF[i,j] := true;
                 end
               else
                 begin
                   nhandshake[i,j] := m;
                   nGSR[i,j] := 0;
                   RFSM[i,j] := 6;
                 end;
             end;
          6: begin
               if (oGSR[i,j] > 0) and (oGSR[i,j] <= maxf) then
                 if KF[i,j] then
                   { A collision has occurred between the
                     quarantine region and a growing pattern. }
                   begin
                     RFSM[i,j] := 2;

                     KF[i,j] := false
                   end
                 else
                   begin
                     { neutralized quarantine cell has been
                       reactivated by a neighbor passing it
                       a seed }
                     RFSM[i,j] := 5;
                     NF[i,j] := false
                   end
             end
        end { case }
      end; { for }
end; { Procedure Reconfig_FSM }

{*********************************************************}

procedure Neutralize;
{ Procedure to neutralize superfluous reconfiguration
  sources. The Qi cell with the highest priority value
  "wins", where i is a valid function. }
var
  temp : integer;
  i,j  : 0..maxsize; { array subscripts }
begin
  for i := 1 to idim do
    for j := 1 to jdim do
      begin
        { Rule 6.4.2.3 }
        if (SB[i-1,j] > ns) or (SB[i+1,j] > ns) or
           (SB[i,j-1] > ns) or (SB[i,j+1] > ns) then
          begin
            { New Rule }
            { if Qo or Qi then ... }
            if (oLSR[i,j] = Q) and (oGSR[i,j] <= maxf) then
              begin
                temp := oPRR[i,j];
                if (temp < SB[i-1,j]) and (not NSF[i,j]) then
                  temp := SB[i-1,j];
                if (temp < SB[i+1,j]) and (not SSF[i,j]) then
                  temp := SB[i+1,j];
                if (temp < SB[i,j-1]) and (not WSF[i,j]) then
                  temp := SB[i,j-1];
                if (temp < SB[i,j+1]) and (not ESF[i,j]) then
                  temp := SB[i,j+1];
                hPRR[i,j] := temp;
                { Rule 6.4.2.5 }
                if temp > oPRR[i,j] then NF[i,j] := true;
              end

Appendix A. ARRAYSIM 191 else { Rule 6.4.2.7} begin { as long as cell isn't Y or 'Y} if not CM[i,j] then begin temp:= oSR[ i, j]: if (temp< SB[i-1,j]) then temp ·-.- SB[ i-1, j]: if (temp< SB[i+1,j]) then temp .- SB[i+1,j]: if (temp< SB[i,j-1]) then temp :=·- SB[ i, j-1]: if (temp< SB[i,j+1]) then temp .- SB[i,j+1]: nSR[i,j] := temp: ·- nGSR[i,j] := temp div 256: nLSR[i,j] := temp mod 256: end {then} end {else} end: { then } end:{for} end:{for} end: { Procedure Neutralize} { } {****************************************************} { } procedure Clear State Registers: { Procedure to clear Local State, Global State, and Pattern Growth Registers if conditions are satisfied. The clearing phase of reconfinguration follows the neutralization phase, so the SR's of cells next to Quarantine cells will contain priority values just before clearing. A wave of z states spreads out across the array, leaving quiescent states in its wake. Cells with Quarantine states remain in Quarantine state.} var i,j : o•• maxsize: { array subscripts} begin for i :• 1 to idim do begin for j := 1 to jdim do begin { Rules 6.5.1.2 and 6.5.1.1 } if (oLSR[i,j] <> Q) and (oLSR[i,j] <> Y) and (oLSR[i,j] <> 0) and (oLSR[i,j] <> z) then begin if (SB[i-1,j] = z) or (SB[i+1,j] = z) or (SB[i,j-1] = z) or (SB[i,j+1] = z) then begin nLSR[ i, j] : = z: nGSR[ i , j ] : = 0 : end else

              if ((LSB[i-1,j] = Y) and (GSB[i-1,j] <= maxf)) or
                 ((LSB[i+1,j] = Y) and (GSB[i+1,j] <= maxf)) or
                 ((LSB[i,j-1] = Y) and (GSB[i,j-1] <= maxf)) or
                 ((LSB[i,j+1] = Y) and (GSB[i,j+1] <= maxf)) then
                begin
                  nLSR[i,j] := Z;
                  nGSR[i,j] := 0;
                end; { else-if }
          end; { if }
        { Rule 6.5.1.3 }
        if oLSR[i,j] = Z then
          begin
            nGSR[i,j]  := 0; { GSR <- default global state }
            nLSR[i,j]  := 0; { LSR <- quiescent state }
            nXR[i,j]   := 0; { XR <- 0 }
            nYR[i,j]   := 0; { YR <- 0 }
            nTR[i,j]   := 0; { TR <- 0 }
            nmark[i,j] := 0; { mark <- 0 }
          end;
      end; { for }
end; { Procedure Clear_State_Registers }

{***************************************************}

procedure Seed_Migration;
{ Procedure used by a cell to accept a seed and decide what
  to do with the seed. If the cell does not keep the seed it
  must choose a neighbor to pass the seed to. This procedure
  also takes care of accepting a seed passed by either
  Eject_Seed or Seed_Migration. Cells with fewer marks have
  been visited less and are preferred over cells with more
  marks. It is also more desirable to pass the seed to cells
  with higher space values, since the seed is trying to find
  a cell with enough space around it to grow the pattern. }
var
  choice : integer;
  i,j    : 0..maxsize; { array subscripts }
  nfunc,efunc,wfunc,sfunc,topfunc : integer;
begin
  for i := 1 to idim do
    for j := 1 to jdim do
      begin
        {* If in state handshake <> m, the cell just passed the
           seed and must now return to the quiescent state. *}

        if (ohandshake[i,j] <> m) and (0 < oGSR[i,j]) and
           (oGSR[i,j] <= maxf) then
          begin
            nhandshake[i,j] := m;
            nGSR[i,j] := 0;
            if (oLSR[i,j] <> Q) and (oLSR[i,j] <> Y) then
              nLSR[i,j] := 0;
          end;

        {* Check to see if a neighbor is passing a seed.
           ASF[i,j] is set when accepting a seed. *}
        if (ohandshake[i-1,j] = so) and (not NSF[i,j]) then
          begin
            if (oLSR[i,j] <> Q) and (oLSR[i,j] <> Y) then
              nLSR[i,j] := S;
            nGSR[i,j]  := GSB[i-1,j];
            nmark[i,j] := omark[i,j] + 1;
            ASF[i,j]   := true;
          end;
        if (ohandshake[i+1,j] = n) and (not SSF[i,j]) then
          begin
            if (oLSR[i,j] <> Q) and (oLSR[i,j] <> Y) then
              nLSR[i,j] := S;
            nGSR[i,j]  := GSB[i+1,j];
            nmark[i,j] := omark[i,j] + 1;
            ASF[i,j]   := true;
          end;
        if (ohandshake[i,j-1] = e) and (not WSF[i,j]) then
          begin
            if (oLSR[i,j] <> Q) and (oLSR[i,j] <> Y) then
              nLSR[i,j] := S;
            nGSR[i,j]  := GSB[i,j-1];
            nmark[i,j] := omark[i,j] + 1;
            ASF[i,j]   := true;
          end;
        if (ohandshake[i,j+1] = w) and (not ESF[i,j]) then
          begin
            if (oLSR[i,j] <> Q) and (oLSR[i,j] <> Y) then
              nLSR[i,j] := S;
            nGSR[i,j]  := GSB[i,j+1];
            nmark[i,j] := omark[i,j] + 1;
            ASF[i,j]   := true;
          end;

        {* When a cell accepts a seed it must decide what to do
           with it. Ejecting the seed from a Q cell is handled
           elsewhere. *}
        if ASF[i,j] and (oLSR[i,j] <> Q) then
          begin

            {* Set flag OK_PG if and only if there is enough room
               around the cell to grow the pattern, and the cell
               is of the correct processor-switch type. *}
            if (oSVR[i,j] >= hvspace[nGSR[i,j]]) and
               (odSVR[i,j] >= dspace[nGSR[i,j]]) and
               (PSR[i,j] = pstype[nGSR[i,j]]) then
              OK_PG[i,j] := true
            else
              OK_PG[i,j] := false;

            if OK_PG[i,j] then
              begin
                nLSR[i,j] := R;
                ASF[i,j] := false;
              end;

            if not OK_PG[i,j] then
              begin
                { compute choice }
                if (omark[i-1,j] = 0) or (omark[i+1,j] = 0) or
                   (omark[i,j-1] = 0) or (omark[i,j+1] = 0) then
                  choice := 0
                else if (omark[i-1,j] = 1) or (omark[i+1,j] = 1) or
                        (omark[i,j-1] = 1) or (omark[i,j+1] = 1) then
                  choice := 1
                else
                  choice := 2;
                { writeln(mo,'choice ',choice); }

                if (choice = 0) or (choice = 1) then
                  begin
                    { Set the handshake line in the direction of
                      the neighbor that has been visited the least,
                      and, of those that have been visited the
                      least, one with as high a space value as
                      possible; if still a tie, the priority is
                      e, w, so, n }
                    nfunc := -3;
                    efunc := -3;
                    wfunc := -3;
                    sfunc := -3;
                    if omark[i-1,j] = choice then
                      begin
                        nfunc := oSVR[i-1,j] + odSVR[i-1,j];

                        if (oSVR[i-1,j] > hvspace[nGSR[i,j]]) and
                           (odSVR[i-1,j] > dspace[nGSR[i,j]]) then
                          nfunc := nfunc +
                            Nfact[PSM[PSR[i,j],pstype[nGSR[i,j]]]];
                      end;
                    if omark[i,j+1] = choice then
                      begin
                        efunc := oSVR[i,j+1] + odSVR[i,j+1];
                        if (oSVR[i,j+1] > hvspace[nGSR[i,j]]) and
                           (odSVR[i,j+1] > dspace[nGSR[i,j]]) then
                          efunc := efunc +
                            Efact[PSM[PSR[i,j],pstype[nGSR[i,j]]]];
                      end;
                    if omark[i,j-1] = choice then
                      begin
                        wfunc := oSVR[i,j-1] + odSVR[i,j-1];
                        if (oSVR[i,j-1] > hvspace[nGSR[i,j]]) and
                           (odSVR[i,j-1] > dspace[nGSR[i,j]]) then
                          wfunc := wfunc +
                            Wfact[PSM[PSR[i,j],pstype[nGSR[i,j]]]];
                      end;
                    if omark[i+1,j] = choice then
                      begin
                        sfunc := oSVR[i+1,j] + odSVR[i+1,j];
                        if (oSVR[i+1,j] > hvspace[nGSR[i,j]]) and
                           (odSVR[i+1,j] > dspace[nGSR[i,j]]) then
                          sfunc := sfunc +
                            Sfact[PSM[PSR[i,j],pstype[nGSR[i,j]]]];
                      end;
                    topfunc := efunc;
                    nhandshake[i,j] := e;
                    if wfunc > topfunc then
                      begin
                        nhandshake[i,j] := w;
                        topfunc := wfunc;
                      end;
                    if sfunc > topfunc then
                      begin
                        nhandshake[i,j] := so;
                        topfunc := sfunc;
                      end;
                    if nfunc > topfunc then
                      nhandshake[i,j] := n;
                    { writeln(mo,'n',nfunc,'e',efunc,'w',wfunc,'s',sfunc); }
                    { writeln(mo,'nhandshake',ord(nhandshake[i,j]):3) }
                  end; { if choice 0,1 }

                if choice = 2 then
                  nLSR[i,j] := D;
                  { Reconfiguration failed, Array Dies }

Appendix A. ARRAYSIM 196 end: {if not OK PG} ASF[i,j] := false: end: {if ASF}

if ((LSB[i-1,j] = D) and (GSB[i-1,j] <= maxf)) or ( ( LSB[ i + 1 , j] = D) and (GSB[i+l,j] <= maxf)) or ((LSB[i,j-1] = D) and (GSB[i,j-1] <= maxf)) or ( ( LSB[ i , j + 1] = D) and (GSB[i,j+l] <= maxf)) then nLSR[i,j] := D: {Reconfiguration Failed, Array Dies} end:{ for} end:{for} end: { Procedure Seed Migration} { } {****************************************************} { } procedure Pattern Growth: { Procedure to determine local state (local function) of cell in a pattern for the desired global state (global function). Cell's keep track of their position within pattern and the current time during pattern growth. When it is time for the final pattern to appear (i.e. t = Bloomtime), cell's within the pattern look up their final local state in the Table corresponding to their global state, using their position as an index. } var 1,J : o•• maxsize: { array subscripts} xpos,ypos: integer: { computed position within pattern} global : integer: { temp storage for global state} exit flag: boolean: { flag to exit pattern growth mode} begin for i := 1 to idim do beqin for j := 1 to jdim do begin if oLSR[i,j] = R then {* plant seed •l begin nXR[i,j] :• (maxx[oGSR[i,j]]+minx[oGSR[i,j]]) div 2: nYR[i,j] :• (maxy[oGSR[i,j]]+miny[oGSR[i,j]]) div 2: nTR[i,j] := O: PGF[i,j] :• true: OK_PG~i~j] := false: nLSR[ 1 , J ] : = Ge : end: if (PGF[i,j]) then begin {• Check for Bloomtime. *}

            if oTR[i,j] >= Bloomtime[oGSR[i,j]] - 1 then
              begin
                {* Look up the final control state in the table *}
                nLSR[i,j] := Table[oGSR[i,j],oYR[i,j],oXR[i,j]];
                {* Clear Pattern Growth Flag and registers *}
                PGF[i,j] := false;
                nXR[i,j] := 0; { don't really need to clear
                                 registers }
                nYR[i,j] := 0;
                nTR[i,j] := 0;
              end
            else
              begin
                { Otherwise, increment Time Step Register
                  and continue }
                nTR[i,j] := oTR[i,j] + 1;
              end;
          end;

        if (not PGF[i,j]) and (oLSR[i,j] = 0) and
           (oGSR[i,j] = 0) and
           (not NSF[i,j]) and (not SSF[i,j]) and
           (not ESF[i,j]) and (not WSF[i,j]) then
          {* if not yet part of pattern growth mode and in the
             quiescent state and not next to a fault then ... *}
          begin
            xpos := 0;
            ypos := 0;
            if (Gx <= LSB[i,j-1]) and (LSB[i,j-1] <= Gy) and
               (GSB[i,j-1] > 0) and (GSB[i,j-1] <= maxf) and
               (oXR[i,j-1] < maxx[GSB[i,j-1]]) then
              begin
                xpos := oXR[i,j-1] + 1;
                global := GSB[i,j-1];
                PGF[i,j] := true;
              end;
            if (Gx <= LSB[i,j+1]) and (LSB[i,j+1] <= Gy) and
               (GSB[i,j+1] > 0) and (GSB[i,j+1] <= maxf) and
               (oXR[i,j+1] > minx[GSB[i,j+1]]) then
              begin
                xpos := oXR[i,j+1] - 1;
                global := GSB[i,j+1];
                PGF[i,j] := true;
              end;
            if (Gx <= LSB[i-1,j]) and (LSB[i-1,j] <= Gy) and

            (GSB[i-1,j] > 0) and (GSB[i-1,j] <= maxf) and
            (oYR[i-1,j] > miny[GSB[i-1,j]])) then
        begin
          ypos := oYR[i-1,j]-1;
          global := GSB[i-1,j];
          PGF[i,j] := true;
        end;
        if ((Gx <= LSB[i+1,j]) and (LSB[i+1,j] <= Gy) and
            (GSB[i+1,j] > 0) and (GSB[i+1,j] <= maxf) and
            (oYR[i+1,j] < maxy[GSB[i+1,j]])) then
        begin
          ypos := oYR[i+1,j]+1;
          global := GSB[i+1,j];
          PGF[i,j] := true;
        end;
        if PGF[i,j] then
        begin {* Load Position *}
          exit_flag := false;
          if xpos <> 0 then
            nXR[i,j] := xpos { neighbor passed a position }
          else if ((LSB[i-1,j]=Gy) or (LSB[i-1,j]=Gc) or
                   (LSB[i+1,j]=Gy) or (LSB[i+1,j]=Gc)) then
            nXR[i,j] := cenx[global] { assume along axis }
          else
            exit_flag := true; { kluge to exit PG mode }
          if ypos <> 0 then
            nYR[i,j] := ypos { neighbor passed a position }
          else if ((LSB[i,j-1]=Gx) or (LSB[i,j-1]=Gc) or
                   (LSB[i,j+1]=Gx) or (LSB[i,j+1]=Gc)) then
            nYR[i,j] := ceny[global] { assume along axis }
          else
            exit_flag := true; { kluge to exit PG mode }
          {* Compute Time *}
          nTR[i,j] := abs( nXR[i,j] - cenx[global] ) +
                      abs( nYR[i,j] - ceny[global] ) + 1;
          {* Map to Gi if not beyond Bloomtime, and was not
             prevented from receiving position by Q cell *}
          if (nTR[i,j] < Bloomtime[global]) and
             (exit_flag = false) then
          begin
            if ((LSB[i-1,j]=Gy) or (LSB[i-1,j]=Gc) or

                (LSB[i+1,j]=Gy) or (LSB[i+1,j]=Gc)) then
              nLSR[i,j] := Gy
            else if ((LSB[i,j-1]=Gx) or (LSB[i,j-1]=Gc) or
                     (LSB[i,j+1]=Gx) or (LSB[i,j+1]=Gc)) then
              nLSR[i,j] := Gx
            else
              nLSR[i,j] := G;
            nGSR[i,j] := global;
          end
          else
          begin
            PGF[i,j] := false;
            nXR[i,j] := 0;
            nYR[i,j] := 0;
            nTR[i,j] := 0;
          end; { if }
        end; { if }
      end; { if }
    end; { for }
  end; { for }
end; { Procedure Pattern_Growth }
{ }
{*******************************************************}
{ }
procedure Collision;
{ This procedure checks for collisions of growing patterns
  with Q cells.  At bloom time, the global state is loaded
  into the Quarantine cell's global state register and the
  KF (contact flag) is set. }
var
  i,j : 0..maxsize; { array indices }
begin
  for i := 1 to idim do
  begin
    for j := 1 to jdim do
    begin
      if (oLSR[i,j] = Q) and (oGSR[i,j] = 0) then
      begin
        if (nss < LSB[i-1,j]) and (NSF[i,j] = false) and
           (0 < GSB[i-1,j]) and (GSB[i-1,j] <= maxf) then
        begin
          nGSR[i,j] := GSB[i-1,j];
          KF[i,j] := true;
        end;
        if (nss < LSB[i+1,j]) and (SSF[i,j] = false) and
           (0 < GSB[i+1,j]) and (GSB[i+1,j] <= maxf) then
        begin
          nGSR[i,j] := GSB[i+1,j];

          KF[i,j] := true;
        end;
        if (nss < LSB[i,j-1]) and (WSF[i,j] = false) and
           (0 < GSB[i,j-1]) and (GSB[i,j-1] <= maxf) then
        begin
          nGSR[i,j] := GSB[i,j-1];
          KF[i,j] := true;
        end;
        if (nss < LSB[i,j+1]) and (ESF[i,j] = false) and
           (0 < GSB[i,j+1]) and (GSB[i,j+1] <= maxf) then
        begin
          nGSR[i,j] := GSB[i,j+1];
          KF[i,j] := true;
        end;
      end; { if }
    end; { for }
  end; { for }
end; { Procedure Collision }
{ }
{*******************************************************}
{ }
procedure Compute_Next_State;
{ Procedure to implement sigma-st, the next state
  transformation }
begin
  Reconfig_FSM;
  Neutralize;
  Clear_State_Registers;
  Seed_Migration;
  Pattern_Growth;
  Collision;
  Quarantine;
end; { Procedure Compute_Next_State }
{ }
{******************************************************}
{ }
procedure Print_Reg;
{* Prompts user for the register that the user would like
   displayed.  The integer value of the register is then
   displayed for every cell in the array. *}
var
  ch : char;
  i,j : integer;
begin
  writeln(mo,' Which Register?, (type r for a list)');
  readln(mi,ch);

  writeln(textout,ch);
  case ch of
    'r' : begin
            write(mo,' x=XR, y=YR, z=(XR,YR), l=LSR, g=GSR,');
            write(mo,' s=SVR, d=dSVR, t=TR, p=PRR, H=HPRR,');
            write(mo,' h=handshake, m=mark');
          end;
    'x' : for i := 1 to idim do
          begin
            writeln(mo); writeln(textout);
            for j := 1 to jdim do
            begin
              write(mo,oXR[i,j]:3);
              write(textout,oXR[i,j]:3)
            end;
          end;
    'y' : for i := 1 to idim do
          begin
            writeln(mo); writeln(textout);
            for j := 1 to jdim do
            begin
              write(mo,oYR[i,j]:3);
              write(textout,oYR[i,j]:3)
            end;
          end;
    'z' : for i := 1 to idim do
          begin
            writeln(mo); writeln(textout);
            for j := 1 to jdim do
            begin
              write(mo,'(',oXR[i,j]:1,oYR[i,j]:1,')');
              write(textout,'(',oXR[i,j]:1,oYR[i,j]:1,')')
            end;
          end;
    'l' : for i := 1 to idim do
          begin
            writeln(mo); writeln(textout);
            for j := 1 to jdim do
            begin
              write(mo,oLSR[i,j]:3);
              write(textout,oLSR[i,j]:3)
            end;
          end;
    'g' : for i := 1 to idim do
          begin
            writeln(mo); writeln(textout);
            for j := 1 to jdim do
            begin
              write(mo,oGSR[i,j]:3);
              write(textout,oGSR[i,j]:3)
            end;
          end;
    's' : for i := 1 to idim do
          begin
            writeln(mo); writeln(textout);
            for j := 1 to jdim do
            begin
              write(mo,oSVR[i,j]:3);
              write(textout,oSVR[i,j]:3)
            end;
          end;
    'd' : for i := 1 to idim do
          begin
            writeln(mo); writeln(textout);
            for j := 1 to jdim do
            begin
              write(mo,odSVR[i,j]:3);
              write(textout,odSVR[i,j]:3)
            end;
          end;
    't' : for i := 1 to idim do

          begin
            writeln(mo); writeln(textout);
            for j := 1 to jdim do
            begin
              write(mo,oTR[i,j]:3);
              write(textout,oTR[i,j]:3)
            end;
          end;
    'p' : for i := 1 to idim do
          begin
            writeln(mo); writeln(textout);
            for j := 1 to jdim do
            begin
              write(mo,oPRR[i,j]:3);
              write(textout,oPRR[i,j]:3)
            end;
          end;
    'H' : for i := 1 to idim do
          begin
            writeln(mo); writeln(textout);
            for j := 1 to jdim do
            begin
              write(mo,HPRR[i,j]:3);
              write(textout,HPRR[i,j]:3)
            end;
          end;
    'h' : for i := 1 to idim do
          begin
            writeln(mo); writeln(textout);
            for j := 1 to jdim do
            begin
              write(mo,ord(ohandshake[i,j]):3);
              write(textout,ord(ohandshake[i,j]):3)
            end;
          end;
    'm' : for i := 1 to idim do
          begin
            writeln(mo); writeln(textout);
            for j := 1 to jdim do
            begin
              write(mo,omark[i,j]:3);
              write(textout,omark[i,j]:3)
            end;
          end;
  end; { case }
  writeln(mo);
  writeln(textout);
end; { Print_Reg }
{ }
{******************************************************}
{ }
procedure First_Prompt;
{* Prompts user for dimensions of cellular array, desired
   global function, and seed location *}
var
  iinit : integer;
  jinit : integer;
  f : integer;      { temp var for Global Function }
  i,j : 0..maxsize; { array subscripts }
begin
  writeln(mo,' Array Simulator Version 1.0');
  writeln;
  writeln(mo,' Please Enter I - dimension of array');
  readln(mi,idim);

  while ((idim < 1) or (idim >= maxsize)) do
  begin
    writeln(mo,' Please use value between 1 and ',(maxsize-1));
    readln(mi,idim);
  end;
  writeln(mo,' Please Enter J - dimension of array');
  readln(mi,jdim);
  while ((jdim < 1) or (jdim >= maxsize)) do
  begin
    writeln(mo,' Please use value between 1 and ',(maxsize-1));
    readln(mi,jdim);
  end;
  {* Set Edge flag for cells on edge of array *}
  for i := 0 to (idim+1) do
    for j := 0 to (jdim+1) do
    begin
      if (i=1) or (j=1) or (i=idim) or (j=jdim) then
        Edge[i,j] := true;
      if (i=0) or (j=0) or (i=idim+1) or (j=jdim+1) then
      begin { So cells won't pass seed off edge. }
        omark[i,j] := 2;
        nmark[i,j] := 2;
      end;
    end;
  writeln(mo,' Please enter initial function');
  readln(mi,f);
  while ((f < 1) or (f > maxf)) do
  begin
    writeln(mo,' Please use value between 1 and ',maxf);
    readln(mi,f);
  end;
  writeln(mo,' Please give I coordinate of seed');
  readln(mi,iinit);
  while ((iinit < 1) or (iinit > idim)) do
  begin
    writeln(mo,' Please stay within bounds 1 to ',idim);
    readln(mi,iinit);
  end;
  writeln(mo,' Please give J coordinate of seed');
  readln(mi,jinit);
  while ((jinit < 1) or (jinit > jdim)) do
  begin
    writeln(mo,' Please stay within bounds 1 to ',jdim);
    readln(mi,jinit);
  end;

  writeln(mo,' Beginning Simulation'); writeln;
  writeln(mo,' Simulation Time = 0');
  nGSR[iinit,jinit] := f;
  nLSR[iinit,jinit] := S;
  ASF[iinit,jinit] := true;
end; { Procedure First_Prompt }
{ }
{******************************************************}
{ }
procedure Prompt;
{* Prompts user for next action.  The choices are to quit,
   inject a fault, or to simply let the simulator continue *}
var
  ifault : integer; { Fault injection coordinates }
  jfault : integer;
  ch : char;        { Command Character storage variable }
begin
  writeln(mo,' q = quit, f = fault injection, s = snapshot');
  writeln(mo,' r = register display, other chars = continue');
  readln(mi,ch);
  while (ch = 'r') do
  begin
    Print_Reg;
    writeln(mo,' Another? r=yes, *=no');
    readln(mi,ch);
  end;
  while (ch = 'f') do
  begin
    writeln(mo,' Please enter I coordinate of fault');
    readln(mi,ifault);
    while ((ifault < 1) or (ifault > idim)) do
    begin
      writeln(mo,' Please stay within bounds 1 to ',idim);
      readln(mi,ifault);
    end;
    writeln(mo,' Please enter J coordinate of fault');
    readln(mi,jfault);
    while ((jfault < 1) or (jfault > jdim)) do
    begin
      writeln(mo,' Please stay within bounds 1 to ',jdim);
      readln(mi,jfault);
    end;

    writeln(mo,' Fault injected in cell ',ifault:3,jfault:3);
    nLSR[ifault,jfault] := X;
    NSF[ifault+1,jfault] := true;
    SSF[ifault-1,jfault] := true;
    ESF[ifault,jfault-1] := true;
    WSF[ifault,jfault+1] := true;
    NWSF[ifault+1,jfault+1] := true;
    NESF[ifault+1,jfault-1] := true;
    SWSF[ifault-1,jfault+1] := true;
    SESF[ifault-1,jfault-1] := true;
    write(mo,' f - inject another fault, * - continue,');
    writeln(mo,' s - snapshot and continue');
    read(mi,ch);
  end;
  if (ch = 'q') then
  begin
    done := true;
    writeln(mo,' ending simulation')
  end
  else
  begin
    writeln(mo,' continuing simulation ... ');
    Snap := False;
    if ch = 's' then Snap := TRUE;
  end;
end; { Procedure Prompt }
{ }
{******************************************************}
{ }
procedure Init;
{ Procedure to initialize all variables (i.e., reset the
  computer) }
var
  h,i,j : 0..maxsize; { array subscripts }
  psnum : array[0..2,0..2] of integer;
begin
  { Clear Arrays }
  for i := 0 to maxsize do
    for j := 0 to maxsize do
    begin
      nLSR[i,j] := 0;
      nGSR[i,j] := 0;
      nSR[i,j] := 0;
      oLSR[i,j] := 0;

      oGSR[i,j] := 0;
      oSR[i,j] := 0;
      LSB[i,j] := 0;
      GSB[i,j] := 0;
      SB[i,j] := 0;
      nSVR[i,j] := 0;
      ndSVR[i,j] := 0;
      oSVR[i,j] := 0;
      odSVR[i,j] := 0;
      nPRR[i,j] := (maxf + 1)*256 + maxsize*i + j;
      oPRR[i,j] := nPRR[i,j];
      HPRR[i,j] := 0;
      nXR[i,j] := 0;
      nYR[i,j] := 0;
      nTR[i,j] := 0;
      oXR[i,j] := 0;
      oYR[i,j] := 0;
      oTR[i,j] := 0;

      RFSM[i,j] := 1;
      neutclock[i,j] := 0;
      clearclock[i,j] := 0;
      waitclock[i,j] := 0;
      nhandshake[i,j] := m;
      ohandshake[i,j] := m;
      nmark[i,j] := 0;
      omark[i,j] := 0;
    end;
  { reset flags }
  for i := 0 to maxsize do
    for j := 0 to maxsize do
    begin
      PGF[i,j] := false;
      ASF[i,j] := false;
      CM[i,j] := false;
      Edge[i,j] := false;
      NF[i,j] := false;
      NM[i,j] := false;
      RSF[i,j] := false;
      KF[i,j] := false;
      OK_PG[i,j] := false;
      NSF[i,j] := false;
      SSF[i,j] := false;
      WSF[i,j] := false;
      ESF[i,j] := false;
      NWSF[i,j] := false;
      NESF[i,j] := false;
      SWSF[i,j] := false;
      SESF[i,j] := false;
    end;

  { Read Tables and parameters }
  {* reset(textin,'file=textin'); *}
  for h := 1 to maxf do
  begin
    read(textin,cenx[h]);
    read(textin,ceny[h]);
    read(textin,maxx[h]);
    read(textin,maxy[h]);
    read(textin,minx[h]);
    read(textin,miny[h]);
    read(textin,hvspace[h]);
    read(textin,dspace[h]);
    read(textin,pstype[h]);
    read(textin,Bloomtime[h]);
    {* if input_dump = TRUE then
       begin
         writeln(' maxx=',maxx[h]:3);
         writeln(' maxy=',maxy[h]:3);
         writeln(' minx=',minx[h]:3);
         writeln(' miny=',miny[h]:3);
         writeln(' hvspace=',hvspace[h]:3);
         writeln(' dspace=',dspace[h]:3);
         writeln(' pstype=',pstype[h]:3);
         writeln(' Btime=',Bloomtime[h]:3);
       end; *}
    for i := maxy[h] downto miny[h] do
    begin
      {* writeln; *}
      for j := minx[h] to maxx[h] do
      begin
        read(textin,Table[h,i,j]);
        Table[h,i,j] := Table[h,i,j] + 20;
        {* write(Table[h,i,j]:3); *}
      end; { for }
    end; { for }
    {* writeln; *}
  end; { for }
  { Read in State Conversion Array }
  {* reset(stextin,'file=stextin'); *}
  for i := 0 to 15 do
  begin
    read(stextin,state[i]);
  end;
  { Define Processor-Switch Register values.
    1,2,3,4,6,7,8,9 are switches and 5 is a processor.

    i.e.  1 2 3     S S S
          4 5 6  =  S P S
          7 8 9     S S S
    Seed may only be planted in a cell of the correct type. }
  psnum[1,1] := 1; psnum[1,2] := 2; psnum[1,0] := 3;
  psnum[2,1] := 4; psnum[2,2] := 5; psnum[2,0] := 6;
  psnum[0,1] := 7; psnum[0,2] := 8; psnum[0,0] := 9;
  for i := 1 to maxsize do
    for j := 1 to maxsize do
    begin
      PSR[i,j] := psnum[i mod 3,j mod 3]
    end;
  {* The following incomprehensible piece of code determines
     the directional nudge factors that nudge the seed in the
     direction of the correct pstype. *}
  for i := 1 to 9 do
  begin
    for j := 1 to 9 do
    begin
      PSM[i,j] := j - i;
      if (PSM[i,j] = 7) or (PSM[i,j] = -5) then
        PSM[i,j] := -2
      else if (PSM[i,j] = -7) or (PSM[i,j] = +5) then
        PSM[i,j] := +2;
      PSM[i,j] := PSM[i,j] + 4;
    end;
  end;
  for i := 0 to 8 do
  begin
    Nfact[i] := 0;
    Efact[i] := 0;
    Wfact[i] := 0;
    Sfact[i] := 0;
  end;
  {* Nudge > 0 if in right direction, nudge most if in exact
     direction. *}
  Nfact[0] := 1; Nfact[1] := 2; Nfact[2] := 1;
  Efact[2] := 1; Efact[5] := 2; Efact[8] := 1;
  Wfact[0] := 1; Wfact[3] := 2; Wfact[6] := 1;
  Sfact[6] := 1; Sfact[7] := 2; Sfact[8] := 1;
end; { Procedure Init }
{ }
{******************************************************}
{ }
procedure Snapshot;
{ Procedure to output cell states }
var
  i,j : 0..maxsize; { array subscripts }

begin
  for i := 1 to idim do
  begin
    writeln; writeln(textout);
    for j := 1 to jdim do
    begin
      if (oSR[i,j] < ns) then
      begin
        if (oLSR[i,j] <= nss) then
        begin
          case oLSR[i,j] of
            0 : begin
                  write(' 0 ');
                  write(textout,' 0 ')
                end;
            Gx : begin
                   write(' Gx');
                   write(textout,' Gx')
                 end;
            Gc : begin
                   write(' Gc');
                   write(textout,' Gc')
                 end;
            G : begin
                  write(' G ');
                  write(textout,' G ')
                end;
            Gy : begin
                   write(' Gy');
                   write(textout,' Gy')
                 end;
            Q : begin
                  write(' Q ');
                  write(textout,' Q ')
                end;
            Y : begin
                  if (oGSR[i,j] = 0) then
                  begin
                    write(' W ');
                    write(textout,' W ')
                  end
                  else { oGSR[i,j] <> 0 }
                  begin
                    write(' V ');
                    write(textout,' V ')
                  end;
                end;
            S : begin

                  write(' S ');
                  write(textout,' S ')
                end;
            R : begin
                  write(' R ');
                  write(textout,' R ')
                end;
            X : begin
                  write(' X ');
                  write(textout,' X ')
                end;
            Z : begin
                  write(' Z ');
                  write(textout,' Z ')
                end;
            D : begin
                  write(' D ');
                  write(textout,' D ')
                end;
            K : begin
                  write(' K ');
                  write(textout,' K ')
                end;
          end; { case }
        end { then2 }
        else { oLSR > nss }
        begin
          write(state[(oLSR[i,j]-20)]:2,' ');
          write(textout,state[(oLSR[i,j]-20)]:2,' ')
        end; { else2 }
      end { then1 }
      else { oSR[i,j] > ns }
      begin
        write((oSR[i,j]-1000):3);
        write(textout,(oSR[i,j]-1000):3)
      end;
      {* write(ord(ohandshake[i,j]):2,oGSR[i,j]:3,oLSR[i,j]:3,
               oSVR[i,j]:2,odSVR[i,j]:2,oXR[i,j]:2,oYR[i,j]:2,
               oTR[i,j]:2,SB[i,j]:3);
         write(textout,ord(ohandshake[i,j]):2,oGSR[i,j]:3,
               oLSR[i,j]:3,oSVR[i,j]:2,odSVR[i,j]:2,oXR[i,j]:2,
               oYR[i,j]:2,oTR[i,j]:2,SB[i,j]:3); *}
    end;
  end;
  writeln; writeln(textout);
  writeln('Simulation time step = ',simtime:4);
  writeln(textout,'Simulation time step = ',simtime:4);
end; { Procedure Snapshot }

{*******************************************************}
{ }
{ Main program begins here }
begin
  {* Reset and Rewrite IO text files *}
  reset(mi,'file=msource,interactive');
  rewrite(mo,'file=msink');
  reset(textin,'file=textin');
  reset(stextin,'file=stextin');
  rewrite(textout,'file=textout');
  done := false;
  simtime := 0;
  Init;
  First_Prompt;
  Pass_State; {* initialize old registers *}
  for h := 1 to jdim do {* initialize space values *}
  begin
    Pass_Space_Values;
    Compute_Space_Value;
  end;
  Snapshot;
  simtime := simtime + 1;
  while not done do
  begin { Note: done is only set true through Prompt }
    Prompt;
    Pass_Space_Values;
    Compute_Space_Value;
    Pass_State;
    Compute_Next_State;
    if (Snap = true) then Snapshot;
    simtime := simtime + 1;
  end; { while }
end. { main program }
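The main loop above follows the two-phase update used throughout ARRAYSIM: Pass_State copies the new (n*) registers into the old (o*) registers, and Compute_Next_State then writes n* registers from o* values only, so all cells appear to change state in lockstep. A minimal sketch of this discipline, in Python rather than the thesis's Pascal, with a toy one-dimensional rule (the names `step` and `rule` are hypothetical, not part of ARRAYSIM):

```python
# Two-phase synchronous cellular-automaton update: every cell's next
# state is computed from the OLD configuration only, and all cells
# commit at once -- the reason ARRAYSIM keeps paired n*/o* registers.
def step(old, rule):
    return [rule(old, i) for i in range(len(old))]

def rule(old, i):
    # toy growth rule: a cell becomes 1 if its left neighbor was 1
    return 1 if i > 0 and old[i - 1] == 1 else old[i]

g = [1, 0, 0, 0]
g = step(g, rule)   # -> [1, 1, 0, 0]
g = step(g, rule)   # -> [1, 1, 1, 0]
```

Updating in place instead (reading already-updated neighbors) would let the toy pattern grow the whole row in one pass, which is exactly the asynchrony the register pairs prevent.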

Appendix B. Pattern Growth Parameters and Tables

This Appendix contains the contents of file "textin", which the simulator reads to find the Pattern Growth Parameters and Tables for 4 different patterns. The parameters are: cenx, ceny, maxx, maxy, minx, miny, dspace, hvspace, pstype, and bloomtime. For the meaning of the integer state assignments, refer to figure 8, page 29. File "stextin" contains the letters oabcdefghijklmnp, which correspond to the integers 0...15 in "textin". Since the first 20 integers are reserved for special states, the integers here are incremented by 20 to obtain local states.
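As a quick illustration of the +20 offset described above (a Python sketch, not part of the thesis code; the dictionary name is hypothetical), the 16 letters in "stextin" stand for table entries 0...15, so letter k displays local state k + 20:

```python
# Illustrative only: map each local state (table entry + 20) back to
# the display letter from file "stextin", as procedure Snapshot does
# with state[oLSR[i,j] - 20].
letters = "oabcdefghijklmnp"          # contents of file "stextin"

state_letter = {k + 20: ch for k, ch in enumerate(letters)}

print(state_letter[20], state_letter[35])   # prints: o p
```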

Banyan Network

 5  4 10  7  1  1  5  5  6 10
15  8 11 15 11  7 15  7  8 15
 2  0  0 14  3 12 13  0  0  2
 2  0  0 13 12  3 14  0  0  2
15  8  4 15  4  9 15  9  8 15
 2 12  3  2  0  0  2 12  3  2
 2  3 12  2  0  0  2  3 12  2
15  0  0 15  0  0 15  0  0 15

Lens Structure

 5  5 10 10  1  1  5  5  9 12
15  0  0 15  0  0 15  0  0 15
 2 12  3  2  0  0  2 12  3  2
 2  3 12  2  0  0  2  3 12  2
15  0  0 15  0  0 15  0  0 15
 2  0  0  0 12  3  0  0  0  2
 2  0  0  0  3 12  0  0  0  2
15  0  0 15  0  0 15  0  0 15
 2 12  3  2  0  0  2 12  3  2
 2  3 12  2  0  0  2  3 12  2
15  0  0 15  0  0 15  0  0 15

Fault Tolerant Structure

 4  5  8  9  1  1  4  4  4  9
 0  0  7  8  8  8 11  0
 0 15  8  8 15  8  8 15
 6  0  9  8  8 11  3  2
 2  0  7  8  8  4 12  2
 2 15  8  8 15  8  8 15
 2  2 12  0  2  0  0  0
 2  2  0 12  2  0  0  0
 2 15  8  8 15  0  0  0
 1  9  8  4  0  0  0  0

Hyper Tree

 6  3 11  5  1  1  5  5  7 10
 0  0  0  6  8  8  8  8 11  0  0
 0  0  0 15  8  8 15  8  8 15  0
 0  0  3  2 10  7  8 11  3  2  0
 0  3  3 13  3 12  0  3 12  2  0
15  4  0 15  0  0 15  0  0 15  0

Appendix C. Relationship Between Pattern Size and Bloomtime

In this section we show the relationship between the Bloomtime B and the maximum pattern dimension D for several pattern shapes of interest. Similar results hold for other pattern shapes.

For square patterns, X = Y = D, and

B = D if D is odd,
B = D+1 if D is even.

For Diamond shaped patterns,

Girth = 2*B - 1, or

B = (Girth+1)/2.

For Rectangular patterns,

D = max(X,Y) <= 2*B - 1, and depending on whether X and Y are even or odd,

2*B <= X + Y <= 2*B + 2.

Although the above is not a proof for any arbitrary

shape, we could show that the Bloomtime B and the maximum pattern dimension D are always linearly related for any pattern. Furthermore,

M = O(f(D)) <=> M = O(f(B)) and,

T = O(f(D)) <=> T = O(f(B)).

Thus, whenever a memory or time complexity is given in terms of B, we can assume that the same complexity also holds when expressed in terms of D.
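The square-pattern relation above can be checked numerically. In procedure Pattern_Growth a cell's time register is its Manhattan distance from the pattern center plus one, so, assuming the Bloomtime equals the largest time-register value in the pattern, the sketch below (a Python check with a hypothetical helper name, not part of the simulator) reproduces B = D for odd D and B = D+1 for even D:

```python
def bloomtime_square(D):
    # Cells occupy coordinates 1..D on each axis; the seed takes the
    # center as (maxx + minx) div 2, as in procedure Pattern_Growth,
    # and each cell's time register is its Manhattan distance from
    # the center plus one.  The Bloomtime is the maximum such value.
    c = (D + 1) // 2
    return max(abs(x - c) + abs(y - c)
               for x in range(1, D + 1)
               for y in range(1, D + 1)) + 1

# B = D for odd D, B = D+1 for even D
for D in range(1, 12):
    assert bloomtime_square(D) == (D if D % 2 == 1 else D + 1)
```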
