Periodic Points and Iteration in Sequential Dynamical Systems

Taom Sakal Advised by Padraic Bartlett

Senior Thesis

CCS Mathematics, University of California Santa Barbara, May 2016

CONTENTS

Introduction

1. Predicting Periodic Points in Cycle Graphs
   1.0.1 What is a Simple Cyclic SDS?
   1.0.2 The Phase Space
   1.1 The Compatibility Graph
   1.2 The Transition Graph
       1.2.1 Construction and Definitions
       1.2.2 Calculating SDS with the Transition Graph: an Algorithm
   1.3 Main Results
       1.3.1 Shifting States
       1.3.2 Finding other graphs with points of the same period
   1.4 Future Directions

2. The Iterated Phase Space
   2.1 Generalized SDS
   2.2 The Iterated Phase Space
       2.2.1 Definitions for IPS
   2.3 Classification of IPS
       2.3.1 Behavior of Or and And
       2.3.2 Behavior of Parity and Parity+1
       2.3.3 Behavior of Majority and Majority+1
       2.3.4 Behavior of Nor and Nand
   2.4 Other results

3. Color SDS
   3.1 What is a Color SDS?
   3.2 Color Distance
   3.3 Other Coloring SDSs, Other Color Distances
   3.4 Lonely Colorings

Future Directions

Acknowledgments

Bibliography

INTRODUCTION

Graph dynamical systems capture a phenomenon found in nearly all of science: emergence. Individuals which follow simple rules can, when viewed collectively, create complex behavior. Graph dynamical systems have modeled everything from ants to traffic to economies. In all these models individuals look to their immediate environment and the actions of those around them to decide their next action. The classic mathematical example of such a system is Conway's Game of Life. Moving creations, pulsing galaxies, and even self-reproducing organisms can emerge from the game's simple and mechanical rules [2, Chapter 7.3] [3]. Sequential Dynamical Systems are time- and space-discrete systems, similar to the Game of Life. However, they have the added dimension of a vertex-by-vertex update order (compared to the Game of Life's simultaneous update order). The order in which a system updates matters, and this is what gives SDS its applications, some examples being traffic simulations, genetic models, queuing theory, and even a mathematically precise foundation for computer simulation [4, Chapter 1.3, Chapter 8].

We begin in Chapter 1 by describing results on a special class of SDS called cyclic SDS. We then consider the Iterated Phase Space and more general SDS in Chapter 2. Finally, in Chapter 3 we consider Color SDS and some results on 2-cycles. We will assume basic knowledge of graph theory throughout (i.e., what a graph is, what vertices and edges are, and the basic types of graphs).

1. PREDICTING PERIODIC POINTS IN CYCLE GRAPHS

Often we wish to know when SDS are periodic. Given a starting state, an SDS may update back to this starting state after n updates. A state that does this is called a point of period n. Knowing which points are periodic is important because it tells us what inputs cause our systems to loop. Mortveit and Reidys characterized points of period one for a special class of SDS (called Simple Cyclic SDS) on cycle graphs [4, Chapter 5.1]. In this chapter we generalize their technique to find points of arbitrary period on C_n for such SDS. To do this we first introduce basic terminology and Simple Cyclic SDS in Section 1.0.1. Next we review the compatibility graph, the tool with which Mortveit and Reidys studied fixed points. From there we introduce the transition graph as a generalization of the compatibility graph and show how to transform the problem of calculating an SDS into a problem of finding walks on the transition graph. To finish we prove two theorems which allow us to, when given a point of period k on C_n, find points of period k on the graphs C_n and C_{n+ank}, where a is a natural number.

1.0.1 What is a Simple Cyclic SDS?

A Sequential Dynamical System (SDS) is a discrete dynamical system on a graph, and a formal construction of them can be found in [4, Chapter 4.1]. The results in this paper are limited to Simple Cyclic SDS, a special class of SDS on C_n. We label the vertices of C_n in the natural way: v_i is adjacent to v_{i+1}, and v_n is adjacent to v_1. A Simple Cyclic SDS consists of

1. An undirected cycle graph C_n with vertices {v_1, ..., v_n}, with n ≥ 3.

2. A set of possible vertex states K. (For this paper we let K = F_2.)

3. A local vertex function f : K^3 → K.

4. An ordering π = (σ(v_1), σ(v_2), ..., σ(v_n)) of the vertices in C_n, where σ is a permutation in S_n. For this paper, σ will be the identity.

The local vertex function f is defined as

f : K^3 → K, (x_{i-1}, x_i, x_{i+1}) ↦ x_i',

where x_i' ∈ K and the subscripts are taken modulo n. Given a vertex v_i in state x_i with neighbors v_{i-1}, v_{i+1} in states x_{i-1}, x_{i+1}, we can update v_i by replacing x_i with the state given by f(x_{i-1}, x_i, x_{i+1}). An update function is symmetric if it does not care about the order of the x's in the input (x_{i-1}, x_i, x_{i+1}). These local functions can be composed to give the SDS-map. This map, denoted [f_{C_n}, π], applies the local vertex function to each vertex in our graph, in the order specified by π. We call an application of the SDS-map a system update. A system state is an assignment of vertices in our graph to elements of K. We let the tuple (x_1, ..., x_n) ∈ K^n denote the system state in which vertex v_i has state x_i. A system update changes the system state (x_1, ..., x_n) to a new system state.

Definition 1. Let F = [f_{C_n}, π] be an SDS-map and let X and X' be system states. If F(X) = X' we write

X ↦_F X'

and read it as "X updates to X'," and we say that we have applied a system update.

Generally, C_n and π are assumed to be fixed and we write F in place of [f_{C_n}, π].

Example: Consider the SDS on C_3, the cycle graph on three vertices. Let the vertex set be {v_1, v_2, v_3} and let the vertex state set be F_2. Set π = (v_1, v_2, v_3), the identity update order. Define the vertex update function to be

Parity(x_{i-1}, x_i, x_{i+1}) := (x_{i-1} + x_i + x_{i+1}) mod 2.

Take the system state (1, 0, 0), as shown in Figure 1.1, and apply our local vertex functions in the order given by π. We begin with v_1. It sees that its left neighbor is 0, it itself is 1, and its right neighbor is 0. Thus we have f_{v_1}(0, 1, 0) = 0 + 1 + 0 mod 2 = 1, so v_1 updates to 1.

We do this next for v_2. It sees the updated v_1 to its left, a 0 for itself, and a 0 to its right. Thus it updates to f_{v_2}(1, 0, 0) = 1 + 0 + 0 mod 2 = 1. Finally we do this for v_3 and find it updates to f_{v_3}(1, 0, 1) = 0. The resulting system state is (1, 1, 0). As we have updated all the vertices in the order specified, we have finished an entire system update, and we have that (1, 0, 0) ↦_F (1, 1, 0).

Fig. 1.1: A single system update using Parity
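The system update just traced can be sketched in a few lines of Python. This is a minimal sketch with names of our own choosing (`system_update`, `parity` are not from the thesis); the key point is that each vertex reads the current, possibly already-updated, states of its neighbors:

```python
def parity(a, b, c):
    # Parity(x_{i-1}, x_i, x_{i+1}) = (x_{i-1} + x_i + x_{i+1}) mod 2
    return (a + b + c) % 2

def system_update(state, f):
    """Apply the SDS-map [f_{C_n}, id]: update v_1, ..., v_n in order.
    Updates made earlier in the pass are visible to later vertices."""
    x = list(state)
    n = len(x)
    for i in range(n):
        x[i] = f(x[(i - 1) % n], x[i], x[(i + 1) % n])
    return tuple(x)
```

For instance, `system_update((1, 0, 0), parity)` returns `(1, 1, 0)`, matching the update above.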

1.0.2 The Phase Space

The phase space of an SDS is a directed graph that represents how an SDS updates. The vertex set of this graph is the collection of all possible system states for our SDS. For an SDS-map F, we draw an edge from one system state X to a system state X' if X ↦_F X'.

Fig. 1.2: The phase space of Parity over C3.

The phase space gives us a complete view of an SDS's behavior. A special structure in the phase space is the periodic point: a system state that returns to itself after a certain number of system updates. For example, the phase space illustrated above contains four points of period four, two points of period two, and two points of period one. A point of period one is called a fixed point.

Observe that finding periodic points in an SDS is equivalent to finding directed cycles in the phase space.
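These period counts can be checked by brute force: build the phase space as a dictionary and follow each state until it returns to itself. A small sketch (our own code, not from the thesis):

```python
from collections import Counter
from itertools import product

def parity(a, b, c):
    return (a + b + c) % 2

def system_update(state):
    x = list(state)
    n = len(x)
    for i in range(n):
        x[i] = parity(x[(i - 1) % n], x[i], x[(i + 1) % n])
    return tuple(x)

# Phase space of Parity over C_3: one edge X -> F(X) out of every state X.
phase = {X: system_update(X) for X in product((0, 1), repeat=3)}

def period(X):
    """Smallest p with F^p(X) = X, or None if X is not periodic."""
    Y, p = phase[X], 1
    while Y != X:
        if p > len(phase):
            return None
        Y, p = phase[Y], p + 1
    return p

counts = Counter(period(X) for X in phase)
```

Here `counts` comes out to {4: 4, 2: 2, 1: 2}, agreeing with the description of Figure 1.2.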

1.1 The Compatibility Graph

In this section we review the compatibility graph, first introduced in [4, Chapter 5.1]. The compatibility graph allows us to find all fixed points in a simple cyclic SDS. It does this by "stitching together" local fixed points, which we define below.

Definition 2. Let f be a local vertex function. A local fixed point for f is a triple of vertex states (x_{i-1}, x_i, x_{i+1}) such that

f(x_{i-1}, x_i, x_{i+1}) = x_i.

Example: Consider the function Majority : F_2^3 → F_2, defined as follows:

Majority(a, b, c) = 1 if a + b + c > 1, and 0 if a + b + c ≤ 1.

The local fixed points for Majority are labeled in the table below.

(x_{i-1}, x_i, x_{i+1})   Majority(x_{i-1}, x_i, x_{i+1})   Fixed Point?
(0, 0, 0)                 0                                 Yes
(0, 0, 1)                 0                                 Yes
(0, 1, 0)                 0                                 No
(0, 1, 1)                 1                                 Yes
(1, 0, 0)                 0                                 Yes
(1, 0, 1)                 1                                 No
(1, 1, 0)                 1                                 Yes
(1, 1, 1)                 1                                 Yes
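The table can be reproduced with a one-line filter over all eight triples. A minimal sketch, with names of our choosing:

```python
from itertools import product

def majority(a, b, c):
    return 1 if a + b + c > 1 else 0

# A triple (x_{i-1}, x_i, x_{i+1}) is a local fixed point when f leaves
# the middle state unchanged (Definition 2).
local_fixed = [t for t in product((0, 1), repeat=3) if majority(*t) == t[1]]
```

This recovers exactly the six "Yes" rows of the table; (0, 1, 0) and (1, 0, 1) are excluded.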

The compatibility graph for a given simple cyclic SDS describes a way to fit local fixed points together into a global fixed point. We do this by finding which local fixed points are compatible with each other, as defined below.

Definition 3. A triple (a, b, c) is compatible with a triple (x, y, z) if and only if b = x and c = y.

Example: For Majority, the triple (0, 0, 1) is compatible with the triples (0, 1, 1) and (0, 1, 0), but not with any other triples.

Definition 4. The compatibility graph of an SDS is a directed graph whose vertex set is the set of all local fixed points. There is an edge from (a, b, c) to (x, y, z) if and only if (a, b, c) is compatible with (x, y, z).

Example: The compatibility graph for Majority is as follows.

[Figure: the compatibility graph for Majority, on the vertices 000, 001, 100, 011, 110, 111.]

To denote a walk that goes from v_1 to v_2 to ... to v_n we write v_1 → v_2 → ··· → v_n. The compatibility graph encodes all the information about fixed points on C_n as follows.

Proposition 1. Let [f_{C_n}, π] be a simple cyclic SDS where π is an arbitrary permutation of our vertices, and let G be the corresponding compatibility graph. Then if

W = (a_1, b_1, c_1) → (a_2, b_2, c_2) → ··· → (a_k, b_k, c_k) → (a_1, b_1, c_1)

is a closed walk in G such that k divides n, we have that

B = (b_1, ..., b_k, b_1, ..., b_k, ..., b_1, ..., b_k)    (b_1, ..., b_k repeated n/k times)

is a fixed point for [f_{C_n}, π]. That is, [f_{C_n}, π](B) = B. Likewise, every fixed point corresponds to a closed walk on the compatibility graph.

Furthermore, the number of fixed points of our SDS-map [f_{C_n}, π] is equal to the trace of A^n, where A is any adjacency matrix corresponding to this compatibility graph.

Proof. As described above, take any simple cyclic SDS [f_{C_n}, π] and any walk W that gives the state B = (b_1, ..., b_k, b_1, ..., b_k, ..., b_1, ..., b_k). Whenever we update the vertex b_{π(i)} our function takes in (b_{π(i)-1}, b_{π(i)}, b_{π(i)+1}), which is a local fixed point, meaning the vertex state stays the same. As the vertex state never changes for any vertex, B is a fixed point.

If instead we are given a fixed point B, then whenever we update the vertex b_{π(i)} the vertex's state does not change, meaning that (b_{π(i)-1}, b_{π(i)}, b_{π(i)+1}) is a local fixed point. Likewise, when we update the adjacent vertex b_{π(i)+1} its state also does not change, meaning that (b_{π(i)}, b_{π(i)+1}, b_{π(i)+2}) is also a local fixed point. This corresponds to the edge that goes from (b_{π(i)-1}, b_{π(i)}, b_{π(i)+1}) to (b_{π(i)}, b_{π(i)+1}, b_{π(i)+2}) in the compatibility graph. In this way we create a path in the compatibility graph. Since our SDS is over C_n, this path closes into a closed walk of length k, where k divides n.

Finally, let A denote the adjacency matrix of our compatibility graph. It is well known that the entry (i, j) of the n-th power of an adjacency matrix is equal to the number of distinct walks of length n from the i-th vertex to the j-th vertex. Hence an entry on the diagonal of A^n gives the number of closed walks of length n based at that vertex, and summing the diagonal gives the total number of such closed walks. Thus the trace of A^n gives the total number of fixed points of our SDS-map.

Example: We find that (000) → (001) → (011) → (111) → (110) → (100) → (000) is a closed walk of length 6 in the compatibility graph for Majority. Therefore the state (0, 0, 1, 1, 1, 0) is a fixed point on C_6.
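Proposition 1 is easy to spot-check by machine: build the adjacency matrix of the compatibility graph for Majority and compare the trace of A^n with a brute-force count of fixed points on C_n. A sketch, assuming the sequential update convention used throughout this chapter (the function names are ours):

```python
from itertools import product

def majority(a, b, c):
    return 1 if a + b + c > 1 else 0

def system_update(state):
    x = list(state)
    n = len(x)
    for i in range(n):
        x[i] = majority(x[(i - 1) % n], x[i], x[(i + 1) % n])
    return tuple(x)

# Vertices of the compatibility graph: the local fixed points of Majority.
V = [t for t in product((0, 1), repeat=3) if majority(*t) == t[1]]
# Edge (a,b,c) -> (x,y,z) iff b = x and c = y (Definition 3).
A = [[1 if u[1] == v[0] and u[2] == v[1] else 0 for v in V] for u in V]

def trace_power(A, n):
    """tr(A^n) by repeated matrix multiplication (no external libraries)."""
    M = A
    for _ in range(n - 1):
        M = [[sum(M[i][k] * A[k][j] for k in range(len(A))) for j in range(len(A))]
             for i in range(len(M))]
    return sum(M[i][i] for i in range(len(M)))

n = 6
fixed = [X for X in product((0, 1), repeat=n) if system_update(X) == X]
assert trace_power(A, n) == len(fixed)   # Proposition 1
assert (0, 0, 1, 1, 1, 0) in fixed       # the fixed point from the example
```

For n = 3 the same check gives tr(A^3) = 2, matching the two fixed points (0, 0, 0) and (1, 1, 1).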

1.2 The Transition Graph

The transition graph is a generalization of the compatibility graph. While the compatibility graph gives information about points of period one on simple cyclic SDS, the transition graph gives information about points of any period on simple cyclic SDS, provided we limit the update order to the identity permutation. For the remainder of this paper we limit our study to simple cyclic SDS with the SDS-map [f_{C_n}, id]. For notational convenience, we will denote [f_{C_n}, id] simply as F whenever the map f and the cycle C_n are understood.

1.2.1 Construction and Definitions

The transition graph is similar to the compatibility graph in that it is a directed graph made of the eight triples in F_2^3. However, instead of compatibility, we draw an edge from one triple to another if, when we update the first triple and move to the next vertex, we see the second triple. More formally,

Definition 5. Let our local update function be f and our SDS-map be [f_{C_n}, id]. The transition graph of [f_{C_n}, id] has F_2^3 as its vertex set, and for any two triples (a, b, c), (x, y, z) ∈ F_2^3, we have (a, b, c) → (x, y, z) if and only if x = f(a, b, c) and y = c.

[Figure: the transition graph for Parity+1, on the vertices 000, 100, 001, 010, 101, 110, 011, 111.]

Fig. 1.3: The transition graph for Parity+1

The following notation is helpful for understanding the transition graph.

Definition 6. Let X = (x_1, ..., x_n) be a system state. Then

x_i' = f(x_n, x_1, x_2) if i = 1,
x_i' = f(x_{i-1}', x_i, x_{i+1}) if 1 < i < n,
x_i' = f(x_{n-1}', x_n, x_1') if i = n.

Observe that, as long as π = id, we have that F(x_1, ..., x_n) = (x_1', ..., x_n'). (This follows directly from the definition of our SDS-map.) Likewise F^2(x_1, ..., x_n) = (x_1'', ..., x_n''). In general, we apply the following notation when needed.

F^a(x_1, ..., x_n) = (x_1^(a), ..., x_n^(a))

What x_i^(a) says is that, after X has been updated a times, the i-th vertex has the state x_i^(a).

Definition 7. Let X = (x_1, ..., x_n) be a system state. Updating X corresponds to the following walk on our transition graph:

W = (x_n, x_1, x_2) → (x_1', x_2, x_3) → (x_2', x_3, x_4) → ··· → (x_{n-2}', x_{n-1}, x_n) → (x_{n-1}', x_n, x_1').

If a walk corresponds to a state then we call the walk a state-walk. We denote the i-th vertex in W by w_i. Intuitively, the state-walk tells us what the SDS "sees" as it updates the associated state. We illustrate this with the following example. Given only a state and f, we can find the corresponding state-walk by directly calculating each x_i'. Each state has a unique walk; however, not every walk is a state-walk.

Example: Let f = Parity+1. We claim that the state X = (1, 0, 1, 1) corresponds to the walk W = (1, 1, 0) → (1, 0, 1) → (1, 1, 1) → (0, 1, 1). This is not too hard to see. When we calculate F(X), we update the first vertex, then the second, then the third, and finally the fourth.

Step i   Vertex updated   Triple seen                     v_i updates to   New system state
1        v_1              (x_4, x_1, x_2) = (1, 1, 0)     1                (1, 0, 1, 1)
2        v_2              (x_1', x_2, x_3) = (1, 0, 1)    1                (1, 1, 1, 1)
3        v_3              (x_2', x_3, x_4) = (1, 1, 1)    0                (1, 1, 0, 1)
4        v_4              (x_3', x_4, x_1') = (0, 1, 1)   1                (1, 1, 0, 1)

Observe that the triples in the third column above coincide exactly with those of W. This is because the triples record what the SDS sees at each vertex just before it updates it. The state-walk captures these perspectives.
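The state-walk of Definition 7 can be computed directly. A minimal sketch, with function names of our choosing:

```python
def parity_plus1(a, b, c):
    return (a + b + c + 1) % 2

def state_walk(state, f):
    """The walk (x_n,x_1,x_2) -> (x_1',x_2,x_3) -> ... -> (x_{n-1}',x_n,x_1')
    of Definition 7: each triple is what the SDS sees just before updating."""
    x = list(state)
    n = len(x)
    new = list(x)
    walk = []
    for i in range(n):
        left = new[i - 1] if i > 0 else x[n - 1]      # updated left neighbor
        right = new[0] if i == n - 1 else x[i + 1]    # not-yet-updated right neighbor
        walk.append((left, x[i], right))
        new[i] = f(left, x[i], right)
    return walk
```

Calling `state_walk((1, 0, 1, 1), parity_plus1)` recovers exactly the four triples of W above.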

Notice that on any given walk we may find vertices v_i for which (x_{i-1}, x_i, x_{i+1}) is a local fixed point and vertices for which this is not true. If a triple is not a local fixed point we call it a local swap. In the previous example, (1, 0, 1) and (1, 1, 1) are local swaps. The local fixed points and local swaps let us associate our state-walks to binary strings. These strings encode the positions of the local swaps and local fixed points.

Definition 8. Let W = w_1 → w_2 → ··· → w_n be a state-walk. Then the string associated with W is

S = (s_1, s_2, ..., s_n),

where s_i = 1 if w_i is a local swap and s_i = 0 if w_i is a local fixed point.

Example: Let f = Parity+1 and take the state-walk W = (1, 1, 0) → (1, 0, 1) → (1, 1, 1) → (0, 1, 1) from the previous example. As (1, 1, 0) and (0, 1, 1) are local fixed points, the associated string is S = (0, 1, 1, 0).

At this point we have that each system state corresponds to a unique walk on the transition graph, and each walk corresponds to a string; hence each state is associated with a string. Note that the strings are not necessarily unique, while the walks are. The strings, with their encoded information, give us a way to update a system state without using the SDS function. This is done through string addition, which is simply vector addition between a string and a state. That is, for system state X = (x_1, ..., x_n) and string S = (s_1, ..., s_n) we have

X + S = (x_1 + s_1, ..., x_n + s_n),

where x_i + s_i is taken modulo 2. Similarly we can add strings to strings.

Lemma 1. Let X = (x_1, ..., x_n) be a system state and let S = (s_1, ..., s_n) be the associated string. Then F(X) = X + S, where we add these strings as vectors modulo 2.

Proof. Notice that for any x_i, we have

x_i' = f(a, x_i, b)

for some a and b. Say that (a, x_i, b) is a local swap. Then

x_i' = f(a, x_i, b) = x_i + 1 mod 2.

Since (a, x_i, b) is a local swap, s_i = 1. Hence

x_i + s_i = x_i + 1 = x_i' mod 2.

The reasoning is similar if (a, x_i, b) is a local fixed point. We have shown that x_i + s_i = x_i', and thus (x_1, ..., x_n) ↦_F (x_1 + s_1, ..., x_n + s_n) = X + S.

Example: Let our SDS function be Parity+1 and let the state be (1, 0, 1, 1). The associated string is S = (0, 1, 1, 0). Then

(1, 0, 1, 1) + (0, 1, 1, 0) = (1 + 0, 0 + 1, 1 + 1, 1 + 0) = (1, 1, 0, 1),

and so F(1, 0, 1, 1) = (1, 1, 0, 1).
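Lemma 1 can be checked exhaustively for Parity+1 on C_4. The sketch below (our code, following the chapter's conventions) recomputes the state-walk, reads off the string, and compares X + S with F(X) for all sixteen states:

```python
from itertools import product

def parity_plus1(a, b, c):
    return (a + b + c + 1) % 2

def system_update(state):
    x = list(state)
    n = len(x)
    for i in range(n):
        x[i] = parity_plus1(x[(i - 1) % n], x[i], x[(i + 1) % n])
    return tuple(x)

def state_walk(state):
    x = list(state)
    n = len(x)
    new = list(x)
    walk = []
    for i in range(n):
        left = new[i - 1] if i > 0 else x[n - 1]
        right = new[0] if i == n - 1 else x[i + 1]
        walk.append((left, x[i], right))
        new[i] = parity_plus1(left, x[i], right)
    return walk

def string_of(walk):
    # s_i = 1 exactly when w_i is a local swap (Definition 8).
    return tuple(1 if parity_plus1(*w) != w[1] else 0 for w in walk)

# Lemma 1: F(X) = X + S (coordinatewise mod 2) for every state X on C_4.
for X in product((0, 1), repeat=4):
    S = string_of(state_walk(X))
    assert system_update(X) == tuple((x + s) % 2 for x, s in zip(X, S))
```

In particular, for X = (1, 0, 1, 1) this reproduces S = (0, 1, 1, 0) and F(X) = (1, 1, 0, 1).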

1.2.2 Calculating SDS with the Transition Graph: an Algorithm

The following algorithm transforms the problem of updating an SDS into that of finding paths on the transition graph. The algorithm takes in F = [f_{C_n}, π] and a system state X_1 and outputs a sequence of states X_1, X_2, X_3, ... such that

X_1 ↦_F X_2 ↦_F X_3 ↦_F ···

Notation-wise, let the j-th element of X_i be denoted x_{i,j}. We use similar notation for walks and strings. Furthermore, if on a directed graph there is an edge from v_1 to v_2, then we call v_2 a direct successor of v_1. Likewise, v_1 is a direct predecessor of v_2.

Algorithm

1. Set i = 1 and input X_1 and F. Find the walk corresponding to X_1 by directly calculating it through the SDS-map. Call the walk W_1.

2. Find the string corresponding to W_i. Call this string S_i.

3. Set X_{i+1} equal to X_i + S_i.

4. Find the walk W_{i+1} corresponding to X_{i+1}.

(a) First we find w_{i+1,1}. Consider w_{i,n}, whose entries are (x_{i+1,n-1}, x_{i,n}, x_{i+1,1}). Set w_{i+1,1} = (x_{i+1,n}, x_{i+1,1}, x_{i+1,2}). Notice that this triple is a direct successor of w_{i,n}.

(b) Now we find w_{i+1,j} for 1 < j < n. Look at the direct successors of w_{i+1,j-1}; exactly one of them has x_{i+1,j+1} as its third entry, and we set w_{i+1,j} equal to it.

(c) Finally we find w_{i+1,n}. Let s_{i+1,1} = 1 if w_{i+1,1} is a local swap and 0 otherwise. Among the direct successors of w_{i+1,n-1}, set w_{i+1,n} equal to the one whose third entry is x_{i+1,1} + s_{i+1,1}.

5. Increment i by one and return to Step 2.
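The steps above can be sketched as follows. Only `initial_walk` touches f directly (Step 1); afterwards every move is a lookup in the precomputed transition graph. The function names are ours:

```python
from itertools import product

def parity_plus1(a, b, c):
    return (a + b + c + 1) % 2

n = 4
triples = list(product((0, 1), repeat=3))
# Transition graph of Definition 5: (a,b,c) -> (f(a,b,c), c, z) for z in {0,1}.
succ = {t: [(parity_plus1(*t), t[2], z) for z in (0, 1)] for t in triples}
# Swap status can be read off the graph: a triple is a local swap exactly
# when its successors' first entry differs from its own middle entry.
is_swap = {t: succ[t][0][0] != t[1] for t in triples}

def initial_walk(state):
    """Step 1: compute W_1 directly through the SDS-map (the only direct use of f)."""
    x, new, walk = list(state), list(state), []
    for i in range(n):
        left = new[i - 1] if i > 0 else x[n - 1]
        right = new[0] if i == n - 1 else x[i + 1]
        walk.append((left, x[i], right))
        new[i] = parity_plus1(left, x[i], right)
    return walk

def step(X, W):
    """Steps 2-4: produce (X_{i+1}, W_{i+1}) using only transition-graph lookups."""
    S = [1 if is_swap[w] else 0 for w in W]                    # Step 2
    Xn = [(x + s) % 2 for x, s in zip(X, S)]                   # Step 3
    w = next(v for v in succ[W[-1]] if v[2] == Xn[1])          # Step 4(a)
    Wn = [w]
    for j in range(1, n - 1):                                  # Step 4(b)
        w = next(v for v in succ[w] if v[2] == Xn[j + 1])
        Wn.append(w)
    s1 = 1 if is_swap[Wn[0]] else 0                            # Step 4(c)
    w = next(v for v in succ[w] if v[2] == (Xn[0] + s1) % 2)
    Wn.append(w)
    return Xn, Wn
```

Running `step` repeatedly from X_1 = (0, 0, 0, 0) with W_1 = `initial_walk((0, 0, 0, 0))` reproduces the worked example that follows.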

In this way the algorithm updates the SDS without ever applying the SDS function (except in Step 1, where we need to directly calculate W1). We will prove that running this algorithm generates the same system states as repeatedly applying our SDS-map. But first we give a lemma which shows that, in Step 4, Wi+1 is indeed the walk corresponding to Xi+1.

Lemma 2. Let X_i be the system state (x_1, ..., x_n), and let W_i be the corresponding walk and S_i the corresponding string. Let X_{i+1} = X_i + S_i. Then W_{i+1} can always be created through the process given in Step 4, and is the walk corresponding to X_{i+1}.

Proof. By construction Wi+1 is a state-walk. It corresponds to the state Xi+1, again by construction.

Proposition 2. Input the system state X_1 and the SDS-map F into our algorithm. The algorithm generates X_1, X_2, X_3, ... such that

X_1 ↦_F X_2 ↦_F X_3 ↦_F ···

Proof. We know W_1 corresponds to X_1. For i > 1, Lemma 2 guarantees that W_i is the walk corresponding to X_i. Thus the string S_i generated from W_i is the string associated with X_i. Whenever we reach Step 3 the algorithm outputs X_{i+1}. By Lemma 1 we have that X_i ↦_F X_{i+1}.

An example of the algorithm follows.

Example: For Step 1, input F = Parity+1 and X_1 = (0, 0, 0, 0). By direct calculation we find that the walk corresponding to X_1 is

W_1 = (0, 0, 0) → (1, 0, 0) → (0, 0, 0) → (1, 0, 1).

This is the red walk pictured below. Moving on to Step 2, we find that S_1 = (1, 0, 1, 1). (The local swaps are colored gray.) In Step 3 we calculate that

X_2 = X_1 + S_1 = (0 + 1, 0 + 0, 0 + 1, 0 + 1) = (1, 0, 1, 1).

In Step 4 we find W_2. This walk is colored green.

[Figure: the transition graph for Parity+1, with the walks W_1, W_2, and W_3 drawn in red, green, and blue.]

Step 4(a): We find w_{2,1}. First look at w_{1,4}, which equals (1, 0, 1). Its direct successors are (1, 1, 0) and (1, 1, 1). As x_{2,2} = 0, we see that w_{2,1} = (1, 1, 0).

Step 4(b): We now find w_{2,2} and w_{2,3}. For w_{2,2} we first look at w_{2,1} = (1, 1, 0). Its direct successors are (1, 0, 1) and (1, 0, 0). Since x_{2,3} = 1 we have that w_{2,2} = (1, 0, 1).

We repeat the process to find w_{2,3}. We see that w_{2,2} = (1, 0, 1) and that its direct successors are (1, 1, 0) and (1, 1, 1). Since x_{2,4} = 1 we have that w_{2,3} = (1, 1, 1).

Step 4(c): We now find the last vertex, w_{2,4}. First we see that s_{2,1} = 0 because w_{2,1} is a local fixed point. Next we look at w_{2,3} = (1, 1, 1) and see that its direct successors are (0, 1, 1) and (0, 1, 0). Since x_{2,1} + s_{2,1} = 1 + 0 = 1 we have that w_{2,4} = (0, 1, 1). Thus we have found that

W_2 = (1, 1, 0) → (1, 0, 1) → (1, 1, 1) → (0, 1, 1),

which corresponds to the green walk.

Notice that, if W_1 is the red walk and W_2 is the green walk, then the combined walk W_1 → W_2 = (0, 0, 0) → (1, 0, 0) → (0, 0, 0) → (1, 0, 1) → (1, 1, 0) → (1, 0, 1) → (1, 1, 1) → (0, 1, 1) has the edge colors red-red-red-black-green-green-green.

For Step 5 we increment i from 1 to 2, and begin again at Step 2. Here we see that the string corresponding to W_2 is S_2 = (0, 1, 1, 0). Step 3 then gives us that

X_3 = X_2 + S_2 = (1, 0, 1, 1) + (0, 1, 1, 0) = (1 + 0, 0 + 1, 1 + 1, 1 + 0) = (1, 1, 0, 1).

At Step 4 we repeat (a), (b) and (c) to find that

W_3 = (1, 1, 1) → (0, 1, 0) → (0, 0, 1) → (0, 1, 0),

which corresponds to the blue walk. Step 5 increments i to 3 and sends us back to Step 2. If we continue, the algorithm will loop and we will find that X_4 = X_1 and W_4 = W_1.

Observe that the only time we need to calculate Parity+1(x_{i-1}, x_i, x_{i+1}) directly is to construct W_1. After that we simply read information from the transition graph.

1.3 Main Results

Through the algorithm we gain powerful information about cycles. Our two main results rest on the following proposition. Before we give it we must define a small piece of notation. In what follows, let F be our SDS-map.

Definition 9. Let W = w_1 → ··· → w_n and V = v_1 → ··· → v_n be two walks. Then W leads into V if there is an edge from w_n to v_1 on the transition graph. If W leads into V, then W → V denotes the walk w_1 → ··· → w_n → v_1 → ··· → v_n.

Proposition 3. Let F be our SDS-map, and let X_1, ..., X_k be a sequence of system states generated by inputting X_1 into our algorithm. Say that W_1, ..., W_k is the corresponding sequence of walks and S_1, ..., S_k is the corresponding sequence of strings. Say that the W_i form the closed walk

W_1 → W_2 → ··· → W_k → w_{1,1}.

Then

X_1 ↦_F X_2 ↦_F ··· ↦_F X_k ↦_F X_1  ⟺  S_1 + S_2 + ··· + S_k = (0, 0, ..., 0).

Proof. Let 0 denote the zero vector (0, 0, ..., 0). For the "⟸" direction: we have that S_1 + S_2 + ··· + S_k = 0. By Proposition 2,

X_1 ↦_F X_2 ↦_F ··· ↦_F X_k.

All we have to prove is that X_k ↦_F X_1. Observe that X_k = X_1 + S_1 + S_2 + ··· + S_{k-1}. By Lemma 1 we know that X_k ↦_F X_k + S_k. Then

X_k + S_k = (X_1 + S_1 + S_2 + ··· + S_{k-1}) + S_k,

and since S_1 + S_2 + ··· + S_k = 0, we have

X_k + S_k = X_1 + 0 = X_1.

Thus X_k ↦_F X_1. Hence X_1 ↦_F X_2 ↦_F ··· ↦_F X_k ↦_F X_1.

For the "⟹" direction: we have X_1 ↦_F X_2 ↦_F ··· ↦_F X_k ↦_F X_1. Since X_k ↦_F X_1, we must have X_k + S_k = X_1. Make the following two observations:

X_1 = X_1 + 0

X_k = X_1 + S_1 + S_2 + ··· + S_{k-1}

Combining these with the fact that X_k + S_k = X_1 tells us

X_1 + S_1 + S_2 + ··· + S_k = X_1 = X_1 + 0,

and hence S_1 + S_2 + ··· + S_k = 0.

Proposition 4. Let F be our SDS-map and let X_1, ..., X_k be a sequence of system states such that X_1 ↦_F X_2 ↦_F ··· ↦_F X_k ↦_F X_1. Let W_1, ..., W_k be the walks corresponding to this sequence of system states. Then the following is a closed walk on the transition graph:

W = W_1 → W_2 → ··· → W_k → w_{1,1}.

Proof. Let X_1 = (x_1, ..., x_n). From Step 4(a) we see that w_{i,1} is a direct successor of w_{i-1,n}, for any i > 1. This means the walk W_1 → W_2 → ··· → W_k → W_{k+1} exists. As X_k ↦_F X_1, we have X_{k+1} = X_1. Since states correspond to unique walks, W_1 must equal W_{k+1}, which means w_{k+1,1} = w_{1,1}. If we combine this with our conclusion about Step 4(a), we see that w_{1,1} is the direct successor of w_{k,n}. Hence W exists and is a closed walk on the transition graph.

Note that the converse is not true. We cannot choose any closed walk and find phase space cycles; the walk must be one that our algorithm could generate. This implies certain conditions on the walk's structure (in particular, it divides cleanly into some number of state-walks, all of the same length).

1.3.1 Shifting States

Here we reach the first of our theorems.

Theorem 1. Suppose that X_1 ↦_F X_2 ↦_F ··· ↦_F X_k ↦_F X_1. Let T denote the string of length kn formed by concatenating these states:

T = (x_{1,1}, x_{1,2}, ..., x_{1,n}, x_{2,1}, ..., x_{2,n}, ..., x_{k,n}).

Let σ_m denote the rotation of T by m positions, and let Y_i denote the i-th block of n entries of σ_m(T), that is,

Y_i = (σ_m(T)_{n(i-1)+1}, σ_m(T)_{n(i-1)+2}, ..., σ_m(T)_{n(i-1)+n}).

Then Y_1 ↦_F Y_2 ↦_F ··· ↦_F Y_k ↦_F Y_1.

Example: In Parity+1 over C_4 we have that (0, 0, 0, 0) ↦_F (1, 0, 1, 1) ↦_F (1, 1, 0, 1) ↦_F (0, 0, 0, 0). This gives us the string T = 000010111101. If we shift this string by one we get 000101111010. Therefore Theorem 1 tells us that (0, 0, 0, 1) ↦_F (0, 1, 1, 1) ↦_F (1, 0, 1, 0) ↦_F (0, 0, 0, 1).

Proof. We now prove the theorem. Let W_1, ..., W_k be the walks corresponding to X_1, ..., X_k and let V_1, ..., V_k be the walks corresponding to Y_1, ..., Y_k. Furthermore, let W = W_1 → ··· → W_k.

Proving the theorem amounts to proving it for a rotation of one: if we know a rotation of one works, then we can repeatedly rotate by one to reach a rotation of any size. With this in mind, let the rotation size be one. Let us say that, given i, we have X_i = (x_1, ..., x_n). Due to rotation by one,

Y_i = (y_1, ..., y_n) = (x_2, ..., x_n, x_1').

As this holds for any i, notice that Y_{i+1} = (y_1', y_2', ..., y_n') = σ_1(X_{i+1}) = (x_2', x_3', ..., x_n', x_1''). The walk corresponding to Y_i is

V_i = (y_n, y_1, y_2) → (y_1', y_2, y_3) → ··· → (y_{n-2}', y_{n-1}, y_n) → (y_{n-1}', y_n, y_1').

Substitute the x's in for the y's to find that

V_i = (x_1', x_2, x_3) → (x_2', x_3, x_4) → ··· → (x_{n-1}', x_n, x_1') → (x_n', x_1', x_2')
    = w_{i,2} → w_{i,3} → ··· → w_{i,n} → w_{i+1,1}.

Let the string associated with W_i be (s_1, s_2, ..., s_n) and the string associated with W_{i+1} be (s_1', s_2', ..., s_n'). The string corresponding to V_i is then S = (s_2, s_3, ..., s_n, s_1'). By Lemma 1,

Y_{i+1} = Y_i + S
        = (x_2, x_3, ..., x_n, x_1') + (s_2, s_3, ..., s_n, s_1')
        = (x_2 + s_2, x_3 + s_3, ..., x_n + s_n, x_1' + s_1')
        = (x_2', x_3', ..., x_n', x_1'').

A direct calculation of the walk corresponding to Y_{i+1} gives

V_{i+1} = (x_1'', x_2', x_3') → (x_2'', x_3', x_4') → ··· → (x_{n-1}'', x_n', x_1'') → (x_n'', x_1'', x_2'')
        = w_{i+1,2} → w_{i+1,3} → ··· → w_{i+1,n} → w_{i+2,1}.

Thus V_i leads into V_{i+1}, and it is a subwalk of W. As this is true for every i, the walk V_1 → ··· → V_k → v_{1,1} is simply a rotated version of the closed walk W. By Lemma 2, Y_1 ↦_F Y_2 ↦_F ··· ↦_F Y_k ↦_F Y_1.

This proves the theorem for rotation by one. Repetition of the single rotation gives the theorem for arbitrary rotations.

Given one phase space cycle this theorem allows us to generate more. This tells us why, when we have a cycle of size k in the phase space, we usually have many of that size.
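Theorem 1 can be spot-checked on the Parity+1 example: rotate the concatenated string T and verify that the resulting blocks again cycle under F. A sketch in our own notation:

```python
def parity_plus1(a, b, c):
    return (a + b + c + 1) % 2

def system_update(state):
    x = list(state)
    n = len(x)
    for i in range(n):
        x[i] = parity_plus1(x[(i - 1) % n], x[i], x[(i + 1) % n])
    return tuple(x)

# The 3-cycle of F = [Parity+1_{C_4}, id] from the example above.
cycle = [(0, 0, 0, 0), (1, 0, 1, 1), (1, 1, 0, 1)]
T = [b for X in cycle for b in X]           # concatenated string of length kn = 12

for m in range(len(T)):                     # every rotation sigma_m of T
    Tm = T[m:] + T[:m]
    Y = [tuple(Tm[i:i + 4]) for i in range(0, len(Tm), 4)]
    for i in range(len(Y)):                 # the n-blocks again form a cycle of F
        assert system_update(Y[i]) == Y[(i + 1) % len(Y)]
```

For m = 1 the blocks are (0, 0, 0, 1), (0, 1, 1, 1), (1, 0, 1, 0), exactly as in the example.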

1.3.2 Finding other graphs with points of the same period

Here we prove that if an SDS over C_n has a k-cycle in its phase space, then that SDS has a k-cycle in the phase space over C_{n+ank}, for any natural number a. To do this we require some definitions. In what follows, say that X_1, ..., X_k are states such that X_1 ↦_F ··· ↦_F X_k ↦_F X_1, and that W_1, ..., W_k and S_1, ..., S_k are the corresponding walks and strings.

Definition 10. For any a ∈ N, define the following objects:

W_i^a := W_i → W_{i+1} → ··· → W_{i+ak}

S_i^a := the string associated with W_i^a

X_i^a := (x_{i,1}, ..., x_{i,n}, x_{i+1,1}, ..., x_{i+1,n}, ..., x_{i+ak,1}, ..., x_{i+ak,n}),

where the subscripts i, i+1, ..., i+ak are calculated mod k.

Observe that

S_i^a = (s_{i,1}, ..., s_{i,n}, s_{i+1,1}, ..., s_{i+1,n}, ..., s_{i+ak,1}, ..., s_{i+ak,n}).

Lemma 3. The walk W_i^a is a state-walk. In particular, it is the walk corresponding to the state X_i^a.

Proof. Since X_1 ↦_F ··· ↦_F X_k ↦_F X_1, we know that

x_{i,j} = x_{I,j}, where I = i mod k.

Likewise, W_i = W_I. A direct calculation then verifies that the walk corresponding to X_i^a is W_i^a.

Lemma 4. X_i^a ↦_F X_{i+1}^a.

Proof. Input X_i^a into the algorithm and work through the first three steps.

Step 1: Find the walk corresponding to X_i^a. By Lemma 3 it is W_i^a.

Step 2: Find the string corresponding to W_i^a. We see that it is

S_i^a = (s_{i,1}, ..., s_{i,n}, s_{i+1,1}, ..., s_{i+1,n}, ..., s_{i+ak,1}, ..., s_{i+ak,n}).

Step 3: Calculate that

X_i^a + S_i^a = (x_{i,1}, ..., x_{i,n}, x_{i+1,1}, ..., x_{i+1,n}, ..., x_{i+ak,1}, ..., x_{i+ak,n})
              + (s_{i,1}, ..., s_{i,n}, s_{i+1,1}, ..., s_{i+1,n}, ..., s_{i+ak,1}, ..., s_{i+ak,n})
              = (x_{i,1} + s_{i,1}, ..., x_{i+ak,n} + s_{i+ak,n})
              = (x_{i+1,1}, ..., x_{i+1,n}, x_{i+2,1}, ..., x_{i+2,n}, ..., x_{i+ak+1,1}, ..., x_{i+ak+1,n})
              = X_{i+1}^a.

By Lemma 1 we have that X_i^a ↦_F X_{i+1}^a.

Theorem 2. Say there is a point of period k for [f_{C_n}, id]. Then there is a point of period k for [f_{C_{n+ank}}, id], for any natural number a.

Proof. Let X_1, ..., X_k be states of C_n such that X_1 ↦_F ··· ↦_F X_k ↦_F X_1, where each X_i is a point of period k. (Observe that this means there are no duplicate states among X_1, ..., X_k.)

By Lemma 4, we have X_i^a ↦_F X_{i+1}^a. Since X_{k+1}^a = X_1^a, we have

X_1^a ↦_F ··· ↦_F X_k^a ↦_F X_1^a.

Furthermore, because there are no duplicate states among X_1, ..., X_k, it follows that there are no duplicate states among X_1^a, ..., X_k^a. (Indeed, each X_i^a begins with X_i.) Thus each X_i^a is a point of period k for [f_{C_{n+ank}}, id].

Corollary 1. If X_1, ..., X_k forms a k-cycle in the phase space of C_n, then X_1^a, ..., X_k^a forms a k-cycle in the phase space of C_{n+ank}.

Example: Recall that (0, 0, 0, 0) is a point of period 3 in C_4 for Parity+1. One can directly calculate that (0, 0, 0, 0) ↦_F (1, 0, 1, 1) ↦_F (1, 1, 0, 1) ↦_F (0, 0, 0, 0). By Corollary 1, concatenating these states into the state (0000101111010000) yields a point of period 3 in C_16. Similarly, in C_28 we have that

(0000101111010000101111010000)

is a point of period 3, and in C_40

(0000101111010000101111010000101111010000)

is a point of period 3.

Notice that the larger states are made up of X_1 tailed by repeating chunks of X_2, ..., X_k, X_1. Below is the state for the C_40 example, with dots inserted to visually highlight this:

(0000 · 101111010000 · 101111010000 · 101111010000).

Breaking down the large chunks, we have

(0000 · 1011 · 1101 · 0000 · 1011 · 1101 · 0000 · 1011 · 1101 · 0000).
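Corollary 1 can be verified directly for this example. The sketch below builds X_i^a by concatenation (`lift` is our name for this construction) and checks that the lifted states form a 3-cycle on C_16, C_28, and C_40:

```python
def parity_plus1(a, b, c):
    return (a + b + c + 1) % 2

def system_update(state):
    x = list(state)
    n = len(x)
    for i in range(n):
        x[i] = parity_plus1(x[(i - 1) % n], x[i], x[(i + 1) % n])
    return tuple(x)

cycle = [(0, 0, 0, 0), (1, 0, 1, 1), (1, 1, 0, 1)]   # 3-cycle on C_4, so n = 4, k = 3

def lift(i, a):
    """The state X_{i+1}^a (0-based i): concatenate cycle[i], cycle[i+1], ...,
    cycle[i+ak], with indices taken mod k, giving a state on C_{n+ank}."""
    k = len(cycle)
    return tuple(b for j in range(a * k + 1) for b in cycle[(i + j) % k])

assert lift(0, 1) == (0,0,0,0, 1,0,1,1, 1,1,0,1, 0,0,0,0)   # the C_16 state above
for a in (1, 2, 3):                                          # C_16, C_28, C_40
    X = lift(0, a)
    assert len(X) == 4 + 4 * a * 3                           # n + ank vertices
    assert system_update(X) == lift(1, a)                    # X_1^a -> X_2^a (Lemma 4)
    Y = X
    for _ in range(3):
        Y = system_update(Y)
    assert Y == X and system_update(X) != X                  # period 3, not period 1
```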

1.4 Future Directions.

Theorem 1's statement was phrased in terms of

X_1 ↦_F X_2 ↦_F ··· ↦_F X_k ↦_F X_1;

however, these do not necessarily correspond to k-cycles within the phase space; they may correspond to cycles of a length that divides k. For example, if k = 6, then the following would be of the above form but correspond to a 3-cycle:

X_1 ↦_F X_2 ↦_F X_3 ↦_F X_1 ↦_F X_2 ↦_F X_3 ↦_F X_1.

Characterizing when this happens is a future goal. Relatedly, the closed walk corresponding to a k-cycle in the phase space is not always made up of k unique state-walks; there may be repeats. For example, the walk W_1 → W_2 → W_3 → W_4 → W_5 → W_6 may correspond to a 6-cycle in the phase space, as it meets the criteria of Lemma 3. It is possible, however, that W_1 = W_4, W_2 = W_5, and W_3 = W_6. Whether there are any special properties of such walks is open.

On another note, a fully developed theory of the transition graph may be able to completely characterize periodic points with the same ease that the compatibility graph characterizes fixed points. This is possible for points of period 2 on certain functions. We can mimic the construction of the compatibility graph, except we let our vertices be local swaps rather than local fixed points. This creates an "anti-compatibility graph." Through the same methods used in the compatibility graph we can analyze a special type of state: the inverse pair.

Definition 11. The states X and X′ are inverse pairs if X ↦_F X′ and

X = X′ + S,

where S = (1, 1, 1, ..., 1) and addition is taken mod 2. (In other words, X′ is X just with the 0's and 1's flipped.)

We wonder what kinds of functions have only inverse pairs, because those are the functions we can completely characterize with this "anti-compatibility graph." There do exist functions which have points of period 2 that are not inverse pairs. However, we have the following conjecture.

Conjecture 1. Points of period 2 for symmetric functions on C_n are always inverse pairs.

We have confirmed the conjecture for some basic symmetric functions, including the All 0, Nand+1, Majority, Parity, Nor+Nand, Nor+1, Parity+1, Minority, Nand, and All 1 functions.

Finally, generalizing the transition graph to general SDS and arbitrary permutations would be quite fruitful and would help characterize the periodic points of many other structures.

2. THE ITERATED PHASE SPACE

What happens when we "apply an SDS to its own phase space"? This question leads us to the iterated phase space. To understand this structure, we must first extend our notion of SDS.

2.1 Generalized SDS

General SDS extend simple cyclic SDS to arbitrary simple graphs. Our construction mimics that of [4, Chapter 4.1]. A general SDS (or, from now on, simply SDS) has the same construction as a simple cyclic SDS. The difference is that, instead of having each local vertex function be

f : K^3 → K,

we have a family of functions

f_n : K^n → K.

This gives us a local vertex function for an arbitrary number of neighbors (instead of only three). In light of this, instead of applying the local vertex function to a vertex, we apply the local function F to the vertex v. If v has d(v) neighbors, F applies the corresponding local vertex function from our family:

F(v) = f_{d(v)+1}(v).

(Note the change in notation – F is no longer shorthand for an SDS-map, but is rather a local vertex function.)
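To make the construction above concrete, here is a minimal Python sketch of one system update of a general SDS, assuming the graph is given as an adjacency list and the family f_n is indexed by closed-neighborhood size. The names `sds_update`, `parity_family`, and `c4` are illustrative, not from the thesis.

```python
def sds_update(state, adj, family, order):
    """One system update: visit vertices in `order`, replacing each
    vertex's state with family[d(v)+1] applied to the states of its
    closed neighborhood (the vertex itself first, then its neighbors)."""
    state = list(state)
    for v in order:
        nbhd = [state[v]] + [state[u] for u in adj[v]]
        state[v] = family[len(nbhd)](nbhd)
    return tuple(state)

# Example family: parity of the closed neighborhood, at every arity.
parity_family = {n: (lambda xs: sum(xs) % 2) for n in range(1, 10)}

# C_4 as an adjacency list; identity update order.
c4 = {0: [3, 1], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(sds_update((1, 0, 0, 0), c4, parity_family, [0, 1, 2, 3]))  # → (1, 1, 1, 0)
```

Note how the sequential order matters: vertex 1 already sees the updated state of vertex 0 when its own turn comes.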

2.2 The Iterated Phase Space

In this section we define the iterated phase space (IPS). Let [F_G, π] be the SDS-map with local vertex function F, graph G, and permutation π. Consider the phase space of this SDS-map. By ignoring loops, state labels, and direction we can transform the phase space into a graph G′. From G′ we can retrieve its connected components to form the set of connected graphs T_π.

Now, given F and G as above, consider a set of permutations S = {π_1, π_2, ..., π_k} for G. Then G generates the set of graphs T = T_{π_1} ∪ T_{π_2} ∪ · · · ∪ T_{π_k} under the function F and the permutation set S. (If F and S are assumed, we just say that G generates T.) We call T the children of G.

Definition 12. Let IPS_n[F_G, S] denote the nth iteration of an iterated phase space (IPS) formed by a seed graph G, the local update function F, and the permutation set S.

IPS_n[F_G, S] is a graph whose vertices are themselves graphs. We call these vertices the elements of our graph. In particular:

1. IPS_0[F_G, S] has a single element, which is the graph G.

2. The elements of IPS_1[F_G, S] are G and the children of G. We draw a directed edge from G to each child. If two elements are the same we identify them.

3. For IPS_n[F_G, S], we consider all the elements in IPS_{n−1}[F_G, S] but not in IPS_{n−2}[F_G, S]. (That is, we consider all the new children created in the previous iteration.) Call this set M.

Then IPS_n[F_G, S] consists of IPS_{n−1}[F_G, S] along with all the children of each element in M, where each element has an edge to its corresponding children. We identify any elements which are the same.

If we let n go to infinity we create the IPS, which we denote as IPS[F_G, S]. Taking S to be the set of all possible permutations for G gives us all possible phase spaces from a seed graph. It is worth emphasizing that an element of an IPS is a graph representing a phase space.

Example: Let K_2 be our seed graph. Then IPS_0[Parity_{K_2}, {id}] consists of a single element, which is K_2.

The phase space that the graph K_2 gives under Parity and the identity permutation is a directed 3-cycle and a single point. If we remove the loop, the directions, and the states, the phase space appears as follows.

Fig. 2.1: The children of K_2 under Parity and {id}.

Thus K_2 generates a C_3 and a K_1. Hence IPS_1[Parity_{K_2}, {id}] is as follows.

Fig. 2.2: IPS_1[Parity_{K_2}, {id}]
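The children of a seed graph can be computed directly: build the phase space as a state-to-state map, drop directions and loops, and take connected components. The following sketch does this for [Parity_{K_2}, id]; the helper names (`phase_space`, `children_sizes`) are illustrative, not from the thesis.

```python
from collections import Counter
from itertools import product

def parity_update(state, adj, order):
    # One Parity system update, applied sequentially in `order`.
    state = list(state)
    for v in order:
        state[v] = (state[v] + sum(state[u] for u in adj[v])) % 2
    return tuple(state)

def phase_space(adj, order):
    # The SDS-map as a dict: state -> next state.
    n = len(adj)
    return {s: parity_update(s, adj, order) for s in product((0, 1), repeat=n)}

def children_sizes(space):
    # Union-find over the undirected, loopless version of the phase
    # space; the component sizes are the sizes of the children.
    parent = {s: s for s in space}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for s, t in space.items():
        if s != t:
            parent[find(s)] = find(t)
    return sorted(Counter(find(s) for s in space).values())

k2 = {0: [1], 1: [0]}
print(children_sizes(phase_space(k2, [0, 1])))  # → [1, 3]: the K_1 and the C_3
```

The two components of sizes 1 and 3 are exactly the K_1 and C_3 of Figure 2.1.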

For the second iteration we can apply Parity to these two new graphs. The single point gives a trivial phase space consisting of two points both looping to themselves (which we see as the loop in the IPS). The C_3 gives the phase space described in Figure 1.2 (i.e. a 2-cycle and a 4-cycle), so we draw an arrow from C_3 to a K_2 and to a C_4 in the second iteration.

Fig. 2.3: IPS_2[Parity_{K_2}, {id}]

The only new graph is C_4. This generates a C_3 and a single point under the identity permutation. Thus the fourth iteration of our IPS, when considering only the identity permutation, looks like

Fig. 2.4: IPS_4[Parity_{K_2}, {id}], which is the same as IPS[Parity_{K_2}, {id}]

Any further iterations will not change this graph, as no new children are created. In particular, this means that IPS_4[Parity_{K_2}, {id}] = IPS[Parity_{K_2}, {id}].

2.2.1 Definitions for IPS

We wish to easily refer to the location of elements in an IPS. For the following, let L and K be elements in IPS[F_G, S].

Definition 13. The generation of L is the smallest n such that IPS_n[F_G, S] contains L.

Definition 14. K is an ancestor of L (and L is a descendant of K) if K has lower generation than L and there is a directed path from K to L in IPS[F_G, S].

Definition 15. K is a parent of L (and L is a child of K) if K is an ancestor of L and adjacent to L in IPS[F_G, S].

2.3 Classification of IPS

We wish to classify the behavior of IPSs over symmetric functions. The rest of this section is devoted to this goal. We will say a vertex is white if its state is 0, and black if its state is 1. We will freely interchange "white" with "0" and "black" with "1." (Likewise we interchange a graph colored black and white with a graph with states for each vertex.)

2.3.1 Behavior of Or and And

Or and And are dynamically equivalent. We will focus on Or, but all results also apply to And. Or takes any vertex next to a 1 and turns it into a 1. In particular, this means that any graph with a state that contains a 1 eventually updates into the all 1's state. In this way Or causes the 1's to "spread". The only starting state that does not eventually become the all 1's state is the all 0's state, which always remains the all 0's state.

This means that, for any graph, the phase space of Or is a single unconnected fixed point (which corresponds to the all 0's state) and a large directed tree with all states leading towards the root, which is the fixed all 1's state. The IPS in this case is infinite, as after each iteration we get two graphs: a single point and a tree with 2^n − 1 elements. Since each iteration gives a larger tree graph we will never get a graph we had previously.
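This shape of the Or phase space is easy to check by simulation. The sketch below (with illustrative names `or_update`, `p4`) verifies on a path graph that every state containing a 1 reaches the all 1's state, while the all 0's state stays fixed.

```python
from itertools import product

def or_update(state, adj, order):
    # One Or system update: each vertex becomes the Or of its closed
    # neighborhood, applied sequentially in `order`.
    state = list(state)
    for v in order:
        state[v] = max([state[v]] + [state[u] for u in adj[v]])
    return tuple(state)

p4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a path on 4 vertices
order = [0, 1, 2, 3]
for s in product((0, 1), repeat=4):
    t = s
    for _ in range(4):            # at most diameter-many system updates
        t = or_update(t, p4, order)
    expected = (0,) * 4 if s == (0,) * 4 else (1,) * 4
    assert t == expected
print("every nonzero state reaches the all 1's state")
```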

For example, IPS_2[Or_{K_2}] looks like

Where each color represents a separate permutation and black represents all permutations.

One question is whether, in an arbitrary element of IPS[Or_{K_2}], there is branching.

It is possible for the paths to branch. Take the following graph of our Or IPS with the permutation (1, 2, 3, 4, 5, 6, 7).

A subgraph of the above’s phase space under Or is

Thus Or can branch, which suggests that the Or IPS is not entirely simple. We further analyze this branching now.

Lemma 5. Consider a graph with vertices p and q a distance n apart. Say that p is black and all other vertices are white. Then it takes n system updates for q to become black under any permutation in which a vertex v updates before v′ whenever v's distance to p is greater than or equal to v′'s distance to p (that is, vertices farther from p update first).

Proof. Let k be the distance of q to the nearest black vertex. Because of our choice of permutation, each system update reduces k by exactly 1. Since k starts at n, it takes n system updates.
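A small simulation illustrates the lemma on a path: with one black vertex p at one end and a farthest-first update order, the black region grows by exactly one vertex per system update. The setup (`or_update`, the path on n+1 vertices) is illustrative.

```python
def or_update(state, adj, order):
    # One Or system update, sequential in `order`.
    state = list(state)
    for v in order:
        state[v] = max([state[v]] + [state[u] for u in adj[v]])
    return tuple(state)

n = 5
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= n] for i in range(n + 1)}
state = tuple([1] + [0] * n)       # p = vertex 0 is the lone black vertex
order = list(range(n, -1, -1))     # farthest-from-p vertices update first
steps = 0
while state[n] == 0:               # wait for q = vertex n to turn black
    state = or_update(state, path, order)
    steps += 1
assert steps == n                  # exactly n system updates, per Lemma 5
print(steps)  # → 5
```

Note that with the opposite (closest-first) order the 1 would sweep the whole path in a single system update, which is why the order in the lemma matters.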

We can find the length of branches in the elements of the IPS. Let H be an element of the IPS. These elements resemble star graphs, and have a natural center at the vertex that, in the phase space, corresponds to the all 1's state. A branch is then a path graph with one end at the center.

Proposition 5. Let G be an arbitrary graph. Then IPS_n[Or_G] has an element that has a branch of length 2^{n−1}. Furthermore, there are no other elements in IPS_n[Or_G] with strictly larger branches.

Proof. Proceed by induction. The base case is immediately true. For the induction step, take an element of IPS_n[Or_G] with longest branch; by induction, this branch has length 2^{n−1}. Call this chosen element H. Consider the largest path graph in H (the longest path passes through the center, so it has length up to 2 · 2^{n−1} = 2^n) and place the natural identity permutation on it (i.e. the left-to-right permutation). Make this path and the rest of the graph all white except for one black vertex at the end of the path that is not the center. By Lemma 5 it takes 2^n updates to make this path all black. This corresponds to a directed branch of length 2^n in the corresponding phase space. Hence we have an element with branch length 2^n in IPS_{n+1}[Or_G].

Observe that branching is common and easy to make. We expect phase spaces of Or to be littered with branches. For example, if we have a graph with a coloring that creates branches and we reverse the coloring of the graph, the inverse state also branches. Next we give two definitions in preparation for Theorem 3.

Definition 16. Say L is a graph such that there exists a vertex which, if deleted, disconnects the graph into two parts, G and K. We call this vertex a connector vertex.

Under Or, if L has a vertex order π such that the connector vertex updating to 1 on a system update implies that all of K updates to 1 on the same system update, then we say L extends G under π. Note that the connector vertex is part of G.

Example: We can extend any graph G by adding a path (a "line") to any vertex v. Here v forms the connector. This graph G with the path attached extends G if we give the path the identity permutation and update it after we update v.

Definition 17. Let G be a graph. Given a local update function F and a permutation π of G, we let Ĝ denote the phase space of G under [F_G, π]. We say a phase space Ĝ is embedded in the phase space L̂ if, when viewed as digraphs, Ĝ is a subgraph of L̂. We denote this Ĝ ⊂ L̂.

Theorem 3. Let our local update function be Or and let our graphs be G and L, such that L extends G under some vertex order π. Then Ĝ ⊂ L̂, where L̂ is the phase space of [Or_L, π] and Ĝ is the phase space of [Or_G, σ], where σ is any vertex order for G.

Proof. Consider Ĝ. For now we ignore the all 0's state and consider Ĝ′, which is Ĝ with the all 0's state removed. Notice that Ĝ′ is a tree. (Indeed, the phase space of Or on any graph is a tree together with a single point.) Consider one of the leaves in Ĝ′. From this state we can follow a path down to the all 1's state.

Now consider L. Any state of L has a part corresponding to G. Denote the G-part g and the rest k. Observe that g and k together give us a state of L. Set g to the state of one of the outer leaves of Ĝ. Set k to the all 0's state.

Notice that, if we were applying the SDS-map to L, the k part never influences how the g part updates. That is, if G has state g and it updates to g′, then in L the g part of its state will also update to g′. Why is this true? Consider any update of L from g with k all zeros. Because k is all zero, any vertex that updates ignores vertices in the k-part; Or only cares about vertices that are 1. Thus the k part, as long as it is 0, never influences how the g-part updates. By the next update we have updated the g-part, and k is either all 0 or all 1. If the former, then we can repeat the same logic to find that the g-part updates independently of k. If k is all 1's then the same happens.

This is because for k to have become all 1, the connector vertex must have become 1. Once the connector is 1 it cannot change. From the perspective of the g-part, it only sees that the connector is 1; the rest of k does not matter to it. As the connector becoming 1 is what would have happened to G without k, it continues updating as always. Thus g inside the state of L updates the same as g as a state of G would. That is, g in L follows the same path that g would follow in Ĝ. Note that at the end the G-part will be at the all 1's state. The k-part will also be all 1's, because the connector vertex must have updated to 1.

This tells us that, given a leaf in Ĝ, we get a path down to the all 1's state. Furthermore, this path is embedded in L̂. However, we do not know if the branching in Ĝ is consistent with that of L̂. That is, two paths that intersect in Ĝ may not intersect in L̂. We show that the branching is consistent through two cases.

Case 1: Let v be the connector vertex. Recall that v is in G. Say that the two paths intersect at a point in Ĝ such that v has a vertex state of 1. Let (a, 1), (b, 1) and (c, 1) denote states of G, where the 1 is the state of the connector vertex and a, b and c are the states of the rest of the vertices. Note that these states correspond to the g part of L. Say that (a, 1) and (b, 1) both update to (c, 1). Then (c, 1) is the intersection in Ĝ. This same structure will also be found in L̂. That is, (a, 1, l_a) and (b, 1, l_b) both update to (c, 1, l_c), where (a, 1, l_a), (b, 1, l_b), (c, 1, l_c) are states of L corresponding to the states (a, 1), (b, 1) and (c, 1), with the l's being the extra states that correspond to the k part of L.

If this structure did not exist then it would mean, say, that (b, 1, l_b) updates to (c, 1, ~0), where ~0 denotes the all 0's k-part. But this cannot happen because the connector vertex v has state 1, and since L extends G this would cause k to become all ones. This is a contradiction. Thus this branching in Ĝ must also be seen in L̂.

Case 2: Say that the connector vertex has state 0. This is proved similarly to Case 1. Construct the same structure in Ĝ and see that if the corresponding structure is not found in L̂ then we have a contradiction.

Thus each path intersects where it should. As we know that, starting from a leaf, each path is in L̂, that the paths intersect where they should, and that they all end at the all 1's state, we can conclude that

Ĝ ⊂ L̂.

Finally, the all 0's state of G corresponds to the all 0's state in L.

Definition 18. Take graphs G and L, L_1, ..., L_n such that L is an extension of L_1, which is an extension of L_2, which is . . . an extension of L_n, which is an extension of G, all under appropriate permutations. We then say that L multi-extends G.

Corollary 2. The previous theorem holds if L multi-extends G under a set of permutations.

Proof. By a simple induction on the number of extensions, where the pre- vious theorem forms the base case.

Corollary 3. Say that S is the set of all permutations and K_2 is the seed graph. Let the graph G be an element of IPS[Or_{K_2}, S]. Let C be the set of all children of G. Each child c ∈ C is generated from G via a permutation π. Then c contains G as a subgraph if c extends G under π.

Proof. By induction. The base case is immediate, as K_2 goes to K_3 in the IPS. Now consider G. Let U(G) denote the undirected graph underlying the phase space of G. We must show that if U(G) extends G, then U(G) is a subgraph of U(U(G)). By induction we know that G is a subgraph of U(G). Because Or forms trees as its phase space, U(G) extends G for some set of permutations. (This is because U(G) is just many path graphs added onto G.) Then by Corollary 2 we find that U(G) is a subgraph of U(U(G)) for appropriate permutations.

That is, the IPS of Or, when we look at only the correct permutations, consists of growing trees. In particular, the IPS contains a tree of growing trees.

2.3.2 Behavior of Parity and Parity+1

Parity and Parity+1 are the only invertible symmetric SDS, and so have been well studied by others. The following theorem appears early on in [4].

Theorem 4. For any graph G, [Parity_G, id] and [Parity+1_G, id] are invertible.

Corollary 4. For an arbitrary graph G and permutation π, the phase space of [Parity_G, π] or [Parity+1_G, π] is made up of disjoint cycles.

Proof. Since [Parity_G, π] and [Parity+1_G, π] are invertible, each state has exactly one unique parent. This is only possible in a cycle.

Theorem 5. Consider C_k with k ≥ 3. The phase space of [Parity_{C_k}, id] consists of the cycles in S, where

S = { C_x | x is a factor of k − 1 } if k is even,
S = { C_x | x is a factor of 2(k − 1) } if k is odd.

Proof. See [4, Section 5.4.2].

Example: Theorem 5 tells us that the phase space of [Parity_{C_3}, id] is made up of cycles with lengths that are factors of 4, while the phase space of [Parity_{C_4}, id] is predicted to be made up of cycles with lengths that are factors of 3.

Fig. 2.5: The phase space of [Parity_{C_4}, id]
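The example above can be checked exhaustively. This sketch iterates every state of C_3 and C_4 under Parity with the identity order and collects the cycle lengths; helper names are illustrative.

```python
from itertools import product

def parity_update(state, adj, order):
    # One Parity system update: each vertex becomes the parity of its
    # closed neighborhood, applied sequentially in `order`.
    state = list(state)
    for v in order:
        state[v] = (state[v] + sum(state[u] for u in adj[v])) % 2
    return tuple(state)

def cycle_lengths(adj):
    # Follow each state until a repeat and record the period of the
    # cycle it lands on. (Parity is invertible, so every state in
    # fact lies on its cycle.)
    order = list(range(len(adj)))
    lengths = set()
    for s in product((0, 1), repeat=len(adj)):
        seen, t = {}, s
        while t not in seen:
            seen[t] = len(seen)
            t = parity_update(t, adj, order)
        lengths.add(len(seen) - seen[t])
    return lengths

c3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
c4 = {0: [3, 1], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
assert all(4 % x == 0 for x in cycle_lengths(c3))  # factors of 2(k-1) = 4
assert all(3 % x == 0 for x in cycle_lengths(c4))  # factors of k-1 = 3
print(sorted(cycle_lengths(c3)), sorted(cycle_lengths(c4)))
```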

Given any k ∈ ℕ, Defant has completely characterized the number of periodic points of length k of [Parity_n, π] and [Parity+1_n, π] in [1].

2.3.3 Behavior of Majority and Majority+1

Majority and Majority+1 are dynamically equivalent. We will focus on Majority, but all results hold for Majority+1. We can characterize what happens to any star graph as long as the permutation starts with the center vertex.

Theorem 6. If we apply Majority to the star graph under a permutation that starts with the center, then the resulting phase space is two star graphs, and appears as

Proof. If all vertices are white or all vertices are black then we have a fixed point. If half or more of the vertices are black we update to the system state where all vertices are black. If fewer than half of the vertices are black we update to the all white system state.

In particular, this means that if we start with a star graph as our seed graph, we always get star graphs in our IPS. Because these star graphs are always larger than the one we began with, repeating gives larger and larger star graphs. This means the IPS is infinite.

Theorem 7. If we apply Majority to the star graph under a permutation that ends with the center, then we have the following phase space.

Proof. The proof is a case bash, similar to the previous proof.

2.3.4 Behavior of Nor and Nand

Many have studied Nor and Nand in great detail. We have not yet looked at their IPS in detail, as they appear complicated. However, we have three theorems that give us a partial picture of their behavior. Because the functions are dynamically equivalent, all results for Nor apply to Nand.

Theorem 8. Nand and Nor are the only symmetric functions with no fixed points.

Theorem 9. The phase space of Nor is made up of directed cycles and states updating directly into the cycles.

[4] contains a deep result about cycles in the Nor phase space. Loosely speaking, it says:

Theorem 10. There is always a fixed number of periodic points in a phase space made from Nor, and we can distribute the cycles however we want by choosing a correct update order (which is now no longer limited to a permutation, but is a word).

For example, if we have 8 periodic points, then there exists a word such that the phase space has four 2-cycles. Likewise, there exists a word such that the phase space has one 8-cycle. This means the IPS is infinite if we use words, as we can always create larger and larger cycles. It would be interesting to investigate whether the IPS is infinite when we limit ourselves to permutations.

2.4 Other results

We have a small lemma and one conjecture about which phase spaces cannot exist. These are useful in that they tell us which phase spaces cannot come out of the seed graph K_2.

Lemma 6. Take a symmetric binary SDS over K_2. Then a phase space made up of two disjoint 2-cycles cannot exist.

Proof. By contradiction. Assume that it can exist. Without loss of generality, assume that we update the left vertex first and then the right one. Let "→" denote a single vertex update. Let the string αβ denote the graph's state, where α is the state of the first vertex and β is the state of the second. Then the state 11 can be paired with three different states, giving us three cases.

Case 1: We then have the following vertex updates:

11 → 01 → 01
00 → 10 → 10

When we vertex update the second vertex in the state 01 we get 1. However, when we do the same in 10 we get 0. This is a contradiction, because we are working with symmetric functions and 10 and 01 have the same number of 1's and 0's, and so should update to the same value.

Case 2: We have the following vertex updates:

11 → 10 → 10
00 → 01 → 01

Like before, we find 10 and 01 give different updates when they should give the same. Contradiction.

Case 3: Same argument as before, using the following vertex updates:

11 → 01 → 00
00 → 10 → 11
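Lemma 6 can also be brute-forced. A symmetric local function on K_2 is determined by its value on 0, 1, or 2 ones in the closed neighborhood, so there are only 8 of them; the sketch below (illustrative names, left vertex updated first as in the proof) checks that none yields two disjoint 2-cycles.

```python
from itertools import product

def k2_map(g, state):
    # One system update on K_2 with symmetric local function g, where
    # g[k] is the output when the closed neighborhood contains k ones.
    a, b = state
    a = g[a + b]           # update the left vertex first
    b = g[a + b]           # the right vertex then sees the new left state
    return (a, b)

for g in product((0, 1), repeat=3):
    space = {s: k2_map(g, s) for s in product((0, 1), repeat=2)}
    # Two disjoint 2-cycles would pair up all four states, giving
    # four states s with s != F(s) and F(F(s)) == s.
    two_cycle_states = sum(1 for s, t in space.items() if s != t and space[t] == s)
    assert two_cycle_states < 4
print("no symmetric SDS on K2 yields two disjoint 2-cycles")
```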

Finally, we have the following conjecture.

Conjecture 2. Let (a_1, a_2, ..., a_n) represent the state of a graph, and let inv be the inverse function that changes any 0 to a 1 and any 1 to a 0. Then, given a symmetric binary SDS, the following subgraph cannot exist in the phase space.

3. COLOR SDS

Our research into Color SDS was motivated by viewing the graph coloring problem in terms of SDSs. The graph coloring problem asks how many colors one needs to color the vertices of a graph so that no two adjacent vertices share a color. We call such a coloring a proper coloring.

3.1 What is a Color SDS?

A color SDS is an SDS that is designed to color a graph, and "stop" when the graph reaches a proper coloring (or never stop if there is no proper coloring). For an SDS, stopping means we've reached a fixed point. One example of a color SDS (and the one that we worked on most) is the following:

• Graph: any graph we want colored.
• States: the colors 1, ..., n.
• Vertex Function: Look at the vertex; say it has color i. If it is adjacent to a vertex of the same color, change its color to i + 1 mod n. Otherwise leave it alone.

When we write out the phase space of this SDS, the fixed points correspond to all possible proper colorings.
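The vertex function above is a one-liner in code. This sketch uses colors 0..n−1 instead of the thesis's 1..n (so the bump is (c + 1) mod n); the names `color_update` and `k3` are illustrative.

```python
def color_update(coloring, adj, order, n):
    # One system update of the color SDS: each vertex in `order` checks
    # its neighbors and bumps its color by one (mod n) on a clash.
    coloring = list(coloring)
    for v in order:
        if any(coloring[u] == coloring[v] for u in adj[v]):
            coloring[v] = (coloring[v] + 1) % n
    return tuple(coloring)

k3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
# An improper coloring of K_3 on three colors; vertex 1 clashes with 2.
print(color_update((2, 1, 1), k3, [0, 1, 2], 3))  # → (2, 2, 1)
# A proper coloring is a fixed point.
print(color_update((0, 1, 2), k3, [0, 1, 2], 3))  # → (0, 1, 2)
```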

3.2 Color Distance

We begin our investigation with the idea of Color Distance. This is a number assigned to each colored graph that represents how properly colored it is. One way to get such a measure is to calculate the phase space for the color SDS and say the color distance is the number of updates a state needs before reaching a fixed point. 3. Color SDS 42

However, this measure requires one to calculate the entire phase space. We would like to find another measure that approximates the same concept. Consider this alternate definition of color distance.

Definition 19. The color distance is the number of vertices which are adjacent to a vertex of the same color.

In this case a lower color distance corresponds to a better coloring.

Theorem 11. Take the color SDS and any colored graph G. After an update the color distance stays the same or decreases.

Proof. Take a coloring on graph G and apply the coloring function. Consider when an arbitrary vertex v_i updates. We have three cases.

Case 1: v_i is not adjacent to a vertex of the same color and so does not change. Then the color distance does not change either.

Case 2: v_i is the same color as an adjacent vertex v_j and so updates to a new color, say c. Because v_i is no longer the same color as v_j, the color distance goes down by one. Furthermore, if we assume the updated v_i is not adjacent to any vertices of color c, then the color distance does not change further and we are done.

Case 3: This is Case 2 without the last assumption. After updating, the vertex v_i is next to at least one vertex v_k of the same color. Because v_i is no longer the same color as v_j, the color distance goes down by one. Because v_i is now the same color as v_k, the color distance goes up by one. Thus the net change in the color distance is 0.

We now ask a few questions about the color SDS. Almost all of the following are conjectures that we have since disproved.
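Theorem 11 can be checked exhaustively on a small graph. The sketch below tries every 3-coloring of K_3 under every update order and confirms the color distance of Definition 19 never increases; all names are illustrative.

```python
from itertools import product, permutations

def color_update(coloring, adj, order, n):
    # One system update of the color SDS (colors 0..n-1).
    coloring = list(coloring)
    for v in order:
        if any(coloring[u] == coloring[v] for u in adj[v]):
            coloring[v] = (coloring[v] + 1) % n
    return tuple(coloring)

def color_distance(coloring, adj):
    # Definition 19: the number of vertices adjacent to a same-colored vertex.
    return sum(1 for v in adj if any(coloring[u] == coloring[v] for u in adj[v]))

k3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for c in product(range(3), repeat=3):
    for order in permutations(range(3)):
        before = color_distance(c, k3)
        after = color_distance(color_update(c, k3, list(order), 3), k3)
        assert after <= before
print("color distance never increased on K3")
```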

Conjecture 3. Does running the color SDS on Kn always improve the color distance when possible?

Answer: No. For example, let 322 represent a coloring of K_3 on three colors. Updating this with the identity permutation gives

322
↓
332

This single update does not change the color distance. This example also works on arbitrary K_n:

322...2
↓
333...2

Conjecture 4 (a weaker version of Conjecture 3). Does running the coloring function on K_n eventually result in a proper coloring?

We think the above is true, but could not find a proof.

Conjecture 5. For any coloring of a graph G, is there a permutation π such that a single run of the coloring SDS improves the coloring when possible? That is, after updating, the graph requires fewer colors to color, unless it is already colored with the minimal number of colors. (Note we are allowed to choose the permutation with each update.)

This conjecture is also false. Consider the following graph over four colors (where the number corresponds to the color of the vertex).

No permutation of the above graph will decrease the color distance after a single update. However, we can update this graph so that it eventually ends in a fixed point. We wonder what classes of graphs give counterexamples, as this may help characterize the graphs which are hard to color.

3.3 Other Coloring SDSs, Other Color Distances

We could of course modify our coloring SDS. For example, a slightly smarter one would, when it finds a vertex that is adjacent to one of the same color, update the vertex to the first color that is free. If no colors are free, then it adds 1 to the color. Note that the adding of 1 is important. If we instead did nothing, then we can make the following graph on 3 colors which is a fixed point but not a proper coloring.

Perhaps more important than modifying our color SDS is modifying our concept of color distance. Our counterexample to Conjecture 5 suggests that our color distance may not be "fine" enough. One possible measure is the following.

Definition 20 (Fixed point coloring distance). Take a colored graph and a color SDS. The color distance is the number of vertices such that, during the next update, they do not change color.

In this case a higher color distance corresponds to a better coloring.

3.4 Lonely Colorings

Definition 21. A lonely coloring is a coloring of a graph such that, when represented on the color SDS phase space, it is an isolated fixed point disconnected from the rest of the phase space.

In other words, nothing updates to a lonely coloring, and a lonely coloring only updates to itself. We investigated whether our color SDS has lonely colorings. The answer, somewhat surprisingly, is that it does, but only if we allow more colors than the graph needs to color itself. These results are summed up in the next few theorems.

Theorem 12. There do not exist lonely colorings if there are only two colors.

Theorem 13. For a graph that needs a minimum of n colors to properly color, there does not exist a lonely coloring if we have n or fewer colors.

The proofs for these are not difficult and go by contradiction. We assume that there is a lonely coloring, then modify that coloring into a new one so that, under a specific permutation, it updates into the lonely coloring. However, if we have n + 1 or more colors, then there may exist a lonely coloring. The phase space of K_2 on three colors serves as an immediate counterexample. This gives us the following conjecture.

Conjecture 6. Let G be a graph that requires a minimum of n colors to properly color. Then the coloring SDS over G with n + 1 or more colors always has a lonely coloring.

FUTURE DIRECTIONS

The world of sequential dynamical systems is vast, and much of it waits to be discovered. Our foray into periodic points is only a small improvement in our understanding, but we believe that the transition graph can be generalized to shine further light on periodic points. Little is known about the IPS of Nor and Nand, reflecting Nor and Nand's strange complexity. IPS in general are not well understood, save for a few simple functions. Finally, the color SDS is just one attempt to work SDS into a tool with which to tackle a pure mathematical problem. Certainly many other applications of SDS exist, both pure and applied.

ACKNOWLEDGMENTS

I thank my colleagues Gwendolyn Claflin, Sophie D'Arcy, Colin Defant and Cory Saunders for their encouragement and mathematical support. I also thank the University of California Santa Barbara Math REU program and NSF award DMS-1358884 for making the research for the first chapter possible, along with Maribel Bueno Cachadina for organizing the REU. I thank the CCS Small Research Grant for funding the second two chapters. Finally I thank Padraic Bartlett who – along with helping organize the REU, editing, and fleshing out the algorithm in the first chapter – served as a magnificent mentor and guide.

BIBLIOGRAPHY

[1] Colin Defant. Mixing induced CP asymmetries in inclusive B decays. Phys. Lett., B393:132–142, 1997.

[2] Daniel C. Dennett. Darwin's Dangerous Idea. Simon and Schuster, 1995.

[3] Martin Gardner. Mathematical games: The fantastic combinations of John Conway's new solitaire game "life". Scientific American, 223(4):120–123, 1970.

[4] Henning Mortveit and Christian Reidys. An Introduction to Sequential Dynamical Systems. Springer Science & Business Media, 2007.