EXCHANGEABLE MARKOV PROCESSES ON GRAPHS: FELLER CASE

HARRY CRANE

Abstract. The transition law of every exchangeable Feller process on the space of countable graphs is determined by a $\sigma$-finite measure on the space of $\{0,1\}\times\{0,1\}$-valued arrays. In discrete time, this characterization amounts to a construction from an independent, identically distributed sequence of exchangeable random functions. In continuous time, the behavior is enriched by a Lévy–Itô-type decomposition of the jump measure into mutually singular components that govern global, vertex-level, and edge-level dynamics. Furthermore, every such process almost surely projects to a Feller process in the space of graph limits.

1. Introduction

A graph, or network, $G = (V, E_G)$ is a set of vertices $V$ and a binary relation of edges $E_G \subseteq V \times V$. For $i, j \in V$, we write $G^{ij}$ to indicate the status of edge $(i,j)$, i.e.,
$$G^{ij} := \mathbf{1}\{(i,j) \in E_G\} = \begin{cases} 1, & (i,j) \in E_G,\\ 0, & \text{otherwise.}\end{cases}$$

We write $\mathcal{G}_V$ to denote the space of graphs with vertex set $V$. Networks represent interactions among individuals, particles, and variables throughout science. In this setting, consider a population of individuals labeled distinctly in $V$ and related to one another by the edges of a graph $G = (V, E_G)$. The population size is typically large, but unknown, and so we assume a countably infinite population and take $V = \mathbb{N} := \{1, 2, \ldots\}$. In practice, the labels $\mathbb{N}$ are arbitrarily assigned for the purpose of distinguishing individuals, and data can only be observed for a finite sample $S \subset \mathbb{N}$. Thus, we often take $S = [n] := \{1, \ldots, n\}$ whenever $S \subseteq \mathbb{N}$ is finite with cardinality $n \geq 1$. These practical matters relate to the operations of

• relabeling: the relabeling of $G = (G^{ij})_{i,j \geq 1}$ by any permutation $\sigma : \mathbb{N} \to \mathbb{N}$ is

(1) $G^\sigma := (G^{\sigma(i)\sigma(j)})_{i,j \geq 1}$, and

• restriction: the restriction of $G = (G^{ij})_{i,j \geq 1}$ to a graph with vertices $S \subseteq \mathbb{N}$ is

(2) $G^{|S} = G|_S := (G^{ij})_{i,j \in S}$.

In many relevant applications, the network structure changes over time, resulting in a time-indexed collection $(G_t)_{t \in T}$ of graphs. We consider exchangeable, consistent Markov processes as statistical models in this general setting.

Date: July 11, 2015.
1991 Mathematics Subject Classification. 05C80; 60G09; 60J05; 60J25.
Key words and phrases. exchangeability; graph limit; Lévy–Itô decomposition; Aldous–Hoover theorem; Erdős–Rényi random graph.
The author is partially supported by NSF grant DMS-1308899 and NSA grant H98230-13-1-0299.

1.1. Graph-valued Markov processes. A $\mathcal{G}_{\mathbb{N}}$-valued Markov process is a collection $\Gamma = \{\Gamma_G : G \in \mathcal{G}_{\mathbb{N}}\}$ for which each $\Gamma_G = (\Gamma_t)_{t \in T}$ is a family of random graphs satisfying $\Gamma_0 = G$ and

• the Markov property, i.e., the past $(\Gamma_s)_{s < t}$ and future $(\Gamma_s)_{s > t}$ are conditionally independent given the present $\Gamma_t$, for all $t \in T$.

In addition, we assume $\Gamma$ is

• exchangeable, i.e., all $\Gamma_G$ share a common exchangeable transition law such that

(3) $P\{\Gamma_{t'} \in \cdot \mid \Gamma_t = F\} = P\{\Gamma^\sigma_{t'} \in \cdot \mid \Gamma^\sigma_t = F\}$, $F \in \mathcal{G}_{\mathbb{N}}$, $t' > t$, and

• consistent, i.e., the transition law of $\Gamma$ satisfies

(4) $P\{\Gamma_{t'}|_{[n]} = F \mid \Gamma_t = F'\} = P\{\Gamma_{t'}|_{[n]} = F \mid \Gamma_t = F''\}$, $F \in \mathcal{G}_{[n]}$,

for all $F', F'' \in \mathcal{G}_{\mathbb{N}}$ such that $F'|_{[n]} = F''|_{[n]}$, for every $n \in \mathbb{N}$.

Consistency implies that $\Gamma^{[n]}_G = (\Gamma_t|_{[n]})_{t \in T}$ satisfies the Markov property for every $n \in \mathbb{N}$, for all $G \in \mathcal{G}_{\mathbb{N}}$. We call any $\Gamma$ satisfying these properties an exchangeable, consistent Markov process. In Proposition 4.3, we observe that consistency and the Feller property are equivalent for exchangeable Markov processes on $\mathcal{G}_{\mathbb{N}}$, so we sometimes also call $\Gamma$ an exchangeable Feller process.

The process $\Gamma = \{\Gamma_G : G \in \mathcal{G}_{\mathbb{N}}\}$ is enough to determine the law of any collection $(\Gamma_t)_{t \in T}$ with a given transition law and initial distribution $\nu$ by first drawing $\Gamma_0 \sim \nu$ and then putting $\Gamma_\nu = \Gamma_G$ on the event $\Gamma_0 = G$. With the underlying process $\Gamma = \{\Gamma_G : G \in \mathcal{G}_{\mathbb{N}}\}$ understood, we write $\Gamma_\nu = (\Gamma_t)_{t \in T}$ to denote a collection with this description.

Our main theorems characterize the behavior of exchangeable Feller processes in both discrete and continuous time. In discrete time, we show that every exchangeable Feller process can be constructed by an iterated application of independent, identically distributed exchangeable random functions $\mathcal{G}_{\mathbb{N}} \to \mathcal{G}_{\mathbb{N}}$, called rewiring maps; see Section 4.5. In continuous time, $\Gamma$ admits a construction from a Poisson point process whose intensity measure has a Lévy–Itô-type decomposition. The Lévy–Itô representation classifies every discontinuity of $\Gamma$ as one of three types.
In addition, when $\nu$ is an exchangeable initial distribution on $\mathcal{G}_{\mathbb{N}}$, both discrete- and continuous-time processes $\Gamma_\nu$ project to a Feller process in the space of graph limits. These outcomes invoke connections to previous work on the theory of combinatorial stochastic processes [4, 10], partially exchangeable arrays [1, 2, 5], and limits of dense graph sequences [8, 9].

1.2. Outline. Before summarizing our conclusions (Section 3), we first introduce key definitions and assumptions (Section 2). We then develop essential concepts in more detail (Section 4) and prove our main theorems in discrete time (Section 5) and in continuous time (Section 6). We highlight some immediate extensions of our main theorems in our concluding remarks (Section 7).

2. Definitions and assumptions

2.1. Graphs. For any graph $G = (V, E_G)$, we impose the additional axioms of

(i) anti-reflexivity: $(i,i) \notin E_G$ for all $i \in V$, and
(ii) symmetry: $(i,j) \in E_G$ implies $(j,i) \in E_G$ for all $i, j \in V$.

Thus, we specialize $\mathcal{G}_V$ to denote the set of all graphs satisfying (i) and (ii). By the symmetry axiom (ii), we write $ij = \{i,j\} \in E_G$ to indicate that there is an edge between $i$ and $j$ in $G$. By definition, $G$ has no multiple edges, condition (i) forbids edges from a vertex to itself, and condition (ii) makes $G$ undirected. In terms of the adjacency array $G = (G^{ij})_{i,j \in V}$, (i) and (ii) above correspond to

(i') $G^{ii} = 0$ for all $i \in V$ and
(ii') $G^{ij} = G^{ji}$ for all $i, j \in V$,

respectively. As we discuss in Section 7, our main theorems remain valid under some relaxation of each of these conditions. The above two notions of a graph, as a pair $(V, E_G)$ and as an adjacency array, are equivalent; we use them interchangeably and with the same notation.

With $S_V$ denoting the space of permutations of $V \subseteq \mathbb{N}$, i.e., bijective functions $\sigma : V \to V$, relabeling (1) associates every $\sigma \in S_V$ to a map $\mathcal{G}_V \to \mathcal{G}_V$. For all $S \subseteq S' \subseteq V$, restriction (2) determines a map $\mathcal{G}_{S'} \to \mathcal{G}_S$, $G \mapsto G^{|S} = G|_S$. Specifically, for $n \geq m \geq 1$, $G|_{[m]} = (G^{ij})_{1 \leq i,j \leq m}$ is the leading $m \times m$ submatrix of $G = (G^{ij})_{1 \leq i,j \leq n}$. By combining (1) and (2), every injection $\phi : S \to S'$ determines a projection $\mathcal{G}_{S'} \to \mathcal{G}_S$ by

(5) $G \mapsto G^\phi := (G^{\phi(i)\phi(j)})_{i,j \in S}$.

We call a sequence of finite graphs $(G_n)_{n \in \mathbb{N}}$ compatible if $G_n \in \mathcal{G}_{[n]}$ and $G_n|_{[m]} = G_m$ for every $m \leq n$, for all $n \in \mathbb{N}$. Any compatible sequence of finite graphs determines a unique countable graph $G_\infty$, the projective limit of $(G_1, G_2, \ldots)$ under restriction. We endow $\mathcal{G}_{\mathbb{N}}$ with the product-discrete topology induced, for example, by the ultrametric

(6) $d(G, G') := 1/\max\{n \in \mathbb{N} : G|_{[n]} = G'|_{[n]}\}$, $G, G' \in \mathcal{G}_{\mathbb{N}}$.

From (6), we naturally equip $\mathcal{G}_{\mathbb{N}}$ with the Borel $\sigma$-field $\sigma\langle \cdot|_{[n]} \rangle_{n \in \mathbb{N}}$ generated by the restriction maps. Under (6), $\mathcal{G}_{\mathbb{N}}$ is a compact and, therefore, complete and separable metric space; hence, $\mathcal{G}_{\mathbb{N}}$ is Polish and standard measure-theoretic outcomes apply in our analysis.

2.2. Graph limits. For $n \geq m \geq 1$, $F \in \mathcal{G}_{[m]}$, and $G \in \mathcal{G}_{[n]}$, we define the density of $F$ in $G$ by

(7) $\delta(F, G) := \operatorname{ind}(F, G)/n^{\downarrow m}$,

where $\operatorname{ind}(F, G)$ is the number of embeddings of $F$ into $G$ and $n^{\downarrow m} := n(n-1)\cdots(n-m+1)$. Specifically,

$$\operatorname{ind}(F, G) := \sum_{\phi : [m] \to [n]} \mathbf{1}\{G^\phi = F\}$$

is the number of injections $\phi : [m] \to [n]$ for which $G^\phi = F$.

Given a sequence $G = (G_n)_{n \in \mathbb{N}}$ in $\mathcal{G}^* := \bigcup_{n \in \mathbb{N}} \mathcal{G}_{[n]}$, we define the limiting density of $F$ in $G$ by $\delta(F, G) := \lim_{n \to \infty} \delta(F, G_n)$, if it exists. In particular, for $G \in \mathcal{G}_{\mathbb{N}}$, we define the limiting density of $F$ in $G$ by

(8) $\delta(F, G) := \lim_{n \to \infty} \delta(F, G|_{[n]})$, if it exists.

Definition 2.1 (Graph limit). The graph limit of $G \in \mathcal{G}_{\mathbb{N}}$ is the collection $|G| := (\delta(F, G))_{F \in \bigcup_{m \in \mathbb{N}} \mathcal{G}_{[m]}}$, provided $\delta(F, G)$ exists for every $F \in \bigcup_{m \in \mathbb{N}} \mathcal{G}_{[m]}$. If $\delta(F, G)$ does not exist for some $F$, then we put $|G| := \partial$. We write $\mathcal{D}^*$ to denote the closure of $\{|G| : G \in \mathcal{G}_{\mathbb{N}}\} \setminus \{\partial\}$ in $[0,1]^{\bigcup_{m \in \mathbb{N}} \mathcal{G}_{[m]}}$.
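The density in (7) is directly computable for small graphs. The following Python sketch counts injections by brute force (the function names `ind` and `density` are ours, not from the paper; the approach is feasible only for small $m$ and $n$):

```python
from itertools import permutations

def ind(F, G):
    """Number of injections phi: [m] -> [n] with G^phi = F.

    F and G are symmetric 0/1 adjacency matrices (lists of lists)
    of sizes m x m and n x n with zero diagonal.
    """
    m, n = len(F), len(G)
    count = 0
    for phi in permutations(range(n), m):   # all injections [m] -> [n]
        if all(G[phi[i]][phi[j]] == F[i][j]
               for i in range(m) for j in range(i + 1, m)):
            count += 1
    return count

def density(F, G):
    """delta(F, G) = ind(F, G) / n^(falling factorial m), as in (7)."""
    m, n = len(F), len(G)
    falling = 1
    for k in range(m):
        falling *= n - k
    return ind(F, G) / falling
```

For example, the density of a single edge in the triangle is $1$, while in the path on three vertices it is $2/3$.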

If the graph limit of $G$ exists, then $|G|$ determines a family of exchangeable probability distributions on $(\mathcal{G}_{[n]})_{n \in \mathbb{N}}$ by

(9) $P\{\Gamma_n = F \mid G\} = \delta(F, G)$, $F \in \mathcal{G}_{[n]}$, $n \in \mathbb{N}$.

Moreover, the distributions determined by $(\delta(F, G))_{F \in \mathcal{G}_{[m]}}$ and $(\delta(F, G))_{F \in \mathcal{G}_{[n]}}$, $m \leq n$, are mutually consistent, i.e., the distribution in (9) satisfies

$$P\{\Gamma_n|_{[m]} = F' \mid G\} = \sum_{F \in \mathcal{G}_{[n]} :\, F|_{[m]} = F'} \delta(F, G) = \delta(F', G), \quad \text{for every } F' \in \mathcal{G}_{[m]}.$$

Thus, every $D \in \mathcal{D}^*$ determines a unique probability measure on $\mathcal{G}_{\mathbb{N}}$, which we denote by $\gamma_D$. In addition, $\gamma_D$ is exchangeable in the sense that $\Gamma \sim \gamma_D$ satisfies $\Gamma^\sigma =_{\mathcal{L}} \Gamma$ for all $\sigma \in S_{\mathbb{N}}$, where $=_{\mathcal{L}}$ denotes equality in law.

Remark 2.2. In our notation, $D \in \mathcal{D}^*$ and $\gamma_D$ correspond to the same object, but we use the former to emphasize that $D$ is the graph limit of some $G \in \mathcal{G}_{\mathbb{N}}$ and the latter to specifically refer to the probability distribution that $D$ determines on $\mathcal{G}_{\mathbb{N}}$. As an element of $[0,1]^{\bigcup_{m \in \mathbb{N}} \mathcal{G}_{[m]}}$, $D = (D_F)_{F \in \mathcal{G}^*}$. The connection between $D$ and $\gamma_D$ is made explicit by

$$D_F = D(F) = \gamma_D(\{G \in \mathcal{G}_{\mathbb{N}} : G|_{[n]} = F\}), \quad F \in \mathcal{G}_{[n]},\ n \in \mathbb{N}.$$

Definition 2.3 (Dissociated random graphs). A random graph $\Gamma$ is dissociated if

(10) $\Gamma|_S$ and $\Gamma|_{S'}$ are independent for all disjoint subsets $S, S' \subseteq \mathbb{N}$.

We call a probability measure $\nu$ on $\mathcal{G}_{\mathbb{N}}$ dissociated if $\Gamma \sim \nu$ is a dissociated random graph.

Dissociated measures play a central role in the theory of exchangeable random graphs: the measure $\gamma_D$ determined by any $D \in \mathcal{D}^*$ is dissociated, and conversely the Aldous–Hoover theorem (Theorem 4.2) states that every exchangeable probability measure on $\mathcal{G}_{\mathbb{N}}$ is a mixture of exchangeable, dissociated probability measures. In particular, to every exchangeable probability measure $\nu$ on $\mathcal{G}_{\mathbb{N}}$ there exists a unique probability measure $\Delta$ on $\mathcal{D}^*$ such that $\nu = \gamma_\Delta$, where

(11) $\gamma_\Delta(\cdot) := \int_{\mathcal{D}^*} \gamma_D(\cdot)\,\Delta(dD)$

is the mixture of $\gamma_D$ measures with respect to $\Delta$.

2.3. Notation. We use the capital Roman letter $G$ to denote a generic graph, capital Greek letter $\Gamma$ to denote a random graph, bold Greek letter with subscript $\mathbf{\Gamma}_\bullet$ to denote a collection of random graphs with initial condition $\bullet$, and bold Greek letter $\mathbf{\Gamma} = \{\mathbf{\Gamma}_\bullet\}$ to denote a graph-valued process indexed by the initial conditions $\bullet$.

Superscripts index edges and subscripts index time; therefore, for $(\Gamma_t)_{t \in T}$, we write $\Gamma^{ij}_t$ to indicate the status of edge $ij$ at time $t \in T$ and $\Gamma^S_{T_0} = (\Gamma^S_t)_{t \in T_0}$ to denote the trajectory of the edges $ij$, $i, j \in S \subseteq \mathbb{N}$, over the set of times in $T_0 \subseteq T$. We adopt the same notation for processes $\Gamma = \{\Gamma_G : G \in \mathcal{G}_{\mathbb{N}}\}$, with $\Gamma^S_{T_0} := \{\Gamma^S_{G,T_0} : G \in \mathcal{G}_{\mathbb{N}}\}$ denoting the restriction of $\Gamma$ to edges $ij \in S$ and times $T_0 \subseteq T$.

We distinguish between discrete-time ($T = \mathbb{Z}_+ := \{0, 1, 2, \ldots\}$) and continuous-time ($T = \mathbb{R}_+ := [0, \infty)$) processes by indexing time by $m$ and $t$, respectively; thus, $(\Gamma_m)_{m \geq 0}$ denotes a discrete-time process and $(\Gamma_t)_{t \geq 0}$ denotes a continuous-time process. To further distinguish these cases, we call $\Gamma$ a Markov chain if its constituents are indexed by discrete time and a Markov process if its constituents are indexed by continuous time.
Whenever a EXCHANGEABLE MARKOV PROCESSES ON GRAPHS: FELLER CASE 5 discussion encompasses both discrete- and continuous-time, we employ terminology and notation for the continuous-time case.

3. Summary of main theorems

We now state our main theorems, saving many technical details for later. Of primary importance are the notions of rewiring maps and rewiring limits, which we introduce briefly in Section 3.1 and develop formally in Section 4.5.

3.1. Discrete-time Feller chains. Let $W = (W^{ij})_{i,j \in V}$ be a symmetric $\{0,1\} \times \{0,1\}$-valued array with $W^{ii} = (0,0)$ for all $i \in V$. For $i, j \in V$, we express the $(i,j)$ component of $W$ as a pair $W^{ij} = (W^{ij}_0, W^{ij}_1)$. For any such $W$ and any $G \in \mathcal{G}_V$, we define the rewiring of $G$ by $W$ by $G' = W(G)$, where

(12) $$G'^{ij} := \begin{cases} W^{ij}_0, & G^{ij} = 0,\\ W^{ij}_1, & G^{ij} = 1, \end{cases} \qquad i, j \in V.$$

Note that $W$ acts on $G$ by reconfiguring its adjacency array at each entry: if $G^{ij} = 0$, then $G'^{ij}$ is taken from $W^{ij}_0$; otherwise, $G'^{ij}$ is taken from $W^{ij}_1$. Thus, we can regard $W$ as a function $\mathcal{G}_V \to \mathcal{G}_V$, called a rewiring map. Lemma 4.9 records some basic facts about rewiring maps.

Our definition of relabeling and restriction for rewiring maps is identical to definitions (1) and (2) for graphs, i.e., for every $\sigma \in S_V$ and $S \subseteq V$, we define

$$W^\sigma := (W^{\sigma(i)\sigma(j)})_{i,j \in V} \quad \text{and} \quad W^{|S} = W|_S := (W^{ij})_{i,j \in S}.$$

We write $\mathcal{W}_V$ to denote the collection of rewiring maps $\mathcal{G}_V \to \mathcal{G}_V$.

Given a probability measure $\omega$ on $\mathcal{W}_{\mathbb{N}}$, we construct a discrete-time Markov chain $\Gamma^*_\omega = \{\Gamma^*_{G,\omega} : G \in \mathcal{G}_{\mathbb{N}}\}$ as follows. We first generate $W_1, W_2, \ldots$ independent and identically distributed (i.i.d.) according to $\omega$. For every $G \in \mathcal{G}_{\mathbb{N}}$, we define $\Gamma^*_{G,\omega} = (\Gamma^*_m)_{m \geq 0}$ by putting $\Gamma^*_0 = G$ and

(13) $$\Gamma^*_m := W_m(\Gamma^*_{m-1}) = (W_m \circ \cdots \circ W_1)(G), \quad m \geq 1,$$

where $W \circ W'$ indicates the composition of $W$ and $W'$ as maps $\mathcal{G}_{\mathbb{N}} \to \mathcal{G}_{\mathbb{N}}$.

From the definition in (12), $\Gamma^*_\omega$ in (13) is a consistent Markov chain. If, in addition, $W \sim \omega$ satisfies $(W^{ij})_{i,j \geq 1} =_{\mathcal{L}} (W^{\sigma(i)\sigma(j)})_{i,j \geq 1}$ for all permutations $\sigma : \mathbb{N} \to \mathbb{N}$, then $\Gamma^*_\omega$ is also exchangeable. Theorem 3.1 establishes the converse: to every discrete-time exchangeable, consistent Markov chain $\Gamma = \{\Gamma_G : G \in \mathcal{G}_{\mathbb{N}}\}$ there corresponds a probability measure $\omega$ on $\mathcal{W}_{\mathbb{N}}$ such that $\Gamma =_{\mathcal{L}} \Gamma^*_\omega$, that is, $\Gamma_G =_{\mathcal{L}} \Gamma^*_{G,\omega}$ for all $G \in \mathcal{G}_{\mathbb{N}}$.

By appealing to the theory of partially exchangeable arrays, we further characterize $\omega$ uniquely in terms of a measure $\Upsilon$ on the space of rewiring limits, which we express as $\omega = \Omega_\Upsilon$. We present these outcomes more explicitly in Section 4.5.
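On finite restrictions, the rewiring action (12) and the iterated construction (13) are easy to implement. A minimal sketch (the helper names `rewire` and `chain` are ours, purely illustrative):

```python
def rewire(W, G):
    """Apply a rewiring map W to a graph G as in (12).

    G is an n x n symmetric 0/1 adjacency matrix with zero diagonal;
    W[i][j] is the pair (W0^ij, W1^ij).  The new status of edge ij is
    W0^ij when G^ij = 0 and W1^ij when G^ij = 1.
    """
    n = len(G)
    H = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            H[i][j] = H[j][i] = W[i][j][G[i][j]]
    return H

def chain(G, rewiring_maps):
    """Iterate (13): Gamma_m = (W_m o ... o W_1)(G); return the whole path."""
    path = [G]
    for W in rewiring_maps:
        path.append(rewire(W, path[-1]))
    return path
```

For example, the map with every off-diagonal pair equal to $(1,0)$ complements the graph, while pairs $(0,1)$ act as the identity on each edge.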

Theorem 3.1 (Discrete-time characterization). Let $\Gamma = \{\Gamma_G : G \in \mathcal{G}_{\mathbb{N}}\}$ be a discrete-time, exchangeable, consistent Markov chain on $\mathcal{G}_{\mathbb{N}}$. Then there exists a probability measure $\Upsilon$ such that $\Gamma =_{\mathcal{L}} \Gamma^*_\Upsilon$, where $\Gamma^*_\Upsilon$ is constructed as in (13) from $W_1, W_2, \ldots$ independent and identically distributed according to $\omega = \Omega_\Upsilon$.

Our proof of Theorem 3.1 relies on an extension (Theorem 5.1) of the Aldous–Hoover theorem (Theorem 4.2) to the case of exchangeable and consistent Markov chains on $\{0,1\}$-valued arrays. Heuristically, $W_1, W_2, \ldots$ must be i.i.d. because of the Markov and consistency assumptions: by consistency, every finite restriction $\Gamma^{[n]}_G$ has the Markov property and, therefore, for every $m \geq 1$ the conditional law of $W_m$, given $\Gamma_{m-1}$, must satisfy

$$W^{[n]}_m \text{ and } \Gamma^{\mathbb{N}\setminus[n]}_{m-1} \text{ are independent for every } n \in \mathbb{N};$$

exchangeability further requires that $W^{[n]}_m$ is independent of $\Gamma^{[n]}_{m-1}$, and so we expect $W_m$ to be independent of $\Gamma_{m-1}$ for every $m \geq 1$; by time-homogeneity, $W_1, W_2, \ldots$ should also be identically distributed; independence of $W_1, W_2, \ldots$ then follows from the Markov property. Each of these points on its own requires a nontrivial argument. We make this heuristic rigorous in Section 5.

3.1.1. Projection into the space of graph limits. From our discussion of graph limits in Section 2.2, any exchangeable random graph $\Gamma$ projects almost surely to a graph limit $|\Gamma| = D$, which corresponds to a random element $\gamma_D$ in the space of exchangeable, dissociated probability measures on $\mathcal{G}_{\mathbb{N}}$. Likewise, any exchangeable rewiring map $W \in \mathcal{W}_{\mathbb{N}}$ projects to a rewiring limit $|W| = \upsilon$, which corresponds to an exchangeable, dissociated probability measure $\Omega_\upsilon$ on $\mathcal{W}_{\mathbb{N}}$. We write $\mathcal{D}^*$ and $\mathcal{V}^*$ to denote the spaces of graph limits and rewiring limits, respectively.

We delay more formal details about rewiring limits until Section 4.5. For now, we settle for an intuitive understanding by analogy to graph limits. Recall the definition of $\gamma_\Delta$ in (11) for a probability measure $\Delta$ on $\mathcal{D}^*$, i.e., $\Gamma \sim \gamma_\Delta$ is a random graph obtained by first sampling $D \sim \Delta$ and, given $D$, drawing $\Gamma$ from $\gamma_D$. Similarly, for a probability measure $\Upsilon$ on $\mathcal{V}^*$, $W \sim \Omega_\Upsilon$ is a random rewiring map obtained by first sampling $\upsilon \sim \Upsilon$ and, given $\upsilon$, drawing $W$ from $\Omega_\upsilon$. In particular, $W \sim \Omega_\Upsilon$ implies $|W| \sim \Upsilon$. Just as for graph limits, we regard $\upsilon \in \mathcal{V}^*$ and $\Omega_\upsilon$ as the same object, but use the former to refer to a rewiring limit of some $W \in \mathcal{W}_{\mathbb{N}}$ and the latter to refer to the corresponding exchangeable, dissociated probability distribution on $\mathcal{W}_{\mathbb{N}}$.

From the construction of $\Gamma^*_\omega$ in (13), every probability measure $\omega$ on $\mathcal{W}_{\mathbb{N}}$ determines a transition probability $P_\omega(\cdot,\cdot)$ on $\mathcal{G}_{\mathbb{N}}$ by

(14) $P_\omega(G, \cdot) := \omega(\{W \in \mathcal{W}_{\mathbb{N}} : W(G) \in \cdot\})$, $G \in \mathcal{G}_{\mathbb{N}}$,

and thus every $\upsilon \in \mathcal{V}^*$ determines a transition probability $P_\upsilon(\cdot,\cdot)$ by taking $\omega = \Omega_\upsilon$ in (14). Consequently, every $\upsilon \in \mathcal{V}^*$ acts on $\mathcal{D}^*$ by composition of probability measures,

(15) $$D \mapsto D\upsilon(\cdot) := \int_{\mathcal{G}_{\mathbb{N}}} P_\upsilon(G, \cdot)\,\gamma_D(dG).$$

In other words, $D\upsilon$ is the probability measure of $\Gamma'$ obtained by first generating $\Gamma \sim \gamma_D$ and, given $\Gamma = G$, taking $\Gamma' = W(G)$, where $W \sim \Omega_\upsilon$ is a random rewiring map. Alternatively, we can express $D$ as a countable vector $(D(F))_{F \in \mathcal{G}^*}$ with

$$D(F) := \gamma_D(\{G \in \mathcal{G}_{\mathbb{N}} : G|_{[n]} = F\}), \quad F \in \mathcal{G}_{[n]},\ n \in \mathbb{N}.$$

Moreover, $\upsilon$ gives rise to an infinite-by-infinite array $(\upsilon(F, F'))_{F,F' \in \mathcal{G}^*}$ with

$$\upsilon(F, F') := \begin{cases} \Omega_\upsilon(\{W \in \mathcal{W}_{\mathbb{N}} : W|_{[n]}(F) = F'\}), & F, F' \in \mathcal{G}_{[n]} \text{ for some } n \in \mathbb{N},\\ 0, & \text{otherwise.} \end{cases}$$

In this way, $(\upsilon(F, F'))_{F,F' \in \mathcal{G}^*}$ has finitely many non-zero entries in each row and the usual definition of $D\upsilon = (D'(F))_{F \in \mathcal{G}^*}$ by

(16) $$D'(F) := \sum_{F' \in \mathcal{G}_{[n]}} D(F')\,\upsilon(F', F), \quad F \in \mathcal{G}_{[n]},\ n \in \mathbb{N},$$

coincides with (15). (Note that the array $(\upsilon(F, F'))_{F,F' \in \mathcal{G}^*}$ does not determine $\upsilon$, but its action on $\mathcal{D}^*$ through (16) agrees with the action of $\upsilon$ in (15).)

For a graph-valued process $\Gamma = \{\Gamma_G : G \in \mathcal{G}_{\mathbb{N}}\}$, we write $\Gamma_{\mathcal{D}^*} = \{\Gamma_D : D \in \mathcal{D}^*\}$ to denote the process derived from $\Gamma$ by defining the law of $\Gamma_D = (\Gamma_t)_{t \in T}$ as that of the collection of random graphs obtained by taking $\Gamma_0 \sim \gamma_D$ and, given $\Gamma_0 = G$, putting $\Gamma_D = \Gamma_G$. Provided $|\Gamma_t|$ exists for all $t \in T$, we define $|\Gamma_D| := (|\Gamma_t|)_{t \in T}$ as the collection of graph limits at all times of $\Gamma_D$. If $|\Gamma_D|$ exists for all $D \in \mathcal{D}^*$, we write $|\Gamma_{\mathcal{D}^*}| := \{|\Gamma_D| : D \in \mathcal{D}^*\}$ to denote the associated $\mathcal{D}^*$-valued process.

Theorem 3.2 (Graph limit chain). Let $\Gamma = \{\Gamma_G : G \in \mathcal{G}_{\mathbb{N}}\}$ be a discrete-time, exchangeable Feller chain on $\mathcal{G}_{\mathbb{N}}$. Then $|\Gamma_{\mathcal{D}^*}|$ exists almost surely and is a Feller chain on $\mathcal{D}^*$. Moreover, $|\Gamma_D| =_{\mathcal{L}} D^*_{D,\Upsilon}$ for every $D \in \mathcal{D}^*$, where $D^*_{D,\Upsilon} := (D^*_m)_{m \geq 0}$ satisfies $D^*_0 = D$ and

(17) $$D^*_m := D^*_{m-1} Y_m = D Y_1 \cdots Y_m, \quad m \geq 1,$$

with $Y_1, Y_2, \ldots$ i.i.d. from $\Upsilon$, the probability measure associated to $\Gamma$ through Theorem 3.1.

3.2. Continuous-time Feller processes. In discrete time, we use an i.i.d. sequence of rewiring maps to construct the representative Markov chain $\Gamma^*_\Upsilon$; cf. Equation (13) and Theorem 3.1. In continuous time, we construct a representative Markov process $\Gamma^*_\omega = \{\Gamma^*_{G,\omega} : G \in \mathcal{G}_{\mathbb{N}}\}$ from a Poisson point process $\mathbf{W} = \{(t, W_t)\} \subset [0, \infty) \times \mathcal{W}_{\mathbb{N}}$ with intensity $dt \otimes \omega$. Here, $dt$ denotes Lebesgue measure on $[0, \infty)$ and $\omega$ satisfies

(18) $$\omega(\{\mathrm{Id}_{\mathbb{N}}\}) = 0 \quad \text{and} \quad \omega(\{W \in \mathcal{W}_{\mathbb{N}} : W|_{[n]} \neq \mathrm{Id}_{[n]}\}) < \infty \quad \text{for every } n \in \mathbb{N},$$

with $\mathrm{Id}_V$ denoting the identity $\mathcal{G}_V \to \mathcal{G}_V$, $V \subseteq \mathbb{N}$.

We construct each $\Gamma^*_{G,\omega}$ through its finite restrictions $\Gamma^{*[n]}_{G,\omega} := (\Gamma^{*[n]}_t)_{t \geq 0}$ on $\mathcal{G}_{[n]}$ by first putting $\Gamma^{*[n]}_0 := G|_{[n]}$ and then defining

(19)
• $\Gamma^{*[n]}_t := W_t|_{[n]}(\Gamma^{*[n]}_{t-})$, if $t > 0$ is an atom time of $\mathbf{W}$ for which $W_t|_{[n]} \neq \mathrm{Id}_{[n]}$, or
• $\Gamma^{*[n]}_t := \Gamma^{*[n]}_{t-}$, otherwise.

Above, we have written $\Gamma^{*[n]}_{t-} := \lim_{s \uparrow t} \Gamma^{*[n]}_s$ to denote the state of $\Gamma^{*[n]}_{G,\omega}$ in the instant before time $t > 0$.

By (18), every $\Gamma^{*[n]}_{G,\omega}$ has càdlàg paths and is a Markov chain on $\mathcal{G}_{[n]}$. Moreover, if $\omega$ is exchangeable, then so is $\Gamma^{*[n]}_\omega$. By construction, $\Gamma^{*[m]}_{G,\omega}$ is the restriction to $\mathcal{G}_{[m]}$ of $\Gamma^{*[n]}_{G,\omega}$, for every $m \leq n$ and every $G \in \mathcal{G}_{\mathbb{N}}$. In particular, $\Gamma^{*[n]}_{G,\omega} = \Gamma^{*[n]}_{G',\omega}$ for all $G, G' \in \mathcal{G}_{\mathbb{N}}$ such that $G|_{[n]} = G'|_{[n]}$. Thus, $(\Gamma^{*[n]}_{G,\omega})_{n \in \mathbb{N}}$ determines a unique collection $\Gamma^*_{G,\omega} = (\Gamma^*_t)_{t \geq 0}$ in $\mathcal{G}_{\mathbb{N}}$, for every $G \in \mathcal{G}_{\mathbb{N}}$. We define the process $\Gamma^*_\omega = \{\Gamma^*_{G,\omega} : G \in \mathcal{G}_{\mathbb{N}}\}$ as the family of all such collections generated from the same Poisson point process.
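A finite restriction of the Poissonian construction (19) can be simulated directly when the restricted intensity has finite total mass, as (18) requires. The sketch below is ours and purely illustrative: in place of a genuinely exchangeable $\omega$ it uses a toy sampler (`sample_rewiring`) whose off-diagonal pairs are i.i.d., and the atom times form a Poisson process on $[0, T]$ with a user-supplied rate:

```python
import random

def sample_rewiring(n, rng, q=0.2):
    """Toy sampler for a rewiring map on [n]: with probability 1 - q an
    off-diagonal pair is (0, 1), which leaves the edge status unchanged;
    otherwise the pair is uniformly random.  (Illustrative only.)"""
    W = [[(0, 0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            pair = ((rng.randint(0, 1), rng.randint(0, 1))
                    if rng.random() < q else (0, 1))
            W[i][j] = W[j][i] = pair
    return W

def simulate(G, T, rate, rng):
    """Simulate the restriction (19) on [0, T]: atom times of a Poisson
    process with the given rate carry rewiring maps, applied as in (12)."""
    n = len(G)
    t, path = 0.0, [(0.0, G)]
    while True:
        t += rng.expovariate(rate)        # next atom of the Poisson process
        if t > T:
            return path
        W = sample_rewiring(n, rng)
        G = [[W[i][j][G[i][j]] if i != j else 0 for j in range(n)]
             for i in range(n)]
        path.append((t, G))
```

The returned path is the sequence of (jump time, state) pairs; between jumps the restricted process is constant, matching the second clause of (19).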

Theorem 3.3 (Poissonian construction). Let $\Gamma = \{\Gamma_G : G \in \mathcal{G}_{\mathbb{N}}\}$ be an exchangeable Feller process on $\mathcal{G}_{\mathbb{N}}$. Then there exists an exchangeable measure $\omega$ satisfying (18) such that $\Gamma =_{\mathcal{L}} \Gamma^*_\omega$, as constructed in (19).

3.2.1. Lévy–Itô representation. Our next theorem builds on Theorem 3.3 by classifying the discontinuities of $\Gamma$ into three types. If $s > 0$ is a discontinuity time for $\Gamma_G = (\Gamma_t)_{t \geq 0}$, then either

(I) there is a unique edge $ij$ for which $\Gamma^{ij}_s \neq \Gamma^{ij}_{s-}$,
(II) there is a unique vertex $i \geq 1$ for which the edge statuses $(\Gamma^{ij}_s)_{j \neq i}$ are updated according to an exchangeable transition probability on $\{0,1\}^{\mathbb{N}}$ and all edges not incident to $i$ remain fixed, or
(III) the edges of $\Gamma_{s-}$ are updated according to a random rewiring map as in discrete time.

Qualitatively, the above description separates the jumps of $\Gamma_G$ according to their local and global characteristics. Type (I) discontinuities are local: they involve a change in status of just a single edge. Type (III) discontinuities are global: they involve a change to a positive proportion of all edges. Type (II) discontinuities lie in between: they involve a change to infinitely many but a zero limiting proportion of edges. According to this characterization, there can be no discontinuities affecting the status of more than one but a zero limiting fraction of vertices or more than one but a zero limiting fraction of edges.

The decomposition of the discontinuities into Types (I)-(III) prompts the Lévy–Itô decomposition of the intensity measure $\omega$ from Theorem 3.3. More specifically, Theorem 3.4 below decomposes the jump measure $\omega$ into a unique quadruple $(e, v, \Sigma, \Upsilon)$, where $e = (e_0, e_1)$ is a unique pair of non-negative constants, $v$ is a unique non-negative constant, $\Sigma$ is a unique probability measure on the space of $2 \times 2$ stochastic matrices, and $\Upsilon$ is a unique measure on the space of rewiring limits. These components contribute to the behavior of $\Gamma_G$, $G \in \mathcal{G}_{\mathbb{N}}$, as follows.

(I) For unique constants $e_0, e_1 \geq 0$, each edge $ij$, $i \neq j$, changes its status independently of all others: each edge in $\Gamma_G$ is removed independently at Exponential rate $e_0 \geq 0$ and each absent edge in $\Gamma_G$ is added independently at Exponential rate $e_1 \geq 0$. A jump of this kind results in a Type (I) discontinuity.

(II) Each vertex jumps independently at Exponential rate $v \geq 0$. When vertex $i \in \mathbb{N}$ experiences a jump, the statuses of edges $ij$, $j \neq i$, are updated conditionally independently according to a random transition probability matrix $S$ generated from a unique probability measure $\Sigma$ on the space of $2 \times 2$ stochastic matrices:

$$P\{\Gamma^{ij}_s = k' \mid \Gamma^{ij}_{s-} = k,\ S\} = S_{kk'}, \quad k, k' = 0, 1,$$

where $S = (S_{kk'}) \sim \Sigma$. Edges that do not involve the specified vertex $i$ stay fixed. A jump of this kind results in a Type (II) discontinuity.

(III) A unique measure $\Upsilon$ on $\mathcal{V}^*$ determines a transition rate measure $\Omega_\Upsilon$, akin to the induced transition probability measure (14) discussed in Section 3.1.1. A jump of this kind results in a Type (III) discontinuity.

For $i \in \mathbb{N}$ and any probability measure $\Sigma$ on $2 \times 2$ stochastic matrices, we write $\Omega^{(i)}_\Sigma$ to denote the probability measure induced on $\mathcal{W}_{\mathbb{N}}$ by the procedure applied to vertex $i$ in (II) above. That is, we generate $W \sim \Omega^{(i)}_\Sigma$ by first taking $S \sim \Sigma$ and, given $S$, generating $W^{ij} = (W^{ij}_0, W^{ij}_1)$, $j \neq i$, as independent and identically distributed from

$$P\{W^{ij}_0 = k \mid S\} = S_{0k}, \quad k = 0, 1 \text{ and } j \neq i, \quad \text{and}$$
$$P\{W^{ij}_1 = k \mid S\} = S_{1k}, \quad k = 0, 1 \text{ and } j \neq i.$$

For all other edges $i'j'$, $i' \neq j'$, such that $i \notin \{i', j'\}$, we put $W^{i'j'} = (0, 1)$. Thus, the action of $W \sim \Omega^{(i)}_\Sigma$ on $G \in \mathcal{G}_{\mathbb{N}}$ results in a discontinuity of Type (II). By slight abuse of notation, we write

(20) $$\Omega_\Sigma := \sum_{i=1}^{\infty} \Omega^{(i)}_\Sigma.$$
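Restricted to $[n]$, the vertex-level update behind $\Omega^{(i)}_\Sigma$ can be sampled as follows. This is a sketch with names of our own choosing; `S` is a $2 \times 2$ stochastic matrix whose rows are indexed by the current edge status:

```python
import random

def vertex_rewiring(n, i, S, rng):
    """Sample the restriction to [n] of W ~ Omega_Sigma^(i): for j != i the
    pair (W0^ij, W1^ij) has entries drawn from the rows of S, i.e.
    P{W_k^ij = k'} = S[k][k']; every pair away from vertex i is (0, 1),
    the identity action, so those edges stay fixed."""
    W = [[(0, 0)] * n for _ in range(n)]
    for a in range(n):
        for b in range(a + 1, n):
            if i in (a, b):
                w0 = 0 if rng.random() < S[0][0] else 1
                w1 = 0 if rng.random() < S[1][0] else 1
                W[a][b] = W[b][a] = (w0, w1)
            else:
                W[a][b] = W[b][a] = (0, 1)
    return W
```

With $S$ the identity matrix, every pair is $(0,1)$ and the map acts as the identity; with the anti-diagonal matrix, every edge incident to vertex $i$ is flipped.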

For $k = 0, 1$, we define $\rho_k$ as the measure that assigns unit mass to each rewiring map with a single off-diagonal entry equal to $(k, k)$ and all other off-diagonal entries $(0, 1)$. In the next theorem, $I := |\mathrm{Id}_{\mathbb{N}}|$ denotes the rewiring limit of the identity $\mathrm{Id}_{\mathbb{N}} : \mathcal{G}_{\mathbb{N}} \to \mathcal{G}_{\mathbb{N}}$ and $\upsilon^{(2)}$ is the mass assigned to $\mathrm{Id}_{[2]}$ by $\Omega_\upsilon$, for any $\upsilon \in \mathcal{V}^*$.

Theorem 3.4 (Lévy–Itô representation). Let $\Gamma^*_\omega = \{\Gamma^*_{G,\omega} : G \in \mathcal{G}_{\mathbb{N}}\}$ be an exchangeable Feller process constructed in (19) based on intensity $\omega$ satisfying (18). Then there exist unique constants $e_0, e_1, v \geq 0$, a unique probability measure $\Sigma$ on $2 \times 2$ stochastic matrices, and a unique measure $\Upsilon$ on $\mathcal{V}^*$ satisfying

(21) $$\Upsilon(\{I\}) = 0 \quad \text{and} \quad \int_{\mathcal{V}^*} (1 - \upsilon^{(2)})\,\Upsilon(d\upsilon) < \infty$$

such that

(22) $$\omega = \Omega_\Upsilon + v\,\Omega_\Sigma + e_0\,\rho_0 + e_1\,\rho_1.$$
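The edge-level component of (22) corresponds to the simplest dynamics: on a finite restriction, present edges are erased at rate $e_0$ and absent edges are inserted at rate $e_1$, independently across edges. A Gillespie-style sketch (names and structure ours, for illustration only):

```python
import random

def simulate_edge_dynamics(G, e0, e1, T, rng):
    """Simulate the Type (I) component on a finite restriction: each
    present edge is removed at rate e0 and each absent edge is added at
    rate e1, independently across edges.  Returns (time, state) pairs."""
    n = len(G)
    G = [row[:] for row in G]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    t, path = 0.0, [(0.0, [row[:] for row in G])]
    while True:
        rates = [e0 if G[i][j] == 1 else e1 for (i, j) in pairs]
        total = sum(rates)
        if total == 0:
            return path
        t += rng.expovariate(total)       # time to the next edge event
        if t > T:
            return path
        # choose the jumping edge proportionally to its rate
        u, acc = rng.random() * total, 0.0
        for (i, j), r in zip(pairs, rates):
            acc += r
            if u < acc:
                G[i][j] = G[j][i] = 1 - G[i][j]
                break
        path.append((t, [row[:] for row in G]))
```

Every jump in the output changes exactly one edge, which is precisely a Type (I) discontinuity.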

Remark 3.5. By Theorem 3.3, every exchangeable Feller process $\Gamma$ admits a construction by $\Gamma^*_\omega$ in (19) for some $\omega$ satisfying (18). Therefore, Theorem 3.4 provides a Lévy–Itô-type representation for all exchangeable $\mathcal{G}_{\mathbb{N}}$-valued Feller processes $\Gamma$.

Remark 3.6. The regularity condition (21) and characterization (22) recall a similar description for $\mathbb{R}^d$-valued Lévy processes, which decompose as the superposition of an independent Brownian motion with drift, a compound Poisson process, and a pure-jump martingale. In $\mathbb{R}^d$, the Poissonian component is characterized by a Lévy measure $\Pi$ satisfying

(23) $$\Pi(\{(0,\ldots,0)\}) = 0 \quad \text{and} \quad \int (1 \wedge |x|^2)\,\Pi(dx) < \infty;$$

see Theorem 1 of [3]. Only a little imagination is needed to appreciate the resemblance between (21) and (23).

Intuitively, (21) is the naive condition needed to ensure that $\Omega_\Upsilon$ satisfies (18). By the Aldous–Hoover theorem (Theorem 4.2), $\upsilon^{(2)}$ corresponds to the probability that a random rewiring map from $\Omega_\upsilon$ restricts to $\mathrm{Id}_{[2]} \in \mathcal{W}_{[2]}$; thus,

$$\int_{\mathcal{V}^*} (1 - \upsilon^{(2)})\,\Upsilon(d\upsilon) < \infty$$

guarantees that the restriction of $\Gamma$ to $\mathcal{G}_{[2]}$ has càdlàg sample paths. Under exchangeability, this is enough to guarantee that $\Gamma^{[n]}$ has càdlàg paths for all $n \in \mathbb{N}$.

3.2.2. Projection into the space of graph limits. As in discrete time, we write $\Gamma_{\mathcal{D}^*}$ for the process derived from $\Gamma$ by mixing with respect to each initial distribution $\gamma_D$, $D \in \mathcal{D}^*$. Analogous to Theorem 3.2, every exchangeable Feller process almost surely projects to a Feller process in the space of graph limits.

Theorem 3.7 (Graph limit process). Let $\Gamma = \{\Gamma_G : G \in \mathcal{G}_{\mathbb{N}}\}$ be an exchangeable Feller process on $\mathcal{G}_{\mathbb{N}}$. Then $|\Gamma_{\mathcal{D}^*}|$ exists almost surely and is a Feller process on $\mathcal{D}^*$. Moreover, each $|\Gamma_D|$ is continuous at all $t > 0$ except possibly at the times of Type (III) discontinuities.

4. Preliminaries: Graph-valued processes

The proof of our main theorems is spread over Sections 5 and 6. In preparation, we first develop the machinery of exchangeable random graphs, graph-valued processes, rewiring maps, graph limits, and rewiring limits.

4.1. Exchangeable random graphs.

Definition 4.1 (Exchangeable random graphs). A random graph $\Gamma \in \mathcal{G}_V$ is exchangeable if $\Gamma^\sigma =_{\mathcal{L}} \Gamma$ for all $\sigma \in S_V$. A measure $\gamma$ on $\mathcal{G}_{\mathbb{N}}$ is exchangeable if $\gamma(S) = \gamma(S^\sigma)$ for all $\sigma \in S_V$ and all measurable subsets $S \subseteq \mathcal{G}_{\mathbb{N}}$, where $S^\sigma := \{G^\sigma : G \in S\}$.

In the following theorem, $\mathcal{X}$ is any Polish space.

Theorem 4.2 (Aldous–Hoover [2], Theorem 14.21). Let $X := (X^{ij})_{i,j \geq 1}$ be a symmetric $\mathcal{X}$-valued array for which $X =_{\mathcal{L}} (X^{\sigma(i)\sigma(j)})_{i,j \geq 1}$ for all $\sigma \in S_{\mathbb{N}}$. Then there exists a measurable function $f : [0,1]^4 \to \mathcal{X}$ such that $f(\cdot, b, c, \cdot) = f(\cdot, c, b, \cdot)$ and $X =_{\mathcal{L}} X^*$, where $X^* := (X^{*ij})_{i,j \geq 1}$ is defined by

(24) $$X^{*ij} := f(\zeta_\emptyset, \zeta_{\{i\}}, \zeta_{\{j\}}, \zeta_{\{i,j\}}), \quad i, j \geq 1,$$

for $\{\zeta_\emptyset;\ (\zeta_{\{i\}})_{i \geq 1};\ (\zeta_{\{i,j\}})_{j > i \geq 1}\}$ i.i.d. Uniform random variables on $[0,1]$.

By taking $\mathcal{X} = \{0,1\}$, the Aldous–Hoover theorem provides a general construction for all exchangeable random graphs with countably many vertices. We make repeated use of the representation in (24) throughout our discussion, with particular emphasis on the especially nice Aldous–Hoover representation and other special properties of the Erdős–Rényi process.
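Specialized to $\mathcal{X} = \{0,1\}$, the representation (24) yields a generic sampler for the restriction to $[n]$ of an exchangeable random graph. A sketch with function names of our own choosing; any measurable $f$ taking values in $\{0,1\}$ and symmetric in its middle two arguments may be plugged in:

```python
import random

def aldous_hoover_sample(n, f, rng):
    """Sample (X*^{ij})_{1<=i,j<=n} from (24): draw i.i.d. Uniform[0,1)
    variables at the global, vertex, and edge levels, then set
    X*^{ij} = f(zeta_empty, zeta_{i}, zeta_{j}, zeta_{ij})."""
    z_empty = rng.random()
    z = [rng.random() for _ in range(n)]      # vertex-level uniforms
    G = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            z_ij = rng.random()               # edge-level uniform
            G[i][j] = G[j][i] = f(z_empty, z[i], z[j], z_ij)
    return G

def g0(p):
    """The Erdos-Renyi choice of f (used in Section 4.2): only the
    edge-level uniform matters, so edges are i.i.d. with probability p."""
    return lambda z_empty, zi, zj, zij: 1 if zij <= p else 0
```

For example, `aldous_hoover_sample(n, g0(0.5), random.Random(0))` returns an Erdős–Rényi graph on $[n]$ with parameter $1/2$.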

4.2. Erdős–Rényi random graphs. For fixed $p \in [0,1]$, the Erdős–Rényi measure $\varepsilon_p$ on $\mathcal{G}_{\mathbb{N}}$ is defined as the distribution of $\Gamma \in \mathcal{G}_{\mathbb{N}}$ obtained by including each edge independently with probability $p$. The Erdős–Rényi measure is exchangeable for every $p \in [0,1]$. We call $\Gamma \sim \varepsilon_p$ an Erdős–Rényi process with parameter $p$.

By including each edge independently with probability $p$, the finite-dimensional distributions of $\Gamma \sim \varepsilon_p$ satisfy

(25) $$P\{\Gamma|_{[n]} = F\} = \varepsilon^{(n)}_p(F) := \prod_{1 \leq i < j \leq n} p^{F^{ij}}(1-p)^{1-F^{ij}} > 0, \quad F \in \mathcal{G}_{[n]},\ n \in \mathbb{N}.$$

Thus, $\varepsilon_p$ has full support in the sense that it assigns positive measure to all open subsets of $\mathcal{G}_{\mathbb{N}}$. Important to our proofs of Theorems 3.1, 3.4, and 5.1, $\Gamma \sim \varepsilon_p$ admits an Aldous–Hoover representation $\Gamma =_{\mathcal{L}} (g_0(\zeta_\emptyset, \zeta_{\{i\}}, \zeta_{\{j\}}, \zeta_{\{i,j\}}))_{i,j \geq 1}$, where

(26) $$g_0(\zeta_\emptyset, \zeta_{\{i\}}, \zeta_{\{j\}}, \zeta_{\{i,j\}}) = g_0(\zeta_{\{i,j\}}) := \begin{cases} 1, & 0 \leq \zeta_{\{i,j\}} \leq p,\\ 0, & \text{otherwise.}\end{cases}$$

4.3. Feller processes. Let $Z := \{Z_z : z \in \mathcal{Z}\}$ be a Markov process in any Polish space $\mathcal{Z}$. The semigroup $(P_t)_{t \in T}$ of $Z$ acts on bounded, measurable functions $g : \mathcal{Z} \to \mathbb{R}$ by

$$P_t g(z) := E(g(Z_t) \mid Z_0 = z), \quad z \in \mathcal{Z}.$$

We say that $Z$ possesses the Feller property if, for all bounded, continuous $g : \mathcal{Z} \to \mathbb{R}$,

(i) $z \mapsto P_t g(z)$ is continuous for every $t > 0$ and
(ii) $\lim_{t \downarrow 0} P_t g(z) = g(z)$ for all $z \in \mathcal{Z}$.

Proposition 4.3. The following are equivalent for any exchangeable Markov process $\Gamma = \{\Gamma_G : G \in \mathcal{G}_{\mathbb{N}}\}$ on $\mathcal{G}_{\mathbb{N}}$.

(i) $\Gamma$ is consistent.
(ii) $\Gamma$ has the Feller property.

Proof. (i) $\Rightarrow$ (ii): Compactness of the topology induced by (6) and the Stone–Weierstrass theorem imply that

$$\{g : \mathcal{G}_{\mathbb{N}} \to \mathbb{R} : \exists\, n \in \mathbb{N} \text{ such that } G|_{[n]} = G'|_{[n]} \text{ implies } g(G) = g(G')\}$$

is dense in the space of continuous functions $\mathcal{G}_{\mathbb{N}} \to \mathbb{R}$. Therefore, every bounded, continuous $g : \mathcal{G}_{\mathbb{N}} \to \mathbb{R}$ can be approximated to arbitrary accuracy by a function that depends on $G \in \mathcal{G}_{\mathbb{N}}$ only through $G|_{[n]}$, for some $n \in \mathbb{N}$. For any such approximation, the conditional expectation in the definition of the Feller property depends only on the restriction of $\Gamma$ to $\mathcal{G}_{[n]}$, which is a Markov chain. The first point in the Feller property follows immediately and the second follows soon after by noting that $\mathcal{G}_{[n]}$ is a finite state space and, therefore, $\Gamma^{[n]}$ must have a strictly positive hold time in its initial state with probability one.

(ii) $\Rightarrow$ (i): For fixed $n \in \mathbb{N}$ and $F \in \mathcal{G}_{[n]}$, we define $\psi_F : \mathcal{G}_{\mathbb{N}} \to \{0,1\}$ by

$$\psi_F(G) := \mathbf{1}\{G|_{[n]} = F\}, \quad G \in \mathcal{G}_{\mathbb{N}},$$

which is bounded and continuous on $\mathcal{G}_{\mathbb{N}}$. To establish consistency, we must prove that

$$P\{\Gamma_t|_{[n]} = F \mid \Gamma_0 = G\} = P\{\Gamma_t|_{[n]} = F \mid \Gamma_0|_{[n]} = G|_{[n]}\}$$

for every $t \geq 0$, $G \in \mathcal{G}_{\mathbb{N}}$, and $F \in \mathcal{G}_{[n]}$, for all $n \in \mathbb{N}$, which amounts to showing that

(27) $P_t \psi_F(G) = P_t \psi_F(G^*)$ for all $G, G^* \in \mathcal{G}_{\mathbb{N}}$ for which $G|_{[n]} = G^*|_{[n]}$.

By part (i) of the Feller property, $P_t \psi_F(\cdot)$ is continuous and, therefore, it is enough to establish (27) on a dense subset of $\{G \in \mathcal{G}_{\mathbb{N}} : G|_{[n]} = F\}$. Exchangeability implies that $P_t \psi_F(G) = P_t \psi_F(G^\sigma)$ for all $\sigma \in S_{\mathbb{N}}$ that coincide with the identity $[n] \to [n]$. To establish (27), we identify $G^* \in \mathcal{G}_{\mathbb{N}}$ such that $G^*|_{[n]} = F$ and $\{G^{*\sigma} : \sigma$ coincides with the identity $[n] \to [n]\}$ is dense in $\{G \in \mathcal{G}_{\mathbb{N}} : G|_{[n]} = F\}$.

For this, we draw $G^* \sim \varepsilon_{1/2}$ conditional on the event $\{G^*|_{[n]} = F\}$. Now, for every $F' \in \mathcal{G}_{[n']}$, $n' \geq n$, such that $F'|_{[n]} = F$, Kolmogorov's zero-one law guarantees a permutation $\sigma : \mathbb{N} \to \mathbb{N}$ that fixes $[n]$ and for which $G^{*\sigma}|_{[n']} = F'$. Therefore,

$$\{G^{*\sigma} : \sigma \in S_{\mathbb{N}},\ \sigma \text{ coincides with the identity } [n] \to [n]\}$$

is dense in $\{G \in \mathcal{G}_{\mathbb{N}} : G|_{[n]} = F\}$. By the invariance of $P_t \psi_F(\cdot)$ with respect to permutations that fix $[n]$, $P_t \psi_F$ is constant on a dense subset of $\{G \in \mathcal{G}_{\mathbb{N}} : G|_{[n]} = F\}$. Continuity implies that $P_t \psi_F(\cdot)$ must be constant on the closure of this set and, therefore, can depend only on the restriction to $\mathcal{G}_{[n]}$.

By part (ii) of the Feller property,

$$P_t \psi_F(G^*) \to \psi_F(G^*) \quad \text{as } t \downarrow 0.$$

It follows that each finite restriction $\Gamma^{[n]}$ has càdlàg paths and is consistent. $\square$

4.4. More on graph limits. Recall Definition 2.3 of dissociated random graphs. In this context, $\mathcal{D}^*$ corresponds to the space of exchangeable, dissociated probability measures on $\mathcal{G}_{\mathbb{N}}$, so we often conflate the two notions and write

$$D(F) := \gamma_D(\{G \in \mathcal{G}_{\mathbb{N}} : G|_{[n]} = F\}), \quad F \in \mathcal{G}_{[n]},\ n \in \mathbb{N},$$

for each $D \in \mathcal{D}^*$. The space of graph limits $\mathcal{D}^*$ corresponds to the set of exchangeable, dissociated probability measures on $\mathcal{G}_{\mathbb{N}}$ and is compact under the metric

(28) $$\|D - D'\| := \sum_{n \in \mathbb{N}} 2^{-n} \sum_{F \in \mathcal{G}_{[n]}} |D(F) - D'(F)|, \quad D, D' \in \mathcal{D}^*.$$

We furnish $\mathcal{D}^*$ with the trace of the Borel $\sigma$-field on $[0,1]^{\bigcup_{m \in \mathbb{N}} \mathcal{G}_{[m]}}$.

The Aldous–Hoover theorem (Theorem 4.2) provides the link between dissociated measures and exchangeable random graphs. In particular, dissociated graphs are ergodic with respect to the action of $S_{\mathbb{N}}$ in the space of exchangeable random graphs, i.e., the law of every exchangeable random graph is a mixture of dissociated random graphs, and to every exchangeable random graph $\Gamma$ there is a unique probability measure $\Delta$ so that $\Gamma \sim \gamma_\Delta$ as in (11).

Example 4.4 (Graph limit of the Erdős–Rényi process). For fixed $p \in [0,1]$, let $(\varepsilon^{(n)}_p)_{n \in \mathbb{N}}$ be the collection of finite-dimensional Erdős–Rényi measures in (25). The graph limit $|\Gamma|$ of $\Gamma \sim \varepsilon_p$ is deterministic and satisfies $\delta(F, \Gamma) = \varepsilon^{(n)}_p(F)$ a.s. for every $F \in \mathcal{G}_{[n]}$, $n \in \mathbb{N}$.

Example 4.5 (Random graph limit). In general, the graph limit of an exchangeable random graph $\Gamma$ is a random element of $\mathcal{D}^*$. For example, let $\varepsilon_{\alpha,\beta}$ be the mixture of Erdős–Rényi measures by the Beta distribution with parameter $(\alpha, \beta)$, i.e.,

$$\varepsilon_{\alpha,\beta}(\cdot) := \int_{[0,1]} \varepsilon_p(\cdot)\,B_{\alpha,\beta}(dp),$$

where $B_{\alpha,\beta}$ denotes the Beta distribution with parameter $(\alpha, \beta)$. The finite-dimensional distributions of $\Gamma \sim \varepsilon_{\alpha,\beta}$ are

(29) $$P\{\Gamma|_{[n]} = F\} = \varepsilon^{(n)}_{\alpha,\beta}(F) := \frac{\alpha^{\uparrow n_1}\,\beta^{\uparrow n_0}}{(\alpha+\beta)^{\uparrow \binom{n}{2}}}, \quad F \in \mathcal{G}_{[n]},\ n \in \mathbb{N},$$

where $n_1 := \sum_{1 \leq i < j \leq n} F^{ij}$, $n_0 := \binom{n}{2} - n_1$, and $a^{\uparrow k} := a(a+1)\cdots(a+k-1)$.

The distribution of $\Gamma$ corresponds to $\gamma_D$ for some $D\in\mathcal{D}^*$.
We close this section with some comments about the Lovász–Szegedy notion of graph limit.

(1) For a given sequence of graphs $(G_n)_{n\in\mathbb{N}}$, Lovász and Szegedy's graphon ([9], Section 1) is not unique. In contrast, our interpretation of a graph limit as a probability measure is essentially unique (up to sets of measure zero).
(2) Lovász and Szegedy's definition of graphon is a recasting of the Aldous–Hoover theorem in the special case of countable graphs; see Theorem 4.2 above.
(3) Our definition of a countable graph $G\in\mathcal{G}_{\mathbb{N}}$ as the projective limit of a compatible sequence of finite graphs $(G_n)_{n\in\mathbb{N}}$ should not be confused with a graph limit. Any $G\in\mathcal{G}_{\mathbb{N}}$ corresponds to a unique compatible sequence $(G_1,G_2,\ldots)$ of finite graphs, whereas a graph limit $D\in\mathcal{D}^*$ corresponds to the measurable set of countable graphs $|\cdot|^{-1}(D):=\{G\in\mathcal{G}_{\mathbb{N}}:|G|=D\}$.

4.5. Rewiring maps and their limits. As defined in Section 3.1, a rewiring map $W$ is a symmetric $\{0,1\}\times\{0,1\}$-valued array $(W^{ij})_{i,j\geq 1}$ with $(0,0)$ along the diagonal. We often regard $W$ as a map $\mathcal{G}_{\mathbb{N}}\to\mathcal{G}_{\mathbb{N}}$, a pair $(W_0,W_1)\in\mathcal{G}_{\mathbb{N}}\times\mathcal{G}_{\mathbb{N}}$, or a $\{0,1\}\times\{0,1\}$-valued array, without explicit specification.
Given any $W\in\mathcal{W}_{[n]}$ and any injection $\phi:[m]\to[n]$, we define $W^\phi:=(W^{\phi(i)\phi(j)})_{1\leq i,j\leq m}$. In particular, $W|_{[n]}:=(W_0|_{[n]},W_1|_{[n]})$ denotes the restriction of $W$ to a rewiring map $\mathcal{G}_{[n]}\to\mathcal{G}_{[n]}$ and, for any $\sigma\in S_{\mathbb{N}}$, $W^\sigma$ is the image of $W$ under relabeling by $\sigma$. We equip $\mathcal{W}_{\mathbb{N}}$ with the product-discrete topology induced by

$$d(W,W'):=1/\max\{n\in\mathbb{N}:W|_{[n]}=W'|_{[n]}\},\quad W,W'\in\mathcal{W}_{\mathbb{N}},$$
and the associated Borel $\sigma$-field $\sigma\langle\cdot|_{[n]}\rangle_{n\in\mathbb{N}}$ generated by the restriction maps.

Definition 4.7 (Exchangeable rewiring maps). A random rewiring map $W\in\mathcal{W}_{\mathbb{N}}$ is exchangeable if $W^\sigma=_{\mathcal{L}}W$ for all $\sigma\in S_{\mathbb{N}}$. A measure $\omega$ on $\mathcal{W}_{\mathbb{N}}$ is exchangeable if
$$\omega(S)=\omega(S^\sigma)\quad\text{for all }\sigma\in S_{\mathbb{N}},$$
for every measurable subset $S\subseteq\mathcal{W}_{\mathbb{N}}$, where $S^\sigma:=\{W^\sigma:W\in S\}$. In particular, a probability measure $\omega$ on $\mathcal{W}_{\mathbb{N}}$ is exchangeable if it determines the law of an exchangeable random rewiring map.
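The metric $d$ can only ever be evaluated from finite truncations of the two maps. A small sketch of that computation (the function name and the flat integer encoding of entries are ours; in practice each entry would be a pair):

```python
def restriction_distance(W, Wp):
    """d(W, W') = 1 / max{n : W|[n] = W'|[n]}, computed from two finite
    arrays of equal size, viewed as truncations of the infinite maps.
    Returns 0.0 when the arrays agree at every observed level."""
    nmax = len(W)
    agree = 0
    for n in range(1, nmax + 1):
        if all(W[i][j] == Wp[i][j] for i in range(n) for j in range(n)):
            agree = n
        else:
            break
    if agree == nmax:
        return 0.0  # indistinguishable at this truncation; true distance <= 1/nmax
    return 1.0 / agree if agree > 0 else 1.0
```

Two maps that first disagree at level 4 are at distance 1/3, reflecting that the product-discrete topology is the topology of convergence of all finite restrictions.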

Appealing to the notion of graph limits, we define the density of $V\in\mathcal{W}_{[m]}$ in $W\in\mathcal{W}_{[n]}$ by
$$\delta(V,W):=\mathrm{ind}(V,W)/n^{\downarrow m},$$
where, as in (7), $\mathrm{ind}(V,W)$ is the number of injections $\phi:[m]\to[n]$ for which $W^\phi=V$. We define the limiting density of $V\in\mathcal{W}_{[m]}$ in $W\in\mathcal{W}_{\mathbb{N}}$ by
$$\delta(V,W):=\lim_{n\to\infty}\delta(V,W|_{[n]}),\quad\text{if it exists.}$$

Definition 4.8 (Rewiring limits). The rewiring limit of $W\in\mathcal{W}_{\mathbb{N}}$ is the collection
$$|W|:=(\delta(V,W))_{V\in\bigcup_{m\in\mathbb{N}}\mathcal{W}_{[m]}},$$
provided $\delta(V,W)$ exists for every $V\in\bigcup_{m\in\mathbb{N}}\mathcal{W}_{[m]}$. If $\delta(V,W)$ does not exist for some $V$, then we put $|W|:=\partial$. We write $\mathcal{V}^*$ to denote the closure of $\{|W|:W\in\mathcal{W}_{\mathbb{N}}\}\setminus\{\partial\}$ in $[0,1]^{\bigcup_{m\in\mathbb{N}}\mathcal{W}_{[m]}}$.

Following the program of Section 2.2, every $\upsilon\in\mathcal{V}^*$ determines an exchangeable probability measure $\Omega_\upsilon$ on $\mathcal{W}_{\mathbb{N}}$. We call $\Omega_\upsilon$ the rewiring measure directed by $\upsilon$, which is the essentially unique exchangeable measure on $\mathcal{W}_{\mathbb{N}}$ such that $\Omega_\upsilon$-almost every $W\in\mathcal{W}_{\mathbb{N}}$ has $|W|=\upsilon$. Given any measure $\Upsilon$ on $\mathcal{V}^*$, we define the mixture of rewiring measures by
(30) $\Omega_\Upsilon(\cdot):=\int_{\mathcal{V}^*}\Omega_\upsilon(\cdot)\,\Upsilon(d\upsilon).$
The space $\mathcal{V}^*$ of rewiring limits corresponds to the closure of the exchangeable, dissociated probability measures on $\mathcal{W}_{\mathbb{N}}$ and, therefore, is compact under the metric analogous to (28),
$$\|\upsilon-\upsilon'\|:=\sum_{n\in\mathbb{N}}2^{-n}\sum_{V\in\mathcal{W}_{[n]}}|\upsilon(V)-\upsilon'(V)|,\quad\upsilon,\upsilon'\in\mathcal{V}^*.$$
As for the space of graph limits, we furnish $\mathcal{V}^*$ with the trace Borel $\sigma$-field of $[0,1]^{\bigcup_{m\in\mathbb{N}}\mathcal{W}_{[m]}}$.

Lemma 4.9. The following hold for general rewiring maps.

(i) As a map $\mathcal{G}_{\mathbb{N}}\to\mathcal{G}_{\mathbb{N}}$, every $W\in\mathcal{W}_{\mathbb{N}}$ is Lipschitz continuous with Lipschitz constant 1, and the rewiring operation is associative in the sense that

$$(W\circ W')\circ W''=W\circ(W'\circ W''),\quad\text{for all } W,W',W''\in\mathcal{W}_{\mathbb{N}}.$$
(ii) $(W(G))^\sigma=W^\sigma(G^\sigma)$ for all $W\in\mathcal{W}_{\mathbb{N}}$, $G\in\mathcal{G}_{\mathbb{N}}$, and $\sigma\in S_{\mathbb{N}}$.
(iii) If $W\in\mathcal{W}_{\mathbb{N}}$ is an exchangeable rewiring map, then $|W|$ exists almost surely. Moreover, for every $\upsilon\in\mathcal{V}^*$, $\Omega_\upsilon$-almost every $W\in\mathcal{W}_{\mathbb{N}}$ has $|W|=\upsilon$.

Proof. Parts (i) and (ii) are immediate from the definition of rewiring. Part (iii) follows from a straightforward modification of the almost sure existence of $|\Gamma|$ for any exchangeable random graph $\Gamma$, cf. Theorem 4.2. $\square$

5. Discrete-time chains and the rewiring measure

5.1. Conditional Aldous–Hoover theorem for graphs.

Theorem 5.1 (Conditional Aldous–Hoover theorem for graphs). Let $\Gamma$ be an exchangeable Feller chain on $\mathcal{G}_{\mathbb{N}}$ whose transitions are governed by transition probability measure $P$. Then there exists a measurable function $f:[0,1]^4\times\{0,1\}\to\{0,1\}$ satisfying $f(\cdot,b,c,\cdot,\cdot)=f(\cdot,c,b,\cdot,\cdot)$ such that, for every fixed $G\in\mathcal{G}_{\mathbb{N}}$, the distribution of $\Gamma'\sim P(G,\cdot)$ is identical to that of $\Gamma'^*:=(\Gamma'^{*ij})_{i,j\geq 1}$, where
(31) $\Gamma'^{*ij}=f(\zeta_\emptyset,\zeta_{\{i\}},\zeta_{\{j\}},\zeta_{\{i,j\}},G^{ij}),\quad i,j\geq 1,$
for $\{\zeta_\emptyset;(\zeta_{\{i\}})_{i\geq 1};(\zeta_{\{i,j\}})_{j>i\geq 1}\}$ i.i.d. Uniform random variables on $[0,1]$.

The representation in (31) requires a delicate argument. By the implication (ii) $\Rightarrow$ (i) in Proposition 4.3, we can immediately deduce a weaker representation for $\Gamma'$ by $\Gamma'^*=(\Gamma'^{*ij})_{i,j\geq 1}$ with
$$\Gamma'^{*ij}=f(\zeta_\emptyset,\zeta_{\{i\}},\zeta_{\{j\}},\zeta_{\{i,j\}},G|_{[i\vee j]}),\quad i,j\geq 1,$$
where $i\vee j:=\max(i,j)$. The stronger representation in (31) says that the conditional distribution of each entry of $\Gamma'^*$ depends only on the corresponding entry of $G$.

Proof of Theorem 5.1. Let $\Gamma=\{\Gamma_G:G\in\mathcal{G}_{\mathbb{N}}\}$ be an exchangeable Feller chain on $\mathcal{G}_{\mathbb{N}}$ with transition probability measure $P$. Let $\varepsilon_{1/2}$ denote the Erdős–Rényi measure with $p=1/2$. To prove (31), we generate a jointly exchangeable pair $(\Gamma,\Gamma')$, where

$$\Gamma\sim\varepsilon_{1/2}\quad\text{and}\quad\Gamma'\mid\Gamma=G\ \sim\ P(G,\cdot).$$

By exchangeability of $\varepsilon_{1/2}$ and the transition probability measure $P$, $(\Gamma,\Gamma')$ is jointly exchangeable, i.e., $(\Gamma^\sigma,\Gamma'^\sigma)=_{\mathcal{L}}(\Gamma,\Gamma')$ for all $\sigma\in S_{\mathbb{N}}$, and, therefore, determines a weakly exchangeable $\{0,1\}\times\{0,1\}$-valued array. By the Aldous–Hoover theorem, there exists a measurable function $f:[0,1]^4\to\{0,1\}\times\{0,1\}$ such that $(\Gamma,\Gamma')=_{\mathcal{L}}(\Gamma^*,\Gamma'^*)$, where
$$(\Gamma^*,\Gamma'^*)^{ij}=f(\zeta_\emptyset,\zeta_{\{i\}},\zeta_{\{j\}},\zeta_{\{i,j\}}),\quad i,j\geq 1,$$
for $\{\zeta_\emptyset;(\zeta_{\{i\}})_{i\geq 1};(\zeta_{\{i,j\}})_{j>i\geq 1}\}$ i.i.d. Uniform[0,1]. By separating components, we can write $f=(f_0,f_1)$ so that
$$(\Gamma^{*ij},\Gamma'^{*ij})=(f_0(\zeta_\emptyset,\zeta_{\{i\}},\zeta_{\{j\}},\zeta_{\{i,j\}}),\ f_1(\zeta_\emptyset,\zeta_{\{i\}},\zeta_{\{j\}},\zeta_{\{i,j\}})),\quad i,j\geq 1.$$
With $g_0$ and $g_1$ defined as in (26), we have

$$(g_0(\zeta_\emptyset,\zeta_{\{i\}},\zeta_{\{j\}},\zeta_{\{i,j\}}))_{i,j\geq 1}\sim\varepsilon_{1/2}$$
and

$$(f_0(\zeta_\emptyset,\zeta_{\{i\}},\zeta_{\{j\}},\zeta_{\{i,j\}}))_{i,j\geq 1}=_{\mathcal{L}}(g_0(\zeta_\emptyset,\zeta_{\{i\}},\zeta_{\{j\}},\zeta_{\{i,j\}}))_{i,j\geq 1}=(g_0(\zeta_{\{i,j\}}))_{i,j\geq 1}.$$
Using Kallenberg's notation [7], we write $\hat\zeta_J=(\zeta_I)_{I\subseteq J}$ for subsets $J\subset\mathbb{N}$, so that $\hat\zeta_{\{i,j\}}=(\zeta_\emptyset,\zeta_{\{i\}},\zeta_{\{j\}},\zeta_{\{i,j\}})$, $i,j\geq 1$, and $\hat\zeta_{\{i\}}=(\zeta_\emptyset,\zeta_{\{i\}})$, $i\geq 1$. By Theorem 7.28 of [7], there exists a measurable function $h:[0,1]^2\cup[0,1]^4\cup[0,1]^8\to[0,1]$ such that for $\{\eta_\emptyset;(\eta_{\{i\}})_{i\geq 1};(\eta_{\{i,j\}})_{j>i\geq 1}\}$ i.i.d. Uniform[0,1] random variables, $h(\hat\zeta_J,\hat\eta_J)\sim$ Uniform[0,1] and is independent of $\hat\zeta_J\setminus\{\zeta_J\}$ and $\hat\eta_J\setminus\{\eta_J\}$ for every $J\subseteq\mathbb{N}$ with two or fewer elements, and
$$g_0(\zeta_{\{i,j\}})=f_0(h(\hat\zeta_\emptyset,\hat\eta_\emptyset),h(\hat\zeta_{\{i\}},\hat\eta_{\{i\}}),h(\hat\zeta_{\{j\}},\hat\eta_{\{j\}}),h(\hat\zeta_{\{i,j\}},\hat\eta_{\{i,j\}}))\quad\text{a.s. for every } i,j\geq 1.$$
Now, put $\xi_\emptyset=h(\hat\zeta_\emptyset,\hat\eta_\emptyset)$, $\xi_{\{i\}}=h(\hat\zeta_{\{i\}},\hat\eta_{\{i\}})$, and $\xi_{\{i,j\}}=h(\hat\zeta_{\{i,j\}},\hat\eta_{\{i,j\}})$, so that $\{\xi_\emptyset;(\xi_{\{i\}})_{i\geq 1};(\xi_{\{i,j\}})_{j>i\geq 1}\}$ are i.i.d. Uniform[0,1] random variables. The generic Aldous–Hoover representation allows us to construct $(\Gamma^*,\Gamma'^*)$ by
$$(\Gamma^{*ij},\Gamma'^{*ij})=(f_0(\hat\xi_{\{i,j\}}),f_1(\hat\xi_{\{i,j\}})),\quad i,j\geq 1.$$
From Kallenberg's Theorem 7.28, $(\Gamma^*,\Gamma'^*)$ also has the representation
$$(\Gamma^{*ij},\Gamma'^{*ij})=(f_0(\hat\xi_{\{i,j\}}),f_1(\hat\xi_{\{i,j\}}))=(g_0(\hat\zeta_{\{i,j\}}),g_1(\hat\zeta_{\{i,j\}},\hat\eta_{\{i,j\}}))\quad\text{a.s.},\ i,j\geq 1,$$
where $g_1:[0,1]^8\to\{0,1\}$ is the composition of $f_1$ with $h$. Again by the Coding Lemma, we can represent $(\hat\zeta_{\{i,j\}},\hat\eta_{\{i,j\}})_{i,j\geq 1}$ by
$$(\hat\zeta_{\{i,j\}},\hat\eta_{\{i,j\}})_{i,j\geq 1}=_{\mathcal{L}}(a(\hat\alpha_{\{i,j\}}))_{i,j\geq 1},$$
where $\{\alpha_\emptyset;(\alpha_{\{i\}})_{i\geq 1};(\alpha_{\{i,j\}})_{j>i\geq 1}\}$ are i.i.d. Uniform[0,1].
It follows that $(\Gamma,\Gamma')$ possesses a joint representation by
(32) $\Gamma^{*ij}=g_0(\alpha_{\{i,j\}})$ and $\Gamma'^{*ij}=g_1'(\alpha_\emptyset,\alpha_{\{i\}},\alpha_{\{j\}},\alpha_{\{i,j\}}).$
By the Coding Lemma, we can represent

$$(g_0(\alpha_{\{i,j\}}),\alpha_{\{i,j\}})=_{\mathcal{L}}(g_0(\alpha_{\{i,j\}}),u(g_0(\alpha_{\{i,j\}}),\chi_{\{i,j\}}))$$
for $(\chi_{\{i,j\}})_{j>i\geq 1}$ i.i.d. Uniform[0,1] random variables independent of $(\alpha_{\{i,j\}})_{j>i\geq 1}$. Thus, we define
$$g_1''(\alpha_\emptyset,\alpha_{\{i\}},\alpha_{\{j\}},\chi_{\{i,j\}},G^{ij}):=g_1'(\alpha_\emptyset,\alpha_{\{i\}},\alpha_{\{j\}},u(G^{ij},\chi_{\{i,j\}}))=_{\mathcal{L}}g_1'(\alpha_\emptyset,\alpha_{\{i\}},\alpha_{\{j\}},\alpha_{\{i,j\}}),$$
so that

$$(\Gamma,\Gamma')=_{\mathcal{L}}(g_0(\alpha_{\{i,j\}}),\ g_1''(\alpha_\emptyset,\alpha_{\{i\}},\alpha_{\{j\}},\chi_{\{i,j\}},g_0(\alpha_{\{i,j\}})))_{i,j\geq 1}.$$
Conditioning on $\{\Gamma=G\}$ gives
$$P(G,\cdot)=\mathbb{P}\{(g_1''(\alpha_\emptyset,\alpha_{\{i\}},\alpha_{\{j\}},\chi_{\{i,j\}},G^{ij}))_{i,j\geq 1}\in\cdot\}\quad\text{for }\varepsilon_{1/2}\text{-almost every } G\in\mathcal{G}_{\mathbb{N}}.$$
By the Feller property, $P(G,\cdot)$ is continuous in the first argument and, thus, the above equality holds for all $G\in\mathcal{G}_{\mathbb{N}}$ and representation (31) follows. $\square$

5.2. Discrete-time characterization. The above construction of Γ from the single process Γ† is closely tied to the even stronger representation in Theorem 3.1, according to which Γ can be constructed from a single i.i.d. sequence of exchangeable rewiring maps.

Proposition 5.2. For an exchangeable probability measure $\omega$ on $\mathcal{W}_{\mathbb{N}}$, let $\Gamma^*_\omega=\{\Gamma^*_{G,\omega}:G\in\mathcal{G}_{\mathbb{N}}\}$ be constructed as in (13) from an i.i.d. sequence $W_1,W_2,\ldots$ from $\omega$. Then $\Gamma^*_\omega$ is an exchangeable Feller chain on $\mathcal{G}_{\mathbb{N}}$.

Proof. By our assumption that $W_1,W_2,\ldots$ are independent and identically distributed, each $\Gamma^*_{G,\omega}$ has the Markov property. Exchangeability of $\omega$ and Lemma 4.9(ii) pass exchangeability along to $\Gamma^*_\omega$. By Lemma 4.9(i), every $W\in\mathcal{W}_{\mathbb{N}}$ is Lipschitz continuous with Lipschitz constant 1, and so each $\Gamma^{*[n]}_{G,\omega}$ also has the Markov property for every $n\in\mathbb{N}$. By Proposition 4.3, $\Gamma^*_\omega$ is Feller and the proof is complete. $\square$
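The construction in Proposition 5.2 is directly simulable on a finite restriction: draw i.i.d. rewiring maps and apply them in sequence. A sketch (the sampler `sample_W`, here i.i.d. fair-coin entries, is an illustrative special case of an exchangeable law $\omega$, not the general construction):

```python
import random

def rewiring_chain(G, steps, sample_W, rng):
    """Gamma_0 = G and Gamma_m = W_m(Gamma_{m-1}) for i.i.d. rewiring maps
    W_1, W_2, ... drawn by sample_W.  The i.i.d. driving sequence gives the
    Markov property; exchangeability of the law of each W_m passes to the
    chain."""
    n = len(G)
    path = [G]
    for _ in range(steps):
        W0, W1 = sample_W(n, rng)
        G = [[W1[i][j] if G[i][j] else W0[i][j] for j in range(n)] for i in range(n)]
        path.append(G)
    return path
```

Because each $W_m$ acts entrywise, the restriction of the simulated path to $[m]$, $m\le n$, is again a Markov chain of the same form, mirroring the consistency used in the proof.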

Definition 5.3 (Rewiring Markov chains). We call $\Gamma^*_\omega$ constructed as in (13) a rewiring chain with rewiring measure $\omega$. If $\omega=\Omega_\Upsilon$ for some probability measure $\Upsilon$ on $\mathcal{V}^*$, we call $\Gamma^*_\Upsilon$ a rewiring chain directed by $\Upsilon$.

Theorem 5.4. Let $\Gamma$ be an exchangeable Feller chain on $\mathcal{G}_{\mathbb{N}}$. Then there exists an exchangeable probability measure $\omega$ on $\mathcal{W}_{\mathbb{N}}$ so that $\Gamma=_{\mathcal{L}}\Gamma^*_\omega$, where $\Gamma^*_\omega=\{\Gamma^*_{G,\omega}:G\in\mathcal{G}_{\mathbb{N}}\}$ is a rewiring Markov chain with rewiring measure $\omega$.

Proof. By Theorem 5.1, there exists a measurable function $f:[0,1]^4\times\{0,1\}\to\{0,1\}$ so that the transitions of $\Gamma$ can be constructed as in (31). From $f$, we define $f^*:[0,1]^4\to\{0,1\}^2$ by
$$f^*(a,b,c,d):=(f(a,b,c,d,0),\ f(a,b,c,d,1)).$$

Given i.i.d. Uniform[0,1] random variables $\{\zeta_\emptyset;(\zeta_{\{i\}})_{i\geq 1};(\zeta_{\{i,j\}})_{j>i\geq 1}\}$, we construct a weakly exchangeable $\{0,1\}^2$-valued array $W^*:=(W^{*ij})_{i,j\geq 1}$ by
$$W^{*ij}:=f^*(\zeta_\emptyset,\zeta_{\{i\}},\zeta_{\{j\}},\zeta_{\{i,j\}}),\quad i,j\geq 1.$$
We write $\omega$ to denote the essentially unique distribution of $W^*$. Treating $W^*$ as a rewiring map, the image $W^*(G)=\Gamma^*$ satisfies

$$\Gamma^{*ij}=f(\zeta_\emptyset,\zeta_{\{i\}},\zeta_{\{j\}},\zeta_{\{i,j\}},G^{ij})\quad\text{for all } i,j\geq 1,$$
for every $G\in\mathcal{G}_{\mathbb{N}}$; whence, $\Gamma^*_\omega$ in (13) has the correct transition probabilities. The proof is complete. $\square$

5.3. Characterizing the rewiring measure. Theorem 5.4 asserts that any exchangeable Feller chain on $\mathcal{G}_{\mathbb{N}}$ can be constructed as a rewiring chain. In the discussion surrounding Lemma 4.9 above, we identify every exchangeable probability measure $\omega$ with a unique probability measure $\Upsilon$ on $\mathcal{V}^*$ through the relation $\omega=\Omega_\Upsilon$, with $\Omega_\Upsilon$ defined in (30). Our next proposition records this fact.

Proposition 5.5. Let $\omega$ be an exchangeable probability measure on $\mathcal{W}_{\mathbb{N}}$. Then there exists an essentially unique probability measure $\Upsilon$ on $\mathcal{V}^*$ such that $\omega=\Omega_\Upsilon:=\int_{\mathcal{V}^*}\Omega_\upsilon\,\Upsilon(d\upsilon)$.

Proof. This is a combination of the Aldous–Hoover theorem and Lemma 4.9(iii). By Lemma 4.9(iii), $\omega$-almost every $W\in\mathcal{W}_{\mathbb{N}}$ possesses a rewiring limit, from which the change of variables formula gives
$$\omega(dW)=\int_{\mathcal{W}_{\mathbb{N}}}\Omega_{|V|}(dW)\,\omega(dV)=\int_{\mathcal{V}^*}\Omega_\upsilon(dW)\,|\omega|(d\upsilon)=\Omega_{|\omega|}(dW),$$
where $|\omega|$ denotes the image measure of $\omega$ by $|\cdot|:\mathcal{W}_{\mathbb{N}}\to\mathcal{V}^*\cup\{\partial\}$. $\square$

Proof of Theorem 3.1. Follows from Theorem 5.4 and Proposition 5.5. $\square$

5.4. The induced chain on $\mathcal{D}^*$. Any $\upsilon\in\mathcal{V}^*$ corresponds to a unique transition probability $P_\upsilon$ on $\mathcal{G}_{\mathbb{N}}\times\mathcal{G}_{\mathbb{N}}$, as defined in (14). Moreover, $\upsilon\in\mathcal{V}^*$ acts on $\mathcal{D}^*$ by composition of probability measures, as in (15). For $\upsilon,\upsilon'\in\mathcal{V}^*$, we define the product $P'=P_\upsilon P_{\upsilon'}$ by

(33) $P'(G,dG'):=\int_{\mathcal{G}_{\mathbb{N}}}P_{\upsilon'}(G'',dG')\,P_\upsilon(G,dG''),\quad G,G'\in\mathcal{G}_{\mathbb{N}}.$

Lemma 5.6. Let $W,W'\in\mathcal{W}_{\mathbb{N}}$ be independent exchangeable random rewiring maps such that $|W|$ and $|W'|$ exist almost surely. Then $|W'\circ W|$ exists almost surely and
$$P_{|W'\circ W|}=P_{|W|}P_{|W'|}\quad\text{a.s.}$$

Proof. Follows by independence of $W$ and $W'$, the definition of $W'\circ W$, and the definition of $P_{|W|}P_{|W'|}$ in (33). $\square$

Proof of Theorem 3.2. By Theorem 3.1, there exists a measure $\Upsilon$ on $\mathcal{V}^*$ such that we can regard $\Gamma$ as $\Gamma^*_\Upsilon$, the rewiring chain directed by $\Upsilon$. Thus, for every $G\in\mathcal{G}_{\mathbb{N}}$, $\Gamma_G=(\Gamma_m)_{m\geq 0}$ is determined by $\Gamma_0=G$ and

$$\Gamma_m=_{\mathcal{L}}(W_m\circ\cdots\circ W_1)(G),\quad m\geq 1,$$
where $W_1,W_2,\ldots$ are i.i.d. with distribution $\Omega_\Upsilon$.
For any $D\in\mathcal{D}^*$, $\Gamma_D=(\Gamma_m)_{m\geq 0}$ is generated from $\Gamma=\{\Gamma_G:G\in\mathcal{G}_{\mathbb{N}}\}$ by first taking $\Gamma_0\sim\gamma_D$ and then putting $\Gamma_D=\Gamma_G$ on the event $\{\Gamma_0=G\}$. By the Aldous–Hoover theorem along with exchangeability of $\gamma_D$ and $\Omega_\Upsilon$, the graph limit $|\Gamma_m|$ exists almost surely for every $m\geq 0$ and, thus, $|\Gamma_D|:=(|\Gamma_m|)_{m\geq 0}$ exists almost surely in $\mathcal{D}^*$.
Also, by Lemma 4.9(iii), $|W_1|,|W_2|,\ldots$ is i.i.d. from $\Upsilon$ and, by Lemma 5.6, $|W_m\circ\cdots\circ W_1|$ exists almost surely and

$$P_{|W_m\circ\cdots\circ W_1|}=P_{|W_1|}\cdots P_{|W_m|}\quad\text{a.s. for every } m\geq 1.$$
Finally, by Theorem 3.1,

$$|\Gamma_m|=_{\mathcal{L}}|(W_m\circ\cdots\circ W_1)(\Gamma_0)|=|\Gamma_0||W_1|\cdots|W_m|\quad\text{a.s. for every } m\geq 1,$$
and so each $|\Gamma_D|$ is a Markov chain on $\mathcal{D}^*$ with initial state $D$ and representation (17).

To establish the Feller property, we use the fact that $\mathcal{D}^*$ is compact and, therefore, any continuous $h:\mathcal{D}^*\to\mathbb{R}$ is uniformly continuous and bounded. The dominated convergence theorem and Lipschitz continuity of the action of $\upsilon\in\mathcal{V}^*$ on $\mathcal{D}^*$ immediately give part (i) of the Feller property. Part (ii) of the Feller property is trivial because $|\Gamma_{\mathcal{D}^*}|$ is a discrete-time Markov chain. $\square$

We could interpret $P'$ in (33) as the two-step transition probability measure of a time-inhomogeneous Markov chain on $\mathcal{G}_{\mathbb{N}}$ with first step governed by $P_\upsilon$ and second step governed by $P_{\upsilon'}$. By this interpretation, Theorem 3.2 leads to another description of any exchangeable Feller chain $\Gamma$ as a mixture of time-inhomogeneous Markov chains. In particular, we can construct $\Gamma$ by first sampling $Y_1,Y_2,\ldots$ i.i.d. from the unique directing measure $\Upsilon$. Each $Y_i$ induces an exchangeable transition probability $P_{Y_i}$ as in (14). At each time $m\geq 1$, given $\Gamma_{m-1},\ldots,\Gamma_0$ and $Y_1,Y_2,\ldots$, we generate $\Gamma_m\sim P_{Y_m}(\Gamma_{m-1},\cdot)$.

5.5. Examples.

Example 5.7 (Product Erdős–Rényi chains). Let $(p_0,p_1)\in(0,1)\times(0,1)$ and put $\omega=\varepsilon_{p_0}\otimes\varepsilon_{p_1}$, where $\varepsilon_p$ denotes the Erdős–Rényi measure with parameter $p$. This measure generates transitions out of state $G\in\mathcal{G}_{\mathbb{N}}$ by flipping a $p_k$-coin with $k=G^{ij}$ independently for every $j>i\geq 1$. The limiting trajectory in $\mathcal{D}^*$ is deterministic and settles down to a degenerate stationary distribution at the graph limit of an $\varepsilon_q$-random graph, with $q:=p_0/(1-p_1+p_0)$.

Example 5.8 (A reversible family of Markov chains). The model in Example 5.7 generalizes by mixing $(p_0,p_1)$ with respect to any measure on $[0,1]\times[0,1]$. For example, let $\alpha,\beta>0$ and define $\omega=\varepsilon_{P_0}\otimes\varepsilon_{P_1}$ for $(P_0,P_1)\sim B_{\alpha,\beta}\otimes B_{\beta,\alpha}$, where $B_{\alpha,\beta}$ is the Beta distribution with parameter $(\alpha,\beta)$. The finite-dimensional transition probabilities of this mixture model are

$$\mathbb{P}\{\Gamma^{[n]}_{m+1}=F'\mid\Gamma^{[n]}_m=F\}=\frac{\alpha^{\uparrow n_{00}}\,\beta^{\uparrow n_{01}}\,\beta^{\uparrow n_{10}}\,\alpha^{\uparrow n_{11}}}{(\alpha+\beta)^{\uparrow n_{0\bullet}}\,(\alpha+\beta)^{\uparrow n_{1\bullet}}},\quad F,F'\in\mathcal{G}_{[n]},$$
where $n_{rs}:=\sum_{1\leq i<j\leq n}\mathbf{1}\{F^{ij}=r,\ F'^{ij}=s\}$ and $n_{r\bullet}:=n_{r0}+n_{r1}$, $r,s\in\{0,1\}$.
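In Example 5.7 each edge evolves as an independent two-state chain with $\mathbb{P}(0\to 1)=p_0$ and $\mathbb{P}(1\to 1)=p_1$, so the limiting edge density $q$ solves the detailed balance relation $\pi(1)(1-p_1)=\pi(0)p_0$. A one-line check (the function name is ours):

```python
def stationary_edge_density(p0, p1):
    """Each edge is an independent {0,1}-chain with P(0 -> 1) = p0 and
    P(1 -> 1) = p1; solving pi(1)(1 - p1) = pi(0) p0 gives the stationary
    probability q = p0 / (1 - p1 + p0) that an edge is present."""
    return p0 / (1 - p1 + p0)
```

For example, with $p_0=0.2$ and $p_1=0.5$ the stationary density is $q=2/7$, and one verifies directly that $q(1-p_1)=(1-q)p_0$.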

We now let $\Gamma:=\{\Gamma_G:G\in\mathcal{G}_{\mathbb{N}}\}$ be a continuous-time exchangeable Feller process on $\mathcal{G}_{\mathbb{N}}$. Any such process can jump infinitely often in arbitrarily small time intervals; however, by the consistency property (4), every finite restriction $\Gamma^{[n]}$ is a Markov chain on a finite state space and can jump only finitely often in bounded intervals. As we show, the interplay between these possibilities restricts the behavior of $\Gamma$ in a precise way.
As before, $\mathrm{Id}_{\mathbb{N}}$ stands for the identity map $\mathcal{G}_{\mathbb{N}}\to\mathcal{G}_{\mathbb{N}}$. Let $\omega$ be an exchangeable measure on $\mathcal{W}_{\mathbb{N}}$ that satisfies
(34) $\omega(\{\mathrm{Id}_{\mathbb{N}}\})=0$ and $\omega(\{W\in\mathcal{W}_{\mathbb{N}}:W|_{[2]}\neq\mathrm{Id}_{[2]}\})<\infty.$
We proceed as in (19) and construct a process $\Gamma^*_\omega:=\{\Gamma^*_{G,\omega}:G\in\mathcal{G}_{\mathbb{N}}\}$ on $\mathcal{G}_{\mathbb{N}}$ from a Poisson point process $\mathbf{W}=\{(t,W_t)\}\subseteq\mathbb{R}^+\times\mathcal{W}_{\mathbb{N}}$ with intensity $dt\otimes\omega$.

Proposition 6.1. Let $\omega$ be an exchangeable measure satisfying (34). Then $\Gamma^*_\omega$ constructed in (19) is an exchangeable Feller process on $\mathcal{G}_{\mathbb{N}}$.

Proof. For each $n\in\mathbb{N}$, $\Gamma^{*[n]}_\omega$ is the restriction of $\Gamma^*_\omega$ to a process on $\mathcal{G}_{[n]}$. By construction, $(\Gamma^{*[n]}_\omega)_{n\in\mathbb{N}}$ is a compatible collection of Markov processes and, thus, determines a process $\Gamma^*_\omega$ on $\mathcal{G}_{\mathbb{N}}$. By exchangeability of $\omega$ and (34),
$$\omega(\{W\in\mathcal{W}_{\mathbb{N}}:W|_{[n]}\neq\mathrm{Id}_{[n]}\})=\omega\Big(\bigcup_{1\leq i<j\leq n}\{W\in\mathcal{W}_{\mathbb{N}}:W|_{\{i,j\}}\neq\mathrm{Id}_{\{i,j\}}\}\Big)\leq\binom{n}{2}\,\omega(\{W\in\mathcal{W}_{\mathbb{N}}:W|_{[2]}\neq\mathrm{Id}_{[2]}\})<\infty,$$
so that each finite restriction $\Gamma^{*[n]}_\omega$ jumps only finitely often in bounded intervals and the construction in (19) is well defined.
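Restricted to $[n]$, the Poissonian construction is a plain jump process: atoms whose restriction is non-identity arrive at a finite rate, and at each atom the state is rewired. A simulation sketch (the arguments `rate` and `sample_W` stand in for the restricted intensity, which this illustration assumes is given):

```python
import random

def finite_restriction_path(G, rate, T, sample_W, rng):
    """Atoms (t, W_t) with W_t|[n] != Id arrive at finite rate
    omega({W : W|[n] != Id}); between atoms the restricted process holds,
    and at each atom it jumps to W_t applied to the current state."""
    n = len(G)
    t, path = 0.0, [(0.0, G)]
    while True:
        t += rng.expovariate(rate)  # exponential holding time
        if t > T:
            return path
        W0, W1 = sample_W(n, rng)
        G = [[W1[i][j] if G[i][j] else W0[i][j] for j in range(n)] for i in range(n)]
        path.append((t, G))
```

The simulated path is càdlàg by construction: it is piecewise constant with finitely many jumps on $[0,T]$, in line with the finite-rate bound in the proof above.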

Corollary 6.2. Every exchangeable measure $\omega$ satisfying (34) determines the jump rates of an exchangeable Feller process on $\mathcal{G}_{\mathbb{N}}$.

By the Feller property, the transition law of each finite restriction $\Gamma^{[n]}$ is determined by the infinitesimal jump rates

(35) $Q_n(F,F'):=\lim_{t\downarrow 0}\frac{1}{t}\mathbb{P}\{\Gamma^{[n]}_t=F'\mid\Gamma^{[n]}_0=F\},\quad F,F'\in\mathcal{G}_{[n]},\ F\neq F'.$
Exchangeability (3) and consistency (4) of $\Gamma$ imply

(36) $Q_n(F^\sigma,F'^\sigma)=Q_n(F,F')$ for all permutations $\sigma:[n]\to[n]$, and
(37) $Q_m(F|_{[m]},F')=Q_n(F,\{F''\in\mathcal{G}_{[n]}:F''|_{[m]}=F'\})=\sum_{F''\in\mathcal{G}_{[n]}:\,F''|_{[m]}=F'}Q_n(F,F''),$
for all $F\in\mathcal{G}_{[n]}$ and $F'\in\mathcal{G}_{[m]}$ such that $F|_{[m]}\neq F'$, $m\leq n$. Thus, for every $G\in\mathcal{G}_{\mathbb{N}}$, $(Q_n)_{n\in\mathbb{N}}$ determines a pre-measure $Q(G,\cdot)$ on $\mathcal{G}_{\mathbb{N}}\setminus\{G\}$ by
$$Q(G,\{G'\in\mathcal{G}_{\mathbb{N}}:G'|_{[n]}=F'\}):=Q_n(G|_{[n]},F'),\quad F'\in\mathcal{G}_{[n]}\setminus\{G|_{[n]}\},\ n\in\mathbb{N}.$$
By (37), $Q(G,\cdot)$ is additive. By Carathéodory's extension theorem, $Q(G,\cdot)$ has a unique extension to a measure on $\mathcal{G}_{\mathbb{N}}\setminus\{G\}$.
In $\Gamma^*_\omega$ constructed above, $\omega$ determines the jump rates through

$$Q(G,\cdot)=\omega(\{W\in\mathcal{W}_{\mathbb{N}}:W(G)\in\cdot\})\quad\text{on }\mathcal{G}_{\mathbb{N}}\setminus\{G\}.$$
In fact, we can show that the infinitesimal rates of any exchangeable Feller process are determined by an exchangeable measure $\omega$ that satisfies (34). Our main theorem (Theorem 3.4) gives a Lévy–Itô representation of this jump measure.

6.1. Existence of an exchangeable jump measure. Let $(P_t)_{t\geq 0}$ be the Markov semigroup of an exchangeable Feller process $\Gamma$ on $\mathcal{G}_{\mathbb{N}}$. For every $t>0$, Theorems 5.1 and 5.4 guarantee the existence of an exchangeable probability measure $\omega_t$ on $\mathcal{W}_{\mathbb{N}}$ such that, for every $G\in\mathcal{G}_{\mathbb{N}}$,
(38) $\omega_t(\{W\in\mathcal{W}_{\mathbb{N}}:W(G)\in\cdot\})=\mathbb{P}\{\Gamma_t\in\cdot\mid\Gamma_0=G\}.$
By the relationship between $(\omega_t)_{t>0}$ and the time-homogeneous Markov process $(\Gamma_t)_{t\geq 0}$, $\omega_t$ is exchangeable for every $t>0$, and the Chapman–Kolmogorov theorem implies that $(\omega_t)_{t>0}$ satisfies
(39) $\omega_{t+s}(\{W\in\mathcal{W}_{\mathbb{N}}:W(G)\in\cdot\})=\int_{\mathcal{W}_{\mathbb{N}}}\omega_t(\{W'\in\mathcal{W}_{\mathbb{N}}:(W'\circ W)(G)\in\cdot\})\,\omega_s(dW),$
for all $s,t\geq 0$ and all $G\in\mathcal{G}_{\mathbb{N}}$. The family $(\omega_t)_{t>0}$ is not determined by these conditions, but time-homogeneity and (38) allow us to choose a family $(\omega_t)_{t>0}$ that satisfies the further semigroup property
$$\omega_{t+s}(\cdot)=\int_{\mathcal{W}_{\mathbb{N}}}\omega_t(\{W'\in\mathcal{W}_{\mathbb{N}}:W'\circ W\in\cdot\})\,\omega_s(dW),\quad s,t\geq 0.$$
By the Feller property, $W_t\sim\omega_t$ must also satisfy
$$P_tg(G)=\mathbb{E}g(W_t(G))\to g(G)\quad\text{as } t\downarrow 0\text{ for all } G\in\mathcal{G}_{\mathbb{N}},$$
for all continuous $g:\mathcal{G}_{\mathbb{N}}\to\mathbb{R}$. Thus, as $t\downarrow 0$, $W_t\to_P\mathrm{Id}_{\mathbb{N}}$ and $\omega_t\to_w\delta_{\{\mathrm{Id}_{\mathbb{N}}\}}$, the point mass at the identity map. (Here, $\to_P$ denotes convergence in probability and $\to_w$ denotes weak convergence of probability measures.) By right-continuity at $t=0$, we have a family $(\omega_t)_{t\geq 0}$ of measures on $\mathcal{W}_{\mathbb{N}}$.
We obtain an infinitesimal jump measure $\omega$ on $\mathcal{W}_{\mathbb{N}}\setminus\{\mathrm{Id}_{\mathbb{N}}\}$ through its finite-dimensional restrictions $\omega^{(n)}$ on $\mathcal{W}_{[n]}\setminus\{\mathrm{Id}_{[n]}\}$:
(40) $\omega^{(n)}(V):=\lim_{t\downarrow 0}\frac{1}{t}\omega_t(\{W\in\mathcal{W}_{\mathbb{N}}:W|_{[n]}=V\}),\quad V\in\mathcal{W}_{[n]}\setminus\{\mathrm{Id}_{[n]}\},\ n\in\mathbb{N}.$

Proposition 6.3. The family of measures $(\omega^{(n)})_{n\in\mathbb{N}}$ in (40) determines a unique exchangeable measure $\omega$ that satisfies (34).

Proof. The Borel $\sigma$-field $\sigma\langle\bigcup_{n\in\mathbb{N}}\mathcal{W}_{[n]}\rangle$ is generated by the $\pi$-system of events
$$\{W\in\mathcal{W}_{\mathbb{N}}:W|_{[n]}=V\},\quad V\in\mathcal{W}_{[n]},\ n\in\mathbb{N}.$$
For every $n\in\mathbb{N}$ and $V\in\mathcal{W}_{[n]}$, we define
$$\omega(\{W\in\mathcal{W}_{\mathbb{N}}:W|_{[n]}=V\})=\omega^{(n)}(V).$$
By construction, $(\omega^{(n)})_{n\in\mathbb{N}}$ is consistent and satisfies
$$\omega^{(m)}(V)=\sum_{V'\in\mathcal{W}_{[n]}:\,V'|_{[m]}=V}\omega^{(n)}(V')$$
for every $m\leq n$ and $V\in\mathcal{W}_{[m]}\setminus\{\mathrm{Id}_{[m]}\}$.
Therefore, the function $\omega(\{W\in\mathcal{W}_{\mathbb{N}}:W|_{[m]}=V\})=\omega^{(m)}(V)$ is additive, and Carathéodory's extension theorem guarantees a unique extension to a measure on $\mathcal{W}_{\mathbb{N}}\setminus\{\mathrm{Id}_{\mathbb{N}}\}$.
For each $n\in\mathbb{N}$, $\omega^{(n)}$ determines the jump rates of an exchangeable Markov chain on $\mathcal{G}_{[n]}$, and so $\omega^{(n)}(\mathcal{W}_{[n]}\setminus\{\mathrm{Id}_{[n]}\})<\infty$ for all $n\in\mathbb{N}$. Specifying $\omega(\{\mathrm{Id}_{\mathbb{N}}\})=0$ gives (34). Exchangeability of every $\omega_t$, $t\geq 0$, makes each $\omega^{(n)}$, $n\in\mathbb{N}$, exchangeable and, hence, implies $\omega$ is exchangeable. $\square$

Theorem 3.3 (Poissonian construction). Let $\Gamma=\{\Gamma_G:G\in\mathcal{G}_{\mathbb{N}}\}$ be an exchangeable Feller process on $\mathcal{G}_{\mathbb{N}}$. Then there exists an exchangeable measure $\omega$ satisfying (18) such that $\Gamma=_{\mathcal{L}}\Gamma^*_\omega$, as constructed in (19).

Proof. Let $\omega$ be the exchangeable measure from Proposition 6.3 and let $\mathbf{W}=\{(t,W_t)\}$ be a Poisson point process with intensity $dt\otimes\omega$. Since $\omega$ satisfies (34), Proposition 6.1 allows us to construct $\Gamma^*_\omega$ from $\mathbf{W}$ as in (19). The jump rates of each $\Gamma^{*[n]}_\omega$ are determined by a thinned version $\mathbf{W}^{[n]}$ of $\mathbf{W}$ that only maintains the atoms $(t,W_t)$ for which $W_t|_{[n]}\neq\mathrm{Id}_{[n]}$. By the thinning property of Poisson random measures (e.g. [6]), the intensity of $\mathbf{W}^{[n]}$ is $\omega^{(n)}$, as defined in (40), and it follows immediately that the jump rate from $F$ to $F'\neq F$ in $\Gamma^{*[n]}_\omega$ is
$$\omega^{(n)}(\{W\in\mathcal{W}_{[n]}:W(F)=F'\})=Q_n(F,F'),$$
for $Q_n(\cdot,\cdot)$ in (35). Furthermore,
$$Q_n(F,\mathcal{G}_{[n]}\setminus\{F\})=\omega^{(n)}(\{W\in\mathcal{W}_{[n]}:W(F)\neq F\})<\infty$$
for all $F\in\mathcal{G}_{[n]}$, $n\in\mathbb{N}$. By construction, $(\Gamma^{*[n]}_\omega)_{n\in\mathbb{N}}$ is a compatible collection of càdlàg exchangeable Markov chains governed by the finite-dimensional transition law of $\Gamma$. The proof is complete. $\square$

6.1.1. Standard $\omega$-process. For $\omega$ satisfying (18), we define the standard $\omega$-process $W^*_\omega:=(W^*_t)_{t\geq 0}$ on $\mathcal{W}_{\mathbb{N}}$ as the $\mathcal{W}_{\mathbb{N}}$-valued Markov process with $W^*_0=\mathrm{Id}_{\mathbb{N}}$ and infinitesimal jump rates
$$Q(W,dW'):=\omega(\{W''\in\mathcal{W}_{\mathbb{N}}:W''\circ W\in dW'\}),\quad W,W'\in\mathcal{W}_{\mathbb{N}}.$$
Associativity of the rewiring maps (Lemma 4.9(i)) makes $\mathbf{W}=\{W_V:V\in\mathcal{W}_{\mathbb{N}}\}$ a consistent Markov process, where $W_V:=(W^*_t\circ V)_{t\geq 0}$ for each $V\in\mathcal{W}_{\mathbb{N}}$. Thus, an alternative description of the construction of $\Gamma^*_\omega=\{\Gamma^*_{G,\omega}:G\in\mathcal{G}_{\mathbb{N}}\}$ in (19) is to put $\Gamma^*_{G,\omega}=(W^*_t(G))_{t\geq 0}$, where $W^*_\omega$ is a standard $\omega$-process.

Corollary 6.4. Let $\Gamma=\{\Gamma_G:G\in\mathcal{G}_{\mathbb{N}}\}$ be an exchangeable Feller process on $\mathcal{G}_{\mathbb{N}}$. Then there exists an exchangeable measure $\omega$ satisfying (18) such that $\Gamma_G=_{\mathcal{L}}(W^*_t(G))_{t\geq 0}$ for every $G\in\mathcal{G}_{\mathbb{N}}$, where $W^*_\omega=(W^*_t)_{t\geq 0}$ is a standard $\omega$-process.

6.2. Lévy–Itô representation. Our main theorem refines Theorem 3.3 with an explicit Lévy–Itô-type characterization of any exchangeable $\omega$ satisfying (34). The representation entails a few special measures, which we now introduce.

6.2.1. Single edge updates. For $j>i\geq 1$ and $k\in\{0,1\}$, we define $\rho^{ij}_k:\mathcal{G}_{\mathbb{N}}\to\mathcal{G}_{\mathbb{N}}$ by $G\mapsto G'=\rho^{ij}_k(G)$, where
$$G'^{i'j'}=\begin{cases}G^{i'j'},&i'j'\neq ij,\\ k,&i'j'=ij.\end{cases}$$
In words, $G'=\rho^{ij}_k(G)$ coincides with $G$ at every edge except $\{i,j\}$: irrespective of $G^{ij}$, $\rho^{ij}_k$ puts $G'^{ij}=k$. We call $\rho^{ij}_k$ a single edge update map and define the single edge update measures by
(41) $\Delta_k(\cdot):=\sum_{1\leq i<j<\infty}\delta_{\rho^{ij}_k}(\cdot),\quad k=0,1,$
where $\delta_{\rho^{ij}_k}$ denotes the point mass at $\rho^{ij}_k$.
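A single edge update is the simplest rewiring event: exactly one entry of the adjacency array changes, everything else is carried over. A one-function sketch (the name `single_edge_update` is ours):

```python
def single_edge_update(G, i, j, k):
    """rho^{ij}_k: return a copy of G in which the status of edge {i, j}
    is set to k and every other edge is left unchanged."""
    Gp = [row[:] for row in G]  # copy rows so the input graph is not mutated
    Gp[i][j] = Gp[j][i] = k
    return Gp
```

Under the measure in (41), such events occur independently for each pair $\{i,j\}$, which is why they contribute the edge-level component of the Lévy–Itô decomposition.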

6.2.2. Single vertex updates. For any vertex $i\in\mathbb{N}$ and $G\in\mathcal{G}_{\mathbb{N}}$, the sequence of edges adjacent to $i$ is an infinite $\{0,1\}$-valued sequence $(G^{ij})_{j\neq i}$. Let $x_0=x^1_0x^2_0\cdots$ and $x_1=x^1_1x^2_1\cdots$ be infinite sequences in $\{0,1\}$ and let $i\in\mathbb{N}$ be any distinguished vertex. For $x=(x_0,x_1)$ and $i\in\mathbb{N}$, we define $v^i_x:\mathcal{G}_{\mathbb{N}}\to\mathcal{G}_{\mathbb{N}}$ by $G'=v^i_x(G)$, where
(42) $G'^{i'j'}=\begin{cases}G^{i'j'},&i'\neq i\text{ and } j'\neq i,\\ x^{j'}_0,&i'=i\text{ and } G^{i'j'}=0,\\ x^{i'}_0,&j'=i\text{ and } G^{i'j'}=0,\\ x^{j'}_1,&i'=i\text{ and } G^{i'j'}=1,\\ x^{i'}_1,&j'=i\text{ and } G^{i'j'}=1.\end{cases}$
We call $v^i_x$ a single vertex update map, as it affects only edges incident to the distinguished vertex $i$.
When $\Gamma$ is the realization of an exchangeable random graph, the sequence $(\Gamma^{ij})_{j\neq i}$ is exchangeable for all fixed $i\in\mathbb{N}$. We ensure that the resulting $\Gamma'=v^i_x(\Gamma)$ is exchangeable by choosing $x$ from an exchangeable probability distribution on pairs $X=(X_0,X_1)$ of infinite binary sequences. Let $\mathcal{S}_2$ denote the space of $2\times 2$ stochastic matrices equipped with the Borel $\sigma$-field, i.e., each $S\in\mathcal{S}_2$ is a matrix $(S_{ii'})_{i,i'=0,1}$ such that
• $S_{ii'}\geq 0$ for all $i,i'=0,1$, and
• $S_{i0}+S_{i1}=1$ for $i=0,1$.
Therefore, each row of $S\in\mathcal{S}_2$ determines a probability measure on $\{0,1\}$. From a probability measure $\Sigma$ on $\mathcal{S}_2$, we write $W\sim\Omega^{(i)}_\Sigma$ to denote the probability measure of a random rewiring map $W=_{\mathcal{L}}v^i_X$ constructed by taking $S\sim\Sigma$ and, given $S$, generating $X_0$ and $X_1$ independently of one another according to
$$\mathbb{P}\{X^j_0=k\mid S\}=S_{0k},\quad k=0,1,\quad\text{and}\quad\mathbb{P}\{X^j_1=k\mid S\}=S_{1k},\quad k=0,1,$$
independently for every $j=1,2,\ldots$. We then define $W=v^i_X$ as in (42). We define the single vertex update measure directed by $\Sigma$ by

(43) $\Omega_\Sigma(\cdot):=\sum_{i=1}^\infty\Omega^{(i)}_\Sigma(\cdot).$
For the reader's convenience, we restate Theorem 3.4, whose proof relies on Lemmas 6.5, 6.6, and 6.7 from Section 6.3 below.

Theorem 3.4 (Lévy–Itô representation). Let $\Gamma^*_\omega=\{\Gamma^*_{G,\omega}:G\in\mathcal{G}_{\mathbb{N}}\}$ be an exchangeable Feller process constructed in (19) based on intensity $\omega$ satisfying (18). Then there exist unique constants $e_0,e_1,v\geq 0$, a unique probability measure $\Sigma$ on $2\times 2$ stochastic matrices, and a unique measure $\Upsilon$ on $\mathcal{V}^*$ satisfying
(44) $\Upsilon(\{I\})=0$ and $\int_{\mathcal{V}^*}(1-\upsilon^{(2)}_*)\,\Upsilon(d\upsilon)<\infty$
such that

(45) $\omega=\Omega_\Upsilon+v\Omega_\Sigma+e_0\Delta_0+e_1\Delta_1.$
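The vertex-level component of (45) acts through the maps $v^i_x$ of (42), with the pair $(x_0,x_1)$ drawn i.i.d. from the rows of a stochastic matrix. A sketch of one such update for a fixed matrix $S$ (function names are ours; drawing $S\sim\Sigma$ first would give the full mixture $\Omega^{(i)}_\Sigma$):

```python
import random

def single_vertex_update(G, i, x0, x1):
    """v^i_x: only edges incident to vertex i change; edge {i, j} becomes
    x1[j] if currently present and x0[j] if currently absent."""
    n = len(G)
    Gp = [row[:] for row in G]
    for j in range(n):
        if j != i:
            Gp[i][j] = Gp[j][i] = x1[j] if G[i][j] else x0[j]
    return Gp

def sample_vertex_update(G, i, S, rng):
    """Draw x0, x1 with i.i.d. entries from the rows of the 2x2 stochastic
    matrix S = [[S00, S01], [S10, S11]] and apply v^i_x (a sketch for a
    fixed S rather than S ~ Sigma)."""
    n = len(G)
    x0 = [1 if rng.random() < S[0][1] else 0 for _ in range(n)]
    x1 = [1 if rng.random() < S[1][1] else 0 for _ in range(n)]
    return single_vertex_update(G, i, x0, x1)
```

With $S$ the identity matrix the update fixes the graph, which is the degenerate case excluded by the normalization of $\Sigma$ in the decomposition.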

6.3. Key lemmas. In the following three lemmas, we always assume that $\omega$ is an exchangeable measure satisfying (34). For $n\in\mathbb{N}$, we define
(46) $\omega_n:=\omega\,\mathbf{1}_{\{W\in\mathcal{W}_{\mathbb{N}}:\,W|_{[n]}\neq\mathrm{Id}_{[n]}\}}$
as the restriction of $\omega$ to the event $\{W\in\mathcal{W}_{\mathbb{N}}:W|_{[n]}\neq\mathrm{Id}_{[n]}\}$. While $\omega$ can have infinite mass, the right-hand side of (34) assures that $\omega_n$ is finite for all $n\geq 2$. Exchangeability of $\omega$ implies that $\omega_n$ is invariant with respect to all finite permutations $\mathbb{N}\to\mathbb{N}$ that fix $[n]$. Thus, we recover a finite, exchangeable measure from $\omega_n$ by defining the $n$-shift of $W=(W^{ij})_{i,j\geq 1}$ by $\overleftarrow{W}_n:=(W^{n+i,n+j})_{i,j\geq 1}$ and letting $\overleftarrow{\omega}_n$ be the image of $\omega_n$ by the $n$-shift, i.e.,
(47) $\overleftarrow{\omega}_n(\cdot):=\omega_n(\{W\in\mathcal{W}_{\mathbb{N}}:\overleftarrow{W}_n\in\cdot\}).$

Lemma 6.5. Suppose $\omega$ is exchangeable and satisfies (34). Then $\omega$-almost every $W\in\mathcal{W}_{\mathbb{N}}$ possesses a rewiring limit $|W|$.

Proof. Let $\omega_n$ be the finite measure defined in (46). The image measure $\overleftarrow{\omega}_n$ in (47) is finite, exchangeable and, therefore, proportional to an exchangeable probability measure on $\mathcal{W}_{\mathbb{N}}$. By Lemma 4.9(iii), $\overleftarrow{\omega}_n$-almost every $W\in\mathcal{W}_{\mathbb{N}}$ possesses a rewiring limit. Since the rewiring limit of $W\in\mathcal{W}_{\mathbb{N}}$ depends only on $\overleftarrow{W}_n$, for every $n\in\mathbb{N}$, we conclude that $\omega_n$-almost every $W\in\mathcal{W}_{\mathbb{N}}$ possesses a rewiring limit. Finally, as $n\to\infty$,
$$\omega_n\uparrow\omega_\infty=\omega\,\mathbf{1}_{\{W\in\mathcal{W}_{\mathbb{N}}:\,W\neq\mathrm{Id}_{\mathbb{N}}\}},$$
the restriction of $\omega$ to $\{W\in\mathcal{W}_{\mathbb{N}}:W\neq\mathrm{Id}_{\mathbb{N}}\}$. By the left-hand side of (34), $\omega$ assigns zero mass to $\{W\in\mathcal{W}_{\mathbb{N}}:W=\mathrm{Id}_{\mathbb{N}}\}$, and so $\omega_\infty=\omega$. The monotone convergence theorem now implies that $\omega$-almost every $W\in\mathcal{W}_{\mathbb{N}}$ possesses a rewiring limit. The proof is complete. $\square$

Lemma 6.6. Suppose $\omega$ is exchangeable, satisfies (34), and $\omega$-almost every $W\in\mathcal{W}_{\mathbb{N}}$ has $|W|\neq I$. Then $\omega=\Omega_\Upsilon$ for some unique measure $\Upsilon$ satisfying (44).

Proof. Let $\omega_n$ be as in (46) and, for any measurable set $A\subseteq\mathcal{W}_{\mathbb{N}}$, let $\omega_n(\cdot\mid A)$ denote the measure conditional on the event $A$. For fixed $\upsilon\in\mathcal{V}^*$ and $A_n:=\{W\in\mathcal{W}_{\mathbb{N}}:\overleftarrow{W}_n|_{[2]}\neq\mathrm{Id}_{[2]}\}$, $\overleftarrow{\omega}_n$ satisfies

$$\overleftarrow{\omega}_n(A_n\mid|W|=\upsilon)=\overleftarrow{\omega}_n(\{W|_{[2]}\neq\mathrm{Id}_{[2]}\}\mid|W|=\upsilon)=\Omega_\upsilon(\{W|_{[2]}\neq\mathrm{Id}_{[2]}\})=1-\upsilon^{(2)}_*,$$
where $\upsilon^{(2)}_*$ is the component of $\upsilon\in\mathcal{V}^*$ corresponding to $\mathrm{Id}_{[2]}$. By Lemma 6.5, the image of $\overleftarrow{\omega}_n$ by taking rewiring limits, denoted $|\overleftarrow{\omega}_n|$, is well-defined and
$$\overleftarrow{\omega}_n(A_n)=\int_{\mathcal{W}_{\mathbb{N}}}(1-\upsilon^{(2)}_*)\,\overleftarrow{\omega}_n(\{|W|\in d\upsilon\})=\int_{\mathcal{V}^*}(1-\upsilon^{(2)}_*)\,|\overleftarrow{\omega}_n|(d\upsilon).$$
By the monotone convergence theorem, $|\omega_n|$ converges to a unique measure $\Upsilon:=|\omega|$ on $\mathcal{V}^*$. Since $\omega$-almost every $W\in\mathcal{W}_{\mathbb{N}}$ has $|W|\neq I$, this limiting measure satisfies $\Upsilon(\{I\})=0$. Moreover,
$$\int_{\mathcal{V}^*}(1-\upsilon^{(2)}_*)\,|\omega_n|(d\upsilon)\uparrow\int_{\mathcal{V}^*}(1-\upsilon^{(2)}_*)\,\Upsilon(d\upsilon)$$
and
$$\omega_n(A_n)\leq\omega(A_n)=\omega(\{W\in\mathcal{W}_{\mathbb{N}}:W|_{[2]}\neq\mathrm{Id}_{[2]}\})<\infty.$$
Therefore, $\Upsilon$ satisfies (44) and $\Omega_\Upsilon$ satisfies (34).

We must still show that $\omega=\Omega_\Upsilon$. By Proposition 5.5, we can write
$$\overleftarrow{\omega}_n(dW)=\int_{\mathcal{V}^*}\Omega_\upsilon(dW)\,|\overleftarrow{\omega}_n|(d\upsilon),$$
for each $n\in\mathbb{N}$. By the monotone convergence theorem and exchangeability,
$$\lim_{m\uparrow\infty}\overleftarrow{\omega}_m(\{W:W|_{[n]}=V\})=\omega(\{W:W|_{[n]}=V\}),$$
for every $V\in\mathcal{W}_{[n]}\setminus\{\mathrm{Id}_{[n]}\}$, $n\in\mathbb{N}$. Also, for every $m\geq 1$,
$$\overleftarrow{\omega}_m(\{W:W|_{[n]}=V\})=\int_{\mathcal{V}^*}\Omega_\upsilon(\{W:W|_{[n]}=V\})\,|\overleftarrow{\omega}_m|(d\upsilon),$$
which increases to $\Omega_\Upsilon(\{W:W|_{[n]}=V\})$ as $m\to\infty$. By uniqueness of limits, $\Omega_\Upsilon$ and $\omega$ agree on a generating $\pi$-system of the Borel $\sigma$-field and, therefore, they agree on all measurable subsets of $\mathcal{W}_{\mathbb{N}}$. The proof is complete. $\square$

Lemma 6.7. Suppose $\omega$ is exchangeable, satisfies (34), and $\omega$-almost every $W\in\mathcal{W}_{\mathbb{N}}$ has $|W|=I$. Then there exist unique constants $e_0,e_1,v\geq 0$ and a unique probability measure $\Sigma$ on $\mathcal{S}_2$ such that
$$\omega=v\Omega_\Sigma+e_0\Delta_0+e_1\Delta_1,$$
where $\Delta_k$ is defined in (41) and $\Omega_\Sigma$ is defined in (43).

Proof. For any $W\in\mathcal{W}_{\mathbb{N}}$, $i\in\mathbb{N}$, and $\varepsilon,\delta>0$, we define
$$L_W(i):=\limsup_{n\to\infty}n^{-1}\sum_{j=1}^n\mathbf{1}\{W^{ij}\neq(0,1)\},$$
$$S_W(\varepsilon):=\{i\in\mathbb{N}:L_W(i)\geq\varepsilon\},$$
$$|S_W(\varepsilon)|:=\limsup_{n\to\infty}n^{-1}\sum_{i=1}^n\mathbf{1}\{i\in S_W(\varepsilon)\},\quad\text{and}$$
$$V(\varepsilon,\delta):=\{W\in\mathcal{W}_{\mathbb{N}}:|S_W(\varepsilon)|\geq\delta\}.$$
We can partition the event $\{W\in\mathcal{W}_{\mathbb{N}}:|W|=I\}$ by $V\cup E$, where
$$V:=\bigcup_{i=1}^\infty\{W\in\mathcal{W}_{\mathbb{N}}:L_W(i)>0\}\quad\text{and}\quad E:=\bigcap_{i=1}^\infty\{W\in\mathcal{W}_{\mathbb{N}}:L_W(i)=0\}.$$
Thus, we can decompose $\omega$ as the sum of mutually singular measures

$$\omega=\omega_V+\omega_E,$$
where $\omega_V$ and $\omega_E$ are the restrictions of $\omega$ to $V$ and $E$, respectively. As in (46), we write

$$\omega_{V,n}:=\omega_V\,\mathbf{1}_{\{W\in\mathcal{W}_{\mathbb{N}}:\,W|_{[n]}\neq\mathrm{Id}_{[n]}\}}\quad\text{and}\quad\omega_{E,n}:=\omega_E\,\mathbf{1}_{\{W\in\mathcal{W}_{\mathbb{N}}:\,W|_{[n]}\neq\mathrm{Id}_{[n]}\}}$$
to denote the restrictions of $\omega$ to $V\cap\{W\in\mathcal{W}_{\mathbb{N}}:W|_{[n]}\neq\mathrm{Id}_{[n]}\}$ and $E\cap\{W\in\mathcal{W}_{\mathbb{N}}:W|_{[n]}\neq\mathrm{Id}_{[n]}\}$, respectively. By (34), both $\omega_{V,n}$ and $\omega_{E,n}$ are finite for all $n\geq 2$ and their images under the $n$-shift, denoted $\overleftarrow{\omega}_{V,n}$ and $\overleftarrow{\omega}_{E,n}$, are exchangeable.

Case V: Restricting our attention first to $V$, we note that $|W|=I$ implies
$$\limsup_{n\to\infty}n^{-2}\sum_{1\leq i,j\leq n}\mathbf{1}\{W^{ij}\neq(0,1)\}=\lim_{n\to\infty}n^{-2}\sum_{1\leq i,j\leq n}\mathbf{1}\{W^{ij}\neq(0,1)\}=0.$$
By the law of large numbers for exchangeable sequences, we can replace the upper limits in our definition of $L_W(i)$ and $|S_W(\varepsilon)|$ by proper limits for $\omega$-almost every $W\in\mathcal{W}_{\mathbb{N}}$. For any $\varepsilon,\delta>0$, $\omega$-almost every $W\in V(\varepsilon,\delta)$ satisfies
$$\limsup_{n\to\infty}n^{-2}\sum_{1\leq i,j\leq n}\mathbf{1}\{W^{ij}\neq(0,1)\}=\lim_{n\to\infty}n^{-2}\sum_{1\leq i,j\leq n}\mathbf{1}\{W^{ij}\neq(0,1)\}$$
$$=\lim_{n\to\infty}\lim_{m\to\infty}(nm)^{-1}\sum_{i=1}^n\sum_{j=1}^m\mathbf{1}\{W^{ij}\neq(0,1)\}$$
$$=\lim_{n\to\infty}n^{-1}\sum_{i=1}^n\lim_{m\to\infty}m^{-1}\sum_{j=1}^m\mathbf{1}\{W^{ij}\neq(0,1)\}$$
$$\geq\lim_{n\to\infty}n^{-1}\sum_{i=1}^n\varepsilon\,\mathbf{1}\{i\in S_W(\varepsilon)\}\geq\varepsilon\delta>0,$$
contradicting the assumption that $\omega$-almost every $W\in\mathcal{W}_{\mathbb{N}}$ has $|W|=I$. (In the above string of inequalities, passage from the second to the third line follows from exchangeability and the law of large numbers, and the interchange of sum and limit in the fourth line is permitted because the sum is finite and each of the limits exists by exchangeability and the law of large numbers.) Therefore, $\omega$ assigns zero mass to $V(\varepsilon,\delta)$ for all $\varepsilon,\delta>0$, and so $\omega$-almost every $W\in\mathcal{W}_{\mathbb{N}}$ must have $|S_W(\varepsilon)|=0$ for all $\varepsilon>0$. In this case, either $\#S_W(\varepsilon)<\infty$ or $\#S_W(\varepsilon)=\infty$.
Treating the latter case first, we define the $n$-shift $\overleftarrow{W}_n$ of $W\in\mathcal{W}_{\mathbb{N}}$ as above, so that $\overleftarrow{\omega}_{V,n}$ is finite, exchangeable, and assigns positive mass to the event

$$\{W\in\mathcal{W}_{\mathbb{N}}:|S_W(\varepsilon)|=0\text{ and }\#S_{\overleftarrow{W}_n}(\varepsilon)=\infty\}.$$
We can regard $\overleftarrow{\omega}_{V,n}$ as a constant multiple of an exchangeable probability measure $\theta_n$ on $\mathcal{W}_{\mathbb{N}}$, so that for fixed $n\in\mathbb{N}$ the sequence $(\mathbf{1}\{L_{\overleftarrow{W}_n}(i)\geq\varepsilon\})_{i\in\mathbb{N}}$ is an exchangeable $\{0,1\}$-valued sequence under $\theta_n$. By de Finetti's theorem,
$$|S_W(\varepsilon)|=\lim_{m\to\infty}m^{-1}\sum_{i=1}^m\mathbf{1}\{L_{\overleftarrow{W}_n}(i)\geq\varepsilon\}=0\quad\theta_n\text{-a.s.},$$
which implies $L_{\overleftarrow{W}_n}(i)=0$ for all $i\in\mathbb{N}$, $\overleftarrow{\omega}_{V,n}$-a.e., a contradiction.
Now consider the case $\#S_W(\varepsilon)<\infty$. We claim that $\#S_W(\varepsilon)=1$ almost surely. Suppose $\#S_W(\varepsilon)=k\geq 2$, so that $\{i_1<\cdots<i_k\}$ is the list of indices for which $L_W(i_j)\geq\varepsilon$. Taking $n=i_k-1$, we know that $\omega_{V,n}$ is a finite measure that is invariant under permutations that

fix [n]. Under the n-shift, ←−ωV,n is finite, exchangeable, and assigns equal mass to events of the form Ai = W N : L (i) ε, L (j) < ε for j , i , i N . { ∈ W ←−Wn ≥ ←−Wn } ∈ Thus, if ωVn(Ai) > 0 for any i N, then ωVn has infinite total mass. Since this argument ←− , ∈ ←− , applies as long as #SW(ε) = k 2, it follows that ω must assign zero mass to W N : ≥ { ∈ W #SW(ε) > 1 . } On the other hand, if #SW(ε) = 1, then ω must assign the same mass to all events

Ai = W N : LW(i) ε, LW(j) < ε for all j , i . { ∈ W ≥ } In this case, Xn ωV( W N : W , Id ) ωVn(Ai) = nωVn(A ) < , for all n N, { ∈ W |[n] [n]} ≤ , , 1 ∞ ∈ i=1 which does not contradict (18). Since the above holds for all ε > 0, there must be a unique vertex i N for which ij ∈ LW(i) > 0. By exchangeability and the right-hand side of (34),(W )j,i is governed by a finite measure viΣi, where vi 0 and Σi is the unique probability measure guaranteed by de Finetti’s theorem. Thus, ≥

X∞ (i) ωV = viΩ . Σi i=1 Exchangeability of ω requires that each term is equal, so there is a unique probability measure Σ and unique v 0 for which ≥ X∞ (i) ωV = v ΩΣ . i=1

Just as any exchangeable rewiring map determines a transition kernel from $\mathcal{G}_{\mathbb{N}}$ to $\mathcal{G}_{\mathbb{N}}$, any exchangeable $\{0,1\}\times\{0,1\}$-valued sequence determines an exchangeable transition kernel from $\{0,1\}^{\mathbb{N}}$ to $\{0,1\}^{\mathbb{N}}$: let $W = (W^i_0, W^i_1)_{i\ge 1}$ be $\{0,1\}\times\{0,1\}$-valued and, for $x = (x^i)_{i\ge 1}$ in $\{0,1\}^{\mathbb{N}}$, define $x' = W(x)$ by
\[
x'^{\,i} = \begin{cases} W^i_0, & x^i = 0,\\ W^i_1, & x^i = 1.\end{cases}
\]
A straightforward argument along the same lines as Theorem 5.1 (with the obvious substitution of de Finetti's theorem for Aldous–Hoover) puts exchangeable $\{0,1\}\times\{0,1\}$-valued sequences in correspondence with exchangeable transition kernels on $\{0,1\}^{\mathbb{N}}$. The ergodic measures in this space are in correspondence with the unit square $[0,1]\times[0,1]$ and, thus, also with the space of $2\times 2$ stochastic matrices, allowing us to regard $\Sigma$ as a probability measure on $\mathcal{S}_2$. $\square$

Case E: On the event $E$, $W^{ij} \neq (0,1)$ occurs for a zero proportion of all pairs $ij$ and also a zero proportion of all $j \neq i$ for each fixed $i$. For $W \in \mathcal{W}_{\mathbb{N}}$, we define
\[
E_W := \{ij : W^{ij} \neq (0,1)\},
\]
which must satisfy either $\#E_W = \infty$ or $\#E_W < \infty$. Also, since $\overleftarrow{\omega}_{E,n}$ is finite, we can write $\overleftarrow{\omega}_{E,n} \propto \theta_n$ for some exchangeable probability measure $\theta_n$ on $\mathcal{W}_{\mathbb{N}}$, for each $n \ge 2$.

Suppose first that $\#E_W = \infty$. Then $\overleftarrow{\omega}_{E,2} < \infty$ implies that $\overleftarrow{W}_2 \sim \theta_2$ is an exchangeable $\{0,1\}$-valued array with
\[
\limsup_{n\to\infty} n^{-2}\sum_{1\le i,j\le n} \mathbf{1}\{\overleftarrow{W}_2^{ij} \neq (0,1)\} = 0.
\]
By the Aldous–Hoover theorem, $\#E_{\overleftarrow{W}_2} = 0$, $\overleftarrow{\omega}_{E,2}$-almost everywhere, forcing all edges in $E_W$ to be in the first row of $W$. As $m \to \infty$, $\overleftarrow{\omega}_{E,m}$ forces all edges of $E_W$ to be in the first $m-1$ rows of $W \in \mathcal{W}_{\mathbb{N}}$. By exchangeability, all elements of $E_W$ must be in the same row for $\omega_E$-almost every $W \in \mathcal{W}_{\mathbb{N}}$. Thus, $\omega_E$ assigns equal mass to each event $R_i \subseteq E$, where
\[
R_i := \{W \in \mathcal{W}_{\mathbb{N}} : i'j' \in E_W \text{ if and only if } i \in \{i', j'\}\}
\]
is the event that all non-trivial entries of $W$ are in the $i$th row. For every $n \ge 2$, $\omega_{E,n}$ is exchangeable with respect to permutations that fix $[n]$, and so $(W^{n,n+j})_{j\ge 1}$ is exchangeable for each $n \ge 1$ and
\[
\limsup_{m\to\infty} m^{-1}\sum_{j=1}^{m} \mathbf{1}\{W^{n,n+j} \neq (0,1)\} = 0.
\]
By de Finetti's theorem, $W^{n,n+j} = (0,1)$ for all $j \ge 1$ almost surely, and so $\omega_{E,n}$-almost every $W \in \mathcal{W}_{\mathbb{N}}$ must have $\#E_W < \infty$, for every $n \ge 2$. By the monotone convergence theorem, $\omega_E$-almost every $W$ must have $\#E_W < \infty$.

Now suppose that $\#E_W < \infty$. Then $\omega_E = \sum_{i=1}^{\infty} \omega_{E,R_i}$, where $\omega_{E,R_i}$ is the restriction of $\omega_E$ to $R_i$ for every $i \in \mathbb{N}$. By exchangeability, every $\omega_{E,R_i}$ must determine the same exchangeable measure on $\{0,1\}$-valued sequences. By a similar argument as for Case V above, we can rule out all possibilities except the event that $W^{ij} \neq (0,1)$ for a unique pair $ij$, $i \neq j$. For suppose $\#E_W = k \ge 2$; then there is a unique $i \in \mathbb{N}$ such that
\[
i'j' \in E_W \quad\text{if and only if}\quad i \in \{i', j'\}.
\]
Without loss of generality, we assume $i = 1$, so that we can identify $E_W = \{j \in \mathbb{N} : W^{1j} \neq (0,1)\} = \{1 \le i_1 < \cdots < i_k\}$ with a finite subset of $\mathbb{N}$. We define $n = i_k - 1$ and $X_{W,n} = (\mathbf{1}\{W^{1,n+j} \neq (0,1)\})_{j\ge 1}$. By assumption, $\overleftarrow{\omega}_{E,n}$ is finite and exchangeable, and so $X_{W,n}$ is an exchangeable $\{0,1\}$-valued sequence with only a single non-zero entry. By exchangeability, $\overleftarrow{\omega}_{E,n}$ assigns the same mass to all sequences in $\{X^{(n+1,n+j)}_{W,n} : j \ge 1\}$, where $(n+1, n+j)$ represents the transposition of elements $n+1$ and $n+j$. It follows that $X_{W,n}$ can have positive mass only if $\overleftarrow{\omega}_{E,n}$ has infinite total mass, a contradiction.

On the other hand, if $\#E_W = 1$, then we can express $\omega_E$ as
\[
\omega_E = \sum_{j>i\ge 1} \big(c_0^{ij}\,\delta_{\rho_0^{ij}} + c_1^{ij}\,\delta_{\rho_1^{ij}} + c_{10}^{ij}\,\delta_{\rho_{10}^{ij}}\big),
\]
where $c_0^{ij}, c_1^{ij}, c_{10}^{ij} \ge 0$, $\rho_0^{ij}$ and $\rho_1^{ij}$ are the single edge update maps defined before (41), and $\rho_{10}^{ij}$ is the rewiring map $W$ with all $W^{i'j'} = (0,1)$ except $W^{ij} = (1,0)$. This measure clearly assigns zero mass to $\mathrm{Id}_{\mathbb{N}}$, but it is exchangeable only if there are $c_0, c_1, c_{10} \ge 0$ such that $c_0^{ij} = c_0$, $c_1^{ij} = c_1$, and $c_{10}^{ij} = c_{10}$ for all $j > i \ge 1$. This measure is exchangeable and satisfies the right-hand side of (18). Since $\rho_{10}^{ij}$ gives $\rho_{10}^{ij}(G) = \rho_1^{ij}(G)$ whenever $G^{ij} = 0$ and $\rho_{10}^{ij}(G) = \rho_0^{ij}(G)$ whenever $G^{ij} = 1$, we define $e_0 = c_0 + c_{10}$ and $e_1 = c_1 + c_{10}$. The proof is complete. $\square$
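The coordinate-wise update rule underlying these kernels can be made concrete: each entry $W^i = (W^i_0, W^i_1)$ of a $\{0,1\}\times\{0,1\}$-valued sequence prescribes the new value of coordinate $i$ according to its current value. A minimal sketch, truncated to finitely many coordinates (the function name is illustrative, not notation from the paper):

```python
# Sketch: apply a {0,1}x{0,1}-valued sequence W to a {0,1}-valued
# configuration x, as in the kernel x' = W(x).  Each W[i] = (w0, w1)
# gives the new value of coordinate i when x[i] = 0 (use w0) or
# x[i] = 1 (use w1); the entry (0, 1) acts as the identity.

def apply_sequence(W, x):
    """Coordinate-wise update: x'[i] = W[i][x[i]]."""
    return [w[xi] for w, xi in zip(W, x)]

W = [(0, 1), (1, 1), (0, 0), (1, 0)]  # identity, set-to-1, set-to-0, flip
x = [1, 0, 1, 1]
print(apply_sequence(W, x))  # [1, 1, 0, 0]
```

The four possible entries correspond to the four cases in the argument: $(0,1)$ is trivial, $(1,1)$ and $(0,0)$ are single-state updates, and $(1,0)$ flips the current state, as in the description of $\rho^{ij}_{10}$.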



Proof of Theorem 3.4. By Lemma 6.5, $\omega$-almost every $W \in \mathcal{W}_{\mathbb{N}}$ possesses a rewiring limit $|W|$, and we can express $\omega$ as a sum of singular components:
\[
\omega = \omega\,\mathbf{1}\{|W| \neq I\} + \omega\,\mathbf{1}\{|W| = I\}.
\]
By Lemma 6.6, the first term corresponds to $\Omega_\Upsilon$ for some unique measure $\Upsilon$ satisfying (44). By Lemma 6.7, the second term corresponds to $v\Omega_\Sigma + e_0\epsilon_0 + e_1\epsilon_1$ for a unique probability measure $\Sigma$ on $\mathcal{S}_2$ and unique constants $v, e_0, e_1 \ge 0$. The proof is complete. $\square$

6.4. The projection into $\mathcal{D}^*$. Let $\Gamma = \{\Gamma_G : G \in \mathcal{G}_{\mathbb{N}}\}$ be an exchangeable Feller process and, for $D \in \mathcal{D}^*$, recall the definition of $\Gamma_D = (\Gamma_t)_{t\ge 0}$ as the process obtained by first taking $\Gamma_0 \sim \gamma_D$ and then putting $\Gamma_D = \Gamma_G$ on the event $\Gamma_0 = G$. We now show that the projection $|\Gamma_D| = (|\Gamma_t|)_{t\ge 0}$ into $\mathcal{D}^*$ exists simultaneously for all $t \ge 0$ with probability one. We also prove that these projections determine a Feller process on $\mathcal{D}^*$ by $|\Gamma_{\mathcal{D}^*}| = \{|\Gamma_D| : D \in \mathcal{D}^*\}$.

Proposition 6.8. Let $\Gamma = \{\Gamma_G : G \in \mathcal{G}_{\mathbb{N}}\}$ be an exchangeable Feller process on $\mathcal{G}_{\mathbb{N}}$. For any $D \in \mathcal{D}^*$, let $\Gamma_D = (\Gamma_t)_{t\ge 0}$ be obtained by taking $\Gamma_0 \sim \gamma_D$ and putting $\Gamma_D = \Gamma_G$ on the event $\Gamma_0 = G$. Then the graph limit $|\Gamma_t|$ exists almost surely for every $t \ge 0$.

Proof. For each $D \in \mathcal{D}^*$, this follows immediately by exchangeability of $\Gamma_0 \sim \gamma_D$, exchangeability of the transition law of $\Gamma$, and the Aldous–Hoover theorem. $\square$

Proposition 6.8 only establishes that the graph limits exist at any countable collection of times. Theorem 3.7 establishes that $|\Gamma_t|$ exists simultaneously for all $t \ge 0$.

Theorem 3.7. Let $\Gamma = \{\Gamma_G : G \in \mathcal{G}_{\mathbb{N}}\}$ be an exchangeable Feller process on $\mathcal{G}_{\mathbb{N}}$. Then $|\Gamma_{\mathcal{D}^*}|$ exists almost surely and is a Feller process on $\mathcal{D}^*$. Moreover, each $|\Gamma_D|$ is continuous at all $t > 0$ except possibly at the times of Type (III) discontinuities.
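Graph limits in $\mathcal{D}^*$ are determined by limiting densities $\delta(F, \cdot)$ of finite graphs, obtained as $n \to \infty$ from empirical densities over injections $[m] \to [n]$. For small graphs the empirical density can be computed by brute force; the sketch below assumes graphs are given as symmetric 0/1 adjacency matrices (nested tuples), a representation chosen purely for illustration:

```python
# Sketch: empirical density of an m-vertex graph F in an n-vertex
# graph G, averaging over all injections phi: [m] -> [n].  The
# normalization is the falling factorial n(n-1)...(n-m+1).
from itertools import permutations

def density(F, G):
    m, n = len(F), len(G)
    if n < m:
        return 0.0
    hits = total = 0
    for phi in permutations(range(n), m):  # injective maps [m] -> [n]
        total += 1
        # count phi when the induced graph G^phi equals F exactly
        if all(G[phi[i]][phi[j]] == F[i][j]
               for i in range(m) for j in range(m) if i != j):
            hits += 1
    return hits / total

edge = ((0, 1), (1, 0))
triangle = ((0, 1, 1), (1, 0, 1), (1, 1, 0))
print(density(edge, triangle))  # 1.0: every induced pair is an edge
```

In the notation of the paper, $\delta(F, G)$ is the limit of this quantity along the restrictions $G|_{[n]}$ as $n \to \infty$, when that limit exists.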

Recall that every $D \in \mathcal{D}^*$ corresponds to a probability measure $\gamma_D$ on $\mathcal{G}_{\mathbb{N}}$. We equip $\mathcal{D}^*$ with the metric (28), under which it is a compact space. Furthermore, any $\upsilon \in \mathcal{V}^*$ corresponds to a transition probability $P_\upsilon$ on $\mathcal{G}_{\mathbb{N}} \times \mathcal{G}_{\mathbb{N}}$ and, thus, determines a Lipschitz continuous map $\mathcal{D}^* \to \mathcal{D}^*$, $D \mapsto D\upsilon$, as in (15). Consequently,
\[
\|D\upsilon - D'\upsilon\| \le \|D - D'\| \quad\text{for all } D, D' \in \mathcal{D}^* \text{ and all } \upsilon \in \mathcal{V}^*.
\]

Proof of Theorem 3.7. Existence. Fix any $D \in \mathcal{D}^*$ and let $\Gamma_{[0,1]} = (\Gamma^{ij}_{[0,1]})_{i,j\ge 1}$ denote the array of edge trajectories in $\Gamma_D$ on the interval $[0,1]$. By the consistency assumption for $\Gamma$, each $\Gamma^{ij}_{[0,1]}$ is an alternating collection of finitely many 0s and 1s. Formally, each $\Gamma^{ij}_{[0,1]} = y$ is a partition of $[0,1]$ into finitely many non-overlapping intervals $J_l$ along with an initial status $y_0 \in \{0,1\}$. The starting condition $y_0$ determines the entire path $y = (y_t)_{t\in[0,1]}$, because $y$ must alternate between states 0 and 1 in the successive subintervals $J_l$. We denote this space by $\mathcal{I}$, and we write $\mathcal{I}^*$ to denote the closure of $\mathcal{I}$ in the Skorokhod space of càdlàg functions $[0,1] \to \{0,1\}$. We can partition $\mathcal{I}^* := \bigcup_{m\in\mathbb{N}} \mathcal{I}^*_m$, where $\mathcal{I}^*_m$ is the closure of
\[
\{y \in \mathcal{I}^* : y \text{ has exactly } m \text{ sub-intervals}\}.
\]
Consequently, $\mathcal{I}^*$ is complete and separable, hence Polish. The fact that $\mathcal{I}^*$ is Polish allows us to apply the Aldous–Hoover theorem to study the $\mathcal{I}^*$-valued array $\Gamma_{[0,1]}$.

Since the initial distribution of $\Gamma_D$ is exchangeable, the entire path $(\Gamma_t)_{t\ge 0}$ satisfies
\[
(\Gamma^\sigma_t)_{t\ge 0} =_{\mathcal{L}} (\Gamma_t)_{t\ge 0} \quad\text{for all permutations } \sigma : \mathbb{N} \to \mathbb{N}.
\]
In particular, the array $\Gamma_{[0,1]}$ of edge trajectories is weakly exchangeable. By the Aldous–Hoover theorem, we can express $\Gamma_{[0,1]}$ as a mixture of exchangeable, dissociated $\mathcal{I}^*$-valued arrays. Thus, it suffices to prove existence of $|Y| = (|Y_t|)_{t\in[0,1]}$ for exchangeable, dissociated $\mathcal{I}^*$-valued arrays $Y = (Y^{ij})_{i,j\ge 1}$.

To this end, we treat $Y$ just as the edge trajectories of a graph-valued process on $[0,1]$, so $Y^{ij}_t$ denotes the status of edge $ij$ at time $t \in [0,1]$ and $\delta(F, Y_t)$ is the limiting density of any finite subgraph $F \in \mathcal{G}^*$ in $Y_t$. To show that $(|Y_t|)_{t\in[0,1]}$ exists simultaneously at all $t \in [0,1]$, we show that the upper and lower limits of $(\delta(F, Y^{[n]}_t))_{n\in\mathbb{N}}$ coincide for all $t \in [0,1]$ simultaneously. In particular, for fixed $F \in \mathcal{G}_m$, $m \in \mathbb{N}$, and $t \in [0,1]$ we write
\[
\delta^+_t(F) := \limsup_{n\to\infty} \delta(F, Y^{[n]}_t) \quad\text{and}\quad \delta^-_t(F) := \liminf_{n\to\infty} \delta(F, Y^{[n]}_t).
\]
We already know, cf. Proposition 6.8, that $P\{\delta^+_t(F) = \delta^-_t(F)\} = 1$ for all fixed $t \in [0,1]$, but we wish to show that $\delta^+_t(F) = \delta^-_t(F)$ almost surely for all $t \in [0,1]$, i.e.,
\[
P\Big\{\sup_{t\in[0,1]} |\delta^+_t(F) - \delta^-_t(F)| = 0\Big\} = 1.
\]
Each path $Y^{ij}_{[0,1]} = (Y^{ij}_t)_{t\in[0,1]}$ has finitely many discontinuities almost surely, and so must $Y^{[n]}_{[0,1]} := (Y^{ij}_{[0,1]})_{1\le i,j\le n}$, for every $n \in \mathbb{N}$. For every $\varepsilon > 0$, there is a finite subset $S_\varepsilon \subset [0,1]$ and an at most countable partition $J_1, J_2, \ldots$ of the open set $[0,1] \setminus S_\varepsilon$ such that
\[
P\{Y^{ij}_{[0,1]} \text{ is discontinuous at } s\} \ge \varepsilon \ \text{ for every } s \in S_\varepsilon \quad\text{and}\quad
P\{Y^{ij}_{[0,1]} \text{ is discontinuous in } J_l\} < \varepsilon, \quad l = 1, 2, \ldots.
\]
Without such a partition, there must be a sequence of intervals $(t - \rho_n, t + \rho_n)$ with $\rho_n \downarrow 0$ that converges to $t \notin S_\varepsilon$ such that
\[
P\{Y^{ij}_{[0,1]} \text{ is discontinuous in } (t - \rho_n, t + \rho_n)\} \ge \varepsilon \quad\text{for every } n \ge 1.
\]
Continuity from above implies
\[
P\{Y^{ij}_{[0,1]} \text{ is discontinuous at } t \notin S_\varepsilon\} \ge \varepsilon > 0,
\]
which contradicts the assumption $t \notin S_\varepsilon$. The strong law of large numbers also implies that
\[
P\{Y^{12}_{[0,1]} \text{ is discontinuous in } J_l\} = \lim_{n\to\infty} \frac{2}{n(n-1)} \sum_{1\le i<j\le n} \mathbf{1}\{Y^{ij}_{[0,1]} \text{ is discontinuous in } J_l\} < \varepsilon \quad\text{a.s.}
\]

Since $[0,1]$ is covered by at most countably many sub-intervals $J_l$, $l = 1, 2, \ldots$, and the non-random set $S := \bigcup_{\varepsilon>0} S_\varepsilon$, it follows that
\[
P\Big\{\sup_{t\in[0,1]} |\delta^+_t(F) - \delta^-_t(F)| \le 2\varepsilon\Big\} = 1 \quad\text{for all } \varepsilon > 0, \text{ for every } F \in \mathcal{G}^*.
\]
Continuity from above implies that
\[
P\Big\{\sup_{t\in[0,1]} |\delta^+_t(F) - \delta^-_t(F)| = 0\Big\} = \lim_{\varepsilon\downarrow 0} P\Big\{\sup_{t\in[0,1]} |\delta^+_t(F) - \delta^-_t(F)| \le 2\varepsilon\Big\} = 1;
\]
thus, $P\{\delta(F, Y_t) \text{ exists at all } t \in [0,1]\} = 1$ for every $F \in \bigcup_{m\in\mathbb{N}} \mathcal{G}_{[m]}$. Countable additivity of probability measures implies that this limit exists almost surely for all $F \in \bigcup_{m\in\mathbb{N}} \mathcal{G}_{[m]}$ and, thus, $Y_{[0,1]}$ determines a process on $\mathcal{D}^*$ almost surely. The fact that these limits are deterministic follows from the 0--1 law and our assumption that $Y$ is dissociated. By the Aldous–Hoover theorem, $\Gamma_D$ is a mixture of exchangeable, dissociated processes, so we conclude that $|\Gamma_D|$ exists almost surely for every $D \in \mathcal{D}^*$. $\square$

Feller property. By Corollary 6.4, we can assume $\Gamma = \{\Gamma_G : G \in \mathcal{G}_{\mathbb{N}}\}$ is constructed by $\Gamma_G = (W^*_t(G))_{t\ge 0}$, for each $G \in \mathcal{G}_{\mathbb{N}}$, where $W_\omega = (W^*_t)_{t\ge 0}$ is a standard $\omega$-process. The standard $\omega$-process has the Feller property and, by Theorem 3.2,
\[
|\Gamma_t| = |W^*_t(\Gamma_0)| =_{\mathcal{L}} |\Gamma_0||W^*_t| \quad\text{a.s. for every } t \ge 0.
\]
Existence of $|W^*_t|$ at fixed times follows by exchangeability, in analogy to Proposition 6.8. In fact, $|W^*_\omega| = (|W^*_t|)_{t\ge 0}$ exists for all $t \ge 0$ simultaneously, in analogy to the above argument for the existence of $|\Gamma_D|$, because $\mathrm{Id}_{\mathbb{N}}$ is an exchangeable initial state with rewiring limit $I$. The Markov property of $|\Gamma_{\mathcal{D}^*}|$ follows immediately.

Let $(P_t)_{t\ge 0}$ be the semigroup of $|\Gamma_{\mathcal{D}^*}|$, i.e., for every continuous function $g : \mathcal{D}^* \to \mathbb{R}$,
\[
P_t g(D) = E(g(|\Gamma_t|) \mid |\Gamma_0| = D).
\]
To establish the Feller property, we must show that, for every continuous $g : \mathcal{D}^* \to \mathbb{R}$,
(i) $D \mapsto P_t g(D)$ is continuous for all $t > 0$, and
(ii) $\lim_{t\downarrow 0} P_t g(D) = g(D)$ for all $D \in \mathcal{D}^*$.

For (i), we take any continuous function $h : \mathcal{D}^* \to \mathbb{R}$. By compactness of $\mathcal{D}^*$, $h$ is uniformly continuous and, hence, bounded. By dominated convergence and Lipschitz continuity of the map $D \mapsto D|W^*_t|$ on $\mathcal{D}^*$, $D \mapsto P_t h(D)$ is continuous for all $t > 0$.

Part (ii) follows from exchangeability of $W^*_\omega$ and the Feller property of $\Gamma$. To see this explicitly, we show the equivalent condition
\[
\lim_{t\downarrow 0} P\{\||W^*_t| - I\| > \varepsilon\} = 0 \quad\text{for all } \varepsilon > 0.
\]
For contradiction, we assume
\[
\limsup_{t\downarrow 0} P\{\||W^*_t| - I\| > \varepsilon\} > 0 \quad\text{for some } \varepsilon > 0.
\]
By definition of the metric (28) on $\mathcal{D}^*$,
\[
\||W^*_t| - I\| = \sum_{n\in\mathbb{N}} 2^{-n} \sum_{V\in\mathcal{W}_{[n]}} \big||W^*_t|(V) - I(V)\big|,
\]
and so there must be some $n \ge 2$, some $V \in \mathcal{W}_{[n]} \setminus \{\mathrm{Id}_{[n]}\}$, and $\varepsilon, \varrho > 0$ such that
\[
(48)\qquad \limsup_{t\downarrow 0} P\{\delta(V, W^*_t) > \varepsilon\} \ge \varrho.
\]
Given such a $V \in \mathcal{W}_{[n]} \setminus \{\mathrm{Id}_{[n]}\}$, we can choose $F \in \mathcal{G}_{[n]}$ such that $V(F) \neq F$ and define
\[
\psi_F(\cdot) := \mathbf{1}\{\cdot|_{[n]} \neq F\},
\]
so that $P_t\psi_F(G) = P\{\Gamma^{[n]}_t \neq F \mid \Gamma_0 = G\}$. We choose any $G = F^* \in \mathcal{G}_{\mathbb{N}}$ such that $F^*|_{[n]} = F$. In this way,
\[
P_t\psi_F(F^*) = P\{\Gamma^{[n]}_t \neq F \mid \Gamma_0 = F^*\} \le P\{\Gamma^{[n]}_{F^*} \text{ discontinuous on } [0,t]\}
\]
and
\[
P\{\Gamma^{[n]}_t \neq F \mid \Gamma_0 = F^*\} \ge P\{W^{*[n]}_t = V\} = E\,P\{W^{*[n]}_t = V \mid |W^*_t|\} = E\,\delta(V, W^*_t).
\]
Now,
\[
P\{\Gamma^{[n]}_{F^*} \text{ discontinuous on } [0,t]\} \le 1 - \exp\{-t\,\omega(W : W|_{[n]} \neq \mathrm{Id}_{[n]})\}
\]
implies
\[
\limsup_{t\downarrow 0} P\{\Gamma^{[n]}_{F^*} \text{ discontinuous on } [0,t]\} \le \limsup_{t\downarrow 0}\big(1 - \exp\{-t\,\omega(W : W|_{[n]} \neq \mathrm{Id}_{[n]})\}\big) = 0,
\]
because $\omega(W : W|_{[n]} \neq \mathrm{Id}_{[n]}) < \infty$ for all $n \ge 2$ by the right-hand side of (34). On the other hand, Markov's inequality implies
\[
E\,\delta(V, W^*_t) \ge \varepsilon\,P\{\delta(V, W^*_t) > \varepsilon\},
\]
and (48) gives
\[
\limsup_{t\downarrow 0} E\,\delta(V, W^*_t) \ge \varepsilon \limsup_{t\downarrow 0} P\{\delta(V, W^*_t) > \varepsilon\} \ge \varepsilon\varrho > 0.
\]
Thus, the Feller property of $\Gamma$ and the hypothesis (48) lead to the contradictory statements
\[
\limsup_{t\downarrow 0} P_t\psi_F(F^*) \ge \varepsilon\varrho > 0 \quad\text{and}\quad \limsup_{t\downarrow 0} P_t\psi_F(F^*) \le 0.
\]
We conclude that
\[
\limsup_{t\downarrow 0} P\{\delta(V, W^*_t) > \varepsilon\} < \varrho
\]
for all $V \in \mathcal{W}_{[n]} \setminus \{\mathrm{Id}_{[n]}\}$, $n \in \mathbb{N}$, and all $\varrho, \varepsilon > 0$. There are $4^{\binom{n}{2}} - 1 < 4^{\binom{n}{2}}$ elements in $\mathcal{W}_{[n]} \setminus \{\mathrm{Id}_{[n]}\}$ for each $n \ge 2$. Thus,
\[
P\{\delta(\mathrm{Id}_{[n]}, W^*_t) < 1 - \varepsilon\} \le P\Bigg(\bigcup_{V\in\mathcal{W}_{[n]}\setminus\{\mathrm{Id}_{[n]}\}} \{\delta(V, W^*_t) > \varepsilon 4^{-\binom{n}{2}}\}\Bigg),
\]
and we have
\[
\limsup_{t\downarrow 0} P\{\delta(\mathrm{Id}_{[n]}, W^*_t) < 1 - \varepsilon\}
\le \limsup_{t\downarrow 0} \sum_{V\in\mathcal{W}_{[n]}\setminus\{\mathrm{Id}_{[n]}\}} P\{\delta(V, W^*_t) > \varepsilon 4^{-\binom{n}{2}}\}
\le \sum_{V\in\mathcal{W}_{[n]}\setminus\{\mathrm{Id}_{[n]}\}} \limsup_{t\downarrow 0} P\{\delta(V, W^*_t) > \varepsilon 4^{-\binom{n}{2}}\} = 0.
\]
It follows that
\[
\limsup_{t\downarrow 0} P\{|\delta(V, W^*_t) - \delta(V, \mathrm{Id}_{\mathbb{N}})| > \varepsilon\} = 0 \quad\text{for all } \varepsilon > 0, \text{ for all } V \in \mathcal{W}_{[n]}, \text{ for all } n \in \mathbb{N}.
\]
Now, for any $\varepsilon > 0$, the event $\{\||W^*_t| - I\| > \varepsilon\}$ implies
\[
\||W^*_t| - I\| = \sum_{n\in\mathbb{N}} 2^{-n} \sum_{V\in\mathcal{W}_{[n]}} \big||W^*_t|(V) - I(V)\big| > \varepsilon.
\]
Since $\sum_{V\in\mathcal{W}_{[n]}} \big||W^*_t|(V) - I(V)\big| \le 2$ for every $n \ge 1$, writing $m_\varepsilon = \lceil 2 - \log_2(\varepsilon)\rceil$ we observe that
\[
\sum_{n \ge m_\varepsilon} 2^{-n} \sum_{V\in\mathcal{W}_{[n]}} \big||W^*_t|(V) - I(V)\big| \le 2\sum_{n \ge m_\varepsilon} 2^{-n} = 2^{\lfloor\log_2(\varepsilon)\rfloor} \le \varepsilon.
\]
Hence, on this event we must have
\[
|\delta(V, W^*_t) - \delta(V, \mathrm{Id}_{\mathbb{N}})| > (\varepsilon - 2^{-m_\varepsilon})\,4^{-\binom{m_\varepsilon+1}{2}} > 0
\]
for some $V \in \mathcal{W}_{[n]}$ with $n \le m_\varepsilon$. With $\varrho := (\varepsilon - 2^{-m_\varepsilon})\,4^{-\binom{m_\varepsilon+1}{2}} > 0$, we now conclude that
\[
\limsup_{t\downarrow 0} P\{\||W^*_t| - I\| > \varepsilon\}
\le \limsup_{t\downarrow 0} \sum_{n \le m_\varepsilon}\sum_{V\in\mathcal{W}_{[n]}} P\{|\delta(V, W^*_t) - \delta(V, \mathrm{Id}_{\mathbb{N}})| > \varrho\}
\le \sum_{n \le m_\varepsilon}\sum_{V\in\mathcal{W}_{[n]}} \limsup_{t\downarrow 0} P\{|\delta(V, W^*_t) - \delta(V, \mathrm{Id}_{\mathbb{N}})| > \varrho\} = 0,
\]
where the interchange of sum and lim sup is permitted because the sum is finite. We conclude that $|W^*_t| \to_P I$ as $t \downarrow 0$ and $|W^*_t(\Gamma_0)| =_{\mathcal{L}} |\Gamma_0||W^*_t| \to_P |\Gamma_0|$ as $t \downarrow 0$. The Feller property follows. $\square$
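The metric (28) enters this argument only through two elementary facts: it is a weighted sum of bounded coordinate-wise differences, and its tail beyond level $n$ is of order $2^{-n}$. The following sketch computes a truncated version of $\|D - D'\| = \sum_n 2^{-n} \sum_V |D(V) - D'(V)|$; the dictionary representation, keyed by (level, structure) pairs, is an assumption of the sketch, not notation from the paper:

```python
# Sketch: truncated version of the metric ||D - D'||.  D and Dp map
# each pair (n, V) -- level n and finite structure V -- to a density
# in [0, 1]; missing keys are treated as density 0.
def truncated_distance(D, Dp, n_max):
    total = 0.0
    for n in range(1, n_max + 1):
        keys = {k for k in list(D) + list(Dp) if k[0] == n}
        total += 2.0 ** (-n) * sum(abs(D.get(k, 0.0) - Dp.get(k, 0.0))
                                   for k in keys)
    return total

D  = {(1, "V0"): 0.5, (2, "V1"): 0.25}
Dp = {(1, "V0"): 0.5, (2, "V1"): 0.75}
print(truncated_distance(D, Dp, 2))  # 0.125 = 2**-2 * |0.25 - 0.75|
```

Because each inner sum is bounded, the neglected tail contributes at most a constant multiple of $2^{-n_{\max}}$, mirroring the tail estimate used above to pass from convergence of finitely many densities to convergence in the metric.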

Discontinuities. Let $\Gamma_D = (\Gamma_t)_{t\ge 0}$ have exchangeable initial distribution $\gamma_D$. Then $(|\Gamma_t|)_{t\ge 0}$ exists almost surely and has the Feller property. By the Feller property, $|\Gamma_D|$ has a version with càdlàg paths. For the càdlàg version, $\lim_{s\uparrow t} |\Gamma_s|$ exists for all $t > 0$ with probability one. Although the map $|\cdot| : \mathcal{G}_{\mathbb{N}} \to \mathcal{D}^*$ is not continuous, we also have $\lim_{s\uparrow t} |\Gamma_s| = |\Gamma_{t-}|$, as we now show.

Exchangeability of the initial distribution $\gamma_D$ as well as the transition probability implies $\Gamma^\sigma_D =_{\mathcal{L}} \Gamma_D$ for all permutations $\sigma : \mathbb{N} \to \mathbb{N}$; therefore, $\Gamma_{t-}$ is exchangeable for all $t > 0$. By the Aldous–Hoover theorem (Theorem 4.2), $|\Gamma_{t-}|$ exists a.s. for all $t > 0$.

Now, suppose $t > 0$ is a discontinuity time of $\Gamma_D$. If $\lim_{s\uparrow t} |\Gamma_s| \neq |\Gamma_{t-}|$, then there exists $\varepsilon > 0$ such that
\[
\Big\|\lim_{s\uparrow t} |\Gamma_s| - |\Gamma_{t-}|\Big\| > \varepsilon.
\]
In particular, there exist $m \in \mathbb{N}$ and $F \in \mathcal{G}_{[m]}$ such that
\[
\lim_{s\uparrow t} |\delta(F, \Gamma_s) - \delta(F, \Gamma_{t-})| > \varepsilon.
\]

By definition,
\[
\delta(F, \Gamma_s) = \lim_{n\to\infty} \frac{1}{n^{\downarrow m}} \sum_{\varphi:[m]\to[n]} \mathbf{1}\{\Gamma^\varphi_s = F\}.
\]
Since the above limits exist with probability one, we can replace limits with limits inferior to obtain
\[
0 \le \liminf_{s\uparrow t}\liminf_{n\to\infty} \frac{1}{n^{\downarrow m}} \Big|\sum_{\varphi:[m]\to[n]} \big(\mathbf{1}\{\Gamma^\varphi_s = F\} - \mathbf{1}\{\Gamma^\varphi_{t-} = F\}\big)\Big|
\le \liminf_{s\uparrow t}\liminf_{n\to\infty} \frac{1}{n^{\downarrow m}} \sum_{\varphi:[m]\to[n]} \big|\mathbf{1}\{\Gamma^\varphi_s = F\} - \mathbf{1}\{\Gamma^\varphi_{t-} = F\}\big|
\le 2\liminf_{s\uparrow t}\liminf_{n\to\infty} \frac{1}{n^{\downarrow m}} \sum_{\varphi:[m]\to[n]} \mathbf{1}\{\Gamma^\varphi_D \text{ discontinuous on } [s,t)\}.
\]
By the bounded convergence theorem, Fatou's lemma, exchangeability and consistency of $\Gamma$, and the construction of $\Gamma$ from a standard $\omega$-process, we have
\[
E\Bigg[\liminf_{s\uparrow t}\liminf_{n\to\infty} \frac{1}{n^{\downarrow m}} \sum_{\varphi:[m]\to[n]} \mathbf{1}\{\Gamma^\varphi_D \text{ discontinuous on } [s,t)\}\Bigg]
\le \liminf_{s\uparrow t}\liminf_{n\to\infty} \frac{1}{n^{\downarrow m}} \sum_{\varphi:[m]\to[n]} E\big[\mathbf{1}\{\Gamma^\varphi_D \text{ discontinuous on } [s,t)\}\big]
\le \liminf_{s\uparrow t}\liminf_{n\to\infty} \big(1 - \exp\{-(t-s)\,\omega(\{W \in \mathcal{W}_{\mathbb{N}} : W|_{[m]} \neq \mathrm{Id}_{[m]}\})\}\big) = 0.
\]
By Markov's inequality, we must have
\[
P\Big\{\lim_{s\uparrow t} |\delta(F, \Gamma_s) - \delta(F, \Gamma_{t-})| > \varepsilon\Big\} = 0 \quad\text{for all } \varepsilon > 0 \text{ and all } F \in \mathcal{G}^*.
\]
It follows that
\[
P\Big\{\Big\|\lim_{s\uparrow t} |\Gamma_s| - |\Gamma_{t-}|\Big\| > \varepsilon\Big\} = 0 \quad\text{for all } \varepsilon > 0.
\]
To see that the discontinuities in $|\Gamma|$ occur only at the times of Type (III) discontinuities, suppose $s > 0$ is a discontinuity time for $|\Gamma_D|$. The càdlàg property of $|\Gamma_D|$, along with the fact that $\lim_{s'\uparrow s} |\Gamma_{s'}| = |\Gamma_{s-}|$ a.s., implies that $|\delta(F, \Gamma_{s-}) - \delta(F, \Gamma_s)| > \varepsilon$ for some $F \in \bigcup_{m\in\mathbb{N}} \mathcal{G}_{[m]}$ and some $\varepsilon > 0$. By the strong law of large numbers,
\[
\lim_{n\to\infty} \frac{2}{n(n-1)} \sum_{1\le i<j\le n} \mathbf{1}\{\Gamma^{ij}_{s-} \neq \Gamma^{ij}_s\} > 0,
\]
so that a positive proportion of edges changes status at time $s$. If $s$ were instead the time of a Type (I) or Type (II) discontinuity, then at most the edges incident to a single vertex would change status, whence
\[
\lim_{n\to\infty} \frac{2}{n(n-1)} \sum_{1\le i<j\le n} \mathbf{1}\{\Gamma^{ij}_{s-} \neq \Gamma^{ij}_s\} \le \lim_{n\to\infty} \frac{2}{n(n-1)} \cdot n = 0,
\]
a contradiction. Therefore, $|\Gamma_D|$ can be discontinuous only at the times of Type (III) discontinuities. The proof is complete. $\square$



7. Concluding remarks

Our main theorems characterize the behavior of exchangeable Feller processes on the space of countable undirected graphs. These processes are relevant to many modern applications in network analysis. Our arguments rely mostly on the Aldous–Hoover theorem (Theorem 4.2) and its extension to conditionally exchangeable random graphs (Theorem 5.1); therefore, our main theorems can be stated more generally for exchangeable Feller processes on countable $d$-dimensional arrays taking values in any finite space. In particular, our main theorems have immediate analogs for processes in the space of directed graphs and multigraphs.

In some instances, it may be natural to consider a random process on undirected graphs as the projection of a random process on directed graphs. Any $G \in \mathcal{G}_{\mathbb{N}}$ projects to an undirected graph in at least two non-trivial ways, which we denote $G_\vee$ and $G_\wedge$ and define by
\[
G_\vee^{ij} := G^{ij} \vee G^{ji} \quad\text{and}\quad G_\wedge^{ij} := G^{ij} \wedge G^{ji}.
\]
Thus, there is an edge between $i$ and $j$ in $G_\vee$ if there is any edge between $i$ and $j$ in $G$, while there is an edge between $i$ and $j$ in $G_\wedge$ only if $G$ contains both an edge from $i$ to $j$ and an edge from $j$ to $i$. It is reasonable to ask when an exchangeable Feller process $\Gamma$ on directed graphs projects to an exchangeable Feller process on the subspace of undirected graphs. These questions are readily answered by consulting Theorems 3.1 and 3.4 and their generalizations to directed graphs. Similar questions arise for exchangeable processes on similarly structured, but possibly higher-dimensional, spaces. In these cases, the analogs of Theorems 3.1--3.7 can be deduced, so we omit them.
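The two projections can be sketched directly; here a directed graph is represented, purely as an illustration, by its set of ordered pairs:

```python
# Sketch: project a directed graph (set of ordered pairs (i, j)) to an
# undirected one.  G_or keeps a pair whenever either direction is
# present; G_and keeps a pair only when both directions are present.
def project_or(edges):
    return {frozenset((i, j)) for (i, j) in edges if i != j}

def project_and(edges):
    return {frozenset((i, j)) for (i, j) in edges
            if i != j and (j, i) in edges}

G = {(1, 2), (2, 1), (1, 3)}
print(sorted(map(sorted, project_or(G))))   # [[1, 2], [1, 3]]
print(sorted(map(sorted, project_and(G))))  # [[1, 2]]
```

Representing undirected edges as frozensets makes the symmetry of the result automatic, matching the coordinate-wise definitions above.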


Rutgers University, Department of Statistics & Biostatistics, 110 Frelinghuysen Road, Piscataway, NJ 08854, [email protected].