Exchangeable Graph-Valued Feller Processes

Probability Theory and Related Fields manuscript No. (will be inserted by the editor)

Exchangeable graph-valued Feller processes

Harry Crane

Rutgers University, Department of Statistics & Biostatistics, 110 Frelinghuysen Road, Piscataway, NJ 08854. [email protected]

Received: date / Accepted: date

Abstract The transition law of every exchangeable Feller process on the space of countable graphs is determined by a $\sigma$-finite measure on the space of $\{0,1\} \times \{0,1\}$-valued arrays. In discrete time, this characterization gives rise to a construction from an independent, identically distributed sequence of exchangeable random functions. In continuous time, the behavior is enriched by a Lévy–Itô–Khintchine-type decomposition of the jump measure into mutually singular components that govern global, vertex-level, and edge-level dynamics. Every such process almost surely projects to a Feller process in the space of graph limits.

Keywords exchangeability · random graph · graph limit · Lévy–Itô–Khintchine decomposition · Aldous–Hoover theorem · Rado graph · Erdős–Rényi random graph

Mathematics Subject Classification (2000) 05C80 · 60G09 · 60J05 · 60J25

1 Introduction

A graph, or network, $G = (V, E_G)$ is a set of vertices $V$ together with a binary relation of edges $E_G \subseteq V \times V$. For $i, j \in V$, we write $G^{ij}$ to indicate the status of edge $(i,j)$, that is,
$$G^{ij} := \mathbf{1}\{(i,j) \in E_G\} := \begin{cases} 1, & (i,j) \in E_G, \\ 0, & \text{otherwise.} \end{cases}$$
Both the pair $G = (V, E_G)$ and the array $G = (G^{ij})_{i,j \in V}$ represent the same object. We write $\mathcal{G}_V$ to denote the space of graphs with vertex set $V$.

Consider a population of individuals labeled distinctly in $V$ and related to one another by the edges of a graph $G = (V, E_G)$. The population size is typically large, but unknown, and so we assume a countably infinite population and take $V = \mathbb{N} := \{1, 2, \ldots\}$. In practice, the labels $\mathbb{N}$ are arbitrarily assigned for the purpose of distinguishing individuals, and data can only be observed for a finite sample $S \subset \mathbb{N}$. Since labels are assigned arbitrarily, we often take $S = [n] := \{1, \ldots, n\}$ whenever $S \subseteq \mathbb{N}$ is finite with cardinality $n \geq 1$.

These practical matters relate to the operations of

– relabeling: the relabeling of $G = (G^{ij})_{i,j \geq 1}$ by any permutation $\sigma : \mathbb{N} \to \mathbb{N}$ is
  $$G^\sigma := (G^{\sigma(i)\sigma(j)})_{i,j \geq 1}; \quad (1)$$
  and
– restriction: the restriction of $G = (G^{ij})_{i,j \geq 1}$ to a graph with vertex set $S \subseteq \mathbb{N}$ is
  $$G^S = G|_S := (G^{ij})_{i,j \in S}. \quad (2)$$

In many relevant applications, the network structure changes over time, resulting in a time-indexed collection $(G_t)_{t \in T}$ of graphs. We consider exchangeable Markov processes as statistical models in this setting.

1.1 Graph-valued Markov processes

A $\mathcal{G}_V$-valued Markov process is a collection $\Gamma = \{\Gamma_G : G \in \mathcal{G}_V\}$ for which each $\Gamma_G = (\Gamma_t)_{t \in T}$ is a family of random graphs that has $\Gamma_0 = G$ and

– the Markov property, that is, the past $(\Gamma_s)_{s < t}$ and future $(\Gamma_s)_{s > t}$ are conditionally independent given the present $\Gamma_t$ for all $t \in T$.

We also assume $\Gamma$ is

– exchangeable, that is, all $\Gamma_G$ share a common transition law for which
  $$P\{\Gamma_{t'} \in \cdot \mid \Gamma_t = F\} = P\{\Gamma_{t'}^\sigma \in \cdot \mid \Gamma_t^\sigma = F\}, \quad (3)$$
  for all permutations $\sigma : V \to V$, all $F \in \mathcal{G}_V$, and all $t' > t \geq 0$, and
– consistent, that is, for all $S \subseteq V$ the transition law of $\Gamma$ satisfies
  $$P\{\Gamma_{t'}^S = F \mid \Gamma_t = F'\} = P\{\Gamma_{t'}^S = F \mid \Gamma_t = F''\}, \quad F \in \mathcal{G}_S, \quad (4)$$
  for all $F', F'' \in \mathcal{G}_V$ such that $F'|_S = F''|_S$.

Consistency implies that $\Gamma^S = (\Gamma_t|_S)_{t \in T}$ satisfies the Markov property for every $S \subseteq V$, for all $G \in \mathcal{G}_V$.
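The relabeling and restriction operations (1) and (2), which underlie the exchangeability and consistency conditions above, act entrywise on the adjacency array. The following minimal sketch is not from the paper; it uses 0-indexed vertex labels and hypothetical helper names purely to illustrate how $G^\sigma$ and $G|_S$ are formed for a small finite array.

```python
import numpy as np

def relabel(G, sigma):
    """Relabeling (1): G^sigma has entries G^{sigma(i) sigma(j)}.
    sigma is a permutation of {0, ..., n-1} given as a list."""
    n = G.shape[0]
    return np.array([[G[sigma[i], sigma[j]] for j in range(n)] for i in range(n)])

def restrict(G, S):
    """Restriction (2): G|_S keeps only the rows and columns indexed by S."""
    S = sorted(S)
    return G[np.ix_(S, S)]

# Example: the path 1 - 2 - 3 written as a 3 x 3 symmetric 0/1 array.
G = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

G_sigma = relabel(G, [2, 1, 0])  # relabel by the permutation swapping vertices 1 and 3
G_S = restrict(G, [0, 1])        # restrict to the sample S = {1, 2}
```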
We call any $\Gamma$ satisfying these properties an exchangeable, consistent Markov process. In Proposition 1, we prove that consistency and the Feller property are equivalent for exchangeable Markov processes on $\mathcal{G}_{\mathbb{N}}$, so we sometimes also call $\Gamma$ an exchangeable Feller process.

Any process $\Gamma = \{\Gamma_G : G \in \mathcal{G}_V\}$ determines a random collection $(\Gamma_t)_{t \in T}$ with a given transition law and initial distribution $\nu$ by first drawing $\Gamma_0 \sim \nu$ and then putting $\Gamma_\nu = \Gamma_G$ on the event $\Gamma_0 = G$. With the underlying process $\Gamma = \{\Gamma_G : G \in \mathcal{G}_V\}$ understood, we write $\Gamma_\nu = (\Gamma_t)_{t \in T}$ to denote a collection with this behavior.

Theorems 1–5 below characterize the behavior of exchangeable $\mathcal{G}_{\mathbb{N}}$-valued Feller processes in both discrete and continuous time. In discrete time, every exchangeable Feller process can be constructed by iterated application of independent, identically distributed (i.i.d.) exchangeable random functions, called rewiring maps; see Section 4.5. In continuous time, $\Gamma$ admits a construction from a Poisson point process whose intensity measure has a Lévy–Itô–Khintchine-type decomposition. The Lévy–Itô–Khintchine representation classifies every discontinuity of $\Gamma$ as one of three types. When $\nu$ is an exchangeable initial distribution on $\mathcal{G}_{\mathbb{N}}$, both discrete and continuous time processes $\Gamma_\nu$ project to a Feller process in the space of graph limits. These outcomes invoke connections to previous work on the theory of combinatorial stochastic processes [4,13], partially exchangeable arrays [1,2,8], and limits of dense graph sequences [11,12].

1.2 Outline

Before summarizing our conclusions (Section 3), we first go over key definitions and assumptions (Section 2). We then explain essential concepts in more detail (Section 4) and prove our main theorems for discrete time (Section 5) and continuous time (Section 6) processes. We make concluding remarks in Section 7.

2 Definitions and assumptions

2.1 Graphs

For any graph $G = (V, E_G)$, we impose the additional axioms of

(i) anti-reflexivity, $(i,i) \notin E_G$ for all $i \in V$, and
(ii) symmetry, $(i,j) \in E_G$ implies $(j,i) \in E_G$ for all $i, j \in V$,

and we specialize $\mathcal{G}_V$ to denote the set of all graphs satisfying (i) and (ii). By the symmetry axiom (ii), we may write $ij = \{i,j\} \in E_G$ to indicate that there is an edge between $i$ and $j$ in $G$. By definition, $G$ has no multiple edges, condition (i) forbids edges from a vertex to itself, and condition (ii) makes $G$ undirected. In terms of the adjacency array $G = (G^{ij})_{i,j \in V}$, (i) and (ii) above correspond to

(i') $G^{ii} = 0$ for all $i \in V$ and
(ii') $G^{ij} = G^{ji}$ for all $i, j \in V$,

respectively. As we discuss in Section 7, our main theorems remain valid under some relaxation of these conditions. The above two notions of a graph, as a pair $(V, E_G)$ and as an adjacency array, are equivalent. We use them interchangeably and with the same notation.

With $\mathcal{S}_V$ denoting the space of permutations of $V \subseteq \mathbb{N}$, that is, bijective functions $\sigma : V \to V$, relabeling (1) associates every $\sigma \in \mathcal{S}_V$ to a map $\mathcal{G}_V \to \mathcal{G}_V$. For all $S \subseteq S' \subseteq V$, restriction (2) determines a map $\mathcal{G}_{S'} \to \mathcal{G}_S$, $G \mapsto G^S = G|_S$. Specifically, for $n \geq m \geq 1$, $G|_{[m]} = (G^{ij})_{1 \leq i,j \leq m}$ is the leading $m \times m$ submatrix of $G = (G^{ij})_{1 \leq i,j \leq n}$. By combining (1) and (2), every injection $\varphi : S \to S'$ determines a projection $\mathcal{G}_{S'} \to \mathcal{G}_S$ by
$$G \mapsto G^\varphi := (G^{\varphi(i)\varphi(j)})_{i,j \in S}. \quad (5)$$

We call a sequence of finite graphs $(G_n)_{n \in \mathbb{N}}$ compatible if $G_n \in \mathcal{G}_{[n]}$ and $G_n|_{[m]} = G_m$ for every $m \leq n$, for all $n \in \mathbb{N}$.
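As a concrete, informal illustration of the projection (5) and of the compatibility condition just defined (again a sketch with my own 0-indexed conventions, not code from the paper), both can be written directly in terms of array indexing:

```python
import numpy as np

def project(G, phi):
    """Projection (5): G^phi has entries G^{phi(i) phi(j)}.
    phi is an injection of {0, ..., m-1} into {0, ..., n-1}, given as a list."""
    m = len(phi)
    return np.array([[G[phi[i], phi[j]] for j in range(m)] for i in range(m)])

def is_compatible(graphs):
    """Compatibility: the n-th graph restricted to [m] equals the m-th graph, m <= n."""
    for n, G_n in enumerate(graphs, start=1):
        for m in range(1, n + 1):
            if not np.array_equal(G_n[:m, :m], graphs[m - 1]):
                return False
    return True

# A compatible sequence: each G_n is the leading n x n corner of one fixed array.
G3 = np.array([[0, 1, 1],
               [1, 0, 0],
               [1, 0, 0]])
assert is_compatible([G3[:1, :1], G3[:2, :2], G3])
```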
Any compatible sequence of finite graphs determines a unique countable graph $G_\infty$, the projective limit of $(G_1, G_2, \ldots)$ with respect to the restriction operation in (2). We endow $\mathcal{G}_{\mathbb{N}}$ with the product discrete topology induced, for example, by the ultrametric
$$d(G, G') := 1/\sup\{n \in \mathbb{N} : G|_{[n]} = G'|_{[n]}\}, \quad G, G' \in \mathcal{G}_{\mathbb{N}}, \quad (6)$$
with the convention that $1/\infty = 0$. From (6), we naturally equip $\mathcal{G}_{\mathbb{N}}$ with the Borel $\sigma$-field $\sigma\langle \cdot|_{[n]} \rangle_{n \in \mathbb{N}}$ generated by the restriction maps. Under (6), $\mathcal{G}_{\mathbb{N}}$ is a compact and, therefore, complete and separable metric space; hence, $\mathcal{G}_{\mathbb{N}}$ is Polish and standard measure-theoretic facts apply in our analysis.

2.2 Graph limits

For $n \geq m \geq 1$, $F \in \mathcal{G}_{[m]}$, and $G \in \mathcal{G}_{[n]}$, we define the density of $F$ in $G$ by
$$\delta(F, G) := \operatorname{ind}(F, G)/n^{\# m}, \quad (7)$$
where $\operatorname{ind}(F, G)$ is the number of embeddings of $F$ into $G$ and $n^{\# m} := n(n-1)\cdots(n-m+1)$. Specifically,
$$\operatorname{ind}(F, G) := \sum_{\varphi : [m] \to [n]} \mathbf{1}\{G^\varphi = F\}$$
is the number of injections $\varphi : [m] \to [n]$ for which $G^\varphi = F$.

Given a sequence $(G_n)_{n \in \mathbb{N}}$ in $\mathcal{G}^* := \bigcup_{n \in \mathbb{N}} \mathcal{G}_{[n]}$, we define the limiting density of $F$ in $(G_n)_{n \in \mathbb{N}}$ by
$$\delta(F, (G_n)_{n \in \mathbb{N}}) := \lim_{n \to \infty} \delta(F, G_n), \quad \text{if it exists.}$$
For $G \in \mathcal{G}_{\mathbb{N}}$, we define the limiting density of $F$ in $G$ by
$$\delta(F, G) := \lim_{n \to \infty} \delta(F, G|_{[n]}), \quad \text{if it exists.} \quad (8)$$

Definition 1 (Graph limit) The graph limit of $G \in \mathcal{G}_{\mathbb{N}}$ is the collection
$$|G| := (\delta(F, G))_{F \in \bigcup_{m \in \mathbb{N}} \mathcal{G}_{[m]}},$$
provided $\delta(F, G)$ exists for every $F \in \bigcup_{m \in \mathbb{N}} \mathcal{G}_{[m]}$.
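For small finite graphs, the density (7) can be computed by brute force over all injections. The sketch below is illustrative only (not part of the paper, and with hypothetical function names); it recovers, for example, $\delta(F, G) = 2/3$ when $F$ is a single edge and $G$ is the 4-cycle.

```python
from itertools import permutations
import numpy as np

def density(F, G):
    """Density (7): delta(F, G) = ind(F, G) / n^{#m}, where ind(F, G) counts
    the injections phi: [m] -> [n] with G^phi = F. Brute force, so only
    feasible for small m and n."""
    m, n = F.shape[0], G.shape[0]
    ind = sum(
        1
        for phi in permutations(range(n), m)  # all injections [m] -> [n]
        if all(G[phi[i], phi[j]] == F[i, j] for i in range(m) for j in range(m))
    )
    falling = 1
    for k in range(m):  # n^{#m} = n(n-1)...(n-m+1)
        falling *= n - k
    return ind / falling

# Edge density in the 4-cycle: 4 edges, each hit by 2 ordered injections, so 8/12.
F = np.array([[0, 1],
              [1, 0]])
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
print(density(F, C4))  # 0.666...
```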