Backbone Decomposition for Superprocesses and Applications


Backbone decomposition for superprocesses and applications.
A. E. Kyprianou, Department of Mathematical Sciences, University of Bath.
Joint work with Julien Berestycki, Rongli Liu, Antonio Murillo-Salas and Yanxia Ren.

PROB-L@B, the Probability Laboratory at Bath. To be advertised next week: Chair of the Probability Laboratory at Bath. Now receiving submissions in applications of probability: Acta Applicandae Mathematicae.

(P, F, E)-branching Markov process.

Atomic-measure-valued Markov process: $Z = \{Z_t : t \geq 0\}$ with probabilities denoted by $P_\mu$, where $\mu(\cdot) = \sum_{i=1}^{n} \delta_{x_i}(\cdot)$ with $x_i \in E$.

Path construction: under $P_\mu$, from each $x_i \in E$ initiate iid copies of a nice[2], conservative[3] $E$-valued Markov process whose semigroup is denoted by $\mathcal{P} = \{\mathcal{P}_t : t \geq 0\}$, each of which has branching generator
$$F(s) = q\left(\sum_{n=0}^{\infty} p_n s^n - s\right).$$

Markov property: $Z_{t+s}$ is equal in law to an independent copy of $Z_s$ under $P_{Z_t}$.

Branching property: for atomic measures $\mu_1$ and $\mu_2$, $(Z, P_{\mu_1+\mu_2})$ has the same law as $Z^{(1)} + Z^{(2)}$, where $Z^{(i)}$ has law $P_{\mu_i}$ for $i = 1, 2$.

Notation: $\langle f, Z_t \rangle = \int_E f(x)\, Z_t(\mathrm{d}x) = \sum_{i=1}^{N_t} f(z_i(t))$ when $Z_t(\cdot) = \sum_{i=1}^{N_t} \delta_{z_i(t)}(\cdot)$.

Total mass: the process $\{\langle 1, Z_t \rangle : t \geq 0\}$ is a continuous-time Galton-Watson process.

[2] At the moment, the word 'nice' means any $E$-valued Markov process for which the mathematics in this talk can be carried out! However, this is a very large class of processes including, for example, many conservative diffusions in $E = \mathbb{R}^d$.
[3] $\mathcal{P}_t[1] = 1$ for all $t \geq 0$.
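The path construction is straightforward to simulate. Below is a minimal sketch (an illustration, not code from the talk), taking the spatial motion to be standard Brownian motion on $\mathbb{R}^d$ as one example of a 'nice' conservative process; the branching rate q, the offspring law p, and the Euler step dt are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_bmp(x0, T, q, p, dt=1e-3, d=1):
    """Euler-type simulation of a branching Brownian motion on R^d.

    Each particle moves as a standard Brownian motion; in each time
    step it branches with probability q*dt, being replaced by n
    offspring at its current position with probability p[n]."""
    particles = [np.asarray(x0, dtype=float)]
    t = 0.0
    while t < T and particles:
        t += dt
        next_gen = []
        for x in particles:
            x = x + np.sqrt(dt) * rng.standard_normal(d)  # diffusion step
            if rng.random() < q * dt:                     # branching event
                n = rng.choice(len(p), p=p)               # offspring count
                next_gen.extend(x.copy() for _ in range(n))
            else:
                next_gen.append(x)
        particles = next_gen
    return particles

# Offspring law: p_0 = 0.3 (death), p_2 = 0.7 (binary split).
Zt = simulate_bmp(x0=[0.0], T=1.0, q=1.0, p=[0.3, 0.0, 0.7])
f_Zt = sum(np.exp(-x @ x) for x in Zt)  # <f, Z_t> for f(x) = exp(-|x|^2)
print(len(Zt), f_Zt)                    # <1, Z_t> is the total mass
```

The returned positions are the atoms of $Z_t$, so `len(Zt)` plays the role of the total mass $\langle 1, Z_t \rangle$, which (up to discretization error) evolves as a continuous-time Galton-Watson process.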
Recommended publications
  • PDF file of the second edition, January 2018
    Probability on Graphs: Random Processes on Graphs and Lattices, Second Edition, 2018. GEOFFREY GRIMMETT, Statistical Laboratory, University of Cambridge. Copyright Geoffrey Grimmett. Address: Statistical Laboratory, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WB, United Kingdom. 2000 MSC: (Primary) 60K35, 82B20, (Secondary) 05C80, 82B43, 82C22. With 56 figures. Contents: Preface; 1 Random Walks on Graphs (1.1 Random Walks and Reversible Markov Chains; 1.2 Electrical Networks; 1.3 Flows and Energy; 1.4 Recurrence and Resistance; 1.5 Pólya's Theorem; 1.6 Graph Theory; 1.7 Exercises); 2 Uniform Spanning Tree (2.1 Definition; 2.2 Wilson's Algorithm; 2.3 Weak Limits on Lattices; 2.4 Uniform Forest; 2.5 Schramm-Löwner Evolutions; 2.6 Exercises); 3 Percolation and Self-Avoiding Walks (3.1 Percolation and Phase Transition; 3.2 Self-Avoiding Walks; 3.3 Connective Constant of the Hexagonal Lattice; 3.4 Coupled Percolation; 3.5 Oriented Percolation; 3.6 Exercises); 4 Association and Influence (4.1 Holley Inequality; 4.2 FKG Inequality; 4.3 BK Inequality; 4.4 Hoeffding Inequality; 4.5 Influence for Product Measures; 4.6 Proofs of Influence Theorems; 4.7 Russo's Formula and Sharp Thresholds; 4.8 Exercises); 5 Further Percolation (5.1 Subcritical Phase; 5.2 Supercritical Phase; 5.3 Uniqueness of the Infinite Cluster; 5.4 Phase Transition; 5.5 Open Paths in Annuli; 5.6 The Critical Probability in Two Dimensions)
  • Mean Field Methods for Classification with Gaussian Processes
    Mean field methods for classification with Gaussian processes. Manfred Opper, Neural Computing Research Group, Division of Electronic Engineering and Computer Science, Aston University, Birmingham B4 7ET, UK, opperm@aston.ac.uk. Ole Winther, Theoretical Physics II, Lund University, Sölvegatan 14 A, S-223 62 Lund, Sweden, and CONNECT, The Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, 2100 Copenhagen Ø, Denmark, winther@thep.lu.se. Abstract: We discuss the application of TAP mean field methods known from the Statistical Mechanics of disordered systems to Bayesian classification models with Gaussian processes. In contrast to previous approaches, no knowledge about the distribution of inputs is needed. Simulation results for the Sonar data set are given. 1. Modeling with Gaussian Processes. Bayesian models which are based on Gaussian prior distributions on function spaces are promising non-parametric statistical tools. They have been recently introduced into the Neural Computation community (Neal 1996, Williams & Rasmussen 1996, Mackay 1997). To give their basic definition, we assume that the likelihood of the output or target variable T for a given input s ∈ R^N can be written in the form p(T|h(s)), where h : R^N → R is a priori assumed to be a Gaussian random field. If we assume fields with zero prior mean, the statistics of h is entirely defined by the second order correlations C(s, s') = E[h(s)h(s')], where E denotes expectations with respect to the prior. Interesting examples are the covariance choices labelled (1) and (2) in the paper; the choice (1) can be motivated as a limit of a two-layered neural network with infinitely many hidden units with factorizable input-hidden weight priors (Williams 1997).
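To make the Gaussian-field setup concrete, here is a small sketch under an assumed squared-exponential covariance (the paper's actual kernel choices (1) and (2) are not reproduced in this excerpt): it builds the prior correlation matrix C(s, s') on a handful of inputs and draws a latent classification field h.

```python
import numpy as np

rng = np.random.default_rng(0)

def sq_exp_cov(S, length=1.0):
    """Covariance matrix C[i, j] = exp(-|s_i - s_j|^2 / (2 * length^2))."""
    d2 = np.sum((S[:, None, :] - S[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * length ** 2))

S = rng.standard_normal((5, 2))              # five inputs s in R^2
C = sq_exp_cov(S)                            # prior correlations E[h(s)h(s')]
h = rng.multivariate_normal(np.zeros(5), C)  # one draw of the latent field
T = np.sign(h)                               # hard class labels from sign of h
print(T)
```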
  • Probability on Graphs Random Processes on Graphs and Lattices
    Probability on Graphs: Random Processes on Graphs and Lattices. GEOFFREY GRIMMETT, Statistical Laboratory, University of Cambridge. © G. R. Grimmett 1/4/10, 17/11/10, 5/7/12. Address: Statistical Laboratory, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WB, United Kingdom. 2000 MSC: (Primary) 60K35, 82B20, (Secondary) 05C80, 82B43, 82C22. With 44 figures. Contents: Preface; 1 Random walks on graphs (1.1 Random walks and reversible Markov chains; 1.2 Electrical networks; 1.3 Flows and energy; 1.4 Recurrence and resistance; 1.5 Pólya's theorem; 1.6 Graph theory; 1.7 Exercises); 2 Uniform spanning tree (2.1 Definition; 2.2 Wilson's algorithm; 2.3 Weak limits on lattices; 2.4 Uniform forest; 2.5 Schramm-Löwner evolutions; 2.6 Exercises); 3 Percolation and self-avoiding walk (3.1 Percolation and phase transition; 3.2 Self-avoiding walks; 3.3 Coupled percolation; 3.4 Oriented percolation; 3.5 Exercises); 4 Association and influence (4.1 Holley inequality; 4.2 FKG inequality; 4.3 BK inequality; 4.4 Hoeffding inequality; 4.5 Influence for product measures; 4.6 Proofs of influence theorems; 4.7 Russo's formula and sharp thresholds; 4.8 Exercises); 5 Further percolation (5.1 Subcritical phase; 5.2 Supercritical phase; 5.3 Uniqueness of the infinite cluster; 5.4 Phase transition; 5.5 Open paths in annuli; 5.6 The critical probability in two dimensions; 5.7 Cardy's formula; 5.8 The
  • arXiv:1504.02898v2 [cond-mat.stat-mech] 7 Jun 2015. Keywords: Percolation, Explosive Percolation, SLE, Ising Model, Earth Topography
    Recent advances in percolation theory and its applications. Abbas Ali Saberi, (a) Department of Physics, University of Tehran, P.O. Box 14395-547, Tehran, Iran; (b) School of Particles and Accelerators, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran, Iran. Abstract: Percolation is the simplest fundamental model in statistical mechanics that exhibits phase transitions signaled by the emergence of a giant connected component. Despite its very simple rules, percolation theory has successfully been applied to describe a large variety of natural, technological and social systems. Percolation models serve as important universality classes in critical phenomena characterized by a set of critical exponents which correspond to a rich fractal and scaling structure of their geometric features. We will first outline the basic features of the ordinary model. Over the years a variety of percolation models has been introduced, some of which have completely different scaling and universal properties from the original model, with either continuous or discontinuous transitions depending on the control parameter, dimensionality and the type of the underlying rules and networks. We will try to take a glimpse at a number of selective variations including the Achlioptas process, the half-restricted process and the spanning-cluster-avoiding process as examples of the so-called explosive percolation. We will also introduce non-self-averaging percolation and discuss correlated percolation and bootstrap percolation with special emphasis on their recent progress. The directed percolation process will also be discussed as a prototype of systems displaying a nonequilibrium phase transition into an absorbing state. In the past decade, after the invention of stochastic Löwner evolution (SLE) by Oded Schramm, two-dimensional (2D) percolation has become a central problem in probability theory leading to the two recent Fields medals.
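As a quick illustration of the "giant connected component" phase transition described in the abstract, the sketch below (an illustrative setup, not code from the review) runs site percolation on a square lattice and tracks the relative size of the largest cluster as the occupation probability p passes the two-dimensional site-percolation threshold p_c ≈ 0.593.

```python
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(0)

def largest_cluster_fraction(L, p):
    """Fraction of an L x L lattice occupied by the largest cluster
    in site percolation with occupation probability p."""
    grid = rng.random((L, L)) < p
    labels, n = label(grid)  # 4-neighbour connected components
    if n == 0:
        return 0.0
    sizes = np.bincount(labels.ravel())[1:]  # skip background label 0
    return sizes.max() / L**2

for p in [0.4, 0.55, 0.593, 0.65, 0.8]:
    print(p, largest_cluster_fraction(200, p))
```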
  • A Representation for Functionals of Superprocesses Via Particle Picture Raisa E
    Stochastic Processes and their Applications 64 (1996) 173-186. A representation for functionals of superprocesses via particle picture. Raisa E. Feldman, Srikanth K. Iyer. Department of Statistics and Applied Probability, University of California at Santa Barbara, CA 93106-3110, USA, and Department of Industrial Engineering and Management, Technion - Israel Institute of Technology, Israel. Received October 1995; revised May 1996. Abstract: A superprocess is a measure valued process arising as the limiting density of an infinite collection of particles undergoing branching and diffusion. It can also be defined as a measure valued Markov process with a specified semigroup. Using the latter definition and explicit moment calculations, Dynkin (1988) built multiple integrals for the superprocess. We show that the multiple integrals of the superprocess defined by Dynkin arise as weak limits of linear additive functionals built on the particle system. Keywords: Superprocesses; Additive functionals; Particle system; Multiple integrals; Intersection local time. AMS classification: primary 60J55; 60H05; 60J80; secondary 60F05; 60G57; 60F17. 1. Introduction. We study a class of functionals of a superprocess that can be identified as the multiple Wiener-Itô integrals of this measure valued process. More precisely, our objective is to study these via the underlying branching particle system, thus providing means of thinking about the functionals in terms of more simple, intuitive and visualizable objects. This way of looking at multiple integrals of a stochastic process has its roots in the previous work of Adler and Epstein (=Feldman) (1987), Epstein (1989), Adler (1989), Feldman and Rachev (1993), Feldman and Krishnakumar (1994).
  • Markov Random Fields and Stochastic Image Models
    Markov Random Fields and Stochastic Image Models. Charles A. Bouman, School of Electrical and Computer Engineering, Purdue University. Phone: (317) 494-0340, Fax: (317) 494-3358, email [email protected]. Available from: http://dynamo.ecn.purdue.edu/~bouman/. Tutorial presented at: 1995 IEEE International Conference on Image Processing, 23-26 October 1995, Washington, D.C. Special thanks to: Ken Sauer (Department of Electrical Engineering, University of Notre Dame) and Suhail Saquib (School of Electrical and Computer Engineering, Purdue University). Overview of topics: 1. Introduction; 2. The Bayesian Approach; 3. Discrete Models ((a) Markov Chains; (b) Markov Random Fields (MRF); (c) Simulation; (d) Parameter estimation); 4. Application of MRF's to Segmentation ((a) The Model; (b) Bayesian Estimation; (c) MAP Optimization; (d) Parameter Estimation; (e) Other Approaches); 5. Continuous Models ((a) Gaussian Random Process Models: i. Autoregressive (AR) models, ii. Simultaneous AR (SAR) models, iii. Gaussian MRF's, iv. Generalization to 2-D; (b) Non-Gaussian MRF's: i. Quadratic functions, ii. Non-Convex functions, iii. Continuous MAP estimation, iv. Convex functions; (c) Parameter Estimation: i. Estimation of σ, ii. Estimation of T and p parameters); 6. Application to Tomography ((a) Tomographic system and data models; (b) MAP Optimization; (c) Parameter estimation); 7. Multiscale Stochastic Models ((a) Continuous models; (b) Discrete models); 8. High Level Image Models. References in statistical image modeling: 1. Overview references [100, 89, 50, 54, 162, 4, 44]; 2. Type of Random Field Model ((a) Discrete Models i.; 4. Simulation and Stochastic Optimization Methods [118, 80, 129, 100, 68, 141, 61, 76, 62, 63]; 5. Computational Methods used with MRF Models
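As a concrete companion to the "Discrete Models" and "Simulation" topics in this outline, here is a minimal Gibbs sampler for a binary Ising-type MRF (my illustrative sketch, not code from the tutorial).

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_ising(L=32, beta=0.7, sweeps=50):
    """Gibbs-sample a binary Ising MRF on an L x L torus.

    Each site is redrawn from its conditional given its four
    neighbours: P(x_s = +1 | neighbours) = sigmoid(2*beta*sum)."""
    x = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                s = (x[(i - 1) % L, j] + x[(i + 1) % L, j]
                     + x[i, (j - 1) % L] + x[i, (j + 1) % L])
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * s))
                x[i, j] = 1 if rng.random() < p_plus else -1
    return x

sample = gibbs_ising()
print(sample.mean())  # magnetization; |mean| grows with beta
```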
  • Random Walk in Random Scenery (RWRS)
    IMS Lecture Notes-Monograph Series, Dynamics & Stochastics, Vol. 48 (2006) 53-65, © Institute of Mathematical Statistics, 2006, DOI: 10.1214/074921706000000077. Random walk in random scenery: A survey of some recent results. Frank den Hollander (Leiden University & EURANDOM) and Jeffrey E. Steif (Chalmers University of Technology). Abstract: In this paper we give a survey of some recent results for random walk in random scenery (RWRS). On Z^d, d ≥ 1, we are given a random walk with i.i.d. increments and a random scenery with i.i.d. components. The walk and the scenery are assumed to be independent. RWRS is the random process where time is indexed by Z, and at each unit of time both the step taken by the walk and the scenery value at the site that is visited are registered. We collect various results that classify the ergodic behavior of RWRS in terms of the characteristics of the underlying random walk (and discuss extensions to stationary walk increments and stationary scenery components as well). We describe a number of results for scenery reconstruction and close by listing some open questions. 1. Introduction. Random walk in random scenery is a family of stationary random processes exhibiting amazingly rich behavior. We will survey some of the results that have been obtained in recent years and list some open questions. Mike Keane has made fundamental contributions to this topic. As close colleagues it has been a great pleasure to work with him. We begin by defining the object of our study. Fix an integer d ≥ 1.
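The definition of RWRS translates directly into a few lines of simulation; the sketch below (d = 1, with ±1 steps and ±1 scenery values as illustrative choices) registers, at each unit of time, the step taken and the scenery value at the visited site.

```python
import numpy as np

rng = np.random.default_rng(0)

def rwrs(n):
    """Random walk in random scenery on Z: (steps, observed scenery)."""
    steps = rng.choice([-1, 1], size=n)              # i.i.d. walk increments
    sites = np.concatenate([[0], np.cumsum(steps)])  # visited sites
    lo, hi = sites.min(), sites.max()
    scenery = rng.choice([-1, 1], size=hi - lo + 1)  # i.i.d. scenery on Z
    observed = scenery[sites - lo]                   # scenery at each visit
    return steps, observed[1:]

steps, obs = rwrs(10_000)
print(np.sum(obs))  # partial sum of the scenery values seen by the walk
```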
  • Stable Marked Point Processes
    Stable Marked Point Processes. Tucker McElroy (U.S. Bureau of the Census and University of California, San Diego) and Dimitris N. Politis (University of California, San Diego). Abstract: In many contexts, such as queueing theory, spatial statistics, geostatistics and meteorology, data are observed at irregular spatial positions. One model of this situation is to consider the observation points as generated by a Poisson process. Under this assumption, we study the limit behavior of the partial sums of the marked point process {(t_i, X(t_i))}, where X(t) is a stationary random field and the points t_i are generated from an independent Poisson random measure N on R^d. We define the sample mean and sample variance statistics, and determine their joint asymptotic behavior in a heavy-tailed setting, thus extending some finite-variance results of Karr (1986). New results on subsampling in the context of a marked point process are also presented, with the application of forming a confidence interval for the unknown mean under an unknown degree of heavy tails. 1. Introduction. Random field data arise in diverse areas, such as spatial statistics, geostatistics, and meteorology, to name a few. It often happens that the observation locations of the data are irregularly spaced, which is a serious deviation from the typical formulation of random field theory where data are located at lattice points. One effective way of modeling the observation points is through a Poisson process. In Karr (1982, 1986), the statistical problem of mean estimation is addressed given a marked point process structure of the data where the observation locations are governed by a Poisson random measure N assumed independent of the distribution of the stationary random field itself.
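Under the paper's Poisson-sampling model, generating marked-point-process data and forming the sample mean and variance statistics is simple; in the sketch below, the stationary field X is a stand-in built from random cosine waves, an illustrative choice rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def marked_point_sample(rate, side=10.0):
    """Points of a homogeneous Poisson process on [0, side]^2, each
    carrying a mark X(t_i); here X is a smooth stationary field
    surrogate built from a few random cosine waves."""
    n = rng.poisson(rate * side**2)                  # Poisson point count
    pts = rng.uniform(0.0, side, size=(n, 2))        # uniform locations
    freqs = rng.standard_normal((8, 2))              # random wave vectors
    phases = rng.uniform(0.0, 2 * np.pi, size=8)
    marks = np.cos(pts @ freqs.T + phases).sum(axis=1) / np.sqrt(8)
    return pts, marks

pts, marks = marked_point_sample(rate=2.0)
print(len(pts), marks.mean(), marks.var())  # sample mean and variance statistics
```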
  • A Review on Statistical Inference Methods for Discrete Markov Random Fields Julien Stoehr, Richard Everitt, Matthew T
    A review on statistical inference methods for discrete Markov random fields. Julien Stoehr, Richard Everitt, Matthew T. Moores. 2017. HAL Id: hal-01462078, https://hal.archives-ouvertes.fr/hal-01462078v2; preprint submitted on 11 Apr 2017. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Julien Stoehr: School of Mathematical Sciences & Insight Centre for Data Analytics, University College Dublin, Ireland. Abstract: Developing satisfactory methodology for the analysis of Markov random fields is a very challenging task. Indeed, due to the Markovian dependence structure, the normalizing constant of the fields cannot be computed using standard analytical or numerical methods. This forms a central issue for any statistical approach, as the likelihood is an integral part of the procedure. Furthermore, such unobserved fields cannot be integrated out and the likelihood evaluation becomes a doubly intractable problem. This report gives an overview of some of the methods used in the literature to analyse such observed or unobserved random fields.
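To see why the normalizing constant is the obstruction, and one standard workaround that reviews of this kind cover, here is a sketch of Besag's pseudolikelihood for an Ising-type field: each factor conditions a site on its neighbours, so the intractable joint normalizing constant never appears (an illustrative sketch, not code from the report).

```python
import numpy as np

def ising_log_pseudolikelihood(x, beta):
    """Besag's log-pseudolikelihood for an Ising configuration x on a
    torus: sum over sites of log P(x_s | neighbours)."""
    s = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
         + np.roll(x, 1, 1) + np.roll(x, -1, 1))       # neighbour sums
    # log P(x_s | rest) = log sigmoid(2 * beta * x_s * s)
    return -np.sum(np.log1p(np.exp(-2.0 * beta * x * s)))

rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=(32, 32))
# Maximize over a grid of beta values: the pseudolikelihood estimate.
betas = np.linspace(0.0, 1.0, 21)
print(betas[np.argmax([ising_log_pseudolikelihood(x, b) for b in betas])])
```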
  • Critical Gaussian Multiplicative Chaos: Convergence of the Derivative Martingale
    Critical Gaussian multiplicative chaos: Convergence of the derivative martingale. Citation: Duplantier, Bertrand, et al. "Critical Gaussian Multiplicative Chaos: Convergence of the Derivative Martingale." The Annals of Probability 42, 5 (September 2014): 1769-1808. © 2014 Institute of Mathematical Statistics. DOI: 10.1214/13-AOP890. Citable link: http://hdl.handle.net/1721.1/115337. By Bertrand Duplantier (Institut de Physique Théorique), Rémi Rhodes (Université Paris-Dauphine), Scott Sheffield (Massachusetts Institute of Technology) and Vincent Vargas (Université Paris-Dauphine). Abstract: In this paper, we study Gaussian multiplicative chaos in the critical case. We show that the so-called derivative martingale, introduced in the context of branching Brownian motions and branching random walks, converges almost surely (in all dimensions) to a random measure with full support. We also show that the limiting measure has no atom. In connection with the derivative martingale, we write explicit conjectures about the glassy phase of log-correlated Gaussian potentials and the relation with the asymptotic expansion of the maximum of log-correlated Gaussian random variables.
  • A Random-Field Approach to Inference in Large Models of Network Formation
    A Random-Field Approach to Inference in Large Models of Network Formation. Michael P. Leung, Department of Economics, Stanford University, [email protected]. July 11, 2015. Note: Superseded by "A Weak Law for Moments of Pairwise-Stable Networks" and "Normal Approximation in Large Network Models". Abstract: We establish a law of large numbers and central limit theorem for a large class of network statistics, enabling inference in models of network formation with only a single network observation. Our model allows the decision of an individual (node) to form associations (links) to depend quite generally on the endogenous structure of the network. Formally, we prove that certain node-level functions of the network constitute α-mixing random fields, objects for which central limit theorems exist. The key assumptions are that (1) nodes endowed with similar attributes prefer to link (homophily); (2) there is enough "diversity" in node attributes; and (3) certain latent sets of nodes that are unconnected form their links independently ("isolated societies" do not "coordinate"). Our results enable the estimation of certain network moments that are useful for inference. We leverage these moments to construct moment inequalities that define bounds on the identified set. Relative to feasible alternatives, these bounds are sharper and computationally tractable under weaker restrictions on network externalities. JEL codes: C31, C57, D85, L14. Keywords: social networks, network formation, multiple equilibria, large-market asymptotics, random fields, α-mixing. Acknowledgments: I thank my advisors, Arun Chandrasekhar, Matt Jackson, and especially Han Hong, for their advice, criticism, and support and many helpful discussions. I also thank Marinho Bertanha, Tim Bresnahan, Michael Dickstein, Jessie Li, Xing Li, Josh Mollner, Frank Wolak, and participants at the Stanford structural I.O.