Lecture 5: Intrinsic GMRFs (Gaussian Markov Random Fields)


David Bolin, Chalmers University of Technology
February 2, 2015

Introduction

Intrinsic GMRFs (IGMRFs)
Intrinsic GMRFs are
• improper, i.e., their precision matrices are not of full rank;
• often used as prior distributions in various applications.
Of particular importance are IGMRFs that are invariant to any trend that is a polynomial in the locations of the nodes, up to a specific order. Today we will define IGMRFs and look at some popular constructions:
• random walk models on the line
• ICAR models on regular lattices
• Besag models on irregular graphs

Definition of IGMRFs — Improper GMRFs

Definition (Improper GMRF)
Let Q be an n × n SPSD matrix with rank n − k > 0. Then x = (x_1, ..., x_n)^T is an improper GMRF of rank n − k with parameters (µ, Q) if its density is

    π(x) = (2π)^{-(n-k)/2} (|Q|*)^{1/2} exp( -(1/2) (x - µ)^T Q (x - µ) ).    (1)

Further, x is an improper GMRF with respect to the labelled graph G = (V, E), where

    Q_ij ≠ 0  ⇔  {i, j} ∈ E,  for all i ≠ j.

Here |·|* denotes the generalised determinant: the product of the non-zero eigenvalues.

Comments
• We will continue to call (µ, Q) the "mean" and "precision", even though these quantities formally do not exist.
• The precision matrix has at least one zero eigenvalue.
• An improper GMRF is always proper on an appropriate subspace.
• We define sampling from an IGMRF as sampling from the proper density on that subspace.
• The eigenvectors corresponding to the zero eigenvalues define directions that the GMRF "does not care about".
• This is the important property: we can use this indifference!
• If we use an improper GMRF as a prior, the posterior is typically proper.

Example: GMRFs under linear constraints
Consider a GMRF x under the hard constraint Ax = a. We can define the density π*(x) = π(x | Ax = a) for any x ∈ R^n.
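The generalised determinant and the improper density (1) are straightforward to evaluate numerically. Below is a minimal numpy sketch (illustrative, not from the lecture; the function name `improper_gmrf_logdensity` and the eigenvalue tolerance are my own choices), using a first-order random-walk precision whose null space is spanned by the constant vector:

```python
import numpy as np

def improper_gmrf_logdensity(x, mu, Q, tol=1e-10):
    """Log-density of an improper GMRF with parameters (mu, Q), cf. eq. (1).

    |Q|* is computed as the product of the eigenvalues exceeding `tol`.
    """
    eigvals = np.linalg.eigvalsh(Q)          # Q is symmetric PSD
    nonzero = eigvals[eigvals > tol]
    rank = len(nonzero)                       # n - k
    log_gdet = np.sum(np.log(nonzero))        # log |Q|*
    r = x - mu
    return -0.5 * rank * np.log(2 * np.pi) + 0.5 * log_gdet - 0.5 * r @ Q @ r

# A rank-(n-1) precision: first-order random-walk structure, kappa = 1
n = 5
D = np.diff(np.eye(n), axis=0)               # (n-1) x n first-difference matrix
Q = D.T @ D
print(np.linalg.matrix_rank(Q))               # 4, i.e. n - 1

x = np.arange(n, dtype=float)
# Adding a constant (a null-space direction) leaves the density unchanged:
ld0 = improper_gmrf_logdensity(x, 0.0, Q)
ld1 = improper_gmrf_logdensity(x + 7.3, 0.0, Q)
print(np.isclose(ld0, ld1))                   # True
```

The invariance check is exactly the "indifference" property above: the quadratic form does not change along directions in the null space of Q.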
The null space of the precision of π*(x) is spanned by the columns of A^T. Write x = x̃ + x̂, where x̃ lies in the null space of the precision matrix and x̂ in the orthogonal subspace. We then have

    π*(x) = π*(x̂) = π(x | Ax = a).

Thus π* is invariant to the addition of any x̃ from the space spanned by the rows of A.

Markov properties
The Markov properties are interpreted as those obtained in the limit of a proper density. Let the columns of A^T span the null space of Q and define

    Q(γ) = Q + γ A^T A.

Each element of Q(γ) tends to the corresponding element of Q as γ → 0. The full conditional expectation

    E(x_i | x_{-i}) = µ_i - (1/Q_ii) Σ_{j: j∼i} Q_ij (x_j - µ_j)

is interpreted as the limit γ → 0.

Polynomial IGMRFs
The order of an IGMRF is the rank deficiency of its precision matrix. We use the term polynomial IGMRF of order k if the field is invariant to the addition of a polynomial of order < k. An intrinsic GMRF of first order is an improper GMRF of rank n − 1 with Q1 = 0. The condition Q1 = 0 means that

    Σ_j Q_ij = 0,  i = 1, ..., n.

Local behaviour
Consider a first-order IGMRF. With µ = 0, we have

    E(x_i | x_{-i}) = -(1/Q_ii) Σ_{j: j∼i} Q_ij x_j,

and

    -Σ_{j: j∼i} Q_ij / Q_ii = 1.

• The conditional mean of x_i is a weighted mean of its neighbours.
• There is no shrinkage towards an overall level. This is important!
• We can concentrate on the deviation from any overall level without having to specify what it is.
• This is a perfect property for a prior.
• Many IGMRFs are constructed so that the deviation from the overall level is a smooth curve in time or a smooth surface in space.
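The two first-order conditions above, Q1 = 0 and conditional-mean weights summing to one, can be checked directly on a small example. A numpy sketch (illustrative, not from the slides), again using a first-order random-walk structure matrix with κ = 1:

```python
import numpy as np

# For a first-order IGMRF the rows of Q sum to zero, so the conditional mean
# of an interior node is a weighted mean of its neighbours with weights
# -Q_ij / Q_ii that sum to one (no shrinkage towards an overall level).

n = 6
D = np.diff(np.eye(n), axis=0)     # first differences
Q = D.T @ D                         # RW1-type structure matrix (kappa = 1)

print(np.allclose(Q @ np.ones(n), 0))        # True: Q1 = 0

i = 3                                         # an interior node
neighbours = [j for j in range(n) if j != i and Q[i, j] != 0]
weights = [-Q[i, j] / Q[i, i] for j in neighbours]
print(neighbours, weights)                    # [2, 4] [0.5, 0.5]
print(np.isclose(sum(weights), 1.0))          # True
```

For this model each interior conditional mean is simply the average of the two neighbouring values, matching the full conditionals given later for the RW1 model.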
IGMRFs on the line — Forward differences

Building IGMRFs on the line

Definition (Forward difference)
Define the first-order forward difference of a function f(·) as

    Δf(z) = f(z + 1) − f(z).

Higher-order forward differences are defined recursively, Δ^k f(z) = Δ(Δ^{k−1} f(z)), so that

    Δ²f(z) = f(z + 2) − 2f(z + 1) + f(z),    (2)

and in general, for k = 1, 2, ...,

    Δ^k f(z) = (−1)^k Σ_{j=0}^{k} (−1)^j (k choose j) f(z + j).

IGMRFs on the line — On the line with regular locations

IGMRFs of first order on the line
The location of node i is i (think "time"). Assume independent increments

    Δx_i ~ iid N(0, κ^{−1}),  i = 1, ..., n − 1,    (3)

so that

    x_j − x_i ~ N(0, (j − i) κ^{−1})  for i < j.    (4)

The density for x is

    π(x | κ) ∝ κ^{(n−1)/2} exp( −(κ/2) Σ_{i=1}^{n−1} (Δx_i)² )    (5)
             = κ^{(n−1)/2} exp( −(κ/2) Σ_{i=1}^{n−1} (x_{i+1} − x_i)² ),    (6)

or

    π(x) ∝ κ^{(n−1)/2} exp( −(κ/2) x^T R x ).    (7)

The structure matrix
The structure matrix is

    R = [  1 −1                ]
        [ −1  2 −1             ]
        [    −1  2 −1          ]
        [        ⋱  ⋱  ⋱       ]    (8)
        [          −1  2 −1    ]
        [             −1  1    ]

• The n − 1 independent increments ensure that the rank of Q is n − 1.
• We denote this model by RW1(κ), or RW1 for short.

Properties
The full conditionals are

    x_i | x_{−i}, κ ~ N( (x_{i−1} + x_{i+1})/2, 1/(2κ) ),  1 < i < n.    (9)

If the intersection of {i, ..., j} and {k, ..., l} is empty for i < j and k < l, then

    Cov(x_j − x_i, x_l − x_k) = 0.    (10)

These properties coincide with those of a Wiener process.

Example (figures)
• Samples with n = 99 and κ = 1, obtained by conditioning on the constraint Σ_i x_i = 0.
• Marginal variances Var(x_i) for i = 1, ..., n.
• Correlations Corr(x_{n/2}, x_i) for i = 1, ..., n.

An alternative first-order IGMRF
Let

    π(x) ∝ κ^{(n−1)/2} exp( −(κ/2) Σ_{i=1}^{n} (x_i − x̄)² ),    (11)

where x̄ is the empirical mean of x. Assume n is even and

    x_i = 0 for 1 ≤ i ≤ n/2,  x_i = 1 for n/2 < i ≤ n.    (12)

Thus x is locally constant with two levels. Evaluating the density at this configuration under the RW1 model and under the alternative (11), we obtain

    κ^{(n−1)/2} exp(−κ/2)  and  κ^{(n−1)/2} exp(−nκ/8),    (13)

respectively. The log-ratio of the densities is therefore of order O(n).

The local structure of the RW1 model
• The RW1 model only penalises the local deviation from a constant level.
• The alternative penalises the global deviation from a constant level.
This local behaviour is advantageous in applications if the mean level of x is only approximately or locally constant. A similar argument also applies to polynomial IGMRFs of higher order constructed using forward differences of order k as independent Gaussian increments.

IGMRFs on the line — RW2 model for regular locations

The RW2 model for regular locations
Let s_i = i for i = 1, ..., n, with a constant distance between consecutive nodes. Use the second-order increments

    Δ²x_i ~ iid N(0, κ^{−1}),  i = 1, ..., n − 2,    (14)

to define the joint density of x:

    π(x) ∝ κ^{(n−2)/2} exp( −(κ/2) Σ_{i=1}^{n−2} (x_i − 2x_{i+1} + x_{i+2})² )    (15)
         = κ^{(n−2)/2} exp( −(κ/2) x^T R x ).    (16)

The structure matrix
The structure matrix is now

    R = [  1 −2  1                   ]
        [ −2  5 −4  1                ]
        [  1 −4  6 −4  1             ]
        [       ⋱  ⋱  ⋱  ⋱  ⋱        ]
        [          1 −4  6 −4  1     ]
        [             1 −4  5 −2     ]
        [                1 −2  1     ]

Remarks
The conditional mean and precision are

    E(x_i | x_{−i}, κ) = (4/6)(x_{i+1} + x_{i−1}) − (1/6)(x_{i+2} + x_{i−2}),
    Prec(x_i | x_{−i}, κ) = 6κ,

respectively, for interior nodes 2 < i < n − 1.
• Verify directly that QS₁ = 0, where the columns of S₁ span constant and linear trends, and that the rank of Q is n − 2.
• This is an IGMRF of second order: it is invariant to adding a line to x.
• The model is known as the second-order random walk model, denoted RW2(κ) or simply RW2.

Example (figures)
• Samples with n = 99 and κ = 1, obtained by conditioning on the constraints.
• Marginal variances Var(x_i) for i = 1, ..., n.
• Correlations Corr(x_{n/2}, x_i) for i = 1, ..., n.

Smoothness and limiting properties
The forward difference of kth order is an approximation to the kth derivative of f(z):

    f′(z) = lim_{h→0} ( f(z + h) − f(z) ) / h.

We can think of the smoothness of the field in the limit as the distance between the points goes to zero relative to the domain size. In the limit we are asking that the kth derivative is not too large.
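The RW2 construction can be verified numerically in a few lines. A numpy sketch (illustrative, not from the lecture): build the second-difference map D₂, form R = D₂ᵀD₂, and check that R matches the structure matrix above, has rank n − 2, and annihilates any line a + b·s:

```python
import numpy as np

# The RW2 structure matrix is R = D2' D2, where D2 maps x to its
# second-order forward differences. Its null space contains constants and
# linear trends, so the rank is n - 2.

n = 8
D2 = np.diff(np.eye(n), n=2, axis=0)         # (n-2) x n second differences
R = D2.T @ D2

print(R[0, :3])                               # [ 1. -2.  1.]
print(R[1, :4])                               # [-2.  5. -4.  1.]
print(R[2, :5])                               # [ 1. -4.  6. -4.  1.]
print(np.linalg.matrix_rank(R))               # 6, i.e. n - 2

s = np.arange(1, n + 1, dtype=float)
line = 2.0 + 0.5 * s                          # an arbitrary line a + b*s
print(np.allclose(R @ line, 0))               # True: invariant to adding a line
```

The first three rows printed above reproduce the corner and interior stencils (1, −2, 1), (−2, 5, −4, 1) and (1, −4, 6, −4, 1) of the displayed structure matrix, and the last check is exactly the second-order invariance property.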