Simulating Gaussian Random Fields and Solving Stochastic Differential Equations Using Bounded Wiener Increments


A thesis submitted to the University of Manchester for the degree of Doctor of Philosophy in the Faculty of Engineering and Physical Sciences, 2013. Phillip Taylor, School of Mathematics.

Contents

Abstract; Declaration; Copyright Statement; Publications; Acknowledgements; List of abbreviations; Notation

1 Introduction
  1.1 Part I: Simulation of random fields
  1.2 Implementation of the circulant embedding method in the NAG Fortran Library
  1.3 Part II: Solution of SDEs

I Simulating Gaussian Random Fields

2 Stochastic Processes and Random Fields
  2.1 Random Variables
    2.1.1 The Gaussian distribution
  2.2 Stochastic Processes
    2.2.1 Martingales
    2.2.2 The Wiener process
    2.2.3 The Brownian bridge process
    2.2.4 Fractional Brownian Motion
    2.2.5 Simulating the Wiener process
  2.3 Random Fields

3 Linear Algebra
  3.1 Discrete Fourier Transforms
  3.2 Toeplitz and Circulant Matrices
  3.3 Embeddings of Toeplitz matrices into circulant matrices
  3.4 Block Toeplitz and Block Circulant Matrices
    3.4.1 Additional notation for BSTSTB and symmetric BTTB matrices
    3.4.2 Notation for padding
  3.5 Embeddings of BSTSTB and symmetric BTTB matrices into BSCSCB and symmetric BCCB matrices
  3.6 Matrix Factorisations

4 The Circulant Embedding Method
  4.1 Introduction
  4.2 The circulant embedding method in one dimension
  4.3 The circulant embedding method in two dimensions
  4.4 Example: Simulating Fractional Brownian Motion using the circulant embedding method
  4.5 Approximation in the case that a non-negative definite embedding matrix cannot be found
  4.6 Cases for which the embedding matrix is always non-negative definite
  4.7 Implementation in the NAG Fortran Library
    4.7.1 The NAG Library routines
    4.7.2 List of variograms for G05ZNF and G05ZRF
  4.8 Numerical examples
    4.8.1 Notes on the numerical examples
    4.8.2 Stringent tests

II Solving Stochastic Differential Equations

5 Background Material for SDEs
  5.1 Stochastic Differential Equations
    5.1.1 Stochastic integration
    5.1.2 Definition of SDEs
    5.1.3 The Itô formula
    5.1.4 Existence and uniqueness of solutions for SDEs
    5.1.5 Notation for stochastic integrals
    5.1.6 The Itô–Taylor and Stratonovich–Taylor Theorems
    5.1.7 Numerical schemes for solving SDEs
    5.1.8 Local truncation errors for numerical schemes
    5.1.9 Types of global error and order of a numerical scheme
    5.1.10 Stopping times
  5.2 Important results for pathwise error analysis of SDEs
    5.2.1 The Burkholder–Davis–Gundy inequalities and the Kolmogorov–Čentsov Theorem
    5.2.2 Bounds for stochastic integrals
    5.2.3 Local truncation error bounds for explicit numerical schemes
    5.2.4 The Dynkin formula
    5.2.5 Modulus of continuity of stochastic processes

6 Existing Variable Stepsize Methods
  6.1 Numerical Schemes
    6.1.1 Explicit numerical schemes
    6.1.2 Implicit numerical schemes
    6.1.3 Balanced schemes
  6.2 Variable stepsize methods
    6.2.1 Example to show non-convergence of the Euler–Maruyama scheme with a variable stepsize method
    6.2.2 Variable stepsize methods based on the Euler–Maruyama and Milstein schemes
    6.2.3 Variable stepsize methods based on other schemes
  6.3 Two examples of variable stepsize methods
    6.3.1 Method proposed by Gaines and Lyons
    6.3.2 Method proposed by Lamba
    6.3.3 A comparison of the two methods

7 Simulating a Wiener Process
  7.1 Deriving the PDE to solve for bounded diffusion
    7.1.1 Background
  7.2 Simulating the exit time for a one-dimensional Wiener process
    7.2.1 Separation of variables
    7.2.2 Using a Green's function
  7.3 Simulating the exit point of a Wiener process if τ = T_inc
    7.3.1 Separation of variables
    7.3.2 Using a Green's function
  7.4 Simulating the exit point and exit time of a one-dimensional Wiener process with unbounded stepsize and bounded stepsize
  7.5 Simulating the exit time of a multi-dimensional Wiener process
    7.5.1 Simulating the exit point of a multi-dimensional Wiener process if τ = T_inc
    7.5.2 Simulating the exit point of the multi-dimensional Wiener process if τ < T_inc
    7.5.3 The final algorithm for simulating the bounded Wiener process
  7.6 Using bounded increments to generate a path of the Wiener process

8 Solving SDEs using Bounded Wiener Increments
  8.1 Background
    8.1.1 Pathwise global error of order γ − 1/2 for numerical schemes in S_γ
    8.1.2 Pathwise global error of order γ for numerical schemes in S_γ
  8.2 The Bounded Increments with Error Control (BIEC) Method
  8.3 Numerical results
    8.3.1 The numerical methods used
    8.3.2 The numerical results
    8.3.3 Conclusions from the numerical experiments
    8.3.4 Future directions

Bibliography

Word count: 38867

List of Figures

4.1 Four realisations of the one-dimensional random field with variogram (4.11), with σ² = 1, ℓ = 0.01 and ν = 0.5.
4.2 Four realisations of the one-dimensional random field with variogram (4.11), with σ² = 1, ℓ = 0.01 and ν = 2.5.
4.3 Four realisations of the one-dimensional random field with variogram (4.11), with σ² = 1, ℓ = 0.002 and ν = 0.5.
4.4 Four realisations of the one-dimensional random field with variogram (4.11), with σ² = 1, ℓ = 0.002 and ν = 2.5.
4.5 Plots of the variogram (4.11) for the different values of ℓ and ν.
4.6 Four realisations of fractional Brownian motion with Hurst parameter H = 0.2.
4.7 Four realisations of fractional Brownian motion with Hurst parameter H = 0.8.
4.8 Four realisations of the two-dimensional random field with variogram (4.10), with σ² = 1, ℓ₁ = 0.25, ℓ₂ = 0.5 and ν = 1.1.
4.9 Four realisations of the two-dimensional random field with variogram (4.10), with σ² = 1, ℓ₁ = 0.5, ℓ₂ = 0.25 and ν = 1.8.
4.10 Four realisations of the two-dimensional random field with variogram (4.10), with σ² = 1, ℓ₁ = 0.05, ℓ₂ = 0.1 and ν = 1.1.
4.11 Four realisations of the two-dimensional random field with variogram (4.10), with σ² = 1, ℓ₁ = 0.1, ℓ₂ = 0.05 and ν = 1.8.
4.12 Plots of the variogram (4.10) in one dimension for different values of ℓ and ν.
6.1 An illustration of the Brownian tree, for K = 4.
6.2 An illustration to show how each element of the Wiener increments ∆W_{2k−1,ℓ+1} and ∆W_{2k,ℓ+1} is generated from ∆W_{k,ℓ}, for s = (k − 1)∆t_max/2^ℓ and t = k∆t_max/2^ℓ.
7.1 The possible cases for the exit points of X(t) from G, with m = 1, X(0) = 0 and G = (−r, r).
7.2 The possible cases for the behaviour of X(t), with m = 1, X(0) = 0 and G = (−r, r).
8.1 Numerical results over 1000 samples for geometric Brownian motion with d = m = 1, a = 0.1 and b = 0.1.
8.2 Numerical results over 1000 samples for geometric Brownian motion with d = m = 1, a = 1.5 and b = 0.1.
8.3 Numerical results over 1000 samples for geometric Brownian motion with d = m = 1, a = 0.1 and b = 1.2.
8.4 Numerical results over 1000 samples for geometric Brownian motion with d = m = 1, a = 1.5 and b = 1.2.
8.5 Numerical results over 1000 samples for geometric Brownian motion with d = m = 2.
8.6 Numerical results over 1000 samples for (8.29) with a = 1 and b = 0.1.
8.7 Numerical results over 1000 samples for (8.29) with a = 1 and b = 1.
8.8 Numerical results over 1000 samples for (8.31) with a = 0.2.
8.9 Numerical results over 1000 samples for (8.31) with a = 2.

Abstract

The University of Manchester. Phillip Taylor. Doctor of Philosophy. Simulating Gaussian Random Fields and Solving Stochastic Differential Equations using Bounded Wiener Increments. January 28, 2014.

This thesis is in two parts. Part I concerns simulation of random fields using the circulant embedding method, and Part II studies the numerical solution of stochastic differential equations (SDEs). A Gaussian random field Z(x) is a set of random variables parameterised by a variable x ∈ D ⊂ R^d for some d ≥ 1 which, when sampled at a finite set of sampling points, produces a random vector with a multivariate Gaussian distribution. Such a random vector has a covariance matrix A, which must be factorised as either A = R^T R for a real-valued matrix R, or A = R*R for a complex-valued matrix R, in order to take realisations of Z(x).
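The factorisation requirement stated in the abstract (A = R^T R or A = R*R) is what makes sampling possible in practice. Below is a minimal, illustrative Python/NumPy sketch of drawing realisations of a Gaussian random field at a finite set of sampling points via a Cholesky factorisation of the covariance matrix; the exponential covariance, length scale and jitter value are assumptions chosen for the example, and this is not the thesis's circulant embedding algorithm or the NAG Library routines it describes (those exploit FFTs applied to an embedding circulant matrix).

```python
import numpy as np
from scipy.spatial.distance import cdist

def sample_gaussian_field(points, cov_fn, n_samples=4, jitter=1e-10):
    """Draw realisations of a zero-mean Gaussian random field at the given
    sampling points by factorising its covariance matrix."""
    A = cov_fn(cdist(points, points))      # covariance matrix at the sampling points
    A += jitter * np.eye(len(points))      # tiny diagonal shift for numerical stability
    L = np.linalg.cholesky(A)              # A = L L^T, equivalent to the A = R^T R form above
    z = np.random.standard_normal((len(points), n_samples))
    return L @ z                           # each column is one realisation of Z(x)

# Example: exponential covariance (Matern with nu = 1/2), sigma^2 = 1,
# length scale 0.1 (assumed values for the illustration)
x = np.linspace(0.0, 1.0, 200)[:, None]
realisations = sample_gaussian_field(x, lambda r: np.exp(-r / 0.1))
```

For large grids the Cholesky approach costs O(n^3), which is exactly why the circulant embedding method studied in Part I replaces it with an FFT-based factorisation.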
Recommended publications
  • An Infinite Dimensional Central Limit Theorem for Correlated Martingales
Ann. I. H. Poincaré – PR 40 (2004) 167–196, www.elsevier.com/locate/anihpb. An infinite dimensional central limit theorem for correlated martingales. Ilie Grigorescu, Department of Mathematics, University of Miami, 1365 Memorial Drive, Ungar Building, Room 525, Coral Gables, FL 33146, USA. Received 6 March 2003; accepted 27 April 2003.
Abstract: The paper derives a functional central limit theorem for the empirical distributions of a system of strongly correlated continuous martingales at the level of the full trajectory space. We provide a general class of functionals for which the weak convergence to a centered Gaussian random field takes place. An explicit formula for the covariance is established and a characterization of the limit is given in terms of an inductive system of SPDEs. We also show a density theorem for a Sobolev-type class of functionals on the space of continuous functions. © 2003 Elsevier SAS. All rights reserved.
Résumé (translated): The paper derives a functional central limit theorem, at the level of the space of all continuous trajectories, for the empirical distributions of a system of strongly correlated martingales. We provide a general class of functions for which weak convergence to a centred Gaussian random field is established. An explicit formula for the covariance is determined, and a characterisation of the limit is given in terms of an inductive system of stochastic partial differential equations. We also show that the space of functions for which the fluctuation field converges is dense in a class of Sobolev-type functionals on the space of continuous trajectories.
  • Stochastic Differential Equations with Variational Wishart Diffusions
Stochastic Differential Equations with Variational Wishart Diffusions. Martin Jørgensen, Marc Peter Deisenroth, Hugh Salimbeni.
Abstract: We present a Bayesian non-parametric way of inferring stochastic differential equations for both regression tasks and continuous-time dynamical modelling. The work has high emphasis on the stochastic part of the differential equation, also known as the diffusion, and modelling it by means of Wishart processes. Further, we present a semi-parametric approach that allows the framework to scale to high dimensions. This successfully leads us onto how to model both latent and auto-regressive temporal systems with conditional heteroskedastic noise. We provide experimental evidence that modelling the diffusion often improves performance and that this randomness in the differential equation can be essential to avoid overfitting.
Why model the process noise? Assume that in the example above, the two states represent meteorological measurements: rainfall and wind speed. Both are influenced by confounders, such as atmospheric pressure, which are not measured directly. This effect can, in the case of the model in (1), only be modelled through the diffusion. Moreover, wind and rain may not correlate identically for all states of the confounders. Dynamical modelling with a focus on the noise term is not a new area of research. The most prominent one is the Auto-Regressive Conditional Heteroskedasticity (ARCH) model (Engle, 1982), which is central to scientific fields like econometrics, climate science and meteorology. The approach in these models is to estimate large process noise when the system is exposed to a shock, i.e. an unforeseen significant change in states.
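Since the excerpt points to ARCH as the classical example of conditional heteroskedastic noise, here is a minimal, illustrative ARCH(1) simulation sketch in Python; the parameter values omega and alpha are assumptions chosen for the example, not values from Engle (1982) or from the paper.

```python
import numpy as np

def simulate_arch1(n_steps, omega=0.1, alpha=0.6, seed=0):
    """Simulate an ARCH(1) process: the conditional variance is driven by the
    previous shock, so the process noise grows after a large change in state."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    sigma2 = np.full(n_steps, omega / (1.0 - alpha))   # start at the stationary variance
    for t in range(1, n_steps):
        sigma2[t] = omega + alpha * x[t - 1] ** 2      # variance depends on the last shock
        x[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return x, sigma2

returns, cond_var = simulate_arch1(1000)
```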
  • Probability on Graphs: Random Processes on Graphs and Lattices (PDF file of second edition, January 2018)
Probability on Graphs: Random Processes on Graphs and Lattices, Second Edition, 2018. Geoffrey Grimmett, Statistical Laboratory, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WB, United Kingdom. © Geoffrey Grimmett. 2000 MSC: (Primary) 60K35, 82B20, (Secondary) 05C80, 82B43, 82C22. With 56 Figures.
Contents: Preface; 1 Random Walks on Graphs (1.1 Random Walks and Reversible Markov Chains; 1.2 Electrical Networks; 1.3 Flows and Energy; 1.4 Recurrence and Resistance; 1.5 Pólya's Theorem; 1.6 Graph Theory; 1.7 Exercises); 2 Uniform Spanning Tree (2.1 Definition; 2.2 Wilson's Algorithm; 2.3 Weak Limits on Lattices; 2.4 Uniform Forest; 2.5 Schramm–Löwner Evolutions; 2.6 Exercises); 3 Percolation and Self-Avoiding Walks (3.1 Percolation and Phase Transition; 3.2 Self-Avoiding Walks; 3.3 Connective Constant of the Hexagonal Lattice; 3.4 Coupled Percolation; 3.5 Oriented Percolation; 3.6 Exercises); 4 Association and Influence (4.1 Holley Inequality; 4.2 FKG Inequality; 4.3 BK Inequality; 4.4 Hoeffding Inequality; 4.5 Influence for Product Measures; 4.6 Proofs of Influence Theorems; 4.7 Russo's Formula and Sharp Thresholds; 4.8 Exercises); 5 Further Percolation (5.1 Subcritical Phase; 5.2 Supercritical Phase; 5.3 Uniqueness of the Infinite Cluster; 5.4 Phase Transition; 5.5 Open Paths in Annuli; 5.6 The Critical Probability in Two Dimensions)
  • Mean Field Methods for Classification with Gaussian Processes
Mean field methods for classification with Gaussian processes. Manfred Opper, Neural Computing Research Group, Division of Electronic Engineering and Computer Science, Aston University, Birmingham B4 7ET, UK, opperm@aston.ac.uk. Ole Winther, Theoretical Physics II, Lund University, Sölvegatan 14 A, S-223 62 Lund, Sweden, and CONNECT, The Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, 2100 Copenhagen Ø, Denmark, winther@thep.lu.se.
Abstract: We discuss the application of TAP mean field methods known from the Statistical Mechanics of disordered systems to Bayesian classification models with Gaussian processes. In contrast to previous approaches, no knowledge about the distribution of inputs is needed. Simulation results for the Sonar data set are given.
1 Modeling with Gaussian Processes. Bayesian models which are based on Gaussian prior distributions on function spaces are promising non-parametric statistical tools. They have been recently introduced into the Neural Computation community (Neal 1996, Williams & Rasmussen 1996, Mackay 1997). To give their basic definition, we assume that the likelihood of the output or target variable T for a given input s ∈ R^N can be written in the form p(T|h(s)), where h : R^N → R is a priori assumed to be a Gaussian random field. If we assume fields with zero prior mean, the statistics of h is entirely defined by the second order correlations C(s, s') ≡ E[h(s)h(s')], where E denotes expectations with respect to the prior. Interesting examples are the covariance choices (1) and (2). The choice (1) can be motivated as a limit of a two-layered neural network with infinitely many hidden units with factorizable input-hidden weight priors (Williams 1997).
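As a small illustration of the setup described above (a zero-mean Gaussian process prior h combined with a likelihood p(T|h(s))), the following sketch draws latent functions from a GP prior and Monte Carlo averages a logistic likelihood to obtain prior class probabilities. The squared-exponential kernel and the logistic link are stand-ins chosen for the example, not the paper's covariance choices (1)-(2) or its TAP mean field approximation.

```python
import numpy as np

def prior_class_probabilities(S, kernel, n_mc=2000, seed=1):
    """Monte Carlo estimate of p(T=1|s) under a zero-mean GP prior on h and a
    logistic likelihood p(T=1|h(s)) = 1 / (1 + exp(-h(s)))."""
    rng = np.random.default_rng(seed)
    C = kernel(S[:, None], S[None, :]) + 1e-9 * np.eye(len(S))   # prior covariance C(s, s')
    h = rng.multivariate_normal(np.zeros(len(S)), C, size=n_mc)  # samples of the latent field
    return (1.0 / (1.0 + np.exp(-h))).mean(axis=0)               # average over the prior

# Illustrative squared-exponential kernel (not the paper's choices (1)-(2))
rbf = lambda a, b: np.exp(-0.5 * (a - b) ** 2)
S = np.linspace(-3.0, 3.0, 25)
probs = prior_class_probabilities(S, rbf)
```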
  • Particle Filter Based Monitoring and Prediction of Spatiotemporal Corrosion Using Successive Measurements of Structural Responses
    sensors Article Particle Filter Based Monitoring and Prediction of Spatiotemporal Corrosion Using Successive Measurements of Structural Responses Sang-ri Yi and Junho Song * Department of Civil and Environmental Engineering, Seoul National University, Seoul 08826, Korea; [email protected] * Correspondence: [email protected]; Tel.: +82-2-880-7369 Received: 11 October 2018; Accepted: 10 November 2018; Published: 13 November 2018 Abstract: Prediction of structural deterioration is a challenging task due to various uncertainties and temporal changes in the environmental conditions, measurement noises as well as errors of mathematical models used for predicting the deterioration progress. Monitoring of deterioration progress is also challenging even with successive measurements, especially when only indirect measurements such as structural responses are available. Recent developments of Bayesian filters and Bayesian inversion methods make it possible to address these challenges through probabilistic assimilation of successive measurement data and deterioration progress models. To this end, this paper proposes a new framework to monitor and predict the spatiotemporal progress of structural deterioration using successive, indirect and noisy measurements. The framework adopts particle filter for the purpose of real-time monitoring and prediction of corrosion states and probabilistic inference of uncertain and/or time-varying parameters in the corrosion progress model. In order to infer deterioration states from sparse indirect inspection data, for example structural responses at sensor locations, a Bayesian inversion method is integrated with the particle filter. The dimension of a continuous domain is reduced by the use of basis functions of truncated Karhunen-Loève expansion. The proposed framework is demonstrated and successfully tested by numerical experiments of reinforcement bar and steel plates subject to corrosion.
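To make the filtering idea above concrete, here is a minimal bootstrap particle filter sketch in Python; the random-walk deterioration model, the Gaussian measurement likelihood and all parameter values are hypothetical placeholders, not the paper's corrosion model or its Bayesian inversion and Karhunen-Loève machinery.

```python
import numpy as np

def bootstrap_particle_filter(observations, transition, likelihood, n_particles=500, seed=0):
    """Minimal bootstrap particle filter: propagate particles through the process
    model, weight them by the measurement likelihood, then resample."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)        # illustrative prior over the hidden state
    estimates = []
    for y in observations:
        particles = transition(particles, rng)            # predict: propagate the process model
        weights = likelihood(y, particles)                 # update: weight by the indirect measurement
        weights /= weights.sum()
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]                         # resample to avoid weight degeneracy
        estimates.append(particles.mean())
    return np.array(estimates)

# Hypothetical random-walk deterioration model and Gaussian measurement noise
transition = lambda x, rng: x + 0.05 + 0.02 * rng.standard_normal(x.size)
likelihood = lambda y, x: np.exp(-0.5 * ((y - x) / 0.1) ** 2)
obs = 0.05 * np.arange(1, 21) + 0.1 * np.random.default_rng(1).standard_normal(20)
state_estimates = bootstrap_particle_filter(obs, transition, likelihood)
```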
  • Probability on Graphs Random Processes on Graphs and Lattices
Probability on Graphs: Random Processes on Graphs and Lattices. Geoffrey Grimmett, Statistical Laboratory, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WB, United Kingdom. © G. R. Grimmett 1/4/10, 17/11/10, 5/7/12. 2000 MSC: (Primary) 60K35, 82B20, (Secondary) 05C80, 82B43, 82C22. With 44 Figures.
Contents: Preface; 1 Random walks on graphs (1.1 Random walks and reversible Markov chains; 1.2 Electrical networks; 1.3 Flows and energy; 1.4 Recurrence and resistance; 1.5 Pólya's theorem; 1.6 Graph theory; 1.7 Exercises); 2 Uniform spanning tree (2.1 Definition; 2.2 Wilson's algorithm; 2.3 Weak limits on lattices; 2.4 Uniform forest; 2.5 Schramm–Löwner evolutions; 2.6 Exercises); 3 Percolation and self-avoiding walk (3.1 Percolation and phase transition; 3.2 Self-avoiding walks; 3.3 Coupled percolation; 3.4 Oriented percolation; 3.5 Exercises); 4 Association and influence (4.1 Holley inequality; 4.2 FKG inequality; 4.3 BK inequality; 4.4 Hoeffding inequality; 4.5 Influence for product measures; 4.6 Proofs of influence theorems; 4.7 Russo's formula and sharp thresholds; 4.8 Exercises); 5 Further percolation (5.1 Subcritical phase; 5.2 Supercritical phase; 5.3 Uniqueness of the infinite cluster; 5.4 Phase transition; 5.5 Open paths in annuli; 5.6 The critical probability in two dimensions; 5.7 Cardy's formula; 5.8 The
  • Recent Advances in Percolation Theory and Its Applications (arXiv:1504.02898v2 [cond-mat.stat-mech], 7 Jun 2015). Keywords: percolation, explosive percolation, SLE, Ising model, Earth topography
Recent advances in percolation theory and its applications. Abbas Ali Saberi. (a) Department of Physics, University of Tehran, P.O. Box 14395-547, Tehran, Iran; (b) School of Particles and Accelerators, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran, Iran.
Abstract: Percolation is the simplest fundamental model in statistical mechanics that exhibits phase transitions signaled by the emergence of a giant connected component. Despite its very simple rules, percolation theory has successfully been applied to describe a large variety of natural, technological and social systems. Percolation models serve as important universality classes in critical phenomena, characterized by a set of critical exponents which correspond to a rich fractal and scaling structure of their geometric features. We will first outline the basic features of the ordinary model. Over the years a variety of percolation models has been introduced, some of which have completely different scaling and universal properties from the original model, with either continuous or discontinuous transitions depending on the control parameter, dimensionality and the type of the underlying rules and networks. We will try to take a glimpse at a number of selective variations, including the Achlioptas process, the half-restricted process and the spanning cluster-avoiding process, as examples of the so-called explosive percolation. We will also introduce non-self-averaging percolation and discuss correlated percolation and bootstrap percolation with special emphasis on their recent progress. The directed percolation process will also be discussed as a prototype of systems displaying a nonequilibrium phase transition into an absorbing state. In the past decade, after the invention of the stochastic Löwner evolution (SLE) by Oded Schramm, two-dimensional (2D) percolation has become a central problem in probability theory, leading to the two recent Fields medals.
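A quick way to see the "giant connected component" described above is to simulate site percolation on a square lattice and track the largest-cluster fraction as the occupation probability crosses the threshold. This is a minimal sketch; the 200 x 200 lattice size, nearest-neighbour connectivity and the chosen probabilities are assumptions for the illustration.

```python
import numpy as np
from scipy.ndimage import label

def largest_cluster_fraction(L, p, rng):
    """Fraction of sites in the largest connected cluster of site percolation
    on an L x L square lattice with occupation probability p."""
    occupied = rng.random((L, L)) < p
    labels, n_clusters = label(occupied)            # nearest-neighbour connectivity
    if n_clusters == 0:
        return 0.0
    sizes = np.bincount(labels.ravel())[1:]         # drop the background label 0
    return sizes.max() / L**2

rng = np.random.default_rng(0)
for p in (0.3, 0.5, 0.59, 0.7):                     # p_c is roughly 0.5927 for 2D site percolation
    print(p, largest_cluster_fraction(200, p, rng))
```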
  • A Representation for Functionals of Superprocesses Via Particle Picture
Stochastic Processes and their Applications 64 (1996) 173-186. A representation for functionals of superprocesses via particle picture. Raisa E. Feldman, Srikanth K. Iyer. Department of Statistics and Applied Probability, University of California at Santa Barbara, CA 93106-3110, USA, and Department of Industrial Engineering and Management, Technion – Israel Institute of Technology, Israel. Received October 1995; revised May 1996.
Abstract: A superprocess is a measure valued process arising as the limiting density of an infinite collection of particles undergoing branching and diffusion. It can also be defined as a measure valued Markov process with a specified semigroup. Using the latter definition and explicit moment calculations, Dynkin (1988) built multiple integrals for the superprocess. We show that the multiple integrals of the superprocess defined by Dynkin arise as weak limits of linear additive functionals built on the particle system.
Keywords: Superprocesses; Additive functionals; Particle system; Multiple integrals; Intersection local time. AMS classification: primary 60J55, 60H05, 60J80; secondary 60F05, 60G57, 60F17.
1. Introduction. We study a class of functionals of a superprocess that can be identified as the multiple Wiener–Itô integrals of this measure valued process. More precisely, our objective is to study these via the underlying branching particle system, thus providing means of thinking about the functionals in terms of more simple, intuitive and visualizable objects. This way of looking at multiple integrals of a stochastic process has its roots in the previous work of Adler and Epstein (=Feldman) (1987), Epstein (1989), Adler (1989), Feldman and Rachev (1993), Feldman and Krishnakumar (1994).
  • Markov Random Fields and Stochastic Image Models
Markov Random Fields and Stochastic Image Models. Charles A. Bouman, School of Electrical and Computer Engineering, Purdue University. Phone: (317) 494-0340; Fax: (317) 494-3358; email [email protected]. Available from: http://dynamo.ecn.purdue.edu/~bouman/. Tutorial presented at the 1995 IEEE International Conference on Image Processing, 23-26 October 1995, Washington, D.C. Special thanks to: Ken Sauer, Department of Electrical Engineering, University of Notre Dame; Suhail Saquib, School of Electrical and Computer Engineering, Purdue University.
Overview of Topics:
1. Introduction
2. The Bayesian Approach
3. Discrete Models: (a) Markov Chains; (b) Markov Random Fields (MRF); (c) Simulation; (d) Parameter estimation
4. Application of MRF's to Segmentation: (a) The Model; (b) Bayesian Estimation; (c) MAP Optimization; (d) Parameter Estimation; (e) Other Approaches
5. Continuous Models: (a) Gaussian Random Process Models (i. Autoregressive (AR) models; ii. Simultaneous AR (SAR) models; iii. Gaussian MRF's; iv. Generalization to 2-D); (b) Non-Gaussian MRF's (i. Quadratic functions; ii. Non-Convex functions; iii. Continuous MAP estimation; iv. Convex functions); (c) Parameter Estimation (i. Estimation of σ; ii. Estimation of T and p parameters)
6. Application to Tomography: (a) Tomographic system and data models; (b) MAP Optimization; (c) Parameter estimation
7. Multiscale Stochastic Models: (a) Continuous models; (b) Discrete models
8. High Level Image Models
References in Statistical Image Modeling: 1. Overview references [100, 89, 50, 54, 162, 4, 44]; 2. Type of Random Field Model (a) Discrete Models i.; 4. Simulation and Stochastic Optimization Methods [118, 80, 129, 100, 68, 141, 61, 76, 62, 63]; 5. Computational Methods used with MRF Models
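As a small companion to the "Discrete Models / Simulation" topics listed in the outline above, here is a generic Gibbs-sampling sketch for an Ising-type Markov random field; it is not code from the tutorial, and the lattice size, coupling beta and sweep count are assumptions chosen for the illustration.

```python
import numpy as np

def gibbs_ising(L=32, beta=0.8, sweeps=50, seed=0):
    """Gibbs sampling of an Ising-type Markov random field on an L x L lattice:
    each site is resampled from its conditional distribution given its 4 neighbours."""
    rng = np.random.default_rng(seed)
    x = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                nb = (x[(i - 1) % L, j] + x[(i + 1) % L, j]
                      + x[i, (j - 1) % L] + x[i, (j + 1) % L])
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))   # P(x_ij = +1 | neighbours)
                x[i, j] = 1 if rng.random() < p_plus else -1
    return x

field = gibbs_ising()
```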
  • Random Walk in Random Scenery (RWRS)
IMS Lecture Notes–Monograph Series, Dynamics & Stochastics, Vol. 48 (2006) 53–65. © Institute of Mathematical Statistics, 2006. DOI: 10.1214/074921706000000077. Random walk in random scenery: A survey of some recent results. Frank den Hollander (Leiden University & EURANDOM) and Jeffrey E. Steif (Chalmers University of Technology).
Abstract: In this paper we give a survey of some recent results for random walk in random scenery (RWRS). On Z^d, d ≥ 1, we are given a random walk with i.i.d. increments and a random scenery with i.i.d. components. The walk and the scenery are assumed to be independent. RWRS is the random process where time is indexed by Z, and at each unit of time both the step taken by the walk and the scenery value at the site that is visited are registered. We collect various results that classify the ergodic behavior of RWRS in terms of the characteristics of the underlying random walk (and discuss extensions to stationary walk increments and stationary scenery components as well). We describe a number of results for scenery reconstruction and close by listing some open questions.
1. Introduction. Random walk in random scenery is a family of stationary random processes exhibiting amazingly rich behavior. We will survey some of the results that have been obtained in recent years and list some open questions. Mike Keane has made fundamental contributions to this topic. As close colleagues it has been a great pleasure to work with him. We begin by defining the object of our study. Fix an integer d ≥ 1.
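Following the definition above, here is a minimal simulation sketch of RWRS for d = 1; the choice of ±1 walk increments and ±1 scenery values is an assumption made for the illustration, and at each time step the walker's step and the scenery value at the newly visited site are recorded.

```python
import numpy as np

def simulate_rwrs(n_steps=1000, seed=0):
    """Random walk in random scenery on Z: record, at each unit of time,
    the step taken by the walk and the scenery value at the visited site."""
    rng = np.random.default_rng(seed)
    steps = rng.choice([-1, 1], size=n_steps)          # i.i.d. walk increments
    positions = np.cumsum(steps)                       # sites visited at times 1..n_steps
    span = int(np.abs(positions).max())
    scenery = rng.choice([-1, 1], size=2 * span + 1)   # i.i.d. scenery on {-span, ..., span}
    observed = scenery[positions + span]               # scenery value registered at each time
    return steps, observed

steps, scenery_seen = simulate_rwrs()
```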
  • Applications of Random Fields and Geometry: Foundations and Case Studies
Note: Even if Google sent you here, what follows is NOT the book RANDOM FIELDS AND GEOMETRY published with Springer in 2007, but rather a companion volume, still under production, that gives a simpler version of the theory of the first book as well as lots of applications. You can find the original Random Fields and Geometry on the Springer site. Meanwhile, enjoy what is available of the second volume, and keep in mind that we are happy to receive all comments, suggestions and, most of all, corrections.
Applications of RANDOM FIELDS AND GEOMETRY: Foundations and Case Studies. Robert J. Adler, Faculty of Industrial Engineering and Management and Faculty of Electrical Engineering, Technion – Israel Institute of Technology, Haifa, Israel, e-mail: [email protected]. Jonathan E. Taylor, Department of Statistics, Stanford University, Stanford, California, e-mail: [email protected]. Keith J. Worsley, Department of Mathematics and Statistics, McGill University, Montréal, Canada, and Department of Statistics, University of Chicago, Chicago, Illinois, e-mail: [email protected]. May 20, 2015.
Preface: Robert J. Adler (Haifa, Israel; ie.technion.ac.il/Adler.phtml), Jonathan E. Taylor (Stanford, California; www-stat.stanford.edu/~jtaylo; research supported in part by NSF grant DMS-0405970), Keith Worsley (Montreal, Canada; www.math.mcgill.ca/keith).
Contents: 0 Preface; 1 Introduction (1.1 An Initial Example; 1.2 Gaussian and Related Random Fields; 1.3 Shape and the Euler Characteristic; 1.4 The Large-Scale Structure of the Universe (1.4.1 Survey Data; 1.4.2 Designing and Testing a Model); 1.5 Brain Imaging