6 Processes
6.1 Stochastic Process Definitions
Recommended publications
Cox Process Representation and Inference for Stochastic Reaction-Diffusion Processes
Schnoerr, D., Grima, R. & Sanguinetti, G. (2016), 'Cox process representation and inference for stochastic reaction–diffusion processes', Nature Communications, vol. 7, 11729. https://doi.org/10.1038/ncomms11729. Received 16 Dec 2015 | Accepted 26 Apr 2016 | Published 25 May 2016. Complex behaviour in many systems arises from the stochastic interactions of spatially distributed particles or agents. Stochastic reaction–diffusion processes are widely used to model such behaviour in disciplines ranging from biology to the social sciences, yet they are notoriously difficult to simulate and calibrate to observational data.
Poisson Representations of Branching Markov and Measure-Valued Branching Processes
The Annals of Probability 2011, Vol. 39, No. 3, 939–984. DOI: 10.1214/10-AOP574. © Institute of Mathematical Statistics, 2011. POISSON REPRESENTATIONS OF BRANCHING MARKOV AND MEASURE-VALUED BRANCHING PROCESSES. By Thomas G. Kurtz and Eliane R. Rodrigues, University of Wisconsin, Madison and UNAM. Representations of branching Markov processes and their measure-valued limits in terms of countable systems of particles are constructed for models with spatially varying birth and death rates. Each particle has a location and a "level," but unlike earlier constructions, the levels change with time. In fact, death of a particle occurs only when the level of the particle crosses a specified level r, or for the limiting models, hits infinity. For branching Markov processes, at each time t, conditioned on the state of the process, the levels are independent and uniformly distributed on [0,r]. For the limiting measure-valued process, at each time t, the joint distribution of locations and levels is conditionally Poisson distributed with mean measure K(t) × Λ, where Λ denotes Lebesgue measure, and K is the desired measure-valued process. The representation simplifies or gives alternative proofs for a variety of calculations and results including conditioning on extinction or nonextinction, Harris's convergence theorem for supercritical branching processes, and diffusion approximations for processes in random environments. 1. Introduction. Measure-valued processes arise naturally as infinite system limits of empirical measures of finite particle systems. A number of approaches have been developed which preserve distinct particles in the limit and which give a representation of the measure-valued process as a transformation of the limiting infinite particle system.
Markovian Bridges: Weak Continuity and Pathwise Constructions
The Annals of Probability 2011, Vol. 39, No. 2, 609–647. DOI: 10.1214/10-AOP562. © Institute of Mathematical Statistics, 2011. MARKOVIAN BRIDGES: WEAK CONTINUITY AND PATHWISE CONSTRUCTIONS. By Loïc Chaumont and Gerónimo Uribe Bravo, Université d'Angers and Universidad Nacional Autónoma de México. A Markovian bridge is a probability measure taken from a disintegration of the law of an initial part of the path of a Markov process given its terminal value. As such, Markovian bridges admit a natural parameterization in terms of the state space of the process. In the context of Feller processes with continuous transition densities, we construct by weak convergence considerations the only versions of Markovian bridges which are weakly continuous with respect to their parameter. We use this weakly continuous construction to provide an extension of the strong Markov property in which the flow of time is reversed. In the context of self-similar Feller processes, the last result is shown to be useful in the construction of Markovian bridges out of the trajectories of the original process. 1. Introduction and main results. 1.1. Motivation. The aim of this article is to study Markov processes on [0,t], starting at x, conditioned to arrive at y at time t. Historically, the first example of such a conditional law is given by Paul Lévy's construction of the Brownian bridge: given a Brownian motion B starting at zero, let

    b_s^{x,y,t} = x + B_s − (s/t) B_t + (y − x)(s/t).

(arXiv:0905.2155v3 [math.PR]. Received May 2009; revised April 2010.)
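A minimal numerical sketch of Lévy's bridge construction above, assuming a Brownian motion simulated by Gaussian increments on a uniform grid (the grid size, endpoints and function name are illustrative, not taken from the paper):

    import numpy as np

    def brownian_bridge(x, y, t, n_steps=1000, rng=None):
        # Levy's construction: b_s = x + B_s - (s/t) B_t + (y - x) s/t
        rng = np.random.default_rng() if rng is None else rng
        s = np.linspace(0.0, t, n_steps + 1)
        dB = rng.normal(0.0, np.sqrt(np.diff(s)))      # independent Gaussian increments
        B = np.concatenate(([0.0], np.cumsum(dB)))     # Brownian motion started at 0
        return s, x + B - (s / t) * B[-1] + (y - x) * (s / t)

    s, b = brownian_bridge(x=0.0, y=1.0, t=2.0)
    print(b[0], b[-1])    # pinned at x when s = 0 and at y when s = t

In this Gaussian case the conditioning on the terminal value reduces to the deterministic correction added to the free Brownian path.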
Scalable Nonparametric Bayesian Inference on Point Processes with Gaussian Processes
Scalable Nonparametric Bayesian Inference on Point Processes with Gaussian Processes. Yves-Laurent Kom Samo and Stephen Roberts, Department of Engineering Science and Oxford-Man Institute, University of Oxford. Abstract. In this paper we propose an efficient, scalable non-parametric Gaussian process model for inference on Poisson point processes. Our model does not resort to gridding the domain or to introducing latent thinning points. Unlike competing models that scale as O(n³) over n data points, our model has a complexity O(nk²) where k ≪ n. We propose an MCMC sampler and show that the model obtained is faster, more accurate and generates less correlated samples than competing approaches on both synthetic and real-life data. Finally, we show that our model easily handles data sizes not considered thus far by alternate approaches. 2. Related Work. Non-parametric inference on point processes has been extensively studied in the literature. Rathbun & Cressie (1994) and Moeller et al. (1998) used a finite-dimensional piecewise constant log-Gaussian for the intensity function. Such approximations are limited in that the choice of the grid on which to represent the intensity function is arbitrary and one has to trade off precision with computational complexity and numerical accuracy, with the complexity being cubic in the precision and exponential in the dimension of the input space. Kottas (2006) and Kottas & Sanso (2007) used a Dirichlet process mixture of Beta distributions as prior for the normalised intensity function of a Poisson process. Cunningham et al. …
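The following is not the model proposed in the paper; it is only a hedged illustration of the inference target, a Poisson point process with a non-uniform intensity, sampled by standard thinning (Lewis–Shedler). The intensity function, domain and bound below are illustrative assumptions:

    import numpy as np

    def sample_poisson_process(intensity, t_max, lam_max, rng=None):
        # Thinning: accept homogeneous candidates of rate lam_max with prob intensity(t)/lam_max
        rng = np.random.default_rng() if rng is None else rng
        n = rng.poisson(lam_max * t_max)
        cand = rng.uniform(0.0, t_max, size=n)
        keep = rng.uniform(0.0, lam_max, size=n) < intensity(cand)
        return np.sort(cand[keep])

    events = sample_poisson_process(lambda t: 2.0 + 1.5 * np.sin(t), t_max=20.0, lam_max=3.5)
    print(len(events), events[:5])

Inferring the unknown intensity from such event data is the problem the paper's Gaussian process model addresses.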
Superprocesses and McKean–Vlasov Equations with Creation of Mass
Superprocesses and McKean–Vlasov equations with creation of mass. L. Overbeck, Department of Statistics, University of California, Berkeley, 367 Evans Hall, Berkeley, CA 94720, U.S.A. (Supported by a Fellowship of the Deutsche Forschungsgemeinschaft; on leave from the Universität Bonn, Institut für Angewandte Mathematik, Wegelerstr. 6, 53115 Bonn, Germany.) Abstract. Weak solutions of McKean–Vlasov equations with creation of mass are given in terms of superprocesses. The solutions can be approximated by a sequence of non-interacting superprocesses or by the mean-field of multitype superprocesses with mean-field interaction. The latter approximation is associated with a propagation of chaos statement for weakly interacting multitype superprocesses. Running title: Superprocesses and McKean–Vlasov equations. 1. Introduction. Superprocesses are useful in solving nonlinear partial differential equations of the type Δf = f^(1+β), β ∈ (0,1], cf. [Dy]. We now change the point of view and show how they provide stochastic solutions of nonlinear partial differential equations of McKean–Vlasov type, i.e. we want to find weak solutions of

    ∂μ_t/∂t = Σ_{i,j=1..d} ∂²/∂x_i∂x_j [a_ij(x, μ_t) μ_t] + Σ_{i=1..d} ∂/∂x_i [b_i(x, μ_t) μ_t] + d(x, μ_t) μ_t.    (1.1)

A weak solution μ = (μ_s) ∈ C([0,T]; M(R^d)) satisfies

    μ_t(f) = μ_0(f) + ∫_0^t μ_s( Σ_{i,j} a_ij ∂²f/∂x_i∂x_j + Σ_i b_i ∂f/∂x_i + d f ) ds.

Equation (1.1) generalizes McKean–Vlasov equations of two different types.
Brownian Motion and the Heat Equation
Brownian motion and the heat equation. Denis Bell, University of North Florida. 1. The heat equation. Let the function u(t, x) denote the temperature in a rod at position x and time t. Then u(t, x) satisfies the heat equation

    ∂u/∂t = (1/2) ∂²u/∂x²,  t > 0.    (1)

It is easy to check that the Gaussian function

    u(t, x) = (1/√(2πt)) e^(−x²/2t)

satisfies (1). Let φ be any bounded continuous function and define

    u(t, x) = (1/√(2πt)) ∫_{−∞}^{∞} φ(y) e^(−(x−y)²/2t) dy.

Then u satisfies (1). Furthermore, making the substitution z = (x − y)/√t in the integral gives

    u(t, x) = (1/√(2π)) ∫_{−∞}^{∞} φ(x − z√t) e^(−z²/2) dz → (1/√(2π)) φ(x) ∫_{−∞}^{∞} e^(−z²/2) dz = φ(x)

as t ↓ 0. Thus

    u(t, x) = (1/√(2πt)) ∫_{−∞}^{∞} φ(y) e^(−(x−y)²/2t) dy = E[φ(X_t)],

where X_t is a N(x, t) random variable, solves the heat equation ∂u/∂t = (1/2) ∂²u/∂x² with initial condition u(0, ·) = φ. Note: the function u(t, x) is smooth in x for t > 0 even if φ is only continuous. 2. Brownian motion. In the nineteenth century, the botanist Robert Brown observed that a pollen particle suspended in liquid undergoes a strange erratic motion (caused by bombardment by molecules of the liquid). Letting w(t) denote the position of the particle in a fixed direction, the paths w typically look like this: [figure omitted: a typical jagged Brownian sample path]. N. Wiener constructed a rigorous mathematical model of Brownian motion in the 1930s.
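A quick numerical illustration of the probabilistic representation u(t, x) = E[φ(X_t)] above. The choice φ(x) = cos x and the values of t and x are only for illustration; for this φ the exact answer is cos(x) e^(−t/2):

    import numpy as np

    rng = np.random.default_rng(0)
    phi = lambda y: np.cos(y)           # a bounded continuous initial condition
    t, x = 0.5, 1.0

    # Monte Carlo estimate of E[phi(X_t)] with X_t ~ N(x, t)
    mc = phi(rng.normal(loc=x, scale=np.sqrt(t), size=1_000_000)).mean()

    # Heat-kernel convolution evaluated by a simple Riemann sum
    y = np.linspace(x - 10.0, x + 10.0, 20001)
    kernel = np.exp(-(x - y) ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    quad = np.sum(phi(y) * kernel) * (y[1] - y[0])

    print(mc, quad, np.cos(x) * np.exp(-t / 2.0))   # all three should agree closely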
Deep Neural Networks as Gaussian Processes
Published as a conference paper at ICLR 2018. DEEP NEURAL NETWORKS AS GAUSSIAN PROCESSES. Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein (Google Brain). ABSTRACT. It has long been known that a single-layer fully-connected neural network with an i.i.d. prior over its parameters is equivalent to a Gaussian process (GP), in the limit of infinite network width. This correspondence enables exact Bayesian inference for infinite width neural networks on regression tasks by means of evaluating the corresponding GP. Recently, kernel functions which mimic multi-layer random neural networks have been developed, but only outside of a Bayesian framework. As such, previous work has not identified that these kernels can be used as covariance functions for GPs and allow fully Bayesian prediction with a deep neural network. In this work, we derive the exact equivalence between infinitely wide deep networks and GPs. We further develop a computationally efficient pipeline to compute the covariance function for these GPs. We then use the resulting GPs to perform Bayesian inference for wide deep neural networks on MNIST and CIFAR-10. We observe that trained neural network accuracy approaches that of the corresponding GP with increasing layer width, and that the GP uncertainty is strongly correlated with trained network prediction error. We further find that test performance increases as finite-width trained networks are made wider and more similar to a GP, and thus that GP predictions typically outperform those of finite-width networks.
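As a hedged sketch of the kind of covariance computation described above, the following uses the known closed-form recursion for infinitely wide ReLU networks (the arc-cosine kernel). The depth and the weight/bias variances are illustrative assumptions, not values from the paper:

    import numpy as np

    def nngp_kernel(x1, x2, depth=3, sigma_w2=1.6, sigma_b2=0.1):
        # Deep ReLU network GP covariance via the arc-cosine recursion (a sketch).
        k11 = sigma_b2 + sigma_w2 * np.dot(x1, x1) / len(x1)   # layer-0 covariances
        k22 = sigma_b2 + sigma_w2 * np.dot(x2, x2) / len(x2)
        k12 = sigma_b2 + sigma_w2 * np.dot(x1, x2) / len(x1)
        for _ in range(depth):
            theta = np.arccos(np.clip(k12 / np.sqrt(k11 * k22), -1.0, 1.0))
            # E[relu(u) relu(v)] for centred Gaussian (u, v) with the current covariances
            k12 = sigma_b2 + sigma_w2 / (2 * np.pi) * np.sqrt(k11 * k22) * (
                np.sin(theta) + (np.pi - theta) * np.cos(theta))
            k11 = sigma_b2 + sigma_w2 * k11 / 2.0
            k22 = sigma_b2 + sigma_w2 * k22 / 2.0
        return k12

    x1, x2 = np.array([1.0, 0.0]), np.array([0.6, 0.8])
    print(nngp_kernel(x1, x2))

Evaluating this kernel on all pairs of training and test inputs is what makes exact GP regression with an "infinitely wide" network possible.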
Local Conditioning in Dawson–Watanabe Superprocesses
The Annals of Probability 2013, Vol. 41, No. 1, 385–443. DOI: 10.1214/11-AOP702. © Institute of Mathematical Statistics, 2013. LOCAL CONDITIONING IN DAWSON–WATANABE SUPERPROCESSES. By Olav Kallenberg, Auburn University. Consider a locally finite Dawson–Watanabe superprocess ξ = (ξ_t) in R^d with d ≥ 2. Our main results include some recursive formulas for the moment measures of ξ, with connections to the uniform Brownian tree, a Brownian snake representation of Palm measures, continuity properties of conditional moment densities, leading by duality to strongly continuous versions of the multivariate Palm distributions, and a local approximation of ξ_t by a stationary cluster η̃ with nice continuity and scaling properties. This all leads up to an asymptotic description of the conditional distribution of ξ_t for a fixed t > 0, given that ξ_t charges the ε-neighborhoods of some points x_1, ..., x_n ∈ R^d. In the limit as ε → 0, the restrictions to those sets are conditionally independent and given by the pseudo-random measures ξ̃ or η̃, whereas the contribution to the exterior is given by the Palm distribution of ξ_t at x_1, ..., x_n. Our proofs are based on the Cox cluster representations of the historical process and involve some delicate estimates of moment densities. 1. Introduction. This paper may be regarded as a continuation of [19], where we considered some local properties of a Dawson–Watanabe superprocess (henceforth referred to as a DW-process) at a fixed time t > 0. Recall that a DW-process ξ = (ξ_t) is a vaguely continuous, measure-valued diffusion process in R^d with Laplace functionals E_μ e^(−ξ_t f) = e^(−μ v_t) for suitable functions f ≥ 0, where v = (v_t) is the unique solution to the evolution equation v̇ = (1/2) Δv − v² with initial condition v_0 = f.
POISSON PROCESSES 1.1. The Rutherford-Chadwick-Ellis Experiment
POISSON PROCESSES. 1. THE LAW OF SMALL NUMBERS. 1.1. The Rutherford-Chadwick-Ellis Experiment. About 90 years ago Ernest Rutherford and his collaborators at the Cavendish Laboratory in Cambridge conducted a series of pathbreaking experiments on radioactive decay. In one of these, a radioactive substance was observed in N = 2608 time intervals of 7.5 seconds each, and the number of decay particles reaching a counter during each period was recorded. The table below shows the number N_k of these time periods in which exactly k decays were observed for k = 0, 1, 2, ..., 9. Also shown is N·p_k, where

    p_k = (3.87)^k exp(−3.87) / k!

The parameter value 3.87 was chosen because it is the mean number of decays/period for Rutherford's data.

    k    N_k    N·p_k        k      N_k    N·p_k
    0     57     54.4        6      273    253.8
    1    203    210.5        7      139    140.3
    2    383    407.4        8       45     67.9
    3    525    525.5        9       27     29.2
    4    532    508.4        ≥10     16     17.1
    5    408    393.5

This is typical of what happens in many situations where counts of occurrences of some sort are recorded: the Poisson distribution often provides an accurate – sometimes remarkably accurate – fit. Why? 1.2. Poisson Approximation to the Binomial Distribution. The ubiquity of the Poisson distribution in nature stems in large part from its connection to the Binomial and Hypergeometric distributions. The Binomial-(N, p) distribution is the distribution of the number of successes in N independent Bernoulli trials, each with success probability p.
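A short sketch, using only the Python standard library, that recomputes the fitted column N·p_k of the table above, including the ≥ 10 tail bucket (values should match up to rounding):

    from math import exp, factorial

    # N = 2608 intervals and the Poisson mean 3.87 decays/period are taken from the text.
    N, lam = 2608, 3.87

    def poisson_pmf(k, lam):
        return lam ** k * exp(-lam) / factorial(k)

    for k in range(10):
        print(k, round(N * poisson_pmf(k, lam), 1))

    # tail bucket ">= 10": N * P(K >= 10)
    print(">=10", round(N * (1 - sum(poisson_pmf(k, lam) for k in range(10))), 1))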
Patterns in Random Walks and Brownian Motion
Patterns in Random Walks and Brownian Motion. Jim Pitman and Wenpin Tang (Department of Statistics, University of California, 367 Evans Hall, Berkeley, CA 94720-3860, USA). Abstract. We ask if it is possible to find some particular continuous paths of unit length in linear Brownian motion. Beginning with a discrete version of the problem, we derive the asymptotics of the expected waiting time for several interesting patterns. These suggest corresponding results on the existence/non-existence of continuous paths embedded in Brownian motion. With further effort we are able to prove some of these existence and non-existence results by various stochastic analysis arguments. A list of open problems is presented. AMS 2010 Mathematics Subject Classification: 60C05, 60G17, 60J65. 1. Introduction and Main Results. We are interested in the question of embedding some continuous-time stochastic processes (Z_u; 0 ≤ u ≤ 1) into a Brownian path (B_t; t ≥ 0), without time-change or scaling, just by a random translation of origin in spacetime. More precisely, we ask the following: Question 1. Given some distribution of a process Z with continuous paths, does there exist a random time T such that (B_{T+u} − B_T; 0 ≤ u ≤ 1) has the same distribution as (Z_u; 0 ≤ u ≤ 1)? The question of whether external randomization is allowed to construct such a random time T is of no importance here. In fact, we can simply ignore Brownian …
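A toy discrete analogue of the waiting-time question: estimate by simulation the expected number of ±1 steps until the walk's last few increments match a fixed pattern. The pattern and simulation sizes are illustrative choices, not taken from the paper:

    import numpy as np

    rng = np.random.default_rng(1)
    pattern = (1, 1, -1, 1)            # an arbitrary increment pattern of length 4

    def waiting_time(pattern, rng, max_steps=10**6):
        recent = []
        for n in range(1, max_steps + 1):
            step = 1 if rng.random() < 0.5 else -1
            recent = (recent + [step])[-len(pattern):]
            if tuple(recent) == pattern:
                return n
        return max_steps

    times = [waiting_time(pattern, rng) for _ in range(2000)]
    print(np.mean(times))    # compare with the renewal-theory value (18 for this pattern)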
On Sampling from the Multivariate t Distribution, by Marius Hofert
On Sampling from the Multivariate t Distribution, by Marius Hofert. Abstract. The multivariate normal and the multivariate t distributions belong to the most widely used multivariate distributions in statistics, quantitative risk management, and insurance. In contrast to the multivariate normal distribution, the parameterization of the multivariate t distribution does not correspond to its moments. This, paired with a non-standard implementation in the R package mvtnorm, provides traps for working with the multivariate t distribution. In this paper, common traps are clarified and corresponding recent changes to mvtnorm are presented. Introduction. A supposedly simple task in statistics courses and related applications is to generate random variates from a multivariate t distribution in R. When teaching such courses, we found several fallacies one might encounter when sampling multivariate t distributions with the well-known R package mvtnorm; see Genz et al. (2013). These fallacies have recently led to improvements of the package (≥ 0.9-9996) which we present in this paper. To put them in the correct context, we first address the multivariate normal distribution. The multivariate normal distribution. The multivariate normal distribution can be defined in various ways, one is with its stochastic representation

    X = m + AZ,    (1)

where Z = (Z_1, ..., Z_k) is a k-dimensional random vector with Z_i, i ∈ {1, ..., k}, being independent standard normal random variables, A ∈ R^(d×k) is a (d, k)-matrix, and m ∈ R^d is the mean vector. The covariance matrix of X is S = AA^⊤ and the distribution of X (that is, the d-dimensional multivariate normal distribution) is determined solely by the mean vector m and the covariance matrix S; we can thus write X ∼ N_d(m, S).
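A small numerical sketch of representation (1) with A taken as a Cholesky factor, followed by the standard χ²-mixing step that extends it to the multivariate t. The numbers, and the use of numpy rather than R's mvtnorm, are illustrative assumptions, not the article's code:

    import numpy as np

    rng = np.random.default_rng(0)
    m = np.array([1.0, -2.0])                        # mean vector
    S = np.array([[2.0, 0.6],
                  [0.6, 1.0]])                       # covariance matrix S = A A^T
    A = np.linalg.cholesky(S)                        # lower-triangular factor

    Z = rng.standard_normal((100_000, 2))            # i.i.d. N(0, 1) components
    X = m + Z @ A.T                                  # rows are draws of X = m + A Z
    print(X.mean(axis=0), np.cov(X, rowvar=False))   # ~ m and ~ S

    # Standard extension to the multivariate t_nu: scale by sqrt(nu / W), W ~ chi^2_nu
    nu = 4.0
    W = rng.chisquare(nu, size=len(Z))
    T = m + (Z @ A.T) * np.sqrt(nu / W)[:, None]

The t extension is where the parameterization trap arises: the scale matrix S is not the covariance matrix of T (the covariance is S·ν/(ν−2) for ν > 2).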
Lecture 19: Polymer Models: Persistence and Self-Avoidance
Lecture 19: Polymer Models: Persistence and Self-Avoidance. Scribe: Allison Ferguson (and Martin Z. Bazant), Martin Fisher School of Physics, Brandeis University, April 14, 2005. Polymers are high molecular weight molecules formed by combining a large number of smaller molecules (monomers) in a regular pattern. Polymers in solution (i.e. uncrystallized) can be characterized as long flexible chains, and thus may be well described by random walk models of various complexity. 1. Simple Random Walk (IID Steps). Consider a Rayleigh random walk (i.e. a fixed step size a with a random angle θ). Recall (from Lecture 1) that the probability distribution function (PDF) of finding the walker at position R after N steps is given by

    P_N(R) ∼ e^(−3R²/2a²N) / (2πa²N/d)^(d/2)    (1)

Define R_N = |R| as the end-to-end distance of the polymer. Then from previous results we know ⟨R_N²⟩ = Na² and R̄_N = √⟨R_N²⟩ = a√N. We can define an additional quantity, the radius of gyration G_N, as the radius from the centre of mass (assuming the polymer has an approximately spherical shape). Hughes [1] gives an expression for this quantity in terms of ⟨R_N²⟩ as follows:

    ⟨G_N²⟩ = (1/6)(1 − 1/N²) ⟨R_N²⟩    (2)

As an example of the type of prediction which can be made using this type of simple model, consider the following experiment: pull on both ends of a polymer and measure the force f required to stretch it out. Note that the force will be given by f = −(∂F/∂R), where F = U − TS is the Helmholtz free energy (T is the temperature, U is the internal energy, and S is the entropy).
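A brief Monte Carlo check of ⟨R_N²⟩ = Na² for a fixed-step-length walk with isotropic random directions. Three dimensions, and the step length and sample sizes, are chosen only for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    a, N, samples = 1.0, 100, 20_000

    steps = rng.normal(size=(samples, N, 3))
    steps *= a / np.linalg.norm(steps, axis=2, keepdims=True)   # fixed length a, random direction
    R = steps.sum(axis=1)                                       # end-to-end vectors
    print(np.mean(np.sum(R**2, axis=1)), N * a**2)              # the two should be close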