POISSON PROCESSES


1. THE LAW OF SMALL NUMBERS

1.1. The Rutherford-Chadwick-Ellis Experiment. About 90 years ago Ernest Rutherford and his collaborators at the Cavendish Laboratory in Cambridge conducted a series of pathbreaking experiments on radioactive decay. In one of these, a radioactive substance was observed in N = 2608 time intervals of 7.5 seconds each, and the number of decay particles reaching a counter during each period was recorded. The table below shows the number N_k of these time periods in which exactly k decays were observed, for k = 0, 1, 2, ..., 9. Also shown is N p_k, where

    p_k = (3.87)^k exp{−3.87} / k!

The parameter value 3.87 was chosen because it is the mean number of decays per period in Rutherford's data.

     k   N_k   N p_k        k    N_k   N p_k
     0    57    54.4        6    273   253.8
     1   203   210.5        7    139   140.3
     2   383   407.4        8     45    67.9
     3   525   525.5        9     27    29.2
     4   532   508.4      ≥10     16    17.1
     5   408   393.5

This is typical of what happens in many situations where counts of occurrences of some sort are recorded: the Poisson distribution often provides an accurate – sometimes remarkably accurate – fit. Why?

1.2. Poisson Approximation to the Binomial Distribution. The ubiquity of the Poisson distribution in nature stems in large part from its connection to the Binomial and Hypergeometric distributions. The Binomial-(N, p) distribution is the distribution of the number of successes in N independent Bernoulli trials, each with success probability p. If p is small, then successes are rare events; but if N is correspondingly large, so that λ = Np is of moderate size, then there are just enough trials so that a few successes are likely.

Theorem 1. (Law of Small Numbers) If N → ∞ and p → 0 in such a way that Np → λ, then the Binomial-(N, p) distribution converges to the Poisson-λ distribution; that is, for each k = 0, 1, 2, ...,

(1)    (N choose k) p^k (1 − p)^{N−k}  →  λ^k e^{−λ} / k!

What does this have to do with the Rutherford-Chadwick-Ellis experiment? The radioactive substances that Rutherford was studying are composed of large numbers of individual atoms – typically on the order of N = 10^24. The chance that an individual atom will decay in a short time interval, and that the resulting decay particle will emerge in just the right direction so as to collide with the counter, is very small. Finally, the different atoms are at least approximately independent.¹

¹This isn't completely true, though, and so observable deviations from the Poisson distribution will occur in larger experiments.

Theorem 1 can be proved easily by showing that the probability generating functions of the binomial distributions converge to the generating function of the Poisson distribution. Alternatively, it is possible (and not very difficult) to show directly that the densities converge. For either proof, the following important analytic lemma is needed.

Lemma 2. If λ_n is a sequence of real numbers such that lim_{n→∞} λ_n = λ exists and is finite, then

(2)    lim_{n→∞} (1 − λ_n/n)^n = e^{−λ}.

Proof. Write the product on the left side as the exponential of a sum:

    (1 − λ_n/n)^n = exp{n log(1 − λ_n/n)}.

This makes it apparent that what we must really show is that n log(1 − λ_n/n) converges to −λ. Now λ_n/n converges to zero, so the log is being computed at an argument very near 1, where log 1 = 0. But near x = 1, the log function is very well approximated by its tangent line, which has slope 1 (because the derivative of log x is 1/x). Hence,

    n log(1 − λ_n/n) ≈ n(−λ_n/n) ≈ −λ.

To turn this into a rigorous proof, use the second term in the Taylor series for log to estimate the error in the approximation.
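Both the table and Theorem 1 are easy to check numerically. The following minimal Python sketch (assuming scipy is installed; the script and its variable names are illustrative, not part of the notes) recomputes the expected counts N·p_k for λ = 3.87 and measures how quickly the Binomial-(n, λ/n) probabilities approach the Poisson-λ probabilities.

```python
from scipy.stats import binom, poisson

N = 2608      # number of 7.5-second observation periods
lam = 3.87    # mean decays per period in Rutherford's data
observed = [57, 203, 383, 525, 532, 408, 273, 139, 45, 27, 16]  # k = 0..9, then >= 10

# Expected counts N * p_k; the last cell lumps the tail P{X >= 10}.
probs = [poisson.pmf(k, lam) for k in range(10)] + [poisson.sf(9, lam)]
for k, (Nk, pk) in enumerate(zip(observed, probs)):
    label = str(k) if k < 10 else ">=10"
    print(f"k = {label:>4}: observed {Nk:4d}, expected N*p_k = {N * pk:6.1f}")

# Theorem 1: Binomial-(n, lam/n) probabilities approach Poisson-(lam) probabilities.
for n in (10, 100, 10_000):
    gap = max(abs(binom.pmf(k, n, lam / n) - poisson.pmf(k, lam)) for k in range(25))
    print(f"n = {n:6d}: max_k |Binomial pmf - Poisson pmf| = {gap:.2e}")
```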
Exercise 1. Prove Theorem 1, either by showing that the generating functions converge or by showing that the densities converge.

The Law of Small Numbers is closely related to the next proposition, which shows that the exponential distribution is a limit of geometric distributions.

Proposition 3. Let T_n be a sequence of geometric random variables with parameters p_n, that is,

(3)    P{T_n > k} = (1 − p_n)^k    for k = 0, 1, 2, ...

If np_n → λ > 0 as n → ∞, then T_n/n converges in distribution to the exponential distribution with parameter λ.

Proof. Set λ_n = np_n; then λ_n → λ as n → ∞. Substitute λ_n/n for p_n in (3) and use Lemma 2 to get

    lim_{n→∞} P{T_n > nt} = lim_{n→∞} (1 − λ_n/n)^{[nt]} = exp{−λt}.

2. THINNING AND SUPERPOSITION PROPERTIES

Superposition Theorem. If Y_1, Y_2, ..., Y_n are independent Poisson random variables with means EY_i = λ_i, then

(4)    Σ_{i=1}^n Y_i ∼ Poisson(Σ_{i=1}^n λ_i).

There are various ways to prove this, none of them especially hard. For instance, you can use probability generating functions (Exercise). Alternatively, you can do a direct calculation of the probability density when n = 2, and then induct on n (Exercise). But the clearest way to see that the theorem must be true is to use the Law of Small Numbers. Consider, for definiteness, the case n = 2. Consider independent Bernoulli trials X_i with small success parameter p. Let N_1 = [λ_1/p] and N_2 = [λ_2/p] (here [x] means the integer part of x), and set N = N_1 + N_2. Clearly,

    Σ_{i=1}^{N_1} X_i ∼ Binomial-(N_1, p),
    Σ_{i=N_1+1}^{N_1+N_2} X_i ∼ Binomial-(N_2, p), and
    Σ_{i=1}^{N} X_i ∼ Binomial-(N, p).

The Law of Small Numbers implies that when p is small and N_1, N_2, and N are correspondingly large, the three sums above have distributions which are close to Poisson, with means λ_1, λ_2, and λ_1 + λ_2, respectively. Consequently, (4) has to be true when n = 2. It then follows by induction that it must be true for all n.

Thinning Theorem. Suppose that N ∼ Poisson(λ), and that X_1, X_2, ... are independent, identically distributed Bernoulli-p random variables independent of N. Let S_n = Σ_{i=1}^n X_i. Then S_N has the Poisson distribution with mean λp.

This is called the "Thinning Property" because, in effect, it says that if for each occurrence counted in N you toss a p-coin, and then record only those occurrences for which the coin toss is a Head, then you still end up with a Poisson random variable.

Proof. You can prove this directly, by evaluating P{S_N = k} (exercise), or by using generating functions, or by using the Law of Small Numbers (exercise). The best proof is by the Law of Small Numbers, in my view, and I think it worth one's while to try to understand the result from this perspective. But for variety, here is a generating function proof. First, recall that the generating function of the binomial distribution is E t^{S_n} = (q + pt)^n, where q = 1 − p. The generating function of the Poisson distribution with parameter λ is

    E t^N = Σ_{k=0}^∞ (tλ)^k e^{−λ} / k! = exp{λt − λ}.

Therefore,

    E t^{S_N} = Σ_{n=0}^∞ E t^{S_n} · λ^n e^{−λ} / n!
              = Σ_{n=0}^∞ (q + pt)^n λ^n e^{−λ} / n!
              = exp{(λq + λpt) − λ}
              = exp{λpt − λp},

and so it follows that S_N has the Poisson distribution with parameter λp.
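Both theorems are also easy to check by simulation. Here is a short Monte Carlo sketch (numpy assumed; the script and its names are illustrative, not part of the notes). It exploits the fact that, conditional on N = n, the thinned count S_N is Binomial-(n, p).

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 200_000

# Superposition: Y1 + Y2 with independent Poisson(lam1), Poisson(lam2)
# should match Poisson(lam1 + lam2) in distribution.
lam1, lam2 = 2.0, 3.5
total = rng.poisson(lam1, trials) + rng.poisson(lam2, trials)
print("superposition mean:", total.mean(), "vs", lam1 + lam2)
print("superposition var: ", total.var(), "vs", lam1 + lam2)

# Thinning: keep each of N ~ Poisson(lam) occurrences with probability p;
# the number kept, S_N, should be Poisson(lam * p).
lam, p = 6.0, 0.3
N = rng.poisson(lam, trials)
S = rng.binomial(N, p)          # S_N | N ~ Binomial(N, p)
print("thinned mean:", S.mean(), "vs", lam * p)
print("thinned var: ", S.var(), "vs", lam * p)
```

Matching both the empirical mean and the empirical variance to λp is a quick distributional sanity check, since a Poisson law has equal mean and variance.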
A similar argument can be used to prove the following generalization:

Generalized Thinning Theorem. Suppose that N ∼ Poisson(λ), and that X_1, X_2, ... are independent, identically distributed multinomial random variables with distribution Multinomial-(p_1, p_2, ..., p_m), that is,

    P{X_i = k} = p_k    for each k = 1, 2, ..., m.

Then the random variables N_1, N_2, ..., N_m defined by

    N_k = Σ_{i=1}^N 1{X_i = k}

are independent Poisson random variables with parameters E N_k = λ p_k.

Notation: The notation 1_A (or 1{...}) denotes the indicator function of the event A (or the event {...}), that is, the random variable that takes the value 1 when A occurs and 0 otherwise. Thus, in the statement of the theorem, N_k is the number of multinomial trials X_i among the first N that result in outcome k.

3. POISSON PROCESSES

Definition 1. A point process on the timeline [0, ∞) is a mapping J ↦ N_J = N(J) that assigns to each subset² J ⊂ [0, ∞) a nonnegative, integer-valued random variable N_J in such a way that if J_1, J_2, ... are pairwise disjoint then

(5)    N(∪_i J_i) = Σ_i N(J_i).

The counting process associated with the point process N(J) is the family of random variables {N_t = N(t)}_{t ≥ 0} defined by

(6)    N(t) = N((0, t]).

NOTE: The sample paths of N(t) are, by convention, always right-continuous, that is, N(t) = lim_{ε→0+} N(t + ε).

There is a slight risk of confusion in using the same letter N for both the point process and the counting process, but it isn't worth using a different letter for the two. The counting process N(t) has sample paths that are step functions, with jumps of integer sizes. The discontinuities represent occurrences in the point process.

²Actually, to each Borel measurable subset.
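To make Definition 1 concrete, here is a small Python sketch (numpy assumed; all names illustrative, not from the notes) of a point process on [0, T] given by a random finite set of points, together with its right-continuous counting process. The points are generated from exponential interarrival times, echoing Proposition 3; this is just one convenient way to scatter points, chosen in anticipation of the Poisson process.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, T = 2.0, 10.0

# Arrival times: cumulative sums of exponential interarrival times, kept in (0, T].
# (100 interarrivals with rate 2 are more than enough to cover [0, 10].)
gaps = rng.exponential(1.0 / lam, size=100)
arrivals = np.cumsum(gaps)
arrivals = arrivals[arrivals <= T]

def N(a, b):
    """Number of points of the point process in the interval (a, b]."""
    return int(np.sum((arrivals > a) & (arrivals <= b)))

def N_t(t):
    """Counting process N(t) = N((0, t]); a right-continuous step function."""
    return N(0.0, t)

# Additivity (5) on disjoint intervals: N((0,4]) + N((4,10]) = N((0,10]).
assert N(0, 4) + N(4, 10) == N(0, 10)
print("N(t) at t = 1, 2, ..., 10:", [N_t(t) for t in range(1, 11)])
```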