Looking for Poisson-Like Random Measures


Received: 18 March 2019 | DOI: 10.1002/mma.6224 | RESEARCH ARTICLE

Throwing stones and collecting bones: Looking for Poisson-like random measures

Caleb Deen Bastian (1), Grzegorz A. Rempala (2)

(1) Program in Applied and Computational Mathematics, Princeton University, Princeton, New Jersey
(2) Division of Biostatistics and Department of Mathematics, The Ohio State University, Columbus, Ohio

Correspondence: Caleb Deen Bastian, Program in Applied and Computational Mathematics, Washington Road, Fine Hall, Princeton University, Princeton, NJ 08544. Email: [email protected]

Communicated by: S. Wise

Funding information: National Science Foundation, Grant/Award Number: DMS 1853587

ABSTRACT. We show that in a broad class of random counting measures, one may identify only three that are rescaled versions of themselves when restricted to a subspace. These are the Poisson, binomial, and negative binomial random measures. We provide some simple examples of possible applications of such measures.

KEYWORDS: Laplace functional, Poisson-type (PT) distributions, random counting measure, stone throwing construction, strong invariance, thinning

MSC CLASSIFICATION: 60G57; 60G18; 60G55

1 INTRODUCTION

Random counting measures, also known as point processes, are the central objects of this note. For a general introduction see, for instance, the monographs by Kallenberg [1, 2] or Çınlar [3]. Random counting measures have numerous uses in statistics and applied probability, including the representation and construction of stochastic processes, Monte Carlo schemes, etc. For example, the Poisson random measure is a fundamental random counting measure that is related to the structure of Lévy processes, Markov jump processes, and the excursions of Brownian motion, and is prototypical of the class of completely random (additive) random measures [3]. In particular, it is well known that the Poisson random measure is self-similar in the sense of being invariant under restriction to a subspace (invariant under thinning). The binomial random measure is another fundamental random counting measure, which underlies the theory of autoregressive integer-valued processes [4, 5]. In this note, we explore a broad class of random counting measures to identify those that share the Poisson self-similarity property, and we discuss their possible applications.

The paper is organized as follows. In Section 2, we provide the necessary background and lay out the main mathematical results, whereas in Section 3, we give examples of possible applications in different areas of modern science, from epidemiology to consumer research to traffic flows. The main result of the note is Theorem 3, which identifies, in a broad class of random counting measures, those that are closed under restriction to subspaces, ie, invariant under thinning. They are the Poisson, negative binomial, and binomial random measures. We show that the corresponding counting distributions are the only distributions in the power series family that are invariant under thinning. We also give simple examples to highlight the calculus of PT random measures and their possible applications.

2 THROWING STONES AND LOOKING FOR BONES

Consider a measurable space $(E, \mathcal{E})$ with some collection $X = \{X_i\}$ of iid random variables (stones) with law $\nu$ and some non-negative integer-valued random variable $K$ ($K \in \mathbb{N}_{\ge 0} = \mathbb{N}_{> 0} \cup \{0\}$) with law $\kappa$ that is independent of $X$ and has finite mean $c$. Whenever it exists, the variance of $K$ is denoted by $\delta^2 > 0$. Let $\mathcal{E}_+$ be the set of positive $\mathcal{E}$-measurable functions. It is well known [3] that the random counting measure $N$ on $(E, \mathcal{E})$ is uniquely determined by the pair of deterministic probability measures $(\kappa, \nu)$ through the so-called stone throwing construction (STC), as follows. For every outcome $\omega \in \Omega$,

$$N_\omega(A) = N(\omega, A) = \sum_{i=1}^{K(\omega)} I_A(X_i(\omega)) \quad \text{for } A \in \mathcal{E}, \qquad (1)$$

where $K$ has law $\kappa$, the iid $X_1, X_2, \ldots$ have law $\nu$, and $I_A(\cdot)$ denotes the indicator function of the set $A$. Below, we write $N = (\kappa, \nu)$ to denote the random measure $N$ determined by $(\kappa, \nu)$ through the STC. We note that $N$ may also be regarded as a mixed binomial process [2]. In particular, when $\kappa$ is a Dirac measure, $N$ is a binomial process [2].
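To make the construction concrete, here is a minimal Monte Carlo sketch of the STC (1); it is an illustration, not from the paper. The choices $E = [0, 1]$ with $\nu$ uniform, $K$ Poisson with mean $c$, $A = [0, 0.3)$, and all function names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def stc_sample(sample_K, nu_sampler, indicator_A):
    """One draw of N(A) via the stone throwing construction (1):
    draw K, throw K iid stones with law nu, count those landing in A."""
    K = sample_K()
    stones = nu_sampler(K)
    return int(np.sum(indicator_A(stones)))

# Illustrative choices: E = [0, 1], nu = Uniform(0, 1), K ~ Poisson(c), A = [0, a)
c, a = 5.0, 0.3
draws = [
    stc_sample(lambda: rng.poisson(c),
               lambda k: rng.uniform(0.0, 1.0, size=k),
               lambda x: x < a)
    for _ in range(100_000)
]

# Mean check: E N(A) = c * nu(A), which is formula (2) below with f = I_A
print(np.mean(draws), "should be close to", c * a)
```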
Note that for any test function $f \in \mathcal{E}_+$,

$$N_\omega f = \sum_{i=1}^{K(\omega)} f \circ X_i(\omega) = \sum_{i=1}^{K(\omega)} f(X_i(\omega)).$$

Below, for brevity, we write $N f$, so that, eg, $N(A) = N I_A$. It follows from the above and the independence of $K$ and $X$ that

$$\mathbb{E}\, N f = c\, \nu f \qquad (2)$$

$$\operatorname{Var} N f = c\, \nu f^2 + (\delta^2 - c)(\nu f)^2, \qquad (3)$$

and that the Laplace functional of $N$ is

$$\mathbb{E}\, e^{-N f} = \mathbb{E}\left(\mathbb{E}\, e^{-f(X)}\right)^K = \mathbb{E}\left(\nu e^{-f}\right)^K = \psi(\nu e^{-f}),$$

where $\psi(t) = \mathbb{E}\, t^K$ is the probability generating function (pgf) of $K$. In what follows, we will also sometimes consider the alternate pgf (apgf), defined as $\tilde{\psi}(t) = \mathbb{E}(1 - t)^K$. Note also that for any measurable partition $\{A, \ldots, B\}$ of $E$, the joint distribution of the collection $N(A), \ldots, N(B)$ is, for $i, \ldots, j \in \mathbb{N}$ with $i + \cdots + j = k$,

$$P(N(A) = i, \ldots, N(B) = j) = P(N(A) = i, \ldots, N(B) = j \mid K = k)\, P(K = k) = \frac{k!}{i! \cdots j!}\, \nu(A)^i \cdots \nu(B)^j\, P(K = k). \qquad (4)$$

The following result extends the construction of a random measure $N = (K, \nu)$ to the case when the collection $X$ is expanded to $(X, Y) = \{(X_i, Y_i)\}$, where $Y_i$ is a random transformation of $X_i$. Heuristically, $Y_i$ represents some properties (marks) of $X_i$. We assume that the conditional law of $Y$ follows some transition kernel according to $P(Y \in B \mid X = x) = Q(x, B)$.

Theorem 1 (Marked STC). Consider the random measure $N = (K, \nu)$ and a transition probability kernel $Q$ from $(E, \mathcal{E})$ into $(F, \mathcal{F})$. Assume that, given the collection $X$, the variables $Y = \{Y_i\}$ are conditionally independent with $Y_i \sim Q(X_i, \cdot)$. Then $M = (K, \nu \times Q)$ is a random measure on $(E \times F, \mathcal{E} \otimes \mathcal{F})$. Here, $\mu = \nu \times Q$ is understood as $\mu(dx, dy) = \nu(dx)\, Q(x, dy)$. Moreover, for any $f \in (\mathcal{E} \otimes \mathcal{F})_+$,

$$\mathbb{E}\, e^{-M f} = \psi(\nu e^{-g}),$$

where $\psi(\cdot)$ is the pgf of $K$, and $g \in \mathcal{E}_+$ satisfies $e^{-g(x)} = \int_F Q(x, dy)\, e^{-f(x, y)}$.

The proof of this result is standard, but for convenience we provide it in the appendix. For any $A \subset E$ with $\nu(A) > 0$, define the conditional law $\nu_A$ by $\nu_A(B) = \nu(A \cap B) / \nu(A)$. The following is a simple consequence of Theorem 1 upon taking the transition kernel $Q(x, B) = I_A(x)\, \nu_A(B)$.

Corollary 1. $N_A = (N I_A, \nu_A)$ is a well-defined random measure on the measurable subspace $(E \cap A, \mathcal{E}_A)$, where $\mathcal{E}_A = \{A \cap B : B \in \mathcal{E}\}$. Moreover, for any $f \in \mathcal{E}_+$,

$$\mathbb{E}\, e^{-N_A f} = \psi(\nu e^{-f} I_A + b),$$

where $b = 1 - \nu(A)$.

In many practical situations, one is interested in analyzing random measures of the form $N = (K, \nu \times Q)$ while having some information about the restricted measure $N_A = (N I_A, \nu_A \times Q)$. Note that the counting variable of $N_A$ is $K_A = N I_A$, that is, the original counting variable $K$ restricted to (thinned by) the subset $A \subset E$. The purpose of this note is to identify the families of counting distributions of $K$ for which the family of random measures $\{N_A : A \subset E\}$ belongs to the same family of distributions. We refer to such families of counting distributions as "bones" and give their formal definition below. The term reflects the prototypical or foundational nature of these families within the class of random measures considered here.
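As a numerical illustration of Corollary 1 and of the thinning operation $K_A = N I_A$, the following sketch (with assumed parameter values; the Poisson choice of $\kappa$ anticipates the "bone" families below) checks that a Poisson counting variable thinned by $a = \nu(A)$ again behaves as a Poisson variable with mean $ac$, consistent with the moment formulas (2) and (3).

```python
import numpy as np

rng = np.random.default_rng(1)
c, a, n = 4.0, 0.25, 200_000  # mean of K, nu(A), Monte Carlo sample size (assumed values)

# K ~ Poisson(c); given K, each stone lands in A independently with
# probability a = nu(A), so the thinned count K_A = N I_A is Binomial(K, a).
K = rng.poisson(c, size=n)
K_A = rng.binomial(K, a)

# For Poisson K we have delta^2 = c, so by (2)-(3) with f = I_A:
#   E K_A = c*a  and  Var K_A = c*a  (mean equals variance, again Poisson)
print(K_A.mean(), "should be close to", c * a)
print(K_A.var(), "should be close to", c * a)
```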
One obvious example is the Poisson family of distributions, but it turns out that there are also others. The definitive result on the existence and uniqueness of random measures based on such "bones" in a broad class is given in Theorem 3 of Section 2.2.

2.1 Subset invariant families (bones)

Let $N = (\kappa_\theta, \nu)$ be the random measure on $(E, \mathcal{E})$, where $\kappa_\theta$ is the distribution of $K$ parametrized by $\theta > 0$, that is, $(P(K = k))_{k \ge 0} = (p_k(\theta))_{k \ge 0}$, where we assume $p_0(\theta) > 0$. For brevity, we write below $K \sim \kappa_\theta$. Consider the family of random variables $\{N I_A : A \subset E\}$ and let $\psi_A(t)$ be the pgf of $N I_A$, with $\psi_\theta(t) = \psi_E(t)$ being the pgf of $K$ (since $N I_E = K$). Let $a = \nu(A)$ and $b = 1 - a$, and note that

$$\psi_A(t) = \mathbb{E}\left(\mathbb{E}\, t^{I_A}\right)^K = \mathbb{E}(a t + b)^K = \psi_\theta(a t + b),$$

or equivalently, in terms of the apgf, $\tilde{\psi}_A(t) = \tilde{\psi}_\theta(a t)$.

Definition 1 (Bones). We say that the family $\{\kappa_\theta : \theta \in \Theta\}$ of counting probability measures is strongly invariant with respect to the family $\{N I_A : A \subset E\}$ (is a "bone") if for any $0 < a \le 1$ there exists a mapping $h_a : \Theta \to \Theta$ such that

$$\psi_\theta(a t + 1 - a) = \psi_{h_a(\theta)}(t). \qquad (5)$$

Note that in terms of the apgf, the above condition becomes simply $\tilde{\psi}_\theta(a t) = \tilde{\psi}_{h_a(\theta)}(t)$. In Table 1, we give some examples of such invariant ("bone") families.

TABLE 1: Some examples of "bone" distributions with corresponding pgfs and mappings of their canonical parameters.

Name      | Parameter $\theta$ | $\psi_\theta(t)$              | $h_a(\theta)$
Poisson   | $\lambda$          | $\exp[\theta(t - 1)]$         | $a\theta$
Bernoulli | $p/(1 - p)$        | $(1 + \theta t)/(1 + \theta)$ | $a\theta/(1 + (1 - a)\theta)$
Geometric | $p$                | $(1 - \theta)/(1 - t\theta)$  | $a\theta/(1 - (1 - a)\theta)$

2.2 Finding bones in the power series family

Consider the family $\{\kappa_\theta : \theta \in \Theta\}$ to be of the non-negative power series (NNPS) form, where

$$p_k(\theta) = a_k \theta^k / g(\theta) \qquad (6)$$

and $p_0 > 0$. We call an NNPS canonical if $a_0 = 1$. Setting $b = 1 - a$, we see that for a canonical NNPS, the bone condition in Definition 1 becomes

$$g((a t + b)\theta) = g(b\theta)\, g(h_a(\theta)\, t). \qquad (7)$$

The following is a fundamental result on the existence of "bones" in the NNPS family.

Theorem 2 (Bones in NNPS). Let $\nu$ be diffuse (ie, non-atomic). For a canonical NNPS $\kappa_\theta$ satisfying additionally $a_1 > 0$, relation (7) holds iff $\log g(\theta) = \theta$ or $\log g(\theta) = \pm c \log(1 \pm \theta)$, where $c > 0$.
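The entries of Table 1 can be checked symbolically. The following sympy sketch (an illustration of the table, not part of the paper's proof) verifies that each listed family satisfies the bone condition (5) with the stated parameter mapping $h_a(\theta)$.

```python
import sympy as sp

t, theta, a = sp.symbols('t theta a', positive=True)
b = 1 - a

# pgf psi_theta(t) and parameter map h_a(theta) for each row of Table 1
families = {
    'Poisson':   (sp.exp(theta * (t - 1)),       a * theta),
    'Bernoulli': ((1 + theta * t) / (1 + theta), a * theta / (1 + b * theta)),
    'Geometric': ((1 - theta) / (1 - t * theta), a * theta / (1 - b * theta)),
}

for name, (psi, h) in families.items():
    lhs = psi.subs(t, a * t + b)    # pgf of the thinned variable N I_A
    rhs = psi.subs(theta, h)        # same family at the mapped parameter
    print(name, sp.simplify(lhs - rhs) == 0)   # bone condition (5) holds
```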