Received: 18 March 2019 · DOI: 10.1002/mma.6224 · RESEARCH ARTICLE

Throwing stones and collecting bones: Looking for Poisson-like random measures

Caleb Deen Bastian (1), Grzegorz A. Rempala (2)

(1) Program in Applied and Computational Mathematics, Princeton University, Princeton, New Jersey
(2) Division of Biostatistics and Department of Mathematics, The Ohio State University, Columbus, Ohio

Correspondence: Caleb Deen Bastian, Program in Applied and Computational Mathematics, Washington Road, Fine Hall, Princeton University, Princeton, NJ 08544. Email: [email protected]

Abstract: We show that in a broad class of random counting measures, one may identify only three that are rescaled versions of themselves when restricted to a subspace. These are the Poisson, binomial, and negative binomial random measures. We provide some simple examples of possible applications of such measures.

KEYWORDS: Laplace functional, Poisson-type (PT) distributions, random counting measure, stone throwing construction, strong invariance, thinning

MSC CLASSIFICATION: 60G57; 60G18; 60G55

Communicated by: S. Wise

Funding information: National Science Foundation, Grant/Award Number: DMS 1853587

1 INTRODUCTION

Random counting measures, also known as point processes, are the central objects of this note. For a general introduction, see, for instance, the monographs by Olav1,2 or Erhan.3 Random counting measures have numerous uses in statistics and applied probability, including the representation and construction of stochastic processes, Monte Carlo schemes, etc. For example, the Poisson random measure is a fundamental random counting measure that is related to the structure of Lévy processes, Markov jump processes, and the excursions of Brownian motion, and is prototypical of the class of completely random (additive) random measures.3 In particular, it is also well known that the Poisson random measure is self-similar in the sense of being invariant under restriction to a subspace (invariant under thinning).
The binomial random measure is another fundamental random counting measure that underlies the theory of autoregressive integer-valued processes.4,5 In this note, we explore a broad class of random counting measures to identify those that share the Poisson self-similarity property, and we discuss their possible applications.

The paper is organized as follows. In Section 2, we provide the necessary background and lay out the main mathematical results, whereas in Section 3, we give examples of possible applications in different areas of the modern sciences, from epidemiology to consumer research to traffic flows. The main result of the note is Theorem 3, which identifies, in a broad class of random counting measures, those that are closed under restriction to subspaces, ie, invariant under thinning. They are the Poisson, negative binomial, and binomial random measures. We show that the corresponding counting distributions are the only distributions in the power series family that are invariant under thinning. We also give simple examples to highlight the calculus of PT random measures and their possible applications.

© 2020 John Wiley & Sons, Ltd. wileyonlinelibrary.com/journal/mma. Math Meth Appl Sci. 2020;43:4658-4668.

2 THROWING STONES AND LOOKING FOR BONES

Consider a measurable space $(E, \mathcal{E})$ with some collection $X = \{X_i\}$ of iid random variables (stones) with law $\nu$ and some non-negative integer-valued random variable $K$ ($K \in \mathbb{N}_{\ge 0} = \mathbb{N}_{>0} \cup \{0\}$) with law $\kappa$ that is independent of $X$ and has finite mean $c$. Whenever it exists, the variance of $K$ is denoted by $\delta^2 > 0$. Let $\mathcal{E}_+$ be the set of positive $\mathcal{E}$-measurable functions. It is well known3 that the random counting measure $N$ on $(E, \mathcal{E})$ is uniquely determined by the pair of deterministic probability measures $(\kappa, \nu)$ through the so-called stone throwing construction (STC) as follows.
For every outcome $\omega \in \Omega$,
$$N_\omega(A) = N(\omega, A) = \sum_{i=1}^{K(\omega)} I_A(X_i(\omega)) \quad \text{for } A \in \mathcal{E}, \qquad (1)$$
where $K$ has law $\kappa$, the iid $X_1, X_2, \ldots$ have law $\nu$, and $I_A(\cdot)$ denotes the indicator function of the set $A$. Below, we write $N = (\kappa, \nu)$ to denote the random measure $N$ determined by $(\kappa, \nu)$ through the STC. We note that $N$ may also be regarded as a mixed binomial process.2 In particular, when $\kappa$ is the Dirac measure, then $N$ is a binomial process.2 Note that on any test function $f \in \mathcal{E}_+$,
$$N_\omega f = \sum_{i=1}^{K(\omega)} f \circ X_i(\omega) = \sum_{i=1}^{K(\omega)} f(X_i(\omega)).$$
Below, for brevity, we write $Nf$, so that, eg, $N(A) = N I_A$. It follows from the above and the independence of $K$ and $X$ that
$$\mathbb{E}\, Nf = c\, \nu f \qquad (2)$$
$$\operatorname{Var} Nf = c\, \nu f^2 + (\delta^2 - c)(\nu f)^2, \qquad (3)$$
and that the Laplace functional of $N$ is
$$\mathbb{E}\, e^{-Nf} = \mathbb{E}\left(\mathbb{E}\, e^{-f(X)}\right)^K = \mathbb{E}\,(\nu e^{-f})^K = \psi(\nu e^{-f}),$$
where $\psi(t) = \mathbb{E}\, t^K$ is the probability generating function (pgf) of $K$. In what follows, we will also sometimes consider the alternate pgf (apgf) defined as $\tilde{\psi}(t) = \mathbb{E}\,(1-t)^K$. Note also that for any measurable partition of $E$, say $\{A, \ldots, B\}$, the joint distribution of the collection $N(A), \ldots, N(B)$ is, for $i, \ldots, j \in \mathbb{N}$ with $i + \cdots + j = k$,
$$P(N(A) = i, \ldots, N(B) = j) = P(N(A) = i, \ldots, N(B) = j \mid K = k)\, P(K = k) = \frac{k!}{i! \cdots j!}\, \nu(A)^i \cdots \nu(B)^j\, P(K = k). \qquad (4)$$

The following result extends the construction of a random measure $N = (K, \nu)$ to the case when the collection $X$ is expanded to $(X, Y) = \{(X_i, Y_i)\}$, where $Y_i$ is a random transformation of $X_i$. Heuristically, $Y_i$ represents some properties (marks) of $X_i$. We assume that the conditional law of $Y$ follows some transition kernel according to $P(Y \in B \mid X = x) = Q(x, B)$.

Theorem 1 (Marked STC). Consider the random measure $N = (K, \nu)$ and the transition probability kernel $Q$ from $(E, \mathcal{E})$ into $(F, \mathcal{F})$. Assume that, given the collection $X$, the variables $Y = \{Y_i\}$ are conditionally independent with $Y_i \sim Q(X_i, \cdot)$. Then, $M = (K, \nu \times Q)$ is a random measure on $(E \times F, \mathcal{E} \otimes \mathcal{F})$. Here, $\mu = \nu \times Q$ is understood as $\mu(dx, dy) = \nu(dx)\, Q(x, dy)$. Moreover, for any $f \in (\mathcal{E} \otimes \mathcal{F})_+$,
$$\mathbb{E}\, e^{-Mf} = \psi(\nu e^{-g}),$$
where $\psi(\cdot)$ is the pgf of $K$, and $g \in \mathcal{E}_+$ satisfies $e^{-g(x)} = \int_F Q(x, dy)\, e^{-f(x, y)}$.
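As an illustration (not part of the paper), formulas (2) and (3) can be checked by direct simulation of the STC. The sketch below makes the arbitrary choices $K \sim \text{Poisson}(\lambda)$ (so $c = \delta^2 = \lambda$), $\nu = \text{Uniform}(0, 1)$, and $f(x) = x^2$:

```python
import numpy as np

# Sketch: simulate the stone throwing construction N = (kappa, nu) and
# compare empirical moments of Nf with formulas (2) and (3).
# Assumed (illustrative) choices: K ~ Poisson(lam), nu = Uniform(0,1), f(x) = x^2.
rng = np.random.default_rng(0)

lam = 4.0                       # K ~ Poisson(lam): c = lam, delta^2 = lam
f = lambda x: x**2              # test function f in E_+
n_rep = 50_000

def sample_Nf():
    K = rng.poisson(lam)                   # throw K stones...
    X = rng.uniform(0.0, 1.0, size=K)      # ...iid with law nu
    return f(X).sum()                      # Nf = sum_i f(X_i)

samples = np.array([sample_Nf() for _ in range(n_rep)])

nu_f  = 1.0 / 3.0               # nu f   = int_0^1 x^2 dx
nu_f2 = 1.0 / 5.0               # nu f^2 = int_0^1 x^4 dx
c, delta2 = lam, lam

mean_theory = c * nu_f                               # formula (2)
var_theory  = c * nu_f2 + (delta2 - c) * nu_f**2     # formula (3)

print(samples.mean(), mean_theory)
print(samples.var(), var_theory)
```

For the Poisson choice the second term of (3) vanishes since $\delta^2 = c$; picking, say, a geometric $K$ instead exercises the $(\delta^2 - c)(\nu f)^2$ correction.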
The proof of this result is standard, but for convenience, we provide it in the appendix.

For any $A \subset E$ with $\nu(A) > 0$, define the conditional law $\nu_A$ by $\nu_A(B) = \nu(A \cap B)/\nu(A)$. The following is a simple consequence of Theorem 1 upon taking the transition kernel $Q(x, B) = I_A(x)\, \nu_A(B)$.

Corollary 1. $N_A = (N I_A, \nu_A)$ is a well-defined random measure on the measurable subspace $(E \cap A, \mathcal{E}_A)$, where $\mathcal{E}_A = \{A \cap B : B \in \mathcal{E}\}$. Moreover, for any $f \in \mathcal{E}_+$,
$$\mathbb{E}\, e^{-N_A f} = \psi(\nu e^{-f} I_A + b),$$
where $b = 1 - \nu(A)$.

In many practical situations, one is interested in analyzing random measures of the form $N = (K, \nu \times Q)$ while having some information about the restricted measure $N_A = (N I_A, \nu_A \times Q)$. Note that the counting variable for $N_A$ is $K_A = N I_A$, the original counting variable $K$ restricted to (thinned by) the subset $A \subset E$. The purpose of this note is to identify the families of counting distributions $K$ for which the family of random measures $\{N_A : A \subset E\}$ belongs to the same family of distributions. We refer to such families of counting distributions as "bones" and give their formal definition below. The term reflects the prototypical or foundational nature of these families within the class of random measures considered here. One obvious example is the Poisson family of distributions, but it turns out that there are also others. The definitive result on the existence and uniqueness of random measures based on such "bones" in a broad class is given in Theorem 3 of Section 2.2.

2.1 Subset invariant families (bones)

Let $N = (\kappa_\theta, \nu)$ be the random measure on $(E, \mathcal{E})$, where $\kappa_\theta$ is the distribution of $K$ parametrized by $\theta > 0$, that is, $P(K = k)_{k \ge 0} = (p_k(\theta))_{k \ge 0}$, where we assume $p_0(\theta) > 0$. For brevity, we write below $K \sim \kappa_\theta$. Consider the family of random variables $\{N I_A : A \subset E\}$ and let $\psi_A(t)$ be the pgf of $N I_A$, with $\psi_\theta(t) = \psi_E(t)$ being the pgf of $K$ (since $N I_E = K$). Let $a = \nu(A)$, $b = 1 - a$, and note that
$$\psi_A(t) = \mathbb{E}\left(\mathbb{E}\, t^{I_A}\right)^K = \mathbb{E}\,(at + b)^K = \psi_\theta(at + b),$$
or equivalently, in terms of the apgf, $\tilde{\psi}_A(t) = \tilde{\psi}_\theta(at)$.
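The identity $\psi_A(t) = \psi_\theta(at + b)$ can be checked by Monte Carlo: given $K$, each of the $K$ stones lands in $A$ independently with probability $a = \nu(A)$, so $K_A \mid K \sim \text{Binomial}(K, a)$. A sketch (not from the paper) using a geometric $K$ on $\{0, 1, \ldots\}$, whose canonical pgf is $\psi_\theta(t) = (1-\theta)/(1-t\theta)$ with $\theta = 1 - p$:

```python
import numpy as np

# Sketch: verify psi_A(t) = psi_theta(a*t + b), b = 1 - a, by thinning.
# Assumed (illustrative) choices: K geometric on {0,1,...}, a = 0.3, t = 0.5.
rng = np.random.default_rng(1)
a, b = 0.3, 0.7
n_rep = 200_000

p = 0.4
theta = 1.0 - p
K = rng.geometric(p, size=n_rep) - 1     # numpy's geometric starts at 1
K_A = rng.binomial(K, a)                 # thinned counts: K_A | K ~ Bin(K, a)

t = 0.5
pgf_empirical = (t ** K_A).mean()                      # Monte Carlo E[t^{K_A}]
pgf_identity = (1 - theta) / (1 - (a * t + b) * theta) # psi_theta(a*t + b)
print(pgf_empirical, pgf_identity)
```

Here both quantities should agree to Monte Carlo accuracy, which is the thinning invariance that the rest of the section formalizes.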
Definition 1 (Bones). We say that the family $\{\kappa_\theta : \theta \in \Theta\}$ of counting probability measures is strongly invariant with respect to the family $\{N I_A : A \subset E\}$ (is a "bone") if for any $0 < a \le 1$ there exists a mapping $h_a : \Theta \to \Theta$ such that
$$\psi_\theta(at + 1 - a) = \psi_{h_a(\theta)}(t). \qquad (5)$$
Note that in terms of the apgf, the above condition becomes simply $\tilde{\psi}_\theta(at) = \tilde{\psi}_{h_a(\theta)}(t)$. In Table 1, we give some examples of such invariant ("bone") families.

TABLE 1. Some examples of "bone" distributions with corresponding pgfs and mappings of their canonical parameters

Name      | Parameter $\theta$ | $\psi_\theta(t)$          | $h_a(\theta)$
Poisson   | $\lambda$          | $\exp[\theta(t-1)]$       | $a\theta$
Bernoulli | $p/(1-p)$          | $(1+\theta t)/(1+\theta)$ | $a\theta/(1+(1-a)\theta)$
Geometric | $p$                | $(1-\theta)/(1-t\theta)$  | $a\theta/(1-(1-a)\theta)$

2.2 Finding bones in the power series family

Consider the family $\{\kappa_\theta : \theta \in \Theta\}$ to be in the form of the non-negative power series (NNPS), where
$$p_k(\theta) = a_k \theta^k / g(\theta), \qquad (6)$$
and $p_0 > 0$. We call an NNPS canonical if $a_0 = 1$. Setting $b = 1 - a$, we see that for a canonical NNPS, the bone condition in Definition 1 becomes
$$g((at + b)\theta) = g(b\theta)\, g(h_a(\theta)\, t). \qquad (7)$$
The following is a fundamental result on the existence of "bones" in the NNPS family.

Theorem 2 (Bones in NNPS). Let $\nu$ be diffuse (ie, non-atomic). For a canonical NNPS $\kappa_\theta$ satisfying additionally $a_1 > 0$, the relation (7) holds iff $\log g(\theta) = \theta$ or $\log g(\theta) = \pm c \log(1 \pm \theta)$, where $c > 0$.

Proof.
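The bone condition (5) can also be verified mechanically for each family in Table 1; a minimal symbolic sketch, with the pgfs and maps $h_a$ transcribed from the table:

```python
import sympy as sp

# Sketch: symbolic check of the bone condition (5),
#   psi_theta(a*t + 1 - a) = psi_{h_a(theta)}(t),
# for the three families in Table 1 (pgfs and h_a copied from the table).
t, a, th = sp.symbols('t a theta', positive=True)

families = {
    'Poisson':   (sp.exp(th * (t - 1)),      a * th),
    'Bernoulli': ((1 + th * t) / (1 + th),   a * th / (1 + (1 - a) * th)),
    'Geometric': ((1 - th) / (1 - t * th),   a * th / (1 - (1 - a) * th)),
}

for name, (psi, h) in families.items():
    lhs = psi.subs(t, a * t + 1 - a)   # psi_theta(a*t + 1 - a)
    rhs = psi.subs(th, h)              # psi_{h_a(theta)}(t)
    print(name, sp.simplify(lhs - rhs) == 0)
```

Each difference simplifies to zero, confirming that the three listed maps $h_a$ satisfy (5).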
