On the Distribution of the Running Average of a Skellam Process

Total Page:16

File Type:pdf, Size:1020Kb

On the Distribution of the Running Average of a Skellam Process International Journal of Pure and Applied Mathematics Volume 119 No. 3 2018, 461-473 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu AP doi: 10.12732/ijpam.v119i3.6 ijpam.eu ON THE DISTRIBUTION OF THE RUNNING AVERAGE OF A SKELLAM PROCESS Weixuan Xia ∗Mathematical Finance Boston University Questrom School of Business 595 Commonwealth Ave, Boston, MA 02215, USA Abstract: It is shown that the probability distribution of the running average of a Skellam process is of compound Poisson type, which gives rise to double-uniform-sum distributions. The average process’s characteristic function, moments, as well as probability density function and cumulative distribution function are derived in explicit form. AMS Subject Classification: 60E10, 60G55, 60J75 Key Words: probability distribution, running average, Skellam process, double-uniform sum 1. Introduction A Skellam distribution is a discrete-valued probability distribution initially pro- posed in Skellam (1946) [6], and is well known to be the distribution of the difference of two independent Poisson random variables, with respective rate parameters λ > 0 and λ > 0. In continuous time t 0, a Skellam pro- 1 2 ≥ cess K (K ) is hence defined to be a L´evy process admitting a Skellam ≡ t distribution. In light of the conventional definition of L´evy processes (e.g., see Schoutens ((2003), pages 44–45) [5]), the Skellam process is defined by the following conditions. K = 0, a.s., i.e., Pr[K = 0] = 1. • 0 0 Received: May 18, 2107 c 2018 Academic Publications, Ltd. Revised: July 11, 2018 url: www.acadpubl.eu Published: July 12, 2018 462 W. Xia For any partition of time P = t N with t = 0 and t < t , • { n}n∈ 0 n n+1 n, it holds true that the increment random variables Ktn+1 Ktn are ∀ law − mutually independent and stationary with K K n = K n tn+1 − t tn+1−t ∼ Skellam(λ (t t ), λ (t t )). 1 n+1 − n 2 n+1 − n The mapping t R K Z is c`adl`ag with probability 1, i.e., • ∈ + 7→ t ∈ lim Pr[ K K > ǫ] = 0, ǫ > 0. hց0 | t+h − t| ∀ Conditional on t, the Skellam process has the following probability mass function, x/2 −(λ1+λ2)t λ1 pK|t(x) Pr[Kt = x] = e Ix(2t λ1λ2), x Z, (1) ≡ λ2 ∈ p where I ( ) is the modified Bessel function of the first kind. For details refer · · to Abramowitz and Stegun ((1972), pages 375–378) [1]. The characteristic function of K is hence given by φ (u) := E eiuKt eiuxp (x) (2) K|t ≡ K|t x∈Z X = exp λ t(eiu 1) + λ t(e−iu 1) , u R, 1 − 2 − ∈ where i = √ 1. Clearly, (2) is infinitely divisible in that φ (u) = (φ (u))t, − K|t K|1 so that the L´evy properties are meaningful. Several works have so far existed to discuss the properties as well as ap- plications of the Skellam process. For instance, Barndorff-Nielsen et al (2010) [2] considered the scaled Skellam process and a generalization using negative binomial distributions when modeling low-latency financial data while Kerss et al (2014) [4] analyzed, by means of time change, fractional Skellam processes of which the Skellam process is a special case. In this paper our interest lies in analyzing the following time-scaled integral of the path of the Skellam process, 1 t K˜ := K ds. (3) t t s Z0 This stochastic process, notably, can be identified as the running average of the Skellam process K. In equivalent differential, we can write 1 1 t dK˜ = K K ds dt, (4) t t t − t2 s Z0 ON THE DISTRIBUTION OF THE RUNNING... 463 with K˜0 = 0, a.s. This indicates that K˜ has continuous sample paths of bounded total variation. In the following sections the distributional information of K˜ is thoroughly explored, while comparison is also made with the original distribu- tional properties of K. 2. Characteristic function In general, the distribution of K˜ is analyzed conditional on t > 0 and the parametrization λ > 0, λ > 0 . In an attempt to derive the characteristic { 1 2 } function of K˜ , we introduce the following lemma, which applies quite conve- niently to the general class of L´evy processes. Lemma 1. If X (Xt) is a L´evy process and Y (Yt) its Riemann integral defined by ≡ ≡ t Yt := Xsds, (5) Z0 then it holds for the respective characteristic functions of X and Y that 1 φ (u) := E eiuYt = exp t ln φ (tuz)dz , u R. (6) Y |t X|1 ∈ Z0 Proof. By the independent and stationary increments of X, the decompo- sition n n t t Yt = lim Xkt/n = lim (n k + 1)(Xkt/n X(k−1)t/n) (7) n→∞ n n→∞ n − − Xk=1 Xk=1 immediately leads to n ktu φY |t(u) = exp lim ln φX|t/n , (8) n→∞ n ! Xk=1 from which the lemma follows by infinite divisibility. The next theorem hence gives the characteristic function of K˜ . Theorem 2. E iuK˜t φK˜ |t(u) := e (9) eiu 1 1 e−iu = exp λ t − 1 + λ t − 1 , u R. 1 iu − 2 iu − ∈ 464 W. Xia Proof. This follows from a direct application of Lemma 1 to (2) after scaling by 1/t. Notice that (9) has a removable singularity at u = 0, as it is easily ob- servable that the Taylor expansion of the function (ez 1)/z about 0 contains − all nonnegative integer-valued powers of z. Upon removal we can define that φ (0) = 1. As a consequence, K˜ ’s moment generating function, φ ( iu), K˜ |t K˜ |t − is uniformly well-defined on the real line, which allows the next section to ex- patiate on the moment properties. Obviously, like (2), (9) is still infinitely divisible. An important implication from (9) is that the running average process has a compound Poisson distribution, as stated below. Corollary 3. Nt law K˜t = Jn, (10) n=1 X where (Nt) is a Poisson process with intensity parameter λ1 + λ2 > 0 and J N are i.i.d. random variables admitting a double uniform distribution. { n}n∈ ++ Proof. Some elementary transformations from (9) lead to (eiu 1)e−iu(λ eiu + λ ) φ (u) = exp (λ + λ )t − 1 2 1 (11) K˜ |t 1 2 i(λ + λ )u − 1 2 λ eiu 1 λ 1 e−iu = exp (λ + λ )t 1 − + 2 − 1 , 1 2 λ + λ iu λ + λ iu − 1 2 1 2 which conveniently points to a compound Poisson structure with rate (λ1 +λ2)t. The mixing distribution is understood from −iu iu iuJ1 λ2 1 e λ1 e 1 φJ (u) := E e = − + − (12) λ1 + λ2 iu λ1 + λ2 iu to be a weighted average of the characteristic functions of two uniform distri- butions supported over [ 1, 0] and [0, 1], respectively. − In other words, the running average of the Skellam process is equivalent in law to a compound Poisson process with intensity λ1 +λ2 and double-uniformly distributed jumps. Nevertheless, the resulting distribution is no longer Skellam, in the absence of α-stability. ON THE DISTRIBUTION OF THE RUNNING... 465 3. Moments For succinctness, denote by mr the rth moment of the running average K˜ , with the definition r d φK˜ |t(u) m := E K˜ r = ( i)r . (13) r t − dur u=0 The uniform existence of the moments is as aforementioned, and they can be found by the following recursive formula. Theorem 4. r r λ + ( 1)k+1λ m = 1, m = t 1 − 2 m , r N. (14) 0 r+1 k k + 2 r−k ∈ Xk=0 Proof. Based on (9), the Taylor expansion of the characteristic exponent, ln φK˜ |t(u), around 0, gives that ∞ ∞ eiu 1 (iu)r 1 e−iu ( iu)r − = 1 + and − = 1 + − . (15) iu (r + 1)! iu (r + 1)! r=1 r=1 X X Then, we apply the famous exponential formula in combinatorics, a.k.a. Fa`a di Bruno’s formula in the context of exponentials (see Stanley ((1999), pages 1–10) [7]), in order to calculate the coefficients in ∞ m φ (u) = r (iu)r. (16) K˜ |t r! r=0 X As a result, r r λ t ( 1)k+1λ t m = (k + 1)! 1 + − 2 m , r N, (17) r+1 k (k + 2)! (k + 2)! r−k ∈ Xk=0 with m0 = 1, and the theorem follows. In connection with this, the mean, variance, skewness, and excess kurtosis can be calculated in proper order as (λ λ )t E[K˜ ] = m = 1 − 2 , (18) t 1 2 (λ + λ )t Var[K˜ ] = m m2 = 1 2 , (19) t 2 − 1 3 466 W. Xia 3 m3 3m2m1 + 2m1 3√3(λ1 λ2) Skew[K˜t] = − = − , (20) 3/2 3 m m2 4 (λ1 + λ2) t 2 − 1 2 4 ˜ m4 4m3m1 + 6m2m1 3m1 p 9 EKurt[Kt] = − 2 − 3 = . (21) m m2 − 5(λ1 + λ2)t 2 − 1 We remark that, compared to the Skellam process, E[K˜t]/E[Kt] = 1/2, Var[K˜t]/Var[Kt] = 1/3, Skew[K˜t]/Skew[Kt] = 3√3/4, 1 and EKurt[K˜t]/EKurt[Kt] = 9/5 . In comparison, the running average is char- acterized with smaller variance but higher asymmetric leptokurtic level. The mean and variance are still linear in time while the skewness and kurtosis gen- erally decrease with the passage of time.
Recommended publications
  • Arxiv:2004.02679V2 [Math.OA] 17 Jul 2020
    THE FREE TANGENT LAW WIKTOR EJSMONT AND FRANZ LEHNER Abstract. Nevanlinna-Herglotz functions play a fundamental role for the study of infinitely divisible distributions in free probability [11]. In the present paper we study the role of the tangent function, which is a fundamental Herglotz-Nevanlinna function [28, 23, 54], and related functions in free probability. To be specific, we show that the function tan z 1 ´ x tan z of Carlitz and Scoville [17, (1.6)] describes the limit distribution of sums of free commutators and anticommutators and thus the free cumulants are given by the Euler zigzag numbers. 1. Introduction Nevanlinna or Herglotz functions are functions analytic in the upper half plane having non- negative imaginary part. This class has been thoroughly studied during the last century and has proven very useful in many applications. One of the fundamental examples of Nevanlinna functions is the tangent function, see [6, 28, 23, 54]. On the other hand it was shown by by Bercovici and Voiculescu [11] that Nevanlinna functions characterize freely infinitely divisible distributions. Such distributions naturally appear in free limit theorems and in the present paper we show that the tangent function appears in a limit theorem for weighted sums of free commutators and anticommutators. More precisely, the family of functions tan z 1 x tan z ´ arises, which was studied by Carlitz and Scoville [17, (1.6)] in connection with the combinorics of tangent numbers; in particular we recover the tangent function for x 0. In recent years a number of papers have investigated limit theorems for“ the free convolution of probability measures defined by Voiculescu [58, 59, 56].
    [Show full text]
  • A Guide on Probability Distributions
    powered project A guide on probability distributions R-forge distributions Core Team University Year 2008-2009 LATEXpowered Mac OS' TeXShop edited Contents Introduction 4 I Discrete distributions 6 1 Classic discrete distribution 7 2 Not so-common discrete distribution 27 II Continuous distributions 34 3 Finite support distribution 35 4 The Gaussian family 47 5 Exponential distribution and its extensions 56 6 Chi-squared's ditribution and related extensions 75 7 Student and related distributions 84 8 Pareto family 88 9 Logistic ditribution and related extensions 108 10 Extrem Value Theory distributions 111 3 4 CONTENTS III Multivariate and generalized distributions 116 11 Generalization of common distributions 117 12 Multivariate distributions 132 13 Misc 134 Conclusion 135 Bibliography 135 A Mathematical tools 138 Introduction This guide is intended to provide a quite exhaustive (at least as I can) view on probability distri- butions. It is constructed in chapters of distribution family with a section for each distribution. Each section focuses on the tryptic: definition - estimation - application. Ultimate bibles for probability distributions are Wimmer & Altmann (1999) which lists 750 univariate discrete distributions and Johnson et al. (1994) which details continuous distributions. In the appendix, we recall the basics of probability distributions as well as \common" mathe- matical functions, cf. section A.2. And for all distribution, we use the following notations • X a random variable following a given distribution, • x a realization of this random variable, • f the density function (if it exists), • F the (cumulative) distribution function, • P (X = k) the mass probability function in k, • M the moment generating function (if it exists), • G the probability generating function (if it exists), • φ the characteristic function (if it exists), Finally all graphics are done the open source statistical software R and its numerous packages available on the Comprehensive R Archive Network (CRAN∗).
    [Show full text]
  • An Alternative Discrete Skew Logistic Distribution
    An Alternative Discrete Skew Logistic Distribution Deepesh Bhati1*, Subrata Chakraborty2, and Snober Gowhar Lateef1 1Department of Statistics, Central University of Rajasthan, 2Department of Statistics, Dibrugarh University, Assam *Corresponding Author: [email protected] April 7, 2016 Abstract In this paper, an alternative Discrete skew Logistic distribution is proposed, which is derived by using the general approach of discretizing a continuous distribution while retaining its survival function. The properties of the distribution are explored and it is compared to a discrete distribution defined on integers recently proposed in the literature. The estimation of its parameters are discussed, with particular focus on the maximum likelihood method and the method of proportion, which is particularly suitable for such a discrete model. A Monte Carlo simulation study is carried out to assess the statistical properties of these inferential techniques. Application of the proposed model to a real life data is given as well. 1 Introduction Azzalini A.(1985) and many researchers introduced different skew distributions like skew-Cauchy distribu- tion(Arnold B.C and Beaver R.J(2000)), Skew-Logistic distribution (Wahed and Ali (2001)), Skew Student's t distribution(Jones M.C. et. al(2003)). Lane(2004) fitted the existing skew distributions to insurance claims data. Azzalini A.(1985), Wahed and Ali (2001) developed Skew Logistic distribution by taking f(x) to be Logistic density function and G(x) as its CDF of standard Logistic distribution, respectively and obtained arXiv:1604.01541v1 [stat.ME] 6 Apr 2016 the probability density function(pdf) as 2e−x f(x; λ) = ; −∞ < x < 1; −∞ < λ < 1 (1 + e−x)2(1 + e−λx) They have numerically studied cdf, moments, median, mode and other properties of this distribution.
    [Show full text]
  • Skellam Type Processes of Order K and Beyond
    Skellam Type Processes of Order K and Beyond Neha Guptaa, Arun Kumara, Nikolai Leonenkob a Department of Mathematics, Indian Institute of Technology Ropar, Rupnagar, Punjab - 140001, India bCardiff School of Mathematics, Cardiff University, Senghennydd Road, Cardiff, CF24 4AG, UK Abstract In this article, we introduce Skellam process of order k and its running average. We also discuss the time-changed Skellam process of order k. In particular we discuss space-fractional Skellam process and tempered space-fractional Skellam process via time changes in Poisson process by independent stable subordinator and tempered stable subordinator, respectively. We derive the marginal proba- bilities, L´evy measures, governing difference-differential equations of the introduced processes. Our results generalize Skellam process and running average of Poisson process in several directions. Key words: Skellam process, subordination, L´evy measure, Poisson process of order k, running average. 1 Introduction Skellam distribution is obtained by taking the difference between two independent Poisson distributed random variables which was introduced for the case of different intensities λ1, λ2 by (see [1]) and for equal means in [2]. For large values of λ1 + λ2, the distribution can be approximated by the normal distribution and if λ2 is very close to 0, then the distribution tends to a Poisson distribution with intensity λ1. Similarly, if λ1 tends to 0, the distribution tends to a Poisson distribution with non- positive integer values. The Skellam random variable is infinitely divisible since it is the difference of two infinitely divisible random variables (see Prop. 2.1 in [3]). Therefore, one can define a continuous time L´evy process for Skellam distribution which is called Skellam process.
    [Show full text]
  • On the Bivariate Skellam Distribution Jan Bulla, Christophe Chesneau, Maher Kachour
    On the bivariate Skellam distribution Jan Bulla, Christophe Chesneau, Maher Kachour To cite this version: Jan Bulla, Christophe Chesneau, Maher Kachour. On the bivariate Skellam distribution. 2012. hal- 00744355v1 HAL Id: hal-00744355 https://hal.archives-ouvertes.fr/hal-00744355v1 Preprint submitted on 22 Oct 2012 (v1), last revised 23 Oct 2013 (v2) HAL is a multi-disciplinary open access L’archive ouverte pluridisciplinaire HAL, est archive for the deposit and dissemination of sci- destinée au dépôt et à la diffusion de documents entific research documents, whether they are pub- scientifiques de niveau recherche, publiés ou non, lished or not. The documents may come from émanant des établissements d’enseignement et de teaching and research institutions in France or recherche français ou étrangers, des laboratoires abroad, or from public or private research centers. publics ou privés. Noname manuscript No. (will be inserted by the editor) On the bivariate Skellam distribution Jan Bulla · Christophe Chesneau · Maher Kachour Submitted to Journal of Multivariate Analysis: 22 October 2012 Abstract In this paper, we introduce a new distribution on Z2, which can be viewed as a natural bivariate extension of the Skellam distribution. The main feature of this distribution a possible dependence of the univariate components, both following univariate Skellam distributions. We explore various properties of the distribution and investigate the estimation of the unknown parameters via the method of moments and maximum likelihood. In the experimental section, we illustrate our theory. First, we compare the performance of the estimators by means of a simulation study. In the second part, we present and an application to a real data set and show how an improved fit can be achieved by estimating mixture distributions.
    [Show full text]
  • A CURE for Noisy Magnetic Resonance Images: Chi-Square Unbiased Risk Estimation
    SUBMITTED MANUSCRIPT 1 A CURE for Noisy Magnetic Resonance Images: Chi-Square Unbiased Risk Estimation Florian Luisier, Thierry Blu, and Patrick J. Wolfe Abstract In this article we derive an unbiased expression for the expected mean-squared error associated with continuously differentiable estimators of the noncentrality parameter of a chi- square random variable. We then consider the task of denoising squared-magnitude magnetic resonance image data, which are well modeled as independent noncentral chi-square random variables on two degrees of freedom. We consider two broad classes of linearly parameterized shrinkage estimators that can be optimized using our risk estimate, one in the general context of undecimated filterbank transforms, and another in the specific case of the unnormalized Haar wavelet transform. The resultant algorithms are computationally tractable and improve upon state-of-the-art methods for both simulated and actual magnetic resonance image data. EDICS: TEC-RST (primary); COI-MRI, SMR-SMD, TEC-MRS (secondary) Florian Luisier and Patrick J. Wolfe are with the Statistics and Information Sciences Laboratory, Harvard University, Cambridge, MA 02138, USA (email: fl[email protected], [email protected]) Thierry Blu is with the Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong (e-mail: [email protected]). arXiv:1106.2848v1 [stat.AP] 15 Jun 2011 This work was supported by the Swiss National Science Foundation Fellowship BELP2-133245, the US National Science Foundation Grant DMS-0652743, and the General Research Fund CUHK410209 from the Hong Kong Research Grant Council. August 20, 2018 DRAFT SUBMITTED MANUSCRIPT 2 I.
    [Show full text]
  • Handbook on Probability Distributions
    R powered R-forge project Handbook on probability distributions R-forge distributions Core Team University Year 2009-2010 LATEXpowered Mac OS' TeXShop edited Contents Introduction 4 I Discrete distributions 6 1 Classic discrete distribution 7 2 Not so-common discrete distribution 27 II Continuous distributions 34 3 Finite support distribution 35 4 The Gaussian family 47 5 Exponential distribution and its extensions 56 6 Chi-squared's ditribution and related extensions 75 7 Student and related distributions 84 8 Pareto family 88 9 Logistic distribution and related extensions 108 10 Extrem Value Theory distributions 111 3 4 CONTENTS III Multivariate and generalized distributions 116 11 Generalization of common distributions 117 12 Multivariate distributions 133 13 Misc 135 Conclusion 137 Bibliography 137 A Mathematical tools 141 Introduction This guide is intended to provide a quite exhaustive (at least as I can) view on probability distri- butions. It is constructed in chapters of distribution family with a section for each distribution. Each section focuses on the tryptic: definition - estimation - application. Ultimate bibles for probability distributions are Wimmer & Altmann (1999) which lists 750 univariate discrete distributions and Johnson et al. (1994) which details continuous distributions. In the appendix, we recall the basics of probability distributions as well as \common" mathe- matical functions, cf. section A.2. And for all distribution, we use the following notations • X a random variable following a given distribution, • x a realization of this random variable, • f the density function (if it exists), • F the (cumulative) distribution function, • P (X = k) the mass probability function in k, • M the moment generating function (if it exists), • G the probability generating function (if it exists), • φ the characteristic function (if it exists), Finally all graphics are done the open source statistical software R and its numerous packages available on the Comprehensive R Archive Network (CRAN∗).
    [Show full text]
  • Package 'Extradistr'
    Package ‘extraDistr’ September 7, 2020 Type Package Title Additional Univariate and Multivariate Distributions Version 1.9.1 Date 2020-08-20 Author Tymoteusz Wolodzko Maintainer Tymoteusz Wolodzko <[email protected]> Description Density, distribution function, quantile function and random generation for a number of univariate and multivariate distributions. This package implements the following distributions: Bernoulli, beta-binomial, beta-negative binomial, beta prime, Bhattacharjee, Birnbaum-Saunders, bivariate normal, bivariate Poisson, categorical, Dirichlet, Dirichlet-multinomial, discrete gamma, discrete Laplace, discrete normal, discrete uniform, discrete Weibull, Frechet, gamma-Poisson, generalized extreme value, Gompertz, generalized Pareto, Gumbel, half-Cauchy, half-normal, half-t, Huber density, inverse chi-squared, inverse-gamma, Kumaraswamy, Laplace, location-scale t, logarithmic, Lomax, multivariate hypergeometric, multinomial, negative hypergeometric, non-standard beta, normal mixture, Poisson mixture, Pareto, power, reparametrized beta, Rayleigh, shifted Gompertz, Skellam, slash, triangular, truncated binomial, truncated normal, truncated Poisson, Tukey lambda, Wald, zero-inflated binomial, zero-inflated negative binomial, zero-inflated Poisson. License GPL-2 URL https://github.com/twolodzko/extraDistr BugReports https://github.com/twolodzko/extraDistr/issues Encoding UTF-8 LazyData TRUE Depends R (>= 3.1.0) LinkingTo Rcpp 1 2 R topics documented: Imports Rcpp Suggests testthat, LaplacesDemon, VGAM, evd, hoa,
    [Show full text]
  • On the Poisson Difference Distribution Inference and Applications 19
    BULLETIN of the Bull. Malays. Math. Sci. Soc. (2) 33(1) (2010), 17–45 Malaysian Mathematical Sciences Society http://math.usm.my/bulletin On The Poisson Difference Distribution Inference and Applications 1Abdulhamid A. Alzaid and 2Maha A. Omair 1;2Department of Statistics and Operations Research, College of Sciences, King Saud University, BOX 2455 Riyadh 11451, Kingdom of Saudi Arabia [email protected], [email protected] Abstract. The distribution of the difference between two independent Poisson random variables involves the modified Bessel function of the first kind. Using properties of this function, maximum likelihood estimates of the parameters of the Poisson difference were derived. Asymptotic distribution property of the maximum likelihood estimates is discussed. Maximum likelihood estimates were compared with the moment estimates in a Monte Carlo study. Hypothesis test- ing using likelihood ratio tests was considered. Some new formulas concerning the modified Bessel function of the first kind were provided. Alternative formu- las for the probability mass function of the Poisson difference (PD) distribution are introduced. Finally, two new applications for the PD distribution are pre- sented. The first is from the Saudi stock exchange (TASI) and the second is from Dallah hospital. 2000 Mathematics Subject Classification: Primary 60E05; Secondary 62F86, 46N30 Key words and phrases: Poisson difference distribution, Skellam distribution, Bessel function, regularized hypergeometric function, maximum likelihood esti- mate, likelihood ratio test. 1. Introduction The distribution of the difference between two independent Poisson random variables was derived by Irwin [5] for the case of equal parameters. Skellam [13] and Prekopa [11] discussed the case of unequal parameters.
    [Show full text]
  • Ebookdistributions.Pdf
    DOWNLOAD YOUR FREE MODELRISK TRIAL Adapted from Risk Analysis: a quantitative guide by David Vose. Published by John Wiley and Sons (2008). All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of the Publisher. If you notice any errors or omissions, please contact [email protected] Referencing this document Please use the following reference, following the Harvard system of referencing: Van Hauwermeiren M, Vose D and Vanden Bossche S (2012). A Compendium of Distributions (second edition). [ebook]. Vose Software, Ghent, Belgium. Available from www.vosesoftware.com . Accessed dd/mm/yy. © Vose Software BVBA (2012) www.vosesoftware.com Updated 17 January, 2012. Page 2 Table of Contents Introduction .................................................................................................................... 7 DISCRETE AND CONTINUOUS DISTRIBUTIONS.......................................................................... 7 Discrete Distributions .............................................................................................. 7 Continuous Distributions ........................................................................................
    [Show full text]
  • Arxiv:1710.02036V4 [Cs.CR] 1 Dec 2017 ‹ Hsi H Ulvrino 3] H Eerhwsspotdb Supported Was Research the [38]
    Computational Differential Privacy from Lattice-based Cryptography ‹ Filipp Valovich and Francesco Ald`a Horst G¨ortz Institute for IT Security Faculty of Mathematics Ruhr-Universit¨at Bochum, Universit¨atsstrasse 150, 44801 Bochum, Germany {filipp.valovich,francesco.alda}@rub.de Abstract. The emerging technologies for large scale data analysis raise new challenges to the security and privacy of sensitive user data. In this work we investigate the problem of private statistical analysis of time-series data in the distributed and semi-honest setting. In particu- lar, we study some properties of Private Stream Aggregation (PSA), first introduced by Shi et al. 2011. This is a computationally secure proto- col for the collection and aggregation of data in a distributed network and has a very small communication cost. In the non-adaptive query model, a secure PSA scheme can be built upon any key-homomorphic weak pseudo-random function as shown by Valovich 2017, yielding se- curity guarantees in the standard model which is in contrast to Shi et. al. We show that every mechanism which preserves ǫ, δ -differential pri- p q vacy in effect preserves computational ǫ, δ -differential privacy when it p q is executed through a secure PSA scheme. Furthermore, we introduce a novel perturbation mechanism based on the symmetric Skellam distribu- tion that is suited for preserving differential privacy in the distributed setting, and find that its performances in terms of privacy and accuracy are comparable to those of previous solutions. On the other hand, we leverage its specific properties to construct a computationally efficient prospective post-quantum protocol for differentially private time-series data analysis in the distributed model.
    [Show full text]
  • Probability Distribution Relationships
    STATISTICA, anno LXX, n. 1, 2010 PROBABILITY DISTRIBUTION RELATIONSHIPS Y.H. Abdelkader, Z.A. Al-Marzouq 1. INTRODUCTION In spite of the variety of the probability distributions, many of them are related to each other by different kinds of relationship. Deriving the probability distribu- tion from other probability distributions are useful in different situations, for ex- ample, parameter estimations, simulation, and finding the probability of a certain distribution depends on a table of another distribution. The relationships among the probability distributions could be one of the two classifications: the transfor- mations and limiting distributions. In the transformations, there are three most popular techniques for finding a probability distribution from another one. These three techniques are: 1 - The cumulative distribution function technique 2 - The transformation technique 3 - The moment generating function technique. The main idea of these techniques works as follows: For given functions gin(,XX12 ,...,) X, for ik 1, 2, ..., where the joint dis- tribution of random variables (r.v.’s) XX12, , ..., Xn is given, we define the func- tions YgXXii (12 , , ..., X n ), i 1, 2, ..., k (1) The joint distribution of YY12, , ..., Yn can be determined by one of the suit- able method sated above. In particular, for k 1, we seek the distribution of YgX () (2) For some function g(X ) and a given r.v. X . The equation (1) may be linear or non-linear equation. In the case of linearity, it could be taken the form n YaX ¦ ii (3) i 1 42 Y.H. Abdelkader, Z.A. Al-Marzouq Many distributions, for this linear transformation, give the same distributions for different values for ai such as: normal, gamma, chi-square and Cauchy for continuous distributions and Poisson, binomial, negative binomial for discrete distributions as indicated in the Figures by double rectangles.
    [Show full text]