Two Classes of Bivariate Distributions on the Unit Square


ISSN 2279-9362

Two classes of bivariate distributions on the unit square

Antonio Lijoi and Bernardo Nipoti

No. 238, January 2012
www.carloalberto.org/research/working-papers

© 2012 by Antonio Lijoi and Bernardo Nipoti. Any opinions expressed here are those of the authors and not those of the Collegio Carlo Alberto.

A. Lijoi
Università degli Studi di Pavia & Collegio Carlo Alberto, Italy. E-mail: [email protected]

B. Nipoti
MD Anderson Cancer Center, USA & Collegio Carlo Alberto, Italy. E-mail: [email protected]

February 2012

Abstract

We study a class of bivariate distributions on the unit square that are obtained by means of a suitable transformation of exponentially and polynomially tilted σ-stable distributions. It is interesting to note that they appear in a class of models used in Bayesian nonparametric inference where dependent nonparametric prior processes are obtained via normalization. The approach we undertake is general, even though we focus on the distributions that result from considering Poisson-Dirichlet and normalized inverse-Gaussian processes. We study some of their properties, such as mixed moments and correlation, and provide extensions to the multivariate case. Finally, we implement an algorithm to simulate from such distributions and observe that it may also be used to sample vectors from a wider class of distributions.

Key words and phrases: Completely random measures; Generalized arcsine distribution; Inverse-Gaussian distribution; Tilted stable distributions; Poisson-Dirichlet process; Random variate generator.

1 Introduction

The present paper introduces two new families of distributions on the unit square (0,1)^2 that are obtained by suitably transforming random variables whose probability distribution is a polynomial or exponential tilting of a positive σ-stable distribution. The main motivation for the analysis we are going to develop comes from possible applications to Bayesian nonparametric inference. Indeed, polynomially and exponentially tilted random variables are connected to two-parameter Poisson-Dirichlet and normalized generalized gamma processes, which represent two well-known classes of nonparametric priors used in various research areas even beyond Bayesian statistics. See, e.g., Pitman & Yor (1997), Brix (1999), Pitman (2003, 2006) and Lijoi et al. (2007). If we confine ourselves to the case σ = 1/2, the stable distribution has a closed analytic form depending on a parameter c > 0, namely

    f_{1/2}(x) = \sqrt{c/(2\pi)} \, x^{-3/2} \exp\{-c/(2x)\} \, 1_{(0,\infty)}(x)    (1)

where 1_A stands for the indicator function of the set A. This is also known as the Lévy density. The polynomially tilted random variable related to the two-parameter Poisson-Dirichlet random probability measure has density function

    g_{1/2,\theta}(x) \propto x^{-\theta} f_{1/2}(x),    (2)

for some θ > −1/2, where ∝ means that the above expression lacks the proportionality constant. See Pitman & Yor (1997). Similarly, the density function of the exponentially tilted random variable that is related to the normalized inverse-Gaussian prior is

    g_{1/2,\beta}(x) \propto e^{-\beta x} f_{1/2}(x),    (3)

for some β > 0. See Lijoi et al. (2005). When σ ≠ 1/2 there is no closed form expression for the density f_σ: one can only characterize it through its Laplace transform, and the same can be said for the tilted distributions.
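Since the three densities above have simple closed forms when σ = 1/2, they are easy to work with numerically. The following sketch is not taken from the paper: it merely restates (1)-(3) in code, normalizes the tilted densities by quadrature, and checks the normalizing constant of (3) against the Laplace transform of (1); the parameter values c, θ and β are arbitrary illustrative choices.

```python
# Minimal numerical sketch of densities (1)-(3) for sigma = 1/2 (illustrative, not from the paper).
# Assumptions: the tilted densities are normalized by numerical quadrature; c, theta, beta are arbitrary.
import numpy as np
from scipy.integrate import quad

def f_half(x, c):
    """Positive 1/2-stable (Levy) density (1) with parameter c > 0."""
    return np.sqrt(c / (2.0 * np.pi)) * x ** (-1.5) * np.exp(-c / (2.0 * x))

def g_poly_unnorm(x, c, theta):
    """Unnormalized polynomially tilted density (2), for theta > -1/2."""
    return x ** (-theta) * f_half(x, c)

def g_exp_unnorm(x, c, beta):
    """Unnormalized exponentially tilted density (3), for beta > 0."""
    return np.exp(-beta * x) * f_half(x, c)

c, theta, beta = 1.0, 0.3, 2.0
Z_poly, _ = quad(g_poly_unnorm, 0.0, np.inf, args=(c, theta))
Z_exp, _ = quad(g_exp_unnorm, 0.0, np.inf, args=(c, beta))

# The normalizing constant of (3) is the Laplace transform of (1) evaluated at beta,
# which equals exp(-sqrt(2*c*beta)) under the parameterization used in (1).
print(Z_exp, np.exp(-np.sqrt(2.0 * c * beta)))  # the two numbers should agree
print(Z_poly)                                    # finite because theta > -1/2
```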
The construction we are going to resort to is quite simple. Let X_0, X_1 and X_2 be independent and positive random variables whose probability distribution belongs to the same parametric family. We define the random vector (W_1, W_2) in such a way that W_i = X_i/(X_0 + X_i), for i = 1, 2. As we shall see, when the X_j's have a density of the form (2) or (3) it is possible to obtain an explicit form of the density of (W_1, W_2) and to analyse some of its properties, such as moments and correlations. When σ ≠ 1/2 it is not possible to deduce exact analytic forms of the quantities of interest, and one must rely on a suitable simulation algorithm that generates realizations of the vector (W_1, W_2). Such a construction can also be extended to the case of d-dimensional vectors generated by the d + 1 independent random variables X_0, X_1, ..., X_d, with d ≥ 2.

In terms of statistical applications, the analytic and computational results that will be illustrated throughout the following sections might be useful for the construction of dependent nonparametric priors, namely collections {p̃_z : z ∈ Z} of random probability measures indexed by a covariate, or set of covariates, z. The definition of covariate-dependent priors has recently been the object of very active research in the Bayesian nonparametric literature, since such priors are amenable to use in a variety of contexts ranging from nonparametric regression to meta-analysis, from spatial statistics to time series analysis, and so on. See Hjort et al. (Eds.) for a recent overview. In our case, if we set Z = {1, ..., d} we can define p̃_z as a convex linear combination of independent random probabilities with weights W_z and (1 − W_z). The dependence among the W_z's will induce dependence among the p̃_z's. Moreover, a suitable parameterization of the distribution of the X_i's, for i = 0, 1, ..., d, ensures that the marginal distribution of each p̃_z is the same for every z. For example, with d = 2 we can follow such a construction to obtain dependent Dirichlet processes (p̃_1, p̃_2) by letting X_0, X_1 and X_2 be independent gamma random variables: the distribution of (W_1, W_2) then corresponds to the bivariate beta of Olkin & Liu (2003); a small simulation sketch of this case is given below. We will not enter into the details of Bayesian nonparametric modeling here and will focus on some structural properties of the distribution of the vector (W_1, ..., W_d).
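The gamma special case just mentioned is straightforward to simulate and gives a feel for the dependence induced by the shared variable X_0. The sketch below is not from the paper: it draws independent gamma variables X_0, X_1, X_2 with arbitrary illustrative shape parameters a0, a1, a2, forms W_i = X_i/(X_0 + X_i), and reports the empirical correlation of (W_1, W_2), whose joint law is the bivariate beta of Olkin & Liu (2003).

```python
# Simulation sketch of W_i = X_i / (X_0 + X_i), i = 1, 2, in the gamma case
# (dependent Dirichlet processes / Olkin-Liu bivariate beta).
# The shape parameters a0, a1, a2 and the sample size n are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
a0, a1, a2 = 1.0, 2.0, 3.0          # gamma shape parameters of X0, X1, X2
n = 100_000                          # Monte Carlo sample size

X0 = rng.gamma(a0, size=n)
X1 = rng.gamma(a1, size=n)
X2 = rng.gamma(a2, size=n)

W1 = X1 / (X0 + X1)                  # marginally Beta(a1, a0)
W2 = X2 / (X0 + X2)                  # marginally Beta(a2, a0)

print("empirical means:", W1.mean(), W2.mean())
print("empirical correlation:", np.corrcoef(W1, W2)[0, 1])
```

Replacing the gamma draws with draws from (2) or (3) yields samples from the two classes of distributions studied in Sections 3 and 4.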
The structure of the paper is as follows. In Section 2 we provide a quick résumé on completely random measures: these are useful for defining random probability measures that are analytically tractable in a Bayesian inferential framework and are connected to the random variables considered for defining distributions on (0,1)^2. In Section 3 we study the bivariate density that arises in the construction of dependent two-parameter Poisson-Dirichlet (PD) random measures. We start by deriving the marginal density of the weights W_1 and W_2, which turns out to be a generalization of the arcsine distribution. The joint density of (W_1, W_2) can then be seen as a bivariate generalized arcsine distribution. For some values of the parameters, we obtain closed expressions for mixed moments, correlation and correlation of odds ratios of (W_1, W_2). In Section 4 we consider the vector (W_1, W_2) whose marginals are obtained via a suitable transformation of inverse-Gaussian random variables. Such a vector appears when dealing with dependent normalized inverse-Gaussian (NIG) processes. Both Sections 3 and 4 are completed by a natural extension of the distributions to (0,1)^d with d > 2.

As already mentioned, the density of (W_1, W_2) is not always available in closed form and one needs to resort to a simulation algorithm in order to generate realizations of (W_1, W_2). This algorithm is thoroughly studied in Section 5; we devise it by relying on the generator proposed in Devroye (2009), and we particularly focus on polynomially tilted σ-stable random variables with σ ∈ (0, 1). This may be useful since it allows one to estimate quantities that we do not know how to compute analytically but that may be interesting for statistical inference.

2 A quick overview on completely random measures

Completely random measures can be considered as an extension to general parametric spaces of processes with independent increments on [0, +∞). In order to provide a formal definition, let M_X be the space of boundedly finite measures over (X, 𝒳), namely, if m ∈ M_X one has m(A) < ∞ for any bounded set A ∈ 𝒳. Note that one can define a suitable topology on M_X so that it is possible to consider the Borel σ-algebra of sets B(M_X) on M_X. For details see Daley & Vere-Jones (2008).

Definition 1. A measurable mapping µ from a probability space (Ω, 𝓕, P) into (M_X, B(M_X)) is called a completely random measure (CRM) if, for any A_1, ..., A_n in 𝒳 such that A_i ∩ A_j = ∅ when i ≠ j, the random variables µ(A_1), ..., µ(A_n) are mutually independent.

CRMs are almost surely discrete, that is, any realization of a CRM is a discrete measure with probability one. Any CRM µ may be represented as µ = µ_c + µ_0, where µ_c = Σ_{i=1}^{∞} J_i δ_{X_i} is a CRM with both the positive jumps J_i and the locations X_i random, and µ_0 = Σ_{i=1}^{M} V_i δ_{x_i} is a measure with random masses V_i at fixed locations x_i in X. Moreover, M ∈ ℕ ∪ {∞} and V_1, ..., V_M are mutually independent and independent from µ_c. The component without fixed jumps, µ_c, is characterized by the Lévy-Khintchine representation, which states that there exists a measure ν on ℝ⁺ × X such that

    \int_{\mathbb{R}^+ \times B} \min\{s, 1\} \, \nu(ds, dx) < \infty    (4)

for any B ∈ 𝒳, and

    \mathrm{E}\left[ \exp\left( -\int_{X} f(x) \, \mu_c(dx) \right) \right] = \exp\left( -\int_{\mathbb{R}^+ \times X} \left[ 1 - \exp\left( -s f(x) \right) \right] \nu(ds, dx) \right)

for any measurable function f : X → ℝ such that \int_{X} |f(x)| \, \mu_c(dx) < ∞ almost surely. The measure ν characterizes µ_c and is referred to as the Lévy intensity of µ_c. One can then define a CRM by assigning the measure ν. Moreover, a CRM µ can define a random probability measure whose distribution acts as a nonparametric prior for Bayesian inference. For example, if ν is such that it satisfies (4) and ν(ℝ⁺ × X) = ∞, then the corresponding CRM µ defines p̃ = µ/µ(X).
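As a concrete illustration of the σ = 1/2 case: with the exponential tilting e^{-βx} written in (3), the tilted law is an inverse-Gaussian distribution with shape parameter c and mean \sqrt{c/(2β)}. This identification, and the values of c, β and the sample size below, are assumptions made for illustration rather than statements taken from the paper; under them, the vector (W_1, W_2) related to the normalized inverse-Gaussian process can be simulated directly with NumPy's Wald (inverse-Gaussian) generator.

```python
# Sketch: simulate (W1, W2) in the inverse-Gaussian (sigma = 1/2) case of density (3).
# Assumption: with the tilting e^{-beta x}, the tilted 1/2-stable law is inverse-Gaussian
# with shape parameter c and mean sqrt(c / (2 * beta)); c, beta, n are illustrative values.
import numpy as np

rng = np.random.default_rng(1)
c, beta, n = 1.0, 2.0, 100_000
mean = np.sqrt(c / (2.0 * beta))     # inverse-Gaussian mean implied by (1) and (3)

X0 = rng.wald(mean, c, size=n)       # X0, X1, X2 i.i.d. from the tilted law (3)
X1 = rng.wald(mean, c, size=n)
X2 = rng.wald(mean, c, size=n)

W1 = X1 / (X0 + X1)
W2 = X2 / (X0 + X2)

print("empirical correlation:", np.corrcoef(W1, W2)[0, 1])
```

For σ ≠ 1/2 no such closed-form representation is available, and one falls back on the exact generator of Devroye (2009) discussed in Section 5.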