Chapter 1: Distribution Theory

Basic Concepts

• Random Variables (R.V.)
  – discrete random variable: probability mass function
  – continuous random variable: probability density function

• "Formal but Scary" Definition of Random Variables
  – R.V.s are measurable functions from a probability measure space to the real line;
  – probability is a non-negative value assigned to sets of a σ-field;
  – probability mass function ≡ the Radon–Nikodym derivative of the measure induced by the random variable w.r.t. counting measure;
  – probability density function ≡ the derivative w.r.t. Lebesgue measure.

• Descriptive quantities of a univariate distribution
  – cumulative distribution function: P(X ≤ x)
  – moments (mean): E[X^k]
  – quantiles
  – mode
  – centralized moments (variance): E[(X − µ)^k]
  – skewness: E[(X − µ)^3] / Var(X)^{3/2}
  – kurtosis: E[(X − µ)^4] / Var(X)^2

• Characteristic function (c.f.)
  – φ_X(t) = E[exp{itX}]
  – the c.f. uniquely determines the distribution:

      F_X(b) − F_X(a) = lim_{T→∞} (1/2π) ∫_{−T}^{T} [(e^{−ita} − e^{−itb}) / (it)] φ_X(t) dt,

      f_X(x) = (1/2π) ∫_{−∞}^{∞} e^{−itx} φ_X(t) dt.

• Descriptive quantities of multivariate R.V.s
  – Cov(X, Y), corr(X, Y)
  – f_{X|Y}(x|y) = f_{X,Y}(x, y) / f_Y(y)
  – E[X|Y] = ∫ x f_{X|Y}(x|y) dx
  – independence: P(X ≤ x, Y ≤ y) = P(X ≤ x) P(Y ≤ y), f_{X,Y}(x, y) = f_X(x) f_Y(y)

• Equalities of double expectations
  – E[X] = E[E[X|Y]]
  – Var(X) = E[Var(X|Y)] + Var(E[X|Y])
• Useful when the distribution of X given Y is simple!

Examples of Discrete Distributions

• Binomial distribution
  – Binomial(n, p): P(S_n = k) = C(n, k) p^k (1 − p)^{n−k}, k = 0, ..., n
  – E[S_n] = np, Var(S_n) = np(1 − p), φ_{S_n}(t) = (1 − p + pe^{it})^n
  – Binomial(n_1, p) + Binomial(n_2, p) ∼ Binomial(n_1 + n_2, p)

• Negative Binomial distribution
  – P(W_m = k) = C(k − 1, m − 1) p^m (1 − p)^{k−m}, k = m, m + 1, ...
  – E[W_m] = m/p, Var(W_m) = m/p^2 − m/p
  – Neg-Binomial(m_1, p) + Neg-Binomial(m_2, p) ∼ Neg-Binomial(m_1 + m_2, p)

• Hypergeometric distribution
  – P(S_n = k) = C(M, k) C(N, n − k) / C(M + N, n), k = 0, 1, ..., n
  – E[S_n] = Mn / (M + N)
  – Var(S_n) = nMN(M + N − n) / [(M + N)^2 (M + N − 1)]

• Poisson distribution
  – P(X = k) = λ^k e^{−λ} / k!, k = 0, 1, 2, ...
  – E[X] = Var(X) = λ, φ_X(t) = exp{−λ(1 − e^{it})}
  – Poisson(λ_1) + Poisson(λ_2) ∼ Poisson(λ_1 + λ_2)

• Poisson vs Binomial
  – X_1 ∼ Poisson(λ_1), X_2 ∼ Poisson(λ_2)
  – X_1 | X_1 + X_2 = n ∼ Binomial(n, λ_1/(λ_1 + λ_2))
  – if X_{n1}, ..., X_{nn} are i.i.d. Bernoulli(p_n) and np_n → λ, then

      P(S_n = k) = [n! / (k!(n − k)!)] p_n^k (1 − p_n)^{n−k} → e^{−λ} λ^k / k!

• Multinomial distribution
  – N_l, 1 ≤ l ≤ k, counts the number of times that {Y_1, ..., Y_n} fall into B_l
  – P(N_1 = n_1, ..., N_k = n_k) = [n! / (n_1! ··· n_k!)] p_1^{n_1} ··· p_k^{n_k}, n_1 + ... + n_k = n
  – the covariance matrix of (N_1, ..., N_k) is

      n ⎡ p_1(1 − p_1)  ···   −p_1 p_k     ⎤
        ⎢      ⋮         ⋱        ⋮        ⎥
        ⎣  −p_1 p_k     ···   p_k(1 − p_k) ⎦
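The Binomial-to-Poisson limit shown above is easy to check numerically. A minimal sketch in Python (scipy assumed; the values of λ, k, and the grid of n are illustrative choices):

```python
from scipy import stats

# Compare the Binomial(n, p_n) pmf at a fixed k with its Poisson(lambda) limit,
# taking p_n = lam/n so that n * p_n -> lam.
lam, k = 4.0, 3
for n in (10, 100, 1000, 10000):
    p = lam / n
    print(f"n = {n:>5}: Binomial pmf at k = {k} is {stats.binom.pmf(k, n, p):.6f}")
print(f"Poisson limit e^(-lam) lam^k / k!  is {stats.poisson.pmf(k, lam):.6f}")
```

The Binomial values approach the Poisson value as n grows, which is why the Poisson distribution models counts of rare events among many trials.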
Examples of Continuous Distributions

• Uniform distribution
  – Uniform(a, b): f_X(x) = I_{[a,b]}(x) / (b − a)
  – E[X] = (a + b)/2 and Var(X) = (b − a)^2/12

• Normal distribution
  – N(µ, σ^2): f_X(x) = (2πσ^2)^{−1/2} exp{−(x − µ)^2 / (2σ^2)}
  – E[X] = µ and Var(X) = σ^2
  – φ_X(t) = exp{itµ − σ^2 t^2/2}

• Gamma distribution
  – Gamma(θ, β): f_X(x) = [1 / (β^θ Γ(θ))] x^{θ−1} exp{−x/β}, x > 0
  – E[X] = θβ and Var(X) = θβ^2
  – θ = 1 is equivalent to Exp(β)
  – θ = n/2, β = 2 is equivalent to χ²_n

• Cauchy distribution
  – Cauchy(a, b): f_X(x) = 1 / {bπ[1 + (x − a)^2/b^2]}
  – E[|X|] = ∞, φ_X(t) = exp{iat − |bt|}
  – often used as a counterexample

Algebra of Random Variables

• Assumption
  – X and Y are independent and Y > 0
  – we are interested in the d.f. of X + Y, XY, and X/Y

• Summation of X and Y
  – derivation:

      F_{X+Y}(z) = E[I(X + Y ≤ z)] = E_Y[E_X[I(X ≤ z − Y) | Y]] = E_Y[F_X(z − Y)] = ∫ F_X(z − y) dF_Y(y)

  – convolution formula:

      F_{X+Y}(z) = ∫ F_Y(z − x) dF_X(x) ≡ F_X ∗ F_Y(z),

      f_X ∗ f_Y(z) ≡ ∫ f_X(z − y) f_Y(y) dy = ∫ f_Y(z − x) f_X(x) dx

• Product and quotient of X and Y (Y > 0)
  – F_{XY}(z) = E[E[I(XY ≤ z) | Y]] = ∫ F_X(z/y) dF_Y(y),

      f_{XY}(z) = ∫ f_X(z/y) (1/y) f_Y(y) dy

  – F_{X/Y}(z) = E[E[I(X/Y ≤ z) | Y]] = ∫ F_X(yz) dF_Y(y),

      f_{X/Y}(z) = ∫ f_X(yz) y f_Y(y) dy

• Application of the formulae
  – N(µ_1, σ_1^2) + N(µ_2, σ_2^2) ∼ N(µ_1 + µ_2, σ_1^2 + σ_2^2)
  – Gamma(r_1, θ) + Gamma(r_2, θ) ∼ Gamma(r_1 + r_2, θ)
  – Poisson(λ_1) + Poisson(λ_2) ∼ Poisson(λ_1 + λ_2)
  – Neg-Binomial(m_1, p) + Neg-Binomial(m_2, p) ∼ Neg-Binomial(m_1 + m_2, p)

• Summation of R.V.s using the c.f.
  – result: if X and Y are independent, then φ_{X+Y}(t) = φ_X(t) φ_Y(t)
  – example: X and Y normal with

      φ_X(t) = exp{iµ_1 t − σ_1^2 t^2/2}, φ_Y(t) = exp{iµ_2 t − σ_2^2 t^2/2}

      ⇒ φ_{X+Y}(t) = exp{i(µ_1 + µ_2)t − (σ_1^2 + σ_2^2) t^2/2}

• Further examples of special distributions
  – assume X ∼ N(0, 1), Y ∼ χ²_m, and Z ∼ χ²_n are independent; then

      X / √(Y/m) ∼ Student's t(m),  (Y/m) / (Z/n) ∼ Snedecor's F_{m,n},  Y / (Y + Z) ∼ Beta(m/2, n/2).

• Densities of the t-, F-, and Beta-distributions
  – f_{t(m)}(x) = [Γ((m + 1)/2) / (√(πm) Γ(m/2))] (1 + x^2/m)^{−(m+1)/2} I_{(−∞,∞)}(x)
  – f_{F_{m,n}}(x) = [Γ((m + n)/2) / (Γ(m/2)Γ(n/2))] (m/n)^{m/2} x^{m/2−1} (1 + mx/n)^{−(m+n)/2} I_{(0,∞)}(x)
  – f_{Beta(a,b)}(x) = [Γ(a + b) / (Γ(a)Γ(b))] x^{a−1} (1 − x)^{b−1} I(x ∈ (0, 1))

• Exponential vs Beta distributions
  – assume Y_1, ..., Y_{n+1} are i.i.d. Exp(θ);
  – Z_i = (Y_1 + ... + Y_i) / (Y_1 + ... + Y_{n+1}) ∼ Beta(i, n − i + 1);
  – (Z_1, ..., Z_n) has the same joint distribution as the order statistics (ξ_{n:1}, ..., ξ_{n:n}) of n Uniform(0, 1) random variables.

Transformation of Random Vectors

Theorem 1.3 Suppose that X is a k-dimensional random vector with density function f_X(x_1, ..., x_k). Let g be a one-to-one and continuously differentiable map from R^k to R^k. Then Y = g(X) is a random vector with density function

    f_X(g^{−1}(y_1, ..., y_k)) |J_{g^{−1}}(y_1, ..., y_k)|,

where g^{−1} is the inverse of g and J_{g^{−1}} is the Jacobian of g^{−1}.
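Before moving on, the convolution formula and the Gamma addition rule above can be sanity-checked numerically. A minimal sketch (Python with scipy assumed; Gamma(r, β) is parameterized with β as the scale, matching the density above, and the parameter values are illustrative):

```python
from scipy import stats
from scipy.integrate import quad

r1, r2, beta, z = 2.0, 3.0, 1.5, 4.0   # illustrative parameters and evaluation point

# f_{X+Y}(z) = ∫ f_X(z − y) f_Y(y) dy; both variables are positive,
# so the integrand vanishes outside y in (0, z).
conv, _ = quad(lambda y: stats.gamma.pdf(z - y, r1, scale=beta)
                         * stats.gamma.pdf(y, r2, scale=beta), 0.0, z)

print(conv)                                      # numerical convolution at z
print(stats.gamma.pdf(z, r1 + r2, scale=beta))   # Gamma(r1 + r2, beta) density at z
```

The two printed values agree closely, illustrating Gamma(r_1, θ) + Gamma(r_2, θ) ∼ Gamma(r_1 + r_2, θ) at the chosen point.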
• Example
  – let R^2 ∼ Exp(2), R > 0, and Θ ∼ Uniform(0, 2π) be independent;
  – X = R cos Θ and Y = R sin Θ are two independent standard normal random variables;
  – this can be applied to simulate normally distributed data.

Multivariate Normal Distribution

• Definition
  – Y = (Y_1, ..., Y_n)' is said to have a multivariate normal distribution with mean vector µ = (µ_1, ..., µ_n)' and non-degenerate covariance matrix Σ_{n×n} if

      f_Y(y_1, ..., y_n) = [1 / ((2π)^{n/2} |Σ|^{1/2})] exp{−(1/2)(y − µ)' Σ^{−1} (y − µ)}

• Characteristic function

    φ_Y(t) = E[e^{it'Y}]
           = ∫ (2π)^{−n/2} |Σ|^{−1/2} exp{it'y − (y − µ)' Σ^{−1} (y − µ)/2} dy
           = ∫ (2π)^{−n/2} |Σ|^{−1/2} exp{−y' Σ^{−1} y/2 + (it + Σ^{−1}µ)' y − µ' Σ^{−1} µ/2} dy
           = exp{−µ' Σ^{−1} µ/2 + (Σit + µ)' Σ^{−1} (Σit + µ)/2}
             × ∫ (2π)^{−n/2} |Σ|^{−1/2} exp{−(y − Σit − µ)' Σ^{−1} (y − Σit − µ)/2} dy
           = exp{−µ' Σ^{−1} µ/2 + (Σit + µ)' Σ^{−1} (Σit + µ)/2}
           = exp{it'µ − t'Σt/2},

where the integral equals one because the integrand is a (complex-shifted) normal density.

• Conclusions
  – the multivariate normal distribution is uniquely determined by µ and Σ
  – for the standard multivariate normal distribution, φ_X(t) = exp{−t't/2}
  – the moment generating function of Y is exp{t'µ + t'Σt/2}

• Linear transformation of a normal r.v.

Theorem 1.4 If Y = A_{n×k} X_{k×1}, where X ∼ N(0, I) (standard multivariate normal distribution), then Y's characteristic function is given by

    φ_Y(t) = exp{−t'Σt/2},  t = (t_1, ..., t_n)' ∈ R^n,

where Σ = AA' and rank(Σ) = rank(A).
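The last two results give a practical recipe for simulating multivariate normal data: generate standard normals via the R–Θ example above, then apply Theorem 1.4 with A chosen so that AA' equals the target covariance matrix. A minimal sketch (Python with numpy assumed; Σ and the sample size are illustrative, and the Cholesky factor is one convenient choice of A):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Example above: R^2 ~ Exp(2) (scale 2, i.e. mean 2) and Theta ~ Uniform(0, 2*pi)
# independent => R cos(Theta) and R sin(Theta) are independent N(0, 1).
R = np.sqrt(rng.exponential(scale=2.0, size=n))
Theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
X = np.vstack([R * np.cos(Theta), R * np.sin(Theta)])   # 2 x n draws from N(0, I)

# Theorem 1.4: Y = AX has c.f. exp{-t'(AA')t/2}, i.e. Y ~ N(0, AA').
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
A = np.linalg.cholesky(Sigma)    # AA' = Sigma
Y = A @ X

print(np.cov(X))   # close to the identity matrix
print(np.cov(Y))   # close to Sigma
```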