
Distribution Relationships

Lawrence M. LEEMIS and Jacquelyn T. MCQUESTON

Probability distributions are traditionally treated separately in introductory mathematical statistics textbooks. A figure is presented here that shows properties that individual distributions possess and many of the relationships between these distributions.

KEY WORDS: Asymptotic relationships; Distribution properties; Limiting distributions; Stochastic parameters; Transformations.

1. INTRODUCTION

Introductory probability and statistics textbooks typically introduce common univariate distributions individually, and seldom report all of the relationships between these distributions. This article contains an update of a figure presented by Leemis (1986) that shows the properties of and relationships between several common univariate distributions. More detail concerning these distributions is given by Johnson, Kotz, and Balakrishnan (1994, 1995) and Johnson, Kemp, and Kotz (2005). More concise treatments are given by Balakrishnan and Nevzorov (2003), Evans, Hastings, and Peacock (2000), Ord (1972), Patel, Kapadia, and Owen (1976), Patil, Boswell, Joshi, and Ratnaparkhi (1985), Patil, Boswell, and Ratnaparkhi (1985), and Shapiro and Gross (1981). Figures similar to the one presented here have appeared in Casella and Berger (2002), Marshall and Olkin (1985), Nakagawa and Yoda (1977), Song (2005), and Taha (1982).

Figure 1 contains 76 univariate probability distributions. There are 19 discrete and 57 continuous models. Discrete distributions are displayed in rectangular boxes; continuous distributions are displayed in rounded boxes. The discrete distributions are at the top of the figure, with the exception of the Benford distribution. A distribution is described by two lines of text in each box. The first line gives the name of the distribution and its parameters. The second line contains the properties (described in the next section) that the distribution assumes.

The parameterizations for the distributions are given in the Appendix. If the distribution is known by several names (e.g., the normal distribution is often called the Gaussian distribution), this is indicated in the Appendix following the name of the distribution. The parameters typically satisfy the following conditions:

• n, with or without subscripts, is a positive integer;

• p is a parameter satisfying 0 < p < 1;

• α and σ, with or without subscripts, are positive scale parameters;

• β, γ, and κ are positive shape parameters;

• μ, a, and b are location parameters;

• λ and δ are positive parameters.

Exceptions to these rules, such as the rectangular parameter n, are given in the Appendix after any aliases for the distribution. Additionally, any parameters not described above are explicitly listed in the Appendix. Many of the distributions have several mathematical forms, only one of which is presented here (e.g., the extreme value and discrete Weibull distributions) for the sake of brevity.

There are numerous distributions that have not been included in the chart due to space limitations, or because the distribution is not related to one of the distributions currently on the chart. These include Bézier curves (Flanigan–Wagner and Wilson 1993); the Burr distribution (Crowder et al. 1991, p. 33 and Johnson, Kotz, and Balakrishnan 1994, pp. 15–63); the generalized beta distribution (McDonald 1984); the generalized exponential distribution (Gupta and Kundu 2007); the generalized F distribution (Prentice 1975); Johnson curves (Johnson, Kotz, and Balakrishnan 1994, pp. 15–63); the kappa distribution (Hosking 1994); the Kolmogorov–Smirnov one-sample distribution (parameters estimated from data); the Kolmogorov–Smirnov two-sample distribution (Boomsma and Molenaar 1994); the generalized lambda distribution (Ramberg and Schmeiser 1974); the Maxwell distribution (Balakrishnan and Nevzorov 2003, p. 232); Pearson systems (Johnson, Kotz, and Balakrishnan 1994, pp. 15–63); and the generalized Waring distribution (Hogg, McKean, and Craig 2005, p. 195).

Lawrence M. Leemis is a Professor, Department of Mathematics, The College of William & Mary, Williamsburg, VA 23187–8795 (E-mail: [email protected]). Jacquelyn T. McQueston is an Operations Researcher, Northrop Grumman Corporation, Chantilly, VA 20151. The authors are grateful for the support from The College of William & Mary through a summer research grant, a faculty research assignment, and an NSF CSEMS grant DUE–0123022. They also express their gratitude to the students in CS 688 at William & Mary, the editor, a referee, Bruce Schmeiser, John Drew, and Diane Evans for their careful proofreading of this article. Please e-mail the first author with updates and corrections to the chart given in this article, which will be posted at www.math.wm.edu/∼leemis.

© 2008 American Statistical Association. DOI: 10.1198/000313008X270448. The American Statistician, February 2008, Vol. 62, No. 1.

Likewise, Devroye (2006) refers to Dickman's, Kolmogorov–Smirnov, Kummer's, Linnik–Laha, theta, and de la Vallée–Poussin distributions in his chapter on variate generation.

2. DISTRIBUTION PROPERTIES

There are several properties that apply to individual distributions listed in Figure 1.

• The linear combination property (L) indicates that linear combinations of independent random variables having this particular distribution come from the same distribution family.

Example: If X_i ∼ N(μ_i, σ_i²) for i = 1, 2, ..., n; a_1, a_2, ..., a_n are real constants; and X_1, X_2, ..., X_n are independent, then

  Σ_{i=1}^n a_i X_i ∼ N(Σ_{i=1}^n a_i μ_i, Σ_{i=1}^n a_i² σ_i²).

• The convolution property (C) indicates that sums of independent random variables having this particular distribution come from the same distribution family.

Example: If X_i ∼ χ²(n_i) for i = 1, 2, ..., n, and X_1, X_2, ..., X_n are independent, then

  Σ_{i=1}^n X_i ∼ χ²(Σ_{i=1}^n n_i).

• The scaling property (S) implies that any positive real constant times a random variable having this distribution comes from the same distribution family.

Example: If X ∼ Weibull(α, β) and k is a positive, real constant, then

  kX ∼ Weibull(α k^β, β).

• The product property (P) indicates that products of independent random variables having this particular distribution come from the same distribution family.

Example: If X_i ∼ lognormal(μ_i, σ_i²) for i = 1, 2, ..., n, and X_1, X_2, ..., X_n are independent, then

  Π_{i=1}^n X_i ∼ lognormal(Σ_{i=1}^n μ_i, Σ_{i=1}^n σ_i²).

• The inverse property (I) indicates that the reciprocal of a random variable of this type comes from the same distribution family.

Example: If X ∼ F(n_1, n_2), then 1/X ∼ F(n_2, n_1).

• The minimum property (M) indicates that the smallest of independent and identically distributed random variables from a distribution comes from the same distribution family.

Example: If X_i ∼ exponential(α_i) for i = 1, 2, ..., n, and X_1, X_2, ..., X_n are independent, then

  min{X_1, X_2, ..., X_n} ∼ exponential(1 / Σ_{i=1}^n (1/α_i)).

• The maximum property (X) indicates that the largest of independent and identically distributed random variables from a distribution comes from the same distribution family.

Example: If X_i ∼ standard power(β_i) for i = 1, 2, ..., n, and X_1, X_2, ..., X_n are independent, then

  max{X_1, X_2, ..., X_n} ∼ standard power(Σ_{i=1}^n β_i).

• The forgetfulness property (F), more commonly known as the memoryless property, indicates that the conditional distribution of a random variable, given that it exceeds a value in its support, is identical to the unconditional distribution. The geometric and exponential distributions are the only two distributions with this property. This property is a special case of the residual property.

• The residual property (R) indicates that the conditional distribution of a random variable left-truncated at a value in its support belongs to the same distribution family as the unconditional distribution.

Example: If X ∼ Uniform(a, b) and k is a real constant satisfying a < k < b, then the conditional distribution of X given X > k belongs to the uniform family.

• The variate generation property (V) indicates that the inverse cumulative distribution function of a continuous random variable can be expressed in closed form. For a discrete random variable, this property indicates that a variate can be generated in an O(1) algorithm that does not cycle through the support values or rely on a special property.

Example: If X ∼ exponential(α), then

  F⁻¹(u) = −α log(1 − u),  0 < u < 1.

Since property L implies properties C and S, the C and S properties are not listed on a distribution having the L property. Similarly, property F implies property R.

Some of the properties apply only in restricted cases. The minimum property applies to the Weibull distribution, for example, only when the shape parameter is fixed. The Weibull distribution has M_β on the second line in Figure 1 to indicate that the property is valid only in this restricted case.

Teacher's Corner

Figure 1. Univariate distribution relationships.

3. RELATIONSHIPS AMONG THE DISTRIBUTIONS

There are three types of lines used to connect the distributions to one another. The solid line is used for special cases and transformations. Transformations typically have an X on their label to distinguish them from special cases. The term "transformation" is used rather loosely here, to include the distribution of an order statistic, truncating a random variable, or taking a mixture of random variables. The dashed line is used for asymptotic relationships, which are typically in the limit as one or more parameters approach the boundary of the parameter space. The dotted line is used for Bayesian relationships (e.g., Beta–binomial, Beta–Pascal, Gamma–normal, and Gamma–Poisson).

The binomial, chi-square, exponential, gamma, normal, and U(0, 1) distributions emerge as hubs, highlighting their centrality in applied statistics. Summation limits run from i = 1 to n. The notation X_(r) denotes the rth order statistic drawn from a random sample of size n.

There are certain special cases where distributions overlap for just a single setting of their parameters. Examples include (a) the exponential distribution with a mean of two and the chi-square distribution with two degrees of freedom, (b) the chi-square distribution with an even number of degrees of freedom and the Erlang distribution with scale parameter two, and (c) the Kolmogorov–Smirnov distribution (all parameters known case) for a sample of size n = 1 and the U(1/2, 1) distribution. Each of these cases is indicated by a double-headed arrow.

The probability integral transformation allows a line to be drawn, in theory, between the standard uniform distribution and all others, since F(X) ∼ U(0, 1). Similarly, a line could be drawn between the unit exponential distribution and all others, since H(X) ∼ exponential(1), where H(x) = ∫_{−∞}^{x} f(t)/(1 − F(t)) dt is the cumulative hazard function.

All random variables that can be expressed as sums (e.g., the Erlang as the sum of independent and identically distributed exponential random variables) converge asymptotically in a parameter to the normal distribution by the central limit theorem. These distributions include the binomial, chi-square, Erlang, gamma, hypoexponential, and Pascal distributions. Furthermore, all distributions have an asymptotic relationship with the normal distribution (by the central limit theorem if sums of random variables are considered).

Many of the transformations can be inverted, and this is indicated on the chart by a double-headed arrow between two distributions. Consider the relationship between the normal distribution and the standard normal distribution. If X ∼ N(μ, σ²), then (X − μ)/σ ∼ N(0, 1), as indicated on the chart. Conversely, if X ∼ N(0, 1), then μ + σX ∼ N(μ, σ²). The first direction of the transformation is useful for standardizing random variables to be used for table lookup, while the second direction is useful for variate generation. In most cases, though, an inverse transformation is implicit and is not listed on the chart for brevity (e.g., an extreme value random variable as the logarithm of a Weibull random variable, and a Weibull random variable as the exponential of an extreme value random variable).

Several of these relationships hint at further distributions that have not yet been developed. First, the extreme value and log gamma distributions indicate that the logarithm of any survival distribution results in a distribution with support over the entire real axis. Second, the inverted gamma distribution indicates that the reciprocal of any survival distribution results in another survival distribution. Third, switching the roles of F(x) and F⁻¹(u) for a random variable with support on (0, 1) results in a complementary distribution (e.g., Jones 2002).

Additionally, the transformations in Figure 1 can be used to give intuition to some random variate generation routines. The Box–Muller algorithm, for example, converts a U(0, 1) to an exponential to a chi-square to a standard normal to a normal random variable.

Redundant arrows have typically not been drawn. An arrow between the minimax distribution and the standard uniform distribution has not been drawn because of the two arrows connecting the minimax distribution to the standard power distribution and the standard power distribution to the standard uniform distribution. Likewise, although the exponential distribution is a special case of the gamma distribution when the shape parameter equals 1, this is not explicitly indicated because of the special case involving the Erlang distribution.

In order to preserve a planar graph, several relationships are not included, such as those that would not fit on the chart or involved distributions that were too far apart. Examples include:

• A geometric random variable is the floor of an exponential random variable.

• A rectangular random variable is the floor of a uniform random variable.

• An exponential random variable is a special case of a Makeham random variable with δ = 0.

• A standard power random variable is a special case of a beta random variable with γ = 1.

• If X has the F distribution with parameters n_1 and n_2, then 1/(1 + (n_1/n_2)X) has the beta distribution (Hogg, McKean, and Craig 2005, p. 189).

• The doubly noncentral F distribution with n_1, n_2 degrees of freedom and noncentrality parameters δ, γ is defined as the distribution of

  (X_1(δ)/n_1) / (X_2(γ)/n_2),

where X_1(δ), X_2(γ) are noncentral chi-square random variables with n_1, n_2 degrees of freedom, respectively (Johnson, Kotz, and Balakrishnan 1995, p. 480).

• A normal and a uniform random variable are special and limiting cases of an error random variable (Evans, Hastings, and Peacock 2000, p. 76).

• A binomial random variable is a special case of a power series random variable (Evans, Hastings, and Peacock 2000, p. 166).

• The limit of a von Mises random variable as κ → ∞ is a normal random variable (Evans, Hastings, and Peacock 2000, p. 191).

• The half-normal, Rayleigh, and Maxwell–Boltzmann distributions are special cases of the chi distribution with n = 1, 2, and 3 degrees of freedom (Johnson, Kotz, and Balakrishnan 1994, p. 417).

• A function of the ratio of two independent generalized gamma random variables has the beta distribution (Stacy 1962).

Additionally, there are transformations where two distributions are combined to obtain a third, which were also omitted to maintain a planar graph. Two such examples are:

• The t distribution with n degrees of freedom is defined as the distribution of

  Z / √(χ²(n)/n),

where Z is a standard normal random variable and χ²(n) is a chi-square random variable with n degrees of freedom, independent of Z (Evans, Hastings, and Peacock 2000, p. 180).

• The noncentral beta distribution with noncentrality parameter δ is defined as the distribution of X/(X + Y), where X is a noncentral chi-square random variable with parameters β and δ, and Y is a central chi-square random variable with γ degrees of freedom (Evans, Hastings, and Peacock 2000, p. 42).

References for distributions not typically covered in introductory probability and statistics textbooks include:

• arctan distribution: Glen and Leemis (1997)

• Benford distribution: Benford (1938)

• exponential power distribution: Smith and Bain (1975)

• extreme value distribution: de Haan and Ferreira (2006)

• generalized gamma distribution: Stacy (1962)

• generalized Pareto distribution: Davis and Feldstein (1979)

• Gompertz distribution: Jordan (1967)

• hyperexponential and hypoexponential distributions: Ross (2007)

• IDB distribution: Hjorth (1980)

• inverse Gaussian distribution: Chhikara and Folks (1989), Seshadri (1993)

• inverted gamma distribution: Casella and Berger (2002)

• logarithm distribution: Johnson, Kemp, and Kotz (2005)

• logistic–exponential distribution: Lan and Leemis (2007)

• Makeham distribution: Jordan (1967)

• Muth's distribution: Muth (1977)

• negative hypergeometric distribution: Balakrishnan and Nevzorov (2003), Miller and Fridell (2007)

• power distribution: Balakrishnan and Nevzorov (2003)

• TSP distribution: Kotz and van Dorp (2004)

• Zipf distribution: Ross (2006).

A. APPENDIX: PARAMETERIZATIONS

A.1 Discrete Distributions

Benford:
  f(x) = log₁₀(1 + 1/x),  x = 1, 2, ..., 9

Bernoulli:
  f(x) = p^x (1 − p)^{1−x},  x = 0, 1

Beta–binomial:
  f(x) = Γ(x + a) Γ(n − x + b) Γ(a + b) Γ(n + 2) / [(n + 1) Γ(a + b + n) Γ(a) Γ(b) Γ(x + 1) Γ(n − x + 1)],  x = 0, 1, ..., n

Beta–Pascal (factorial):
  f(x) = C(n − 1 + x, x) B(n + a, b + x) / B(a, b),  x = 0, 1, ...

Binomial:
  f(x) = C(n, x) p^x (1 − p)^{n−x},  x = 0, 1, ..., n

Discrete uniform:
  f(x) = 1/(b − a + 1),  x = a, a + 1, ..., b

Discrete Weibull:
  f(x) = (1 − p)^{x^β} − (1 − p)^{(x+1)^β},  x = 0, 1, ...

Gamma–Poisson:
  f(x) = Γ(x + β) α^x / [Γ(β) (1 + α)^{β+x} x!],  x = 0, 1, ...

Geometric:
  f(x) = p (1 − p)^x,  x = 0, 1, ...

Hypergeometric:
  f(x) = C(n_1, x) C(n_3 − n_1, n_2 − x) / C(n_3, n_2),  x = max(0, n_1 + n_2 − n_3), ..., min(n_1, n_2)
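The discrete parameterizations above can be sanity-checked numerically. This sketch (plain Python; the helper names are ours) verifies that the Benford pmf sums to one over its support x = 1, ..., 9, and that the geometric pmf sums to one in the limit (truncating the infinite support far into the tail).

```python
import math

def benford_pmf(x):
    # f(x) = log10(1 + 1/x), x = 1, ..., 9
    return math.log10(1.0 + 1.0 / x)

def geometric_pmf(x, p):
    # f(x) = p * (1 - p)^x, x = 0, 1, ...  (failures before the first success)
    return p * (1.0 - p) ** x

benford_total = sum(benford_pmf(x) for x in range(1, 10))       # telescopes to log10(10) = 1
geometric_total = sum(geometric_pmf(x, 0.3) for x in range(200))  # 1 - 0.7^200, essentially 1
```

The Benford sum telescopes: Σ log₁₀((x+1)/x) for x = 1, ..., 9 is log₁₀(10) = 1, which is a quick confirmation that the pmf as parameterized is proper.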

The American Statistician, February 2008, Vol. 62, No. 1 49 < < < < Logarithm (logarithmic series, 0 c 1): Cauchy (Lorentz, Breit–Wigner, −∞ a ∞): ( )x ( ) − 1 − c , , ,... ( ) 1 , < < f x = x = 1 2 f x = απ (( )/α)2 −∞ x ∞ x log c [1 + x − a ] Negative hypergeometric: Chi: ( ) 1 n 1 x2/2, > f x / x − e− x 0 ( ) n1 + x − 1 n3 − n1 + n2 − x − 1 = n 2−10( / ) f x = 2 n 2  x  n2 − x  Chi-square: n3 + n2 − 1 , 1 / / n2 ( ) n 2−1 −x 2, >   f x = n/20( / ) x e x 0 ( , ), . . . , 2 n 2 x = max 0 n1 + n2 − n3 n2 Doubly noncentral F: Pascal (negative binomial): j k δ/2 1 γ /2 1 n 1 x ∞ ∞ e− δ e− γ ( ) − + n( )x , , ,... ( ) 2 2 f x = p 1 − p x = 0 1 f x          x  = j! k! Xj=0 Xk=0 μ >     Poisson ( 0): ( / ) ( / ) ( /)  n1 2 + j n2 2 +k n1 2 + j−1 μ ×n1 n2 x μx e− f (x) , x 0, 1,... 1 ( ) = = ( )− 2 n1+n2 − j−k x! × n2 + n1x −1 Polya: 1 , 1 , > B n1 j n2 k x 0 × 2 + 2 + x−1 n−x−1 n−1    ( ) n ( β) ( β) ( β), f x = p + j 1 − p + k 1 + i x  Doubly noncentral t: Yj=0 kY=0 Yi=0 , ,..., See Johnson, Kotz, and Balakrishnan (1995, p. 533) x = 0 1 n > ( ) x Erlang: Power series (c 0; A c = x ax c ): 1 /α xP ( ) n−1 −x , > ax c f x x e x 0 f (x) , x 0, 1,... = αn(n 1)! = A(c) = − < < , > Rectangular (discrete uniform, n 0, 1,...): Error (exponential power, general error; −∞ a ∞ b = 0, c > 0): 1 / f (x) , x 0, 1,..., n ( / )2 c/ = = ( ) exp − |x − a| b 2 , < < n + 1 f x / x = b(2c 2+1)0(1 c/2)  −∞ ∞ Zeta: + ( ) 1 , , ,... 
f x α α x 1 2 Exponential (negative exponential): = x ∞ (1/i) = i=1 /α α f (x) (1/α)e−x , x > 0 Zipf ( ≥ 0): P = ( ) 1 , , ,..., Exponential power: f x α α x 1 2 n = n ( / ) = κ x i 1 1 i λx λ κ κ = f (x) (e1−e )e x λκx −1, x > 0 P = A.2 Continuous Distributions Extreme value (Gumbel): Arcsin: β xβ /α ( ) 1 , < < f (x) (β/α)ex −e , < x < f x = π√ ( ) 0 x 1 = −∞ ∞ x 1 − x < φ < F (variance ratio, Fisher–Snedecor): Arctangent (−∞ ∞): / / λ 0(( )/ )( / )n1 2 n1 2 1 ( ) n1 + n2 2 n1 n2 x − , > f (x) , x 0 f x (( )/ ) x 0 = π ≥ = 0( / )0( / ) ( / ) n1+n2 2 (λφ) λ2( φ)2 n1 2 n2 2 [ n1 n2 x + 1] arctan + 2 1 + x −    Gamma: Beta: ( ) 1 β 1 x/α, > f x β x − e− x 0 0(β γ ) β γ = α 0(β) ( ) + −1( ) −1, < < f x = 0(β)0(γ ) x 1 − x 0 x 1  
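As a numerical check on these parameterizations (and on the single-parameter overlap noted in Section 3), the sketch below (plain Python; the function names are ours) evaluates the exponential pdf with mean α = 2 and the chi-square pdf with n = 2 degrees of freedom and confirms that they coincide.

```python
import math

def exponential_pdf(x, alpha):
    # f(x) = (1/alpha) * exp(-x/alpha), x > 0
    return (1.0 / alpha) * math.exp(-x / alpha)

def chi_square_pdf(x, n):
    # f(x) = x^{n/2-1} e^{-x/2} / (2^{n/2} Gamma(n/2)), x > 0
    return x ** (n / 2 - 1) * math.exp(-x / 2) / (2 ** (n / 2) * math.gamma(n / 2))

# chi-square with 2 df coincides with exponential with mean 2
diffs = [abs(chi_square_pdf(x, 2) - exponential_pdf(x, 2.0)) for x in (0.1, 1.0, 2.5, 7.0)]
```

With n = 2 the chi-square density reduces to e^{−x/2}/2, exactly the exponential density with mean 2, which is the double-headed arrow labeled (a) in Section 3.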

Gamma–normal:
  See Evans, Hastings, and Peacock (2000, p. 103)

Generalized gamma:
  f(x) = γ x^{γβ−1} e^{−(x/α)^γ} / (α^{γβ} Γ(β)),  x > 0

Generalized Pareto:
  f(x) = (γ + κ/(x + δ)) (1 + x/δ)^{−κ} e^{−γx},  x > 0

Gompertz (κ > 1):
  f(x) = δκ^x e^{−δ(κ^x − 1)/log κ},  x > 0

Hyperbolic–secant:
  f(x) = sech(πx),  −∞ < x < ∞

Hyperexponential (p_i > 0, Σ_{i=1}^n p_i = 1):
  f(x) = Σ_{i=1}^n (p_i/α_i) e^{−x/α_i},  x > 0

Hypoexponential (α_i ≠ α_j for i ≠ j):
  f(x) = Σ_{i=1}^n (1/α_i) e^{−x/α_i} Π_{j=1, j≠i}^n α_i/(α_i − α_j),  x > 0

IDB (γ ≥ 0):
  f(x) = [(1 + κx)δx + γ] (1 + κx)^{−γ/κ−1} e^{−δx²/2},  x > 0

Inverse Gaussian (Wald; μ > 0):
  f(x) = √(λ/(2πx³)) e^{−λ(x−μ)²/(2μ²x)},  x > 0

Inverted beta (β > 1, γ > 1):
  f(x) = x^{β−1} (1 + x)^{−β−γ} / B(β, γ),  x > 0

Inverted gamma:
  f(x) = x^{−α−1} e^{−1/(βx)} / (Γ(α) β^α),  x > 0

Kolmogorov–Smirnov:
  See Drew, Glen, and Leemis (2000)

Laplace (double exponential):
  f(x) = e^{x/α₁} / (α₁ + α₂),  x ≤ 0
  f(x) = e^{−x/α₂} / (α₁ + α₂),  x > 0

Log gamma:
  f(x) = e^{βx} e^{−e^x/α} / (α^β Γ(β)),  −∞ < x < ∞

Log logistic:
  f(x) = λκ(λx)^{κ−1} / [1 + (λx)^κ]²,  x > 0

Lognormal:
  f(x) = (1/(x√(2π) β)) exp(−(1/2)(log(x/α)/β)²),  x > 0

Logistic:
  f(x) = λ^κ κ e^{κx} / [1 + (λe^x)^κ]²,  −∞ < x < ∞

Logistic–exponential:
  f(x) = αβ(e^{αx} − 1)^{β−1} e^{αx} / [1 + (e^{αx} − 1)^β]²,  x > 0

Lomax:
  f(x) = λκ / (1 + λx)^{κ+1},  x > 0

Makeham (κ > 1):
  f(x) = (γ + δκ^x) e^{−γx − δ(κ^x − 1)/log κ},  x > 0

Minimax:
  f(x) = βγ x^{β−1} (1 − x^β)^{γ−1},  0 < x < 1

Muth:
  f(x) = (e^{κx} − κ) e^{−(1/κ)e^{κx} + κx + 1/κ},  x > 0

Noncentral beta:
  f(x) = Σ_{i=0}^∞ [Γ(i + β + γ) / (Γ(γ) Γ(i + β))] [e^{−δ/2} (δ/2)^i / i!] x^{i+β−1} (1 − x)^{γ−1},  0 < x < 1

Noncentral chi-square:
  f(x) = Σ_{k=0}^∞ [e^{−δ/2} (δ/2)^k / k!] [e^{−x/2} x^{(n+2k)/2−1} / (2^{(n+2k)/2} Γ((n+2k)/2))],  x > 0

Noncentral F:
  f(x) = Σ_{i=0}^∞ Γ((2i + n_1 + n_2)/2) (n_1/n_2)^{(2i+n_1)/2} x^{(2i+n_1−2)/2} e^{−δ/2} (δ/2)^i / [Γ(n_2/2) Γ((2i + n_1)/2) i! (1 + (n_1/n_2)x)^{(2i+n_1+n_2)/2}],  x > 0

Noncentral t (−∞ < δ < ∞):
  f(x) = [n^{n/2} e^{−δ²/2} / (√π Γ(n/2) (n + x²)^{(n+1)/2})] Σ_{i=0}^∞ Γ((n + i + 1)/2) (δ^i/i!) (x√2/√(n + x²))^i,  −∞ < x < ∞
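The scaling property from Section 2 can be checked directly against the Weibull parameterization in this Appendix. The sketch below (plain Python; names are ours) uses the change-of-variables formula, under which kX has density (1/k)f_X(x/k), and confirms numerically that this equals the Weibull(αk^β, β) density.

```python
import math

def weibull_pdf(x, alpha, beta):
    # Appendix parameterization: f(x) = (beta/alpha) x^{beta-1} exp(-x^beta / alpha)
    return (beta / alpha) * x ** (beta - 1) * math.exp(-(x ** beta) / alpha)

alpha, beta, k = 1.5, 2.0, 3.0
# density of kX by change of variables vs. the Weibull(alpha * k^beta, beta) density
errs = [abs(weibull_pdf(x / k, alpha, beta) / k - weibull_pdf(x, alpha * k ** beta, beta))
        for x in (0.5, 1.0, 2.0, 4.0)]
```

Algebraically, (1/k)f_X(x/k) = (β/(αk^β)) x^{β−1} e^{−x^β/(αk^β)}, which is the stated Weibull(αk^β, β) density, so the S property holds with the new scale parameter αk^β.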

Normal (Gaussian):
  f(x) = (1/(√(2π) σ)) exp(−(1/2)((x − μ)/σ)²),  −∞ < x < ∞

Pareto:
  f(x) = κλ^κ / x^{κ+1},  x > λ

Power:
  f(x) = β x^{β−1} / α^β,  0 < x < α

Rayleigh:
  f(x) = (2x/α) e^{−x²/α},  x > 0

Standard Cauchy:
  f(x) = 1 / (π(1 + x²)),  −∞ < x < ∞

Standard normal:
  f(x) = e^{−x²/2} / √(2π),  −∞ < x < ∞

Standard power:
  f(x) = β x^{β−1},  0 < x < 1

Standard triangular:
  f(x) = x + 1,  −1 < x < 0
  f(x) = 1 − x,  0 ≤ x < 1

Standard uniform:
  f(x) = 1,  0 < x < 1

t (Student's t):
  f(x) = Γ((n + 1)/2) / [(nπ)^{1/2} Γ(n/2) (x²/n + 1)^{(n+1)/2}],  −∞ < x < ∞

Triangular (a < m < b):
  f(x) = 2(x − a) / ((b − a)(m − a)),  a < x < m
  f(x) = 2(b − x) / ((b − a)(b − m)),  m ≤ x < b

TSP (two-sided power):
  f(x) = (n/(b − a)) ((x − a)/(m − a))^{n−1},  a < x ≤ m
  f(x) = (n/(b − a)) ((b − x)/(b − m))^{n−1},  m ≤ x < b

Uniform (continuous rectangular; −∞ < a < b < ∞):
  f(x) = 1/(b − a),  a < x < b

von Mises (0 < μ < 2π):
  f(x) = e^{κ cos(x−μ)} / (2π I₀(κ)),  0 < x < 2π

Wald (standard Wald):
  f(x) = √(λ/(2πx³)) e^{−λ(x−1)²/(2x)},  x > 0

Weibull:
  f(x) = (β/α) x^{β−1} exp(−(1/α) x^β),  x > 0

A.3 Functions

Gamma function:
  Γ(c) = ∫₀^∞ e^{−x} x^{c−1} dx

Beta function:
  B(a, b) = ∫₀¹ x^{a−1} (1 − x)^{b−1} dx

Modified Bessel function of the first kind of order 0:
  I₀(κ) = Σ_{i=0}^∞ κ^{2i} / (2^{2i} (i!)²)

[Received October 2007. Revised December 2007.]

REFERENCES

Balakrishnan, N., and Nevzorov, V.B. (2003), A Primer on Statistical Distributions, Hoboken, NJ: Wiley.

Benford, F. (1938), "The Law of Anomalous Numbers," Proceedings of the American Philosophical Society, 78, 551–572.

Boomsma, A., and Molenaar, I.W. (1994), "Four Electronic Tables for Probability Distributions," The American Statistician, 48, 153–162.

Casella, G., and Berger, R. (2002), Statistical Inference (2nd ed.), Belmont, CA: Duxbury.

Chhikara, R.S., and Folks, L.S. (1989), The Inverse Gaussian Distribution: Theory, Methodology and Applications, New York: Marcel Dekker, Inc.

Crowder, M.J., Kimber, A.C., Smith, R.L., and Sweeting, T.J. (1991), Statistical Analysis of Reliability Data, New York: Chapman and Hall.

Davis, H.T., and Feldstein, M.L. (1979), "The Generalized Pareto Law as a Model for Progressively Censored Survival Data," Biometrika, 66, 299–306.

Devroye, L. (2006), "Nonuniform Random Variate Generation," in Simulation, eds. S.G. Henderson and B.L. Nelson, Vol. 13, Handbooks in Operations Research and Management Science, Amsterdam: North–Holland.

Drew, J.H., Glen, A.G., and Leemis, L.M. (2000), "Computing the Cumulative Distribution Function of the Kolmogorov–Smirnov Statistic," Computational Statistics and Data Analysis, 34, 1–15.

Evans, M., Hastings, N., and Peacock, B. (2000), Statistical Distributions (3rd ed.), New York: Wiley.

Flanigan–Wagner, M., and Wilson, J.R. (1993), "Using Univariate Bézier Distributions to Model Simulation Input Processes," in Proceedings of the 1993 Winter Simulation Conference, eds. G.W. Evans, M. Mollaghasemi, E.C. Russell, and W.E. Biles, Institute of Electrical and Electronics Engineers, pp. 365–373.

Glen, A., and Leemis, L.M. (1997), "The Arctangent Survival Distribution," Journal of Quality Technology, 29, 205–210.

Gupta, R.D., and Kundu, D. (2007), "Generalized Exponential Distribution: Existing Results and Some Recent Developments," Journal of Statistical Planning and Inference, 137, 3537–3547.

de Haan, L., and Ferreira, A. (2006), Extreme Value Theory: An Introduction, New York: Springer.

Hjorth, U. (1980), "A Reliability Distribution with Increasing, Decreasing, Constant and Bathtub-Shaped Failure Rates," Technometrics, 22, 99–107.

Hogg, R.V., McKean, J.W., and Craig, A.T. (2005), Introduction to Mathematical Statistics (6th ed.), Upper Saddle River, NJ: Prentice Hall.

Hosking, J.R.M. (1994), "The Four-Parameter Kappa Distribution," IBM Journal of Research and Development, 38, 251–258.

Johnson, N.L., Kemp, A.W., and Kotz, S. (2005), Univariate Discrete Distributions (3rd ed.), New York: Wiley.

Johnson, N.L., Kotz, S., and Balakrishnan, N. (1994), Continuous Univariate Distributions (Vol. I, 2nd ed.), New York: Wiley.

——— (1995), Continuous Univariate Distributions (Vol. II, 2nd ed.), New York: Wiley.

Jones, M.C. (2002), "The Complementary Beta Distribution," Journal of Statistical Planning and Inference, 104, 329–337.

Jordan, C.W. (1967), Life Contingencies, Chicago: Society of Actuaries.

Kotz, S., and van Dorp, J.R. (2004), Beyond Beta: Other Continuous Families of Distributions with Bounded Support and Applications, Hackensack, NJ: World Scientific.

Lan, L., and Leemis, L. (2007), "The Logistic–Exponential Survival Distribution," Technical Report, The College of William & Mary, Department of Mathematics.

Leemis, L. (1986), "Relationships Among Common Univariate Distributions," The American Statistician, 40, 143–146.

Marshall, A.W., and Olkin, I. (1985), "A Family of Bivariate Distributions Generated by the Bivariate Exponential Distribution," Journal of the American Statistical Association, 80, 332–338.

McDonald, J.B. (1984), "Some Generalized Functions for the Size Distribution of Income," Econometrica, 52, 647–663.

Miller, G.K., and Fridell, S.L. (2007), "A Forgotten Discrete Distribution? Reviving the Negative Hypergeometric Model," The American Statistician, 61, 347–350.

Muth, E.J. (1977), "Reliability Models with Positive Memory Derived from the Mean Residual Life Function," in The Theory and Applications of Reliability, eds. C.P. Tsokos and I. Shimi, New York: Academic Press, Inc., pp. 401–435.

Nakagawa, T., and Yoda, H. (1977), "Relationships Among Distributions," IEEE Transactions on Reliability, 26, 352–353.

Ord, J.K. (1972), Families of Frequency Distributions, New York: Hafner Publishing.

Patel, J.K., Kapadia, C.H., and Owen, D.B. (1976), Handbook of Statistical Distributions, New York: Marcel Dekker, Inc.

Patil, G.P., Boswell, M.T., Joshi, S.W., and Ratnaparkhi, M.V. (1985), Discrete Models, Burtonsville, MD: International Co-operative Publishing House.

Patil, G.P., Boswell, M.T., and Ratnaparkhi, M.V. (1985), Univariate Continuous Models, Burtonsville, MD: International Co-operative Publishing House.

Prentice, R.L. (1975), "Discrimination Among Some Parametric Models," Biometrika, 62, 607–619.

Ramberg, J.S., and Schmeiser, B.W. (1974), "An Approximate Method for Generating Asymmetric Random Variables," Communications of the Association for Computing Machinery, 17, 78–82.

Ross, S. (2006), A First Course in Probability (7th ed.), Upper Saddle River, NJ: Prentice Hall.

——— (2007), Introduction to Probability Models (9th ed.), New York: Academic Press.

Seshadri, V. (1993), The Inverse Gaussian Distribution, Oxford: Oxford University Press.

Shapiro, S.S., and Gross, A.J. (1981), Statistical Modeling Techniques, New York: Marcel Dekker.

Smith, R.M., and Bain, L.J. (1975), "An Exponential Power Life-Testing Distribution," Communications in Statistics, 4, 469–481.

Song, W.T. (2005), "Relationships Among Some Univariate Distributions," IIE Transactions, 37, 651–656.

Stacy, E.W. (1962), "A Generalization of the Gamma Distribution," Annals of Mathematical Statistics, 33, 1187–1192.

Taha, H.A. (1982), Operations Research: An Introduction (3rd ed.), New York: Macmillan.
