1 Some Important Continuous Distributions

ISyE8843A, Brani Vidakovic, Handout 0

1.1 Uniform $\mathcal{U}(a,b)$ Distribution

Random variable $X$ has uniform $\mathcal{U}(a,b)$ distribution if its density is given by

$$f(x|a,b) = \begin{cases} \frac{1}{b-a}, & a \le x \le b \\ 0, & \text{else} \end{cases} \qquad F(x|a,b) = \begin{cases} 0, & x < a \\ \frac{x-a}{b-a}, & a \le x \le b \\ 1, & x > b. \end{cases}$$

• Moments: $EX^k = \frac{b^{k+1}-a^{k+1}}{(b-a)(k+1)}$, $k = 1, 2, \dots$
• Variance $\mathrm{Var}\,X = \frac{(b-a)^2}{12}$.
• Characteristic function $\varphi(t) = \frac{e^{itb}-e^{ita}}{(b-a)\,it}$.
• If $X \sim \mathcal{U}(a,b)$ then $Y = \frac{X-a}{b-a} \sim \mathcal{U}(0,1)$.
• Typical model: rounding (to the nearest integer) error is often modeled as $\mathcal{U}(-1/2, 1/2)$.
• If $X \sim F$, where $F$ is a continuous cdf, then $Y = F(X) \sim \mathcal{U}(0,1)$.

1.2 Exponential $\mathcal{E}(\lambda)$ Distribution

Random variable $X$ has exponential $\mathcal{E}(\lambda)$ distribution if its density and cdf are given by

$$f(x|\lambda) = \begin{cases} \lambda e^{-\lambda x}, & x \ge 0 \\ 0, & \text{else} \end{cases} \qquad F(x|\lambda) = \begin{cases} 1 - e^{-\lambda x}, & x \ge 0 \\ 0, & \text{else.} \end{cases}$$

• Moments: $EX^k = \frac{k!}{\lambda^k}$, $k = 1, 2, \dots$
• Variance $\mathrm{Var}\,X = \frac{1}{\lambda^2}$.
• Characteristic function $\varphi(t) = \frac{\lambda}{\lambda - it}$.
• An exponential random variable $X$ possesses the memoryless property: $P(X > t+s \,|\, X > s) = P(X > t)$.
• Typical model: lifetime in reliability.
• Alternative parametrization, $\lambda'$ as scale: $f(x|\lambda') = \frac{1}{\lambda'} e^{-x/\lambda'}$, $EX^k = k!\,(\lambda')^k$, $k = 1, 2, \dots$, $\mathrm{Var}\,X = (\lambda')^2$.

1.3 Double Exponential $\mathcal{DE}(\mu, \sigma)$ Distribution

Random variable $X$ has double exponential $\mathcal{DE}(\mu,\sigma)$ distribution if its density and cdf are given by

$$f(x|\mu,\sigma) = \frac{1}{2\sigma} e^{-|x-\mu|/\sigma}, \qquad F(x|\mu,\sigma) = \frac{1}{2}\left(1 + \mathrm{sgn}(x-\mu)\left(1 - e^{-|x-\mu|/\sigma}\right)\right), \quad x, \mu \in \mathbb{R},\ \sigma > 0.$$

• Moments: $EX = \mu$; $EX^{2k} = \left[\frac{\mu^{2k}}{(2k)!} + \frac{\mu^{2(k-1)}\sigma^2}{(2(k-1))!} + \cdots + \frac{\sigma^{2k}}{1}\right](2k)!$; $EX^{2k+1} = \left[\frac{\mu^{2k+1}}{(2k+1)!} + \frac{\mu^{2k-1}\sigma^2}{(2k-1)!} + \cdots + \frac{\mu\sigma^{2k}}{1!}\right](2k+1)!$
• Variance $\mathrm{Var}\,X = 2\sigma^2$.
• Characteristic function $\varphi(t) = \frac{e^{i\mu t}}{1 + (\sigma t)^2}$.

1.4 Normal (Gaussian) $\mathcal{N}(\mu, \sigma^2)$ Distribution

Random variable $X$ has normal $\mathcal{N}(\mu,\sigma^2)$ distribution with parameters $\mu \in \mathbb{R}$ (mean, center) and $\sigma^2 > 0$ (variance) if its density is given by

$$f(x|\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \quad -\infty < x < \infty.$$

• Moments: $EX = \mu$; $E(X-\mu)^{2k-1} = 0$; $E(X-\mu)^{2k} = (2k-1)(2k-3)\cdots 5 \cdot 3 \cdot 1 \cdot \sigma^{2k} = (2k-1)!!\,\sigma^{2k}$, $k = 1, 2, \dots$
• Characteristic function $\varphi(t) = e^{i\mu t - t^2\sigma^2/2}$.
• $Z = \frac{X-\mu}{\sigma}$ has the standard normal distribution with density $\phi(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$. The cdf of the standard normal distribution is a special function $\Phi(x) = \int_{-\infty}^{x} \phi(t)\,dt$, and its values are tabulated in many introductory statistical texts.
• The standard half-normal distribution is given by $f(x) = 2\phi(x)\,\mathbf{1}(x \ge 0)$.

1.5 Chi-Square $\chi^2_n$ Distribution

Random variable $X$ has chi-square $\chi^2_n$ distribution with $n$ degrees of freedom if its density is given by

$$f(x) = \begin{cases} \frac{1}{2^{n/2}\Gamma(n/2)}\, x^{n/2-1} e^{-x/2}, & x > 0 \\ 0, & \text{else.} \end{cases}$$

• Moments: $EX^k = n(n+2)\cdots(n+2(k-1))$, $k = 1, 2, \dots$
• Expectation $EX = n$, variance $\mathrm{Var}\,X = 2n$, and mode $m = n-2$, $n > 2$.
• Characteristic function $\varphi(t) = (1-2it)^{-n/2}$.
• The noncentral chi-square $nc\chi^2_n$ distribution is the distribution of the sum of squares of $n$ normals $\mathcal{N}(\mu_i, 1)$. The noncentrality parameter is $\delta = \sum_i \mu_i^2$. The density of the $nc\chi^2_n$ distribution involves the Bessel function of the first kind. The simplest representation for the cdf of $nc\chi^2_n$ is an infinite Poisson mixture of central $\chi^2$'s where the degrees of freedom are mixed (a numerical check of this representation follows the section):

$$F(x|n,\delta) = \sum_{i=0}^{\infty} e^{-\delta/2}\, \frac{(\delta/2)^i}{i!}\, P(\chi^2_{n+2i} \le x).$$

• $\chi^2_n$ is $\mathcal{G}(n/2, 2)$ or, in the alternative parametrization, $Gamma(n/2, 1/2)$.
• Inverse $\chi^2_n$, $inv\text{-}\chi^2_n$, is defined as $\mathcal{IG}(n/2, 2)$ or $IGamma(n/2, 1/2)$. Scaled inverse $\chi^2_n$, $inv\text{-}\chi^2(n, s^2)$, is $\mathcal{IG}(n/2, 2/(ns^2))$ or $IGamma(n/2, ns^2/2)$.
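The Poisson-mixture representation lends itself to a direct numerical check. Below is a minimal sketch, assuming Python with scipy is available; the helper name `ncx2_cdf_mixture` is introduced here for illustration, and scipy's `ncx2` uses the same noncentrality parameter $\delta = \sum_i \mu_i^2$.

```python
# Sketch: truncate F(x|n, delta) = sum_i Pois(i; delta/2) * P(chi2_{n+2i} <= x)
# and compare with scipy's noncentral chi-square cdf.
from scipy.stats import chi2, ncx2, poisson

def ncx2_cdf_mixture(x, n, delta, terms=200):
    # Poisson(delta/2) weights on central chi-square cdfs with n + 2i df
    return sum(poisson.pmf(i, delta / 2) * chi2.cdf(x, n + 2 * i)
               for i in range(terms))

x, n, delta = 7.5, 4, 2.3
print(ncx2_cdf_mixture(x, n, delta))  # truncated mixture
print(ncx2.cdf(x, n, delta))          # scipy's ncx2; the two should agree
```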
1.6 Chi $\chi_n$ Distribution

Random variable $X$ has chi $\chi_n$ distribution with $n$ degrees of freedom if its density is given by

$$f(x) = \begin{cases} \frac{1}{2^{n/2-1}\Gamma(n/2)}\, x^{n-1} e^{-x^2/2}, & x > 0 \\ 0, & \text{else.} \end{cases}$$

• Moments: $EX^k = \frac{2^{k/2}\,\Gamma\!\left(\frac{n+k}{2}\right)}{\Gamma(n/2)}$, $k = 1, 2, \dots$
• Variance $\mathrm{Var}\,X = n - 2\left[\frac{\Gamma((n+1)/2)}{\Gamma(n/2)}\right]^2$.
• Characteristic function $\varphi(t) = \frac{1}{\Gamma(n/2)} \sum_{k=0}^{\infty} \frac{(i\sqrt{2}\,t)^k}{k!}\, \Gamma((n+k)/2)$.
• $\chi_n$ for $n = 2$ is the Rayleigh distribution, for $n = 3$ the Maxwell distribution.

1.7 Gamma $\mathcal{G}(\alpha, \beta)$ Distribution

Random variable $X$ has gamma $\mathcal{G}(\alpha,\beta)$ distribution with parameters $\alpha > 0$ (shape) and $\beta > 0$ (scale) if its density is given by

$$f(x|\alpha,\beta) = \begin{cases} \frac{1}{\Gamma(\alpha)\beta^{\alpha}}\, x^{\alpha-1} e^{-x/\beta}, & x \ge 0 \\ 0, & \text{else} \end{cases} \qquad F(x|\alpha,\beta) = \begin{cases} 0, & x \le 0 \\ \mathrm{inc}\Gamma(x|\alpha,\beta), & x \ge 0. \end{cases}$$

• Moments: $EX^k = \alpha(\alpha+1)\cdots(\alpha+k-1)\,\beta^k$, $k = 1, 2, \dots$
• Mean $EX = \alpha\beta$; variance $\mathrm{Var}\,X = \alpha\beta^2$.
• Mode $m = (\alpha-1)\beta$.
• Characteristic function $\varphi(t) = \frac{1}{(1-it\beta)^{\alpha}}$.
• Alternative parametrization, $Gamma(a,b)$: $f(x|a,b) = \frac{b^a}{\Gamma(a)}\, x^{a-1} e^{-bx}$, $x \ge 0$; $EX^k = \frac{a(a+1)\cdots(a+k-1)}{b^k}$, $k = 1, 2, \dots$; $\mathrm{Var}\,X = a/b^2$.
• Special cases: $\mathcal{E}(\lambda) \equiv \mathcal{G}(1, 1/\lambda) \equiv Gamma(1, \lambda)$ and $\chi^2_n \equiv \mathcal{G}(n/2, 2) \equiv Gamma(n/2, 1/2)$.
• If $\alpha$ is an integer, $\mathcal{G}(\alpha,\beta) \stackrel{d}{=} -\beta \sum_{i=1}^{\alpha} \log U_i$ for iid $U_i \sim \mathcal{U}(0,1)$. If $\alpha < 1$, then for independent $Y \sim \mathcal{B}e(\alpha, 1-\alpha)$ and $Z \sim \mathcal{E}(1)$, $X = YZ \sim \mathcal{G}(\alpha, 1)$.

1.8 Inverse Gamma $\mathcal{IG}(\alpha, \beta)$ Distribution

Random variable $X$ has inverse gamma $\mathcal{IG}(\alpha,\beta)$ distribution with parameters $\alpha > 0$ and $\beta > 0$ if its density is given by

$$f(x|\alpha,\beta) = \begin{cases} \frac{1}{\Gamma(\alpha)\beta^{\alpha} x^{\alpha+1}}\, e^{-1/(\beta x)}, & x \ge 0 \\ 0, & \text{else.} \end{cases}$$

• Moments: $EX^k = \{(\alpha-1)(\alpha-2)\cdots(\alpha-k)\,\beta^k\}^{-1}$, $k = 1, 2, \dots$ ($\alpha > k$).
• Mean $EX = \frac{1}{\beta(\alpha-1)}$, $\alpha > 1$; variance $\mathrm{Var}\,X = \frac{1}{(\alpha-1)^2(\alpha-2)\beta^2}$, $\alpha > 2$.
• Mode $m = \frac{1}{(\alpha+1)\beta}$.
• Characteristic function: no closed elementary form; it can be expressed through the modified Bessel function of the second kind.
• If $X \sim \mathcal{G}(\alpha,\beta)$ then $X^{-1} \sim \mathcal{IG}(\alpha,\beta)$.
• Alternative parametrization, $IGamma(a,b)$: $f(x|a,b) = \frac{b^a}{\Gamma(a)x^{a+1}}\, e^{-b/x}$, $x \ge 0$; $EX^k = \frac{b^k}{(a-1)\cdots(a-k)}$, $k = 1, 2, \dots$; $\mathrm{Var}\,X = b^2/((a-1)^2(a-2))$.

1.9 Beta $\mathcal{B}e(\alpha, \beta)$ Distribution

Random variable $X$ has beta $\mathcal{B}e(\alpha,\beta)$ distribution with parameters $\alpha > 0$ and $\beta > 0$ if its density is given by

$$f(x|\alpha,\beta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, x^{\alpha-1}(1-x)^{\beta-1}, \ 0 \le x \le 1, \qquad F(x|\alpha,\beta) = \begin{cases} 0, & x \le 0 \\ \mathrm{incBeta}(x|\alpha,\beta), & 0 \le x \le 1 \\ 1, & x \ge 1. \end{cases}$$

• Moments: $EX^k = \frac{\Gamma(\alpha+k)\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\alpha+\beta+k)} = \frac{\alpha(\alpha+1)\cdots(\alpha+k-1)}{(\alpha+\beta)(\alpha+\beta+1)\cdots(\alpha+\beta+k-1)}$.
• Mean $EX = \frac{\alpha}{\alpha+\beta}$; variance $\mathrm{Var}\,X = \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$.
• Mode $m = \frac{\alpha-1}{\alpha+\beta-2}$, if $\alpha > 1$, $\beta > 1$.
• Characteristic function $\varphi(t) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)} \sum_{k=0}^{\infty} \frac{(it)^k}{k!} \frac{\Gamma(\alpha+k)}{\Gamma(\alpha+\beta+k)}$.
• If $X_1, X_2, \dots, X_n$ is a sample from the uniform $\mathcal{U}(0,1)$ distribution and $X_{(1)}, X_{(2)}, \dots, X_{(n)}$ are its order statistics, then $X_{(k)} \sim \mathcal{B}e(k, n-k+1)$.
• $\mathcal{B}e(1,1) \equiv \mathcal{U}(0,1)$; $\mathcal{B}e(1/2, 1/2) \equiv$ arcsine law.

1.10 Student $t_n$ Distribution

Random variable $X$ has Student $t_n$ distribution with $n$ degrees of freedom if its density is given by

$$f_n(x) = \frac{\Gamma\!\left(\frac{n+1}{2}\right)}{\sqrt{n\pi}\,\Gamma\!\left(\frac{n}{2}\right)} \left(1 + \frac{x^2}{n}\right)^{-\frac{n+1}{2}}, \quad -\infty < x < \infty.$$

• Moments: $EX^{2k} = \frac{\Gamma(n/2-k)\,\Gamma(k+1/2)}{\sqrt{\pi}\,\Gamma(n/2)}\, n^k$, $2k < n$; $EX^{2k-1} = 0$, $k = 1, 2, \dots$
• $\mathrm{Var}\,X = \frac{n}{n-2}$, $n > 2$.
• Characteristic function $\varphi(t) = \frac{\sqrt{\pi}\,\Gamma(m)}{\Gamma(n/2)} \frac{e^{-\sqrt{n}|t|}}{2^{2(m-1)}(m-1)!} \sum_{k=0}^{m-1} (2k)! \binom{m-1+k}{2k} (2\sqrt{n}|t|)^{m-1-k}$, for $m = \frac{n+1}{2}$ an integer.
• $t_{2k-1}$, $k = 1, 2, \dots$, is infinitely divisible.
• If $Z \sim \mathcal{N}(0,1)$ and $Y \sim \chi^2_n$ are independent, then $X = Z/\sqrt{Y/n}$ has the $t_n$ distribution (see the simulation sketch after this section).
• $t_1 \equiv \mathcal{C}a(0,1)$.
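The representation $X = Z/\sqrt{Y/n}$ gives a simple simulation-based sanity check. A minimal sketch, assuming numpy and scipy are available:

```python
# Sketch: build t_n from an independent N(0,1) and chi^2_n, then test the fit.
import numpy as np
from scipy.stats import t as t_dist, kstest

rng = np.random.default_rng(0)
n, size = 5, 100_000
z = rng.standard_normal(size)        # Z ~ N(0, 1)
y = rng.chisquare(n, size)           # Y ~ chi^2_n, independent of Z
x = z / np.sqrt(y / n)               # X = Z / sqrt(Y/n) ~ t_n

print(kstest(x, t_dist(df=n).cdf))   # large p-value: consistent with t_n
print(x.var(ddof=1), n / (n - 2))    # sample variance vs. Var X = n/(n-2)
```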
1.11 Cauchy $\mathcal{C}a(a,b)$ Distribution

Random variable $X$ has Cauchy $\mathcal{C}a(a,b)$ distribution with center $a \in \mathbb{R}$ and scale $b \in \mathbb{R}_+$ if its density is given by

$$f(x) = \frac{1}{b\pi}\, \frac{1}{1 + \left(\frac{x-a}{b}\right)^2} = \frac{b}{\pi(b^2 + (x-a)^2)}, \quad -\infty < x < \infty.$$

If $a = 0$ and $b = 1$, the distribution is called standard Cauchy. The cdf is

$$F(x) = 1/2 + \frac{1}{\pi}\arctan\frac{x-a}{b}, \quad -\infty < x < \infty.$$

• Moments: no finite moments. The value $x = a$ is the mode and median of the distribution.
• Characteristic function $\varphi(t) = e^{iat - b|t|}$. The Cauchy distribution is infinitely divisible. Since $\varphi(t) = (\varphi(t/n))^n$, if $X_1, \dots, X_n$ are iid Cauchy, $\bar{X}$ is also Cauchy (illustrated numerically at the end of this handout).
• If $Y, Z$ are independent standard normal $\mathcal{N}(0,1)$, then $X = Y/Z$ has the Cauchy $\mathcal{C}a(0,1)$ distribution. If a variable $U$ is uniformly distributed between $-\pi/2$ and $\pi/2$, then $X = \tan U$ follows the standard Cauchy distribution.
• The Cauchy distribution is sometimes called the Lorentz distribution (especially in the engineering community).

1.12 Fisher $F_{m,n}$ Distribution

Random variable $X$ has Fisher $F_{m,n}$ distribution with $m$ and $n$ degrees of freedom if its density is given by

$$f(x|m,n) = \frac{m^{m/2} n^{n/2} x^{m/2-1}}{B(m/2, n/2)}\, (n + mx)^{-(m+n)/2}, \quad x > 0.$$

The cdf $F(x)$ has no closed form.

• Moments: $E(X^k) = \left(\frac{n}{m}\right)^k \frac{\Gamma(m/2+k)\,\Gamma(n/2-k)}{\Gamma(m/2)\,\Gamma(n/2)}$, $2k < n$.
• Mean $EX = \frac{n}{n-2}$, $n > 2$; variance $\mathrm{Var}\,X = \frac{2n^2(m+n-2)}{m(n-2)^2(n-4)}$, $n > 4$.
• Mode $\frac{n}{m} \cdot \frac{m-2}{n+2}$, $m > 2$.

1.13 Logistic $\mathcal{L}o(\mu, \sigma^2)$ Distribution

Random variable $X$ has logistic $\mathcal{L}o(\mu,\sigma^2)$ distribution with parameters $\mu \in \mathbb{R}$ and $\sigma^2 > 0$ if its density is given by

$$f(x|\mu,\sigma^2) = \frac{\pi}{\sigma\sqrt{3}}\, \frac{\exp\!\left\{-\frac{\pi}{\sqrt{3}}\frac{x-\mu}{\sigma}\right\}}{\left(1 + \exp\!\left\{-\frac{\pi}{\sqrt{3}}\frac{x-\mu}{\sigma}\right\}\right)^2}.$$

• Expectation $EX = \mu$; variance $\mathrm{Var}\,X = \sigma^2$.
• Characteristic function $\varphi(t) = e^{it\mu}\,\Gamma\!\left(1 - i\frac{\sigma\sqrt{3}}{\pi}t\right)\Gamma\!\left(1 + i\frac{\sigma\sqrt{3}}{\pi}t\right)$.
• Used in modeling in biometry (drug response, toxicology, etc.).
• Alternative parametrization: in terms of a scale $s > 0$, $f(x|\mu,s) = \frac{e^{-(x-\mu)/s}}{s\left(1 + e^{-(x-\mu)/s}\right)^2}$, with $\mathrm{Var}\,X = s^2\pi^2/3$.

1.14 Lognormal $\mathcal{LN}(\mu, \sigma^2)$ Distribution

Random variable $X$ has lognormal $\mathcal{LN}(\mu,\sigma^2)$ distribution with parameters $\mu \in \mathbb{R}$ and $\sigma^2 > 0$ if its density is given by

$$f(x|\mu,\sigma^2) = \begin{cases} \frac{1}{x\sigma\sqrt{2\pi}}\, e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}}, & x > 0 \\ 0, & \text{else.} \end{cases}$$

• Moments: $EX^k = \exp\{\frac{1}{2}k^2\sigma^2 + k\mu\}$.
• Variance $\mathrm{Var}\,X = e^{\sigma^2 + 2\mu}[e^{\sigma^2} - 1]$.
• Mode $e^{\mu - \sigma^2}$.
• If $Z \sim \mathcal{N}(0,1)$ then $X = \exp\{\sigma Z + \mu\} \sim \mathcal{LN}(\mu, \sigma^2)$; the sketch below uses this construction to check the moment formula.
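A minimal sketch of that check, assuming numpy is available:

```python
# Sketch: simulate LN(mu, sigma^2) via exp(sigma*Z + mu) and check
# E X^k = exp(k*mu + k^2*sigma^2/2) for the first few k.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.4, 0.5
x = np.exp(sigma * rng.standard_normal(1_000_000) + mu)

for k in (1, 2, 3):
    print(k, (x**k).mean(), np.exp(k * mu + 0.5 * k**2 * sigma**2))
```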
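Finally, the characteristic-function argument of Section 1.11 can also be seen numerically: since $\varphi(t) = (\varphi(t/n))^n$, the sample mean of iid $\mathcal{C}a(a,b)$ draws is again $\mathcal{C}a(a,b)$ and does not concentrate as $n$ grows. A minimal sketch, again assuming numpy and scipy:

```python
# Sketch: sample means of iid Ca(a, b) draws are themselves Ca(a, b),
# as a Kolmogorov-Smirnov test against Ca(a, b) shows.
import numpy as np
from scipy.stats import cauchy, kstest

rng = np.random.default_rng(2)
a, b = 1.0, 2.0
draws = cauchy(loc=a, scale=b).rvs(size=(10_000, 400), random_state=rng)
xbar = draws.mean(axis=1)            # 10,000 means of 400 draws each

print(kstest(xbar, cauchy(loc=a, scale=b).cdf))  # consistent with Ca(a, b)
```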