Hand-Book on STATISTICAL DISTRIBUTIONS for Experimentalists

Pages: 16 · File type: PDF · Size: 1020 kB

Internal Report SUF–PFY/96–01, Stockholm, 11 December 1996
1st revision: 31 October 1998 · Last modification: 10 September 2007

by Christian Walck, Particle Physics Group, Fysikum, University of Stockholm (e-mail: [email protected])

Contents

1 Introduction 1
  1.1 Random Number Generation 1
2 Probability Density Functions 3
  2.1 Introduction 3
  2.2 Moments 3
    2.2.1 Errors of Moments 4
  2.3 Characteristic Function 4
  2.4 Probability Generating Function 5
  2.5 Cumulants 6
  2.6 Random Number Generation 7
    2.6.1 Cumulative Technique 7
    2.6.2 Accept-Reject Technique 7
    2.6.3 Composition Techniques 8
  2.7 Multivariate Distributions 9
    2.7.1 Multivariate Moments 9
    2.7.2 Errors of Bivariate Moments 9
    2.7.3 Joint Characteristic Function 10
    2.7.4 Random Number Generation 11
3 Bernoulli Distribution 12
  3.1 Introduction 12
  3.2 Relation to Other Distributions 12
4 Beta Distribution 13
  4.1 Introduction 13
  4.2 Derivation of the Beta Distribution 13
  4.3 Characteristic Function 14
  4.4 Moments 14
  4.5 Probability Content 14
  4.6 Random Number Generation 15
5 Binomial Distribution 16
  5.1 Introduction 16
  5.2 Moments 16
  5.3 Probability Generating Function 16
  5.4 Cumulative Function 17
  5.5 Random Number Generation 17
  5.6 Estimation of Parameters 17
  5.7 Probability Content 18
6 Binormal Distribution 20
  6.1 Introduction 20
  6.2 Conditional Probability Density 21
  6.3 Characteristic Function 21
  6.4 Moments 21
  6.5 Box-Muller Transformation 22
  6.6 Probability Content 23
  6.7 Random Number Generation 24
7 Cauchy Distribution 26
  7.1 Introduction 26
  7.2 Moments 26
  7.3 Normalization 27
  7.4 Characteristic Function 27
  7.5 Location and Scale Parameters 27
  7.6 Breit-Wigner Distribution 28
  7.7 Comparison to Other Distributions 28
  7.8 Truncation 29
  7.9 Sum and Average of Cauchy Variables 30
  7.10 Estimation of the Median 31
  7.11 Estimation of the HWHM 31
  7.12 Random Number Generation 33
  7.13 Physical Picture 34
  7.14 Ratio Between Two Standard Normal Variables 35
8 Chi-square Distribution 36
  8.1 Introduction 36
  8.2 Moments 36
  8.3 Characteristic Function 37
  8.4 Cumulative Function 38
  8.5 Origin of the Chi-square Distribution 38
  8.6 Approximations 39
  8.7 Random Number Generation 39
  8.8 Confidence Intervals for the Variance 40
  8.9 Hypothesis Testing 40
  8.10 Probability Content 40
  8.11 Even Number of Degrees of Freedom 42
  8.12 Odd Number of Degrees of Freedom 42
  8.13 Final Algorithm 43
  8.14 Chi Distribution 43
9 Compound Poisson Distribution 45
  9.1 Introduction 45
  9.2 Branching Process 45
  9.3 Moments 45
  9.4 Probability Generating Function 45
  9.5 Random Number Generation 46
10 Double-Exponential Distribution 47
  10.1 Introduction 47
  10.2 Moments 47
  10.3 Characteristic Function 47
  10.4 Cumulative Function 48
  10.5 Random Number Generation 48
11 Doubly Non-Central F-Distribution 49
  11.1 Introduction 49
  11.2 Moments 49
  11.3 Cumulative Distribution 50
  11.4 Random Number Generation 50
12 Doubly Non-Central t-Distribution 51
  12.1 Introduction 51
  12.2 Moments 51
  12.3 Cumulative Distribution 52
  12.4 Random Number Generation 52
13 Error Function 53
  13.1 Introduction 53
  13.2 Probability Density Function 53
14 Exponential Distribution 54
  14.1 Introduction 54
  14.2 Cumulative Function 54
  14.3 Moments 54
  14.4 Characteristic Function 54
  14.5 Random Number Generation 55
    14.5.1 Method by von Neumann 55
    14.5.2 Method by Marsaglia 55
    14.5.3 Method by Ahrens 56
15 Extreme Value Distribution 57
  15.1 Introduction 57
  15.2 Cumulative Distribution 58
  15.3 Characteristic Function 58
  15.4 Moments 58
  15.5 Random Number Generation 60
16 F-distribution 61
  16.1 Introduction 61
  16.2 Relations to Other Distributions 62
  16.3 1/F 62
  16.4 Characteristic Function 62
  16.5 Moments 62
  16.6 F-ratio 63
  16.7 Variance Ratio 64
  16.8 Analysis of Variance 65
  16.9 Calculation of Probability Content 65
    16.9.1 The Incomplete Beta Function 66
    16.9.2 Final Formulæ 67
  16.10 Random Number Generation 68
17 Gamma Distribution 69
  17.1 Introduction 69
  17.2 Derivation of the Gamma Distribution 69
  17.3 Moments 70
  17.4 Characteristic Function 70
  17.5 Probability Content 71
  17.6 Random Number Generation 71
    17.6.1 Erlangian Distribution 71
    17.6.2 General Case 71
    17.6.3 Asymptotic Approximation 72
18 Generalized Gamma Distribution 73
  18.1 Introduction 73
  18.2 Cumulative Function 73
  18.3 Moments 74
  18.4 Relation to Other Distributions 74
19 Geometric Distribution 75
  19.1 Introduction 75
  19.2 Moments 75
  19.3 Probability Generating Function 75
  19.4 Random Number Generation 75
20 Hyperexponential Distribution 77
  20.1 Introduction 77
  20.2 Moments 77
  20.3 Characteristic Function 77
  20.4 Random Number Generation 78
21 Hypergeometric Distribution 79
  21.1 Introduction 79
  21.2 Probability Generating Function 79
  21.3 Moments 79
  21.4 Random Number Generation 80
22 Logarithmic Distribution 81
  22.1 Introduction …
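The random-number-generation techniques catalogued in Chapter 2 are easy to prototype. A minimal sketch of the accept-reject technique (§2.6.2), here applied to the Beta(2,2) density with a uniform envelope — the target density and the bound are my illustrative choices, not an example taken from the handbook:

```python
import random

def accept_reject(pdf, pdf_max, a, b, rng=random.random):
    """Sample from pdf on [a, b] by accept-reject with a uniform envelope."""
    while True:
        x = a + (b - a) * rng()          # candidate from the envelope
        if rng() * pdf_max <= pdf(x):    # accept with probability pdf(x)/pdf_max
            return x

# Beta(2,2) density on [0, 1]; its maximum 1.5 is attained at x = 0.5.
beta22 = lambda x: 6.0 * x * (1.0 - x)

random.seed(12345)
sample = [accept_reject(beta22, 1.5, 0.0, 1.0) for _ in range(20000)]
print(sum(sample) / len(sample))  # should land near the Beta(2,2) mean, 0.5
```

The envelope need not be uniform; any dominating density that is easy to sample from works, and a tighter envelope wastes fewer candidates.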
Recommended publications
  • The Poisson Burr X Inverse Rayleigh Distribution and Its Applications
    Journal of Data Science, 18(1), pp. 56–77, 2020. DOI: 10.6339/JDS.202001_18(1).0003. The Poisson Burr X Inverse Rayleigh Distribution and Its Applications. Rania H. M. Abdelkhalek*, Department of Statistics, Mathematics and Insurance, Benha University, Egypt. ABSTRACT A new flexible extension of the inverse Rayleigh model is proposed and studied. Some of its fundamental statistical properties are derived. We assessed the performance of the maximum likelihood method via a simulation study. The importance of the new model is shown via three applications to real data sets. The new model is much better than other important competitive models. Keywords: Inverse Rayleigh; Poisson; Simulation; Modeling. * [email protected] 1. Introduction and physical motivation The well-known inverse Rayleigh (IR) model is considered as a distribution for a lifetime random variable (r.v.). The IR distribution has many applications in the area of reliability studies. Voda (1972) proved that the distribution of lifetimes of several types of experimental (Exp) units can be approximated by the IR distribution and studied some properties of the maximum likelihood estimation (MLE) of its parameter. Mukerjee and Saran (1984) studied the failure rate of an IR distribution. Aslam and Jun (2009) introduced group acceptance sampling plans for truncated life tests based on the IR. Soliman et al. (2010) studied the Bayesian and non-Bayesian estimation of the parameter of the IR model. Dey (2012) mentioned that the IR model has also been used as a failure time distribution, although its variance and higher-order moments do not exist.
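The base IR model is simple enough to sketch. Assuming the common one-parameter form F(x) = exp(−θ/x²) for x > 0 (the paper's exact parametrization may differ), inverse-transform sampling is direct:

```python
import math
import random

def inv_rayleigh_cdf(x, theta):
    """CDF of the inverse Rayleigh distribution: F(x) = exp(-theta / x^2)."""
    return math.exp(-theta / (x * x))

def inv_rayleigh_sample(theta, rng=random.random):
    """Inverse-transform sampling: solve u = exp(-theta / x^2) for x."""
    u = rng()
    return math.sqrt(theta / -math.log(u))

random.seed(1)
theta = 2.0
xs = [inv_rayleigh_sample(theta) for _ in range(50000)]
# Empirical CDF at x = 2 should match F(2) = exp(-0.5) ≈ 0.6065.
frac = sum(x <= 2.0 for x in xs) / len(xs)
print(frac, inv_rayleigh_cdf(2.0, theta))
```

This also makes Dey's remark concrete: the sampled values have a heavy right tail, consistent with the variance not existing.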
  • Lesson 7: Measuring Variability for Skewed Distributions (Interquartile Range)
    NYS COMMON CORE MATHEMATICS CURRICULUM, Lesson 7, M2, ALGEBRA I. Lesson 7: Measuring Variability for Skewed Distributions (Interquartile Range). Student Outcomes: Students explain why a median is a better description of a typical value for a skewed distribution. Students calculate the 5-number summary of a data set. Students construct a box plot based on the 5-number summary and calculate the interquartile range (IQR). Students interpret the IQR as a description of variability in the data. Students identify outliers in a data distribution. Lesson Notes: Distributions that are not symmetrical pose some challenges in students' thinking about center and variability. The observation that the distribution is not symmetrical is straightforward. The difficult part is to select a measure of center and a measure of variability around that center. In Lesson 3, students learned that, because the mean can be affected by unusual values in the data set, the median is a better description of a typical data value for a skewed distribution. This lesson addresses what measure of variability is appropriate for a skewed data distribution. Students construct a box plot of the data using the 5-number summary and describe variability using the interquartile range. Classwork: Exploratory Challenge 1/Exercises 1–3 (10 minutes): Skewed Data and Their Measure of Center. Verbally introduce the data set as described in the introductory paragraph and dot plot shown below. Exploratory Challenge 1/Exercises 1–3: Skewed Data and Their Measure of Center. Consider the following scenario. A television game show, Fact or Fiction, was cancelled after nine shows. Many people watched the nine shows and were rather upset when it was taken off the air.
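The 5-number summary and IQR described in the lesson can be sketched in a few lines. The data set below is hypothetical (not the show's ratings from the lesson), and the median-of-halves rule for the quartiles is one common convention among several:

```python
def five_number_summary(data):
    """Minimum, Q1, median, Q3, maximum, using the median-of-halves rule.
    Conventions for Q1/Q3 vary between textbooks."""
    xs = sorted(data)
    n = len(xs)

    def median(v):
        m = len(v) // 2
        return v[m] if len(v) % 2 else (v[m - 1] + v[m]) / 2

    lower = xs[: n // 2]             # excludes the median itself when n is odd
    upper = xs[(n + 1) // 2 :]
    return min(xs), median(lower), median(xs), median(upper), max(xs)

viewers = [5, 7, 9, 11, 14, 16, 25, 31, 40]   # hypothetical skewed data set
lo, q1, med, q3, hi = five_number_summary(viewers)
iqr = q3 - q1
print(lo, q1, med, q3, hi, iqr)  # 5 8.0 14 28.0 40 20.0
```

The 1.5×IQR fence for outliers follows directly: any value below q1 − 1.5·iqr or above q3 + 1.5·iqr is flagged.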
  • Circularly-Symmetric Complex Normal Ratio Distribution for Scalar Transmissibility Functions. Part II: Probabilistic Model and Validation
    Mechanical Systems and Signal Processing 80 (2016) 78–98. Circularly-symmetric complex normal ratio distribution for scalar transmissibility functions. Part II: Probabilistic model and validation. Wang-Ji Yan, Wei-Xin Ren, Department of Civil Engineering, Hefei University of Technology, Hefei, Anhui 23009, China. Article history: Received 29 May 2015; received in revised form 3 February 2016; accepted 27 February 2016; available online 23 March 2016. Abstract: In Part I of this study, some new theorems, corollaries and lemmas on the circularly-symmetric complex normal ratio distribution have been mathematically proved. This Part II paper is dedicated to providing a rigorous treatment of the statistical properties of raw scalar transmissibility functions at an arbitrary frequency line. On the basis of the statistics of raw FFT coefficients and the circularly-symmetric complex normal ratio distribution, explicit closed-form probabilistic models are established for both multivariate and univariate scalar transmissibility functions. Also, remarks on the independence of transmissibility functions at different frequency lines and on the shape of the probability density function (PDF) in the univariate case are presented. The statistical structures of the probabilistic models are concise, compact and easily implemented with a low computational effort. They hold for general stationary vector processes, either Gaussian or non-Gaussian stochastic processes. The accuracy of the proposed models is verified using a numerical example as well as field test data of a high-rise building and a long-span cable-stayed bridge. Keywords: Transmissibility; Uncertainty quantification; Signal processing; Statistics; Structural dynamics; Ambient vibration.
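A raw scalar transmissibility function is simply the frequency-wise ratio of the FFT coefficients of two response signals; the paper models the statistics of that ratio. A minimal sketch of estimating it at one frequency line (the sinusoid-plus-noise signals and all parameter values are illustrative assumptions, not data from the paper):

```python
import cmath
import math
import random

def dft(x):
    """Naive DFT; adequate for a short illustrative record."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

random.seed(0)
n = 256
f = 16  # frequency bin of the excitation (illustrative)
# Two responses to the same excitation: response 2 is a scaled, noisy copy.
x1 = [math.sin(2 * math.pi * f * t / n) + 0.01 * random.gauss(0, 1) for t in range(n)]
x2 = [2.0 * math.sin(2 * math.pi * f * t / n) + 0.01 * random.gauss(0, 1) for t in range(n)]

X1, X2 = dft(x1), dft(x2)
# Raw scalar transmissibility at one frequency line: ratio of FFT coefficients.
T = X2[f] / X1[f]
print(abs(T))  # close to 2, the true amplitude ratio at bin f
```

Because numerator and denominator are both noisy complex quantities, repeating this over many records gives a scatter of ratios — the object whose distribution Part I and Part II characterize.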
  • Effect of Probability Distribution of the Response Variable in Optimal Experimental Design with Applications in Medicine †
    Mathematics (Article): Effect of Probability Distribution of the Response Variable in Optimal Experimental Design with Applications in Medicine †. Sergio Pozuelo-Campos *,‡, Víctor Casero-Alonso ‡ and Mariano Amo-Salas ‡, Department of Mathematics, University of Castilla-La Mancha, 13071 Ciudad Real, Spain; [email protected] (V.C.-A.); [email protected] (M.A.-S.). * Correspondence: [email protected]. † This paper is an extended version of a published conference paper, part of the proceedings of the 35th International Workshop on Statistical Modeling (IWSM), Bilbao, Spain, 19–24 July 2020. ‡ These authors contributed equally to this work. Abstract: In optimal experimental design theory it is usually assumed that the response variable follows a normal distribution with constant variance. However, some works assume other probability distributions based on additional information or the practitioner's prior experience. The main goal of this paper is to study the effect, in terms of efficiency, when misspecification of the probability distribution of the response variable occurs. The elemental information matrix, which includes information on the probability distribution of the response variable, provides a generalized Fisher information matrix. This study is performed from a practical perspective, comparing a normal distribution with the Poisson or gamma distribution. First, analytical results are obtained, including results for the linear quadratic model, and these are applied to some real illustrative examples. The nonlinear 4-parameter Hill model is next considered to study the influence of misspecification in a dose-response model. This analysis shows the behavior of the efficiency of the designs obtained in …
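The abstract's central object — a Fisher information matrix whose weights depend on the assumed response distribution — can be sketched for a simple linear mean model. Everything below (the design points, the mean function, and the choice of identity link for the Poisson case) is an illustrative assumption, not the paper's actual setup:

```python
def info_matrix(points, weight):
    """M(ξ) = Σ_i w(x_i) f(x_i) f(x_i)^T for the linear model μ(x) = θ0 + θ1·x,
    where the weight w(x) encodes the response distribution:
    1/σ² for a normal response, 1/μ(x) for Poisson with an identity link."""
    M = [[0.0, 0.0], [0.0, 0.0]]
    for x in points:
        f = (1.0, x)                     # regression vector for θ0 + θ1·x
        w = weight(x)
        for i in range(2):
            for j in range(2):
                M[i][j] += w * f[i] * f[j]
    return M

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

design = [0.0, 0.5, 1.0]                       # a 3-point design (illustrative)
mu = lambda x: 1.0 + 2.0 * x                   # assumed true mean function
M_normal  = info_matrix(design, weight=lambda x: 1.0)         # normal, σ² = 1
M_poisson = info_matrix(design, weight=lambda x: 1.0 / mu(x)) # Poisson weights
# D-criterion: the two distributional assumptions score the same design differently,
# which is exactly why misspecification costs efficiency.
print(det2(M_normal), det2(M_poisson))
```

Comparing det M across candidate designs under each weight function is the D-efficiency comparison the abstract alludes to.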
  • The Beta-Binomial Distribution Introduction Bayesian Derivation
    In Danish: 2005-09-19 / SLB. Translated: 2008-05-28 / SLB. Bayesian Statistics, Simulation and Software. The beta-binomial distribution. I have translated this document, written for another course in Danish, almost as is. I have kept the references to Lee, the textbook used for that course. Introduction: In Lee: Bayesian Statistics, the beta-binomial distribution is very shortly mentioned as the predictive distribution for the binomial distribution, given the conjugate prior distribution, the beta distribution. (In Lee, see pp. 78, 214, 156.) Here we shall treat it slightly more in depth, partly because it emerges in the WinBUGS example in Lee § 9.7, and partly because it possibly can be useful for your project work. Bayesian Derivation: We make n independent Bernoulli trials (0-1 trials) with probability parameter π. It is well known that the number of successes x has the binomial distribution. Considering the probability parameter π unknown (but of course the sample-size parameter n is known), we have x|π ∼ bin(n, π), where in generic¹ notation p(x|π) = C(n, x) π^x (1 − π)^(n−x), x = 0, 1, …, n. We assume as prior distribution for π a beta distribution, i.e. π ∼ beta(α, β), with density function p(π) = π^(α−1) (1 − π)^(β−1) / B(α, β), 0 < π < 1. I remind you that the beta function can be expressed by the gamma function: B(α, β) = Γ(α)Γ(β)/Γ(α + β). (1) In Lee, § 3.1, it is shown that the posterior distribution is a beta distribution as well, π|x ∼ beta(α + x, β + n − x). (Because of this result we say that the beta distribution is the conjugate distribution to the binomial distribution.) We shall now derive the predictive distribution, that is, find p(x). [¹ By this we mean that p is not a fixed function, but denotes the density function (in the discrete case also called the probability function) for the random variable whose value is the argument of p.]
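The predictive distribution being derived is the beta-binomial, p(x) = C(n, x) B(α + x, β + n − x) / B(α, β). A quick numerical check that this pmf is proper (sums to one) and has the known mean nα/(α + β):

```python
from math import comb, lgamma, exp

def log_beta(a, b):
    """log B(a, b) via the gamma function, Eq. (1): B(a,b) = Γ(a)Γ(b)/Γ(a+b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_pmf(x, n, alpha, beta):
    """Predictive pmf: p(x) = C(n, x) · B(α + x, β + n − x) / B(α, β)."""
    return comb(n, x) * exp(log_beta(alpha + x, beta + n - x) - log_beta(alpha, beta))

n, alpha, beta = 10, 2.0, 3.0
pmf = [beta_binomial_pmf(x, n, alpha, beta) for x in range(n + 1)]
print(sum(pmf))                                       # 1.0: a proper distribution
print(sum(x * p for x, p in zip(range(n + 1), pmf)))  # mean n·α/(α+β) = 4.0
```

Working in log space via `lgamma` avoids overflow of the gamma function for large n, α, β.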
  • 9. Binomial Distribution
    J. K. SHAH CLASSES — 9. Binomial Distribution. THEORETICAL DISTRIBUTION (exists only in theory). 1. A theoretical distribution is a distribution where the variables are distributed according to some definite mathematical law. 2. In other words, theoretical distributions are mathematical models, where the frequencies/probabilities are calculated by mathematical computation. 3. Theoretical distributions are also called expected variance distributions or frequency distributions. 4. Theoretical distributions fall into two groups: discrete (the binomial and Poisson distributions) and continuous (the normal or Gaussian distribution, Student's t distribution, the chi-square distribution, and Snedecor's F-distribution). A. Binomial Distribution (Bernoulli Distribution). 1. This is a discrete probability distribution where the variable x can assume only discrete values, i.e. x = 0, 1, 2, 3, …, n. 2. This distribution is derived from a special type of random experiment known as a Bernoulli experiment, or Bernoulli trials, which has the following characteristics: (i) Each trial must be associated with two mutually exclusive and exhaustive outcomes, SUCCESS and FAILURE. Usually the probability of success is denoted by p and that of failure by q, where q = 1 − p and therefore p + q = 1. (ii) The trials must be independent under identical conditions. (iii) The number of trials must be finite (countably finite). (iv) The probabilities of success and failure remain unchanged throughout the process. Note 1: A 'trial' is an attempt to produce an outcome which is neither sure nor impossible in nature. Note 2: The conditions mentioned may also be treated as the conditions for a binomial distribution. 3. Characteristics or properties of the binomial distribution: (i) It is a bi-parametric distribution, i.e. …
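The construction in point 2 — a fixed, finite number of independent success/failure trials with constant p — can be simulated directly to see the binomial counts emerge (the parameter values are illustrative):

```python
import random

def bernoulli_trial(p, rng=random.random):
    """One success/failure trial with constant success probability p."""
    return 1 if rng() < p else 0

def binomial_draw(n, p):
    """x = number of successes in n independent Bernoulli trials."""
    return sum(bernoulli_trial(p) for _ in range(n))

random.seed(7)
n, p = 10, 0.3
draws = [binomial_draw(n, p) for _ in range(20000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(mean, var)  # close to np = 3 and npq = 2.1
```

The two constants n and p fix everything here, which is what "bi-parametric" in point 3 means.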
  • New Proposed Length-Biased Weighted Exponential and Rayleigh Distribution with Application
    Mathematical Theory and Modeling, www.iiste.org, ISSN 2224-5804 (Paper), ISSN 2225-0522 (Online), Vol. 4, No. 7, 2014. New Proposed Length-Biased Weighted Exponential and Rayleigh Distributions with Application. Kareema Abed Al-kadim¹, Nabeel Ali Hussein². 1,2. College of Education of Pure Sciences, University of Babylon/Hilla. 1. E-mail of the corresponding author: [email protected] 2. E-mail: [email protected]. Abstract: The concept of the length-biased distribution can be employed in the development of proper models for lifetime data. The length-biased distribution is a special case of the more general form known as the weighted distribution. In this paper we introduce a new class of length-biased weighted exponential and Rayleigh distributions (LBWED, LBWRD). This paper surveys some of the possible uses of length-biased distributions, and we study some of their statistical properties, with applications of these new distributions. Keywords: length-biased weighted Rayleigh distribution, length-biased weighted exponential distribution, maximum likelihood estimation. 1. Introduction: Fisher (1934) [1] introduced the concept of weighted distributions; Rao (1965) [2] developed this concept in general terms in connection with modeling statistical data where the usual practice of using standard distributions was not found to be appropriate. There are various situations that can be modeled by weighted distributions: non-observability of some events, or damage caused to the original observation resulting in a reduced value, or a sampling procedure which gives unequal chances to the units in the original sample. That means the recorded observations cannot be considered as an original random sample.
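For a length-biased version of a density f with mean μ, the weighted density is f_LB(x) = x·f(x)/μ. For the exponential with rate λ this works out to λ²x·e^(−λx), a Gamma(2, λ) density, which permits a quick simulation check (the sampling-by-sum identity is a standard fact, not a method from the paper):

```python
import random

def length_biased_exponential(lam):
    """Sample from f_LB(x) = x · λe^{−λx} / (1/λ) = λ² x e^{−λx}, i.e. Gamma(2, λ):
    a sum of two independent Exp(λ) draws."""
    return random.expovariate(lam) + random.expovariate(lam)

random.seed(3)
lam = 0.5
xs = [length_biased_exponential(lam) for _ in range(50000)]
mean = sum(xs) / len(xs)
print(mean)  # length-biased mean is 2/λ = 4, versus 1/λ = 2 for the parent
```

The doubled mean is the signature of length bias: longer lifetimes are proportionally more likely to be recorded.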
  • A Matrix-Valued Bernoulli Distribution Gianfranco Lovison1 Dipartimento Di Scienze Statistiche E Matematiche “S
    Journal of Multivariate Analysis 97 (2006) 1573–1585, www.elsevier.com/locate/jmva. A matrix-valued Bernoulli distribution. Gianfranco Lovison¹, Dipartimento di Scienze Statistiche e Matematiche "S. Vianelli", Università di Palermo, Viale delle Scienze, 90128 Palermo, Italy. Received 17 September 2004; available online 10 August 2005. Abstract: Matrix-valued distributions are used in continuous multivariate analysis to model sample data matrices of continuous measurements; their use seems to be neglected for binary, or more generally categorical, data. In this paper we propose a matrix-valued Bernoulli distribution, based on the log-linear representation introduced by Cox [The analysis of multivariate binary data, Appl. Statist. 21 (1972) 113–120] for the multivariate Bernoulli distribution with correlated components. © 2005 Elsevier Inc. All rights reserved. AMS 1991 subject classification: 62E05; 62Hxx. Keywords: Correlated multivariate binary responses; Multivariate Bernoulli distribution; Matrix-valued distributions. 1. Introduction: Matrix-valued distributions are used in continuous multivariate analysis (see, for example, [10]) to model sample data matrices of continuous measurements, allowing for both variable-dependence and unit-dependence. Their potential seems to have been neglected for binary, and more generally categorical, data. This is somewhat surprising, since the natural, elementary representation of datasets with categorical variables is precisely in the form of sample binary data matrices, through the 0-1 coding of categories. E-mail address: [email protected]. URL: http://dssm.unipa.it/lovison. ¹ Work supported by MIUR 60% 2000, 2001 and MIUR P.R.I.N. 2002 grants. doi:10.1016/j.jmva.2005.06.008
  • Duality for Real and Multivariate Exponential Families
    Duality for Real and Multivariate Exponential Families. Gérard Letac, Institut de Mathématiques de Toulouse, 118 route de Narbonne, 31062 Toulouse, France. Abstract: Consider a measure µ on Rⁿ generating a natural exponential family F(µ) with variance function V_F(µ)(m) and Laplace transform exp(ℓ_µ(s)) = ∫_{Rⁿ} exp(−⟨s, x⟩) µ(dx). A dual measure µ* satisfies ℓ′_{µ*}(−ℓ′_µ(s)) = −s. Such a dual measure does not always exist. One important property is ℓ″_{µ*}(m) = (V_F(µ)(m))⁻¹, leading to the notion of duality among exponential families (or rather among the extended notion of T exponential families TF obtained by considering all translations of a given exponential family F). Keywords: Dilogarithm distribution, Landau distribution, large deviations, quadratic and cubic real exponential families, Tweedie scale, Wishart distributions. 2020 MSC: Primary 62H05, Secondary 60E10. 1. Introduction: One can be surprised by the explicit formulas that one gets from large deviations in one dimension. Consider iid random variables X₁, …, Xₙ, … and the empirical mean X̄ₙ = (X₁ + ⋯ + Xₙ)/n. Then the Cramér theorem [6] says that the limit Pr(X̄ₙ > m)^(1/n) → α(m) as n → ∞ does exist for m > E(X₁). For instance, the symmetric Bernoulli case Pr(Xₙ = ±1) = 1/2 leads, for 0 < m < 1, to α(m) = (1 + m)^(−(1+m)/2) (1 − m)^(−(1−m)/2). (1) For the bilateral exponential distribution Xₙ ∼ ½ e^(−|x|) dx we get, for m > 0, α(m) = ½ e^(1−√(1+m²)) (1 + √(1+m²)). What strange formulas! Other ones can be found in [17], Problem 410.
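Formula (1) can be checked numerically: in the symmetric ±1 Bernoulli case, Pr(X̄ₙ > m) is an exact binomial tail, and its n-th root should approach α(m). Convergence is slow (the polynomial prefactor decays only like log n / n under the n-th root), so only rough agreement is expected at moderate n:

```python
from math import lgamma, log, exp, ceil

def log_binom_tail(n, k):
    """log Pr(S >= k) for S ~ Binomial(n, 1/2), summed stably in log space."""
    terms = [lgamma(n + 1) - lgamma(j + 1) - lgamma(n - j + 1) - n * log(2)
             for j in range(k, n + 1)]
    m = max(terms)
    return m + log(sum(exp(t - m) for t in terms))

def alpha(m):
    """Cramér limit for the symmetric Bernoulli case, Eq. (1)."""
    return (1 + m) ** (-(1 + m) / 2) * (1 - m) ** (-(1 - m) / 2)

m, n = 0.5, 4000
k = ceil(n * (1 + m) / 2) + 1         # X̄_n > m  ⟺  #(+1 values) > n(1+m)/2
root = exp(log_binom_tail(n, k) / n)  # Pr(X̄_n > m)^(1/n)
print(root, alpha(m))
```

Equivalently, α(m) = e^(−I(m)) with the rate function I(m) = ((1+m)/2) log(1+m) + ((1−m)/2) log(1−m), which is how Eq. (1) drops out of the Legendre transform.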
  • Lecture.7 Poisson Distributions - Properties, Normal Distributions- Properties
    Lecture 7: Poisson Distribution — Properties; Normal Distribution — Properties. Theoretical Distributions: the theoretical distributions are 1. the binomial distribution and 2. the Poisson distribution (discrete distributions), and 3. the normal distribution (a continuous distribution). Discrete probability distributions. Bernoulli distribution: A random variable x that takes two values, 0 and 1, with probabilities q and p, i.e. p(x=1) = p and p(x=0) = q, where q = 1 − p, is called a Bernoulli variate and is said to follow a Bernoulli distribution, where p and q are the probabilities of success and failure. It was given by the Swiss mathematician James Bernoulli (1654–1705). Examples: tossing a coin (head or tail); germination of a seed (germinates or not). Binomial distribution: The binomial distribution was discovered by James Bernoulli (1654–1705). Let a random experiment be performed repeatedly, and let the occurrence of an event in a trial be called a success and its non-occurrence a failure. Consider a set of n independent trials (n being finite), in which the probability p of success in any trial is constant for each trial. Then q = 1 − p is the probability of failure in any trial. Consider the probability of x successes, and consequently n − x failures, in n independent trials. For one specified order, P(ss…s ff…f) = p(s)p(s)…p(f)p(f)… = (p·p·…·p)(q·q·…·q), with x factors of p and n − x factors of q, i.e. pˣqⁿ⁻ˣ. But x successes in n trials can occur in C(n, x) ways, and the probability for each of these ways is pˣqⁿ⁻ˣ. Hence the probability of x successes in n trials is C(n, x) pˣqⁿ⁻ˣ. Definition: A random variable x is said to follow a binomial distribution if it assumes non-negative values and its probability mass function is given by P(X = x) = p(x) = C(n, x) pˣqⁿ⁻ˣ for x = 0, 1, 2, …, n (with q = 1 − p), and 0 otherwise. The two independent constants n and p in the distribution are known as the parameters of the distribution.
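The pmf above, P(X = x) = C(n, x) pˣ qⁿ⁻ˣ, is easy to tabulate and check against the lecture's claims (n and p below are arbitrary illustrative values):

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x) = C(n, x) · p^x · q^(n-x), with q = 1 - p."""
    q = 1.0 - p
    return comb(n, x) * p ** x * q ** (n - x)

n, p = 8, 0.25
pmf = [binomial_pmf(x, n, p) for x in range(n + 1)]
print(sum(pmf))                                        # 1.0: (p + q)^n expands to 1
print(sum(x * pr for x, pr in zip(range(n + 1), pmf))) # mean np = 2.0
```

The first print is just the binomial theorem in action: summing C(n, x) pˣ qⁿ⁻ˣ over x gives (p + q)ⁿ = 1.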
  • 5. the Student T Distribution
    Virtual Laboratories > 4. Special Distributions > 5. The Student t Distribution. In this section we will study a distribution that has special importance in statistics. In particular, this distribution will arise in the study of a standardized version of the sample mean when the underlying distribution is normal. The Probability Density Function: Suppose that Z has the standard normal distribution, V has the chi-squared distribution with n degrees of freedom, and that Z and V are independent. Let T = Z / √(V/n). In the following exercise, you will show that T has probability density function given by f(t) = [Γ((n + 1)/2) / (√(nπ) Γ(n/2))] (1 + t²/n)^(−(n+1)/2), t ∈ ℝ. 1. Show that T has the given probability density function by using the following steps. a. Show first that the conditional distribution of T given V = v is normal with mean 0 and variance n/v. b. Use (a) to find the joint probability density function of (T, V). c. Integrate the joint probability density function in (b) with respect to v to find the probability density function of T. The distribution of T is known as the Student t distribution with n degrees of freedom. The distribution is well defined for any n > 0, but in practice, only positive integer values of n are of interest. This distribution was first studied by William Gosset, who published under the pseudonym Student. In addition to supplying the proof, Exercise 1 provides a good way of thinking of the t distribution: the t distribution arises when the variance of a mean 0 normal distribution is randomized in a certain way.
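The density above can be coded directly from the formula and sanity-checked by numerical integration (the integration grid and its ±60 cutoff are arbitrary choices; for n = 5 the tail mass beyond the cutoff is negligible):

```python
from math import lgamma, exp, log, pi

def t_pdf(t, n):
    """Student t density:
    f(t) = Γ((n+1)/2) / (√(nπ) Γ(n/2)) · (1 + t²/n)^(−(n+1)/2)."""
    logc = lgamma((n + 1) / 2) - lgamma(n / 2) - 0.5 * log(n * pi)
    return exp(logc - (n + 1) / 2 * log(1 + t * t / n))

n = 5
width = 120.0     # integrate over [-60, 60]
steps = 200000
h = width / steps
total = sum(t_pdf(-60.0 + i * h, n) for i in range(steps + 1)) * h
print(total)  # ≈ 1, confirming the normalizing constant
```

Note the special case n = 1: the constant reduces to 1/π and f is the standard Cauchy density, the heaviest-tailed member of the family.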
  • Moments of the Product and Ratio of Two Correlated Chi-Square Variables
    Stat Papers (2009) 50:581–592, DOI 10.1007/s00362-007-0105-0. REGULAR ARTICLE: Moments of the product and ratio of two correlated chi-square variables. Anwar H. Joarder. Received: 2 June 2006 / Revised: 8 October 2007 / Published online: 20 November 2007. © The Author(s) 2007. Abstract: The exact probability density function of a bivariate chi-square distribution with two correlated components is derived. Some moments of the product and ratio of two correlated chi-square random variables have been derived. The ratio of the two correlated chi-square variables is used to compare variability. One such application is referred to. Another application is pinpointed in connection with the distribution of the correlation coefficient based on a bivariate t distribution. Keywords: Bivariate chi-square distribution · Moments · Product of correlated chi-square variables · Ratio of correlated chi-square variables. Mathematics Subject Classification (2000): 62E15 · 60E05 · 60E10. 1 Introduction: Fisher (1915) derived the distribution of the mean-centered sum of squares and sum of products in order to study the distribution of the correlation coefficient from a bivariate normal sample. Let X₁, X₂, …, X_N (N > 2) be two-dimensional independent random vectors where X_j = (X_1j, X_2j)′, j = 1, 2, …, N, is distributed as a bivariate normal distribution denoted by N₂(θ, Σ) with θ = (θ₁, θ₂)′ and a 2 × 2 covariance matrix Σ = (σ_ik), i = 1, 2; k = 1, 2. The sample mean-centered sums of squares and sum of products are given by a_ii = Σ_{j=1}^{N} (X_ij − X̄_i)² = mS_i², m = N − 1 (i = 1, 2), and a_12 = Σ_{j=1}^{N} (X_1j − X̄_1)(X_2j − X̄_2) = mRS₁S₂, respectively.
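The mean-centered sums of squares and products at the end of the excerpt are straightforward to compute from a simulated bivariate normal sample (all parameter values below are illustrative):

```python
import random
import math

def bivariate_normal(theta, sigma1, sigma2, rho, rng):
    """One draw from N2(θ, Σ), with Σ built from (σ1, σ2, ρ) via a Cholesky-style
    construction from two independent standard normals."""
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    x1 = theta[0] + sigma1 * z1
    x2 = theta[1] + sigma2 * (rho * z1 + math.sqrt(1 - rho * rho) * z2)
    return x1, x2

rng = random.Random(11)
N, rho = 5000, 0.6
sample = [bivariate_normal((1.0, -2.0), 2.0, 3.0, rho, rng) for _ in range(N)]
xbar1 = sum(x for x, _ in sample) / N
xbar2 = sum(y for _, y in sample) / N

m = N - 1
a11 = sum((x - xbar1) ** 2 for x, _ in sample)           # = m · S1²
a22 = sum((y - xbar2) ** 2 for _, y in sample)           # = m · S2²
a12 = sum((x - xbar1) * (y - xbar2) for x, y in sample)  # = m · R · S1 · S2
print(a11 / m, a22 / m, a12 / math.sqrt(a11 * a22))      # ≈ σ1²=4, σ2²=9, ρ=0.6
```

Scaled by σᵢ², the quantities a_ii/σᵢ² are exactly the correlated chi-square variables (each with m degrees of freedom) whose product and ratio moments the paper derives.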