On the Gamma-Half Normal Distribution and Its Applications



Journal of Modern Applied Statistical Methods, May 2013, Vol. 12, No. 1, 103-119 (Volume 12, Issue 1, Article 15). Copyright © 2013 JMASM, Inc. ISSN 1538-9472.

Ayman Alzaatreh, Austin Peay State University, Clarksville, TN
Kristen Knight, Austin Peay State University, Clarksville, TN

Recommended Citation: Alzaatreh, Ayman and Knight, Kristen (2013). "On the Gamma-Half Normal Distribution and Its Applications," Journal of Modern Applied Statistical Methods: Vol. 12, Iss. 1, Article 15. DOI: 10.22237/jmasm/1367381640. Available at: http://digitalcommons.wayne.edu/jmasm/vol12/iss1/15

Abstract
A new distribution, the gamma-half normal distribution, is proposed and studied. Various structural properties of the gamma-half normal distribution are derived. The shape of the distribution may be unimodal or bimodal. Results for moments, limit behavior, mean deviations and Shannon entropy are provided. To estimate the model parameters, the method of maximum likelihood estimation is proposed. Three real-life data sets are used to illustrate the applicability of the gamma-half normal distribution.

Key words: T-X families; gamma-X family; unimodal; bimodal; Shannon entropy.

Introduction

In recent years, advancements in technology and science have produced a wealth of information that is expanding the level of knowledge across many disciplines. This information is gathered and analyzed by statisticians, who hold the responsibility of accurately assessing the data and making inferences about the population of interest. Without this precise evaluation of data, each field remains limited to its current state of knowledge. In the last decade it has become apparent that many well-known distributions used to model data sets do not offer enough flexibility to provide an adequate fit. For this reason, new methods have been proposed for deriving generalizations of well-known distributions, and these generalizations have found strong applications in real-life scenarios.

Alzaatreh, et al. (2013b) proposed the T-X families of distributions. These families were used to generate new classes of distributions that offer more flexibility in modeling a variety of data sets. Several members of the T-X families have been studied in the literature (e.g., Alzaatreh, et al., 2013a; Alzaatreh, et al., 2013b; Alzaatreh, et al., 2012a; Alzaatreh, et al., 2012b; Lee, et al., 2013).

One well-known distribution is the half-normal distribution, which has been used in a variety of applications. Bland and Altman (1999) used the half-normal distribution to study the relationship between measurement error and magnitude. Bland (2005) extended that work by using the distribution to estimate the standard deviation as a function, so that measurement error could be controlled; analyzing various exercise tests, he found that variability of performance declines with practice. Manufacturing industries have used the half-normal distribution to model lifetime processes under fatigue. These industries often produce goods that customers need to last a long time, which makes the resources required to analyze product failure times very costly. To save time and money, the half-normal distribution is used in reliability analysis to study the probabilistic aspects of product failure times (Castro, et al., 2012).

Because the half-normal distribution has only one shape, various generalizations of it have been derived, including the generalized half-normal distribution (Cooray, et al., 2008), the beta-generalized half-normal distribution (Pescrim, et al., 2010) and the Kumaraswamy generalized half-normal distribution (Cordeiro, et al., 2012). Corresponding applications include the stress-rupture life of Kevlar 49/epoxy strands placed under sustained pressure (Cooray, et al., 2008) and failure times of mechanical components and flood data (Cordeiro, et al., 2012). In this article the gamma and half-normal distributions are combined to propose a new generalization of the half-normal distribution, namely, the gamma-half normal distribution.

Ayman Alzaatreh is an Assistant Professor in the Department of Mathematics & Statistics. Email him at: [email protected]. Kristen Knight is a student in the Department of Mathematics and Statistics. Email her at: [email protected].

Let F(x) be the cumulative distribution function (CDF) of any random variable X and r(t) be the probability density function (PDF) of a random variable T defined on [0, \infty). The CDF of the T-X family of distributions defined by Alzaatreh, et al. (2013b) is given by

G(x) = \int_0^{-\log(1-F(x))} r(t)\,dt = R\{-\log(1-F(x))\},   (1.1)

where R is the CDF of the random variable T. When X is a continuous random variable, the probability density function of the T-X family is

g(x) = \frac{f(x)}{1-F(x)}\, r\bigl(-\log(1-F(x))\bigr) = h(x)\, r(H(x)),   (1.2)

where h(x) and H(x) are the hazard and cumulative hazard functions of the random variable X associated with f(x).

If a random variable T follows the gamma distribution with parameters \alpha and \beta, so that r(t) = \bigl(\beta^{\alpha}\Gamma(\alpha)\bigr)^{-1} t^{\alpha-1} e^{-t/\beta}, t \ge 0, then the definition in (1.2) leads to the gamma-X family with the PDF

g(x) = \frac{1}{\Gamma(\alpha)\beta^{\alpha}}\, f(x)\, \bigl(-\log(1-F(x))\bigr)^{\alpha-1}\, \bigl(1-F(x)\bigr)^{1/\beta - 1}.   (1.3)

When \beta = 1, the gamma-X family in (1.3) reduces to the gamma-generated distribution introduced by Zografos and Balakrishnan (2009). When \alpha = 1 and 1/\beta = n \in \mathbb{N}, the gamma-X family reduces to the distribution of the first order statistic of the random variable X.

If X is the half-normal random variable with density function f(x) = \frac{\sqrt{2}}{\theta\sqrt{\pi}}\, e^{-x^{2}/(2\theta^{2})}, x > 0, then (1.3) gives

g(x) = \frac{\sqrt{2}}{\sqrt{\pi}\,\Gamma(\alpha)\,\beta^{\alpha}\,\theta}\, e^{-x^{2}/(2\theta^{2})}\, \bigl(-\log(2\Phi(-x/\theta))\bigr)^{\alpha-1}\, \bigl(2\Phi(-x/\theta)\bigr)^{1/\beta - 1}, \quad \alpha, \beta, \theta > 0;\ x > 0,   (1.4)

where \Phi is the CDF of the standard normal distribution. A random variable X with the PDF g(x) in (1.4) is said to follow the gamma-half normal distribution with parameters \alpha, \beta and \theta. From (1.1), the CDF of the gamma-half normal distribution is obtained as

G(x) = \gamma\bigl\{\alpha,\, -\beta^{-1}\log(2\Phi(-x/\theta))\bigr\} \,/\, \Gamma(\alpha),   (1.5)

where \gamma(\alpha, t) = \int_0^{t} u^{\alpha-1} e^{-u}\,du is the incomplete gamma function.
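The density (1.4) and CDF (1.5) are straightforward to evaluate numerically. The Python sketch below is illustrative only and is not part of the paper; the names ghn_pdf and ghn_cdf are ours, and it assumes SciPy's standard normal CDF and regularized incomplete gamma function.

```python
import numpy as np
from scipy.special import gamma as gamma_fn, gammainc  # gammainc(a, x) = gamma(a, x)/Gamma(a)
from scipy.stats import norm

def ghn_pdf(x, alpha, beta, theta):
    """Gamma-half normal density, a direct transcription of equation (1.4)."""
    x = np.asarray(x, dtype=float)
    s = 2.0 * norm.cdf(-x / theta)                  # 2*Phi(-x/theta), in (0, 1] for x >= 0
    u = -np.log(s)                                  # -log(2*Phi(-x/theta)) >= 0
    const = np.sqrt(2.0 / np.pi) / (gamma_fn(alpha) * beta**alpha * theta)
    return const * np.exp(-x**2 / (2.0 * theta**2)) * u**(alpha - 1.0) * s**(1.0 / beta - 1.0)

def ghn_cdf(x, alpha, beta, theta):
    """Gamma-half normal CDF, equation (1.5); gammainc already divides by Gamma(alpha)."""
    x = np.asarray(x, dtype=float)
    u = -np.log(2.0 * norm.cdf(-x / theta))
    return gammainc(alpha, u / beta)
```

A quick sanity check under this sketch is that ghn_cdf increases from 0 at x = 0 toward 1, and that a numerical derivative of ghn_cdf matches ghn_pdf.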
A series representation of G(x) in (1.5) can be obtained by using the series expansion of the incomplete gamma function from Nadarajah and Pal (2008),

\gamma(\alpha, x) = \sum_{k=0}^{\infty} \frac{(-1)^{k}\, x^{\alpha+k}}{k!\,(\alpha+k)}.   (1.6)

From (1.6), the CDF of the gamma-half normal distribution can be written as

G(x) = \frac{1}{\Gamma(\alpha)} \sum_{k=0}^{\infty} \frac{(-1)^{k}\bigl[-\log(2\Phi(-x/\theta))\bigr]^{\alpha+k}}{\beta^{\alpha+k}\, k!\,(\alpha+k)}.   (1.7)

The hazard function associated with the gamma-half normal distribution is

h(x) = \frac{g(x)}{1-G(x)} = \frac{\sqrt{2}\, e^{-x^{2}/(2\theta^{2})}\, \bigl(-\log(2\Phi(-x/\theta))\bigr)^{\alpha-1}\, \bigl(2\Phi(-x/\theta)\bigr)^{1/\beta-1}}{\sqrt{\pi}\,\theta\,\beta^{\alpha}\,\bigl[\Gamma(\alpha)-\gamma\bigl\{\alpha,\, -\beta^{-1}\log(2\Phi(-x/\theta))\bigr\}\bigr]}, \quad x > 0.   (1.8)

Some Properties of the Gamma-Half Normal Distribution

Lemma 1 gives the relation between the gamma-half normal distribution and the gamma distribution.

Lemma 1 (Transformation)
If a random variable Y follows the gamma distribution with parameters \alpha and \beta, then the random variable X = \theta\,\Phi^{-1}(1 - 0.5\, e^{-Y}) follows the gamma-half normal distribution with parameters \alpha, \beta and \theta.

Proof
The result follows by using the transformation technique.
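Lemma 1 also gives a convenient way to simulate from the gamma-half normal distribution: draw Y from a gamma(\alpha, \beta) distribution and set X = \theta\,\Phi^{-1}(1 - 0.5 e^{-Y}). The sketch below is an illustration under that reading; the helper name ghn_rvs, the seed handling, and the parameter values are ours, not the paper's.

```python
import numpy as np
from scipy.stats import norm

def ghn_rvs(alpha, beta, theta, size=1, seed=None):
    """Simulate gamma-half normal variates via the transformation in Lemma 1."""
    rng = np.random.default_rng(seed)
    y = rng.gamma(shape=alpha, scale=beta, size=size)   # Y ~ gamma(alpha, beta)
    # theta * Phi^{-1}(1 - 0.5*e^{-Y}); norm.isf(q) = Phi^{-1}(1 - q), numerically safer in the tail
    return theta * norm.isf(0.5 * np.exp(-y))

# Empirical check: the fraction of draws at or below a point should track G(x) in (1.5)
x = ghn_rvs(alpha=2.0, beta=1.5, theta=1.0, size=100_000, seed=0)
print(np.mean(x <= 1.0))   # compare with ghn_cdf(1.0, 2.0, 1.5, 1.0) from the earlier sketch
```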
The limiting behaviors of the gamma-half normal PDF and hazard function are given in Lemma 2.

Lemma 2
The limit of the gamma-half normal density function as x \to \infty is 0, and the limit of the gamma-half normal hazard function as x \to \infty is \infty. Also, the limits of the gamma-half normal density and hazard functions as x \to 0^{+} are given by

\lim_{x\to 0^{+}} g(x) = \lim_{x\to 0^{+}} h(x) = \begin{cases} 0, & \alpha > 1 \\ \dfrac{\sqrt{2}}{\sqrt{\pi}\,\theta\beta}, & \alpha = 1 \\ \infty, & \alpha < 1. \end{cases}   (2.1)

Proof
Since the random variable X is defined on (0, \infty), \lim_{x\to\infty} g(x) = 0. Using L'Hôpital's rule it can be shown that \lim_{x\to\infty} h(x) = \infty. Now, h(x)[1 - G(x)] = g(x) implies that \lim_{x\to 0^{+}} g(x) = \lim_{x\to 0^{+}} h(x), and the results in (2.1) follow immediately from definition (1.4).

The modes of the gamma-half normal distribution can be obtained by taking the derivative of g(x). The derivative with respect to x of (1.4) can be simplified to

g'(x) = \frac{\sqrt{2}}{\sqrt{\pi}\,\Gamma(\alpha)\,\beta^{\alpha}\,\theta^{2}}\, e^{-x^{2}/(2\theta^{2})}\, \bigl(-\log(2\Phi(-x/\theta))\bigr)^{\alpha-2}\, \bigl(2\Phi(-x/\theta)\bigr)^{1/\beta-1}\, k(x),   (2.2)

where

k(x) = x\log\bigl(2\Phi(-x/\theta)\bigr) + (\alpha-1)\,\theta\, h_{z}(x/\theta) + (\beta^{-1}-1)\,\theta\, h_{z}(x/\theta)\log\bigl(2\Phi(-x/\theta)\bigr).

Setting (2.2) to 0, the critical values of g(x) are x = 0 and the solutions of the equation k(x) = 0. The equation k(x) = 0 is equivalent to

x = \theta\, h_{z}(x/\theta)\left(\frac{\alpha-1}{-\log\bigl(2\Phi(-x/\theta)\bigr)} - \beta^{-1} + 1\right),   (2.3)

where h_{z}(x/\theta) = \phi(x/\theta)/\bigl(1-\Phi(x/\theta)\bigr) is the hazard function of the standard normal distribution.

Corollary 1
If \alpha \le 1 and \beta \le 1, the gamma-half normal distribution is unimodal and the mode is at x = 0.
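Equation (2.3) has no closed-form solution, so interior modes must be located numerically. As a rough sketch (not part of the paper), one can scan k(x) from (2.2) for sign changes and polish each bracket with a root finder; the function names, the grid-based bracketing, and the parameter values are our own choices.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def k_fn(x, alpha, beta, theta):
    """k(x) from (2.2); interior critical points of the density satisfy k(x) = 0."""
    s = 2.0 * norm.cdf(-x / theta)                              # 2*Phi(-x/theta)
    hz = norm.pdf(x / theta) / (1.0 - norm.cdf(x / theta))      # standard normal hazard h_z(x/theta)
    return x * np.log(s) + (alpha - 1.0) * theta * hz + (1.0 / beta - 1.0) * theta * hz * np.log(s)

def ghn_critical_points(alpha, beta, theta, x_max=None, n_grid=2000):
    """Bracket sign changes of k(x) on a grid and refine each root (heuristic, not exhaustive)."""
    x_max = 10.0 * theta if x_max is None else x_max
    grid = np.linspace(1e-8, x_max, n_grid)
    vals = np.array([k_fn(x, alpha, beta, theta) for x in grid])
    roots = []
    for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if np.sign(fa) != np.sign(fb):
            roots.append(brentq(k_fn, a, b, args=(alpha, beta, theta)))
    return roots  # x = 0 is always a critical value; these are the interior candidates

print(ghn_critical_points(alpha=2.0, beta=1.5, theta=1.0))  # parameter values are illustrative
```

Whether an interior root is a mode or an antimode can then be checked by comparing the density at neighboring points.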