Multivariate Distributions


IEOR E4602: Quantitative Risk Management, Spring 2016
© 2016 by Martin Haugh

We will study multivariate distributions in these notes, focusing in particular on multivariate normal, normal-mixture, spherical and elliptical distributions. In addition to studying their properties, we will also discuss techniques for simulating and, very briefly, estimating these distributions. Familiarity with these important classes of multivariate distributions is important for many aspects of risk management. We will defer the study of copulas until later in the course.

1 Preliminary Definitions

Let X = (X_1, ..., X_n) be an n-dimensional vector of random variables. We have the following definitions and statements.

Definition 1 (Joint CDF) For all x = (x_1, ..., x_n)^T ∈ R^n, the joint cumulative distribution function (CDF) of X satisfies

    F_X(x) = F_X(x_1, ..., x_n) = P(X_1 ≤ x_1, ..., X_n ≤ x_n).

Definition 2 (Marginal CDF) For a fixed i, the marginal CDF of X_i satisfies

    F_{X_i}(x_i) = F_X(∞, ..., ∞, x_i, ∞, ..., ∞).

It is straightforward to generalize the previous definition to joint marginal distributions. For example, the joint marginal distribution of X_i and X_j satisfies F_{ij}(x_i, x_j) = F_X(∞, ..., ∞, x_i, ∞, ..., ∞, x_j, ∞, ..., ∞). If the joint CDF is absolutely continuous, then it has an associated probability density function (PDF) so that

    F_X(x_1, ..., x_n) = ∫_{-∞}^{x_1} ... ∫_{-∞}^{x_n} f(u_1, ..., u_n) du_1 ... du_n.

Similar statements also apply to the marginal CDFs. A collection of random variables is independent if the joint CDF (or PDF if it exists) can be factored into the product of the marginal CDFs (or PDFs).

If X_1 = (X_1, ..., X_k)^T and X_2 = (X_{k+1}, ..., X_n)^T is a partition of X then the conditional CDF satisfies

    F_{X_2|X_1}(x_2 | x_1) = P(X_2 ≤ x_2 | X_1 = x_1).

If X has a PDF, f(·), then it satisfies

    F_{X_2|X_1}(x_2 | x_1) = ∫_{-∞}^{x_{k+1}} ... ∫_{-∞}^{x_n} [ f(x_1, ..., x_k, u_{k+1}, ..., u_n) / f_{X_1}(x_1) ] du_{k+1} ... du_n

where f_{X_1}(·) is the joint marginal PDF of X_1.

Assuming it exists, the mean vector of X is given by

    E[X] := (E[X_1], ..., E[X_n])^T

whereas, again assuming it exists, the covariance matrix of X satisfies

    Cov(X) := Σ := E[ (X − E[X]) (X − E[X])^T ]

so that the (i, j)-th element of Σ is simply the covariance of X_i and X_j. Note that the covariance matrix is symmetric so that Σ^T = Σ, its diagonal elements satisfy Σ_{i,i} ≥ 0, and it is positive semi-definite so that x^T Σ x ≥ 0 for all x ∈ R^n. The correlation matrix, ρ(X), has as its (i, j)-th element ρ_{ij} := Corr(X_i, X_j). It is also symmetric, positive semi-definite and has 1's along the diagonal.

For any matrix A ∈ R^{k×n} and vector a ∈ R^k we have

    E[AX + a] = A E[X] + a                                        (1)
    Cov(AX + a) = A Cov(X) A^T.                                   (2)

Finally, the characteristic function of X is given by

    φ_X(s) := E[ e^{i s^T X} ]   for s ∈ R^n                      (3)

and, if it exists, the moment-generating function (MGF) is given by (3) with s replaced by −i s.

2 The Multivariate Normal Distribution

If the n-dimensional vector X is multivariate normal with mean vector µ and covariance matrix Σ then we write

    X ∼ MN_n(µ, Σ).

The standard multivariate normal has µ = 0 and Σ = I_n, the n × n identity matrix. The PDF of X is given by

    f(x) = 1 / ( (2π)^{n/2} |Σ|^{1/2} ) · exp( −(1/2) (x − µ)^T Σ^{−1} (x − µ) )        (4)

where | · | denotes the determinant, and its characteristic function satisfies

    φ_X(s) = E[ e^{i s^T X} ] = e^{ i s^T µ − (1/2) s^T Σ s }.                          (5)
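To make (4) concrete, the following minimal Matlab sketch evaluates the density directly at a single point and, assuming the Statistics Toolbox is available, cross-checks the value against the built-in mvnpdf function. The particular choices of µ, Σ and x below are arbitrary and purely for illustration.

    % Evaluate the multivariate normal density in (4) at a single point.
    % The numerical values are arbitrary and chosen only for illustration.
    mu    = [0; 1];
    Sigma = [2.0 0.5; 0.5 1.0];
    x     = [0.3; 0.7];

    n = length(mu);
    d = x - mu;
    f_direct = exp(-0.5 * d' * (Sigma \ d)) / ((2*pi)^(n/2) * sqrt(det(Sigma)));

    % Cross-check against the built-in density (requires the Statistics Toolbox).
    % mvnpdf expects the point and the mean as row vectors.
    f_builtin = mvnpdf(x', mu', Sigma);

Up to floating-point error the two values agree. Note that writing Sigma \ d rather than inv(Sigma) * d avoids forming the inverse explicitly.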
Recall again our partition of X into X_1 = (X_1, ..., X_k)^T and X_2 = (X_{k+1}, ..., X_n)^T. If we extend this notation naturally so that

    µ = ( µ_1 )   and   Σ = ( Σ_11  Σ_12 )
        ( µ_2 )             ( Σ_21  Σ_22 )

then we obtain the following results regarding the marginal and conditional distributions of X.

Marginal Distribution
The marginal distribution of a multivariate normal random vector is itself multivariate normal. In particular, X_i ∼ MN(µ_i, Σ_ii) for i = 1, 2.

Conditional Distribution
Assuming Σ is positive definite, the conditional distribution of a multivariate normal distribution is also a multivariate normal distribution. In particular,

    X_2 | X_1 = x_1 ∼ MN(µ_{2.1}, Σ_{2.1})

where µ_{2.1} = µ_2 + Σ_21 Σ_11^{−1} (x_1 − µ_1) and Σ_{2.1} = Σ_22 − Σ_21 Σ_11^{−1} Σ_12.

Linear Combinations
Linear combinations AX + a of multivariate normal random vectors remain normally distributed, with mean vector and covariance matrix given by (1) and (2), respectively.
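As a quick illustration of the conditional distribution formulas, here is a minimal Matlab sketch that computes µ_{2.1} and Σ_{2.1} for a hypothetical 3-dimensional example in which we condition on the first component (so k = 1). The values of µ, Σ, k and x1 are arbitrary choices for illustration.

    % Conditional distribution of X2 given X1 = x1 when X ~ MN(mu, Sigma).
    % X1 = first k components, X2 = remaining components. Values are illustrative only.
    mu    = [1; 0; 2];
    Sigma = [1.0 0.5 0.5; 0.5 2.0 0.3; 0.5 0.3 1.5];
    k     = 1;                      % condition on the first component
    x1    = 0.2;                    % observed value of X1

    mu1 = mu(1:k);              mu2 = mu(k+1:end);
    S11 = Sigma(1:k, 1:k);      S12 = Sigma(1:k, k+1:end);
    S21 = Sigma(k+1:end, 1:k);  S22 = Sigma(k+1:end, k+1:end);

    % mu_{2.1} = mu2 + S21 S11^{-1} (x1 - mu1),  Sigma_{2.1} = S22 - S21 S11^{-1} S12
    mu_cond    = mu2 + S21 * (S11 \ (x1 - mu1));
    Sigma_cond = S22 - S21 * (S11 \ S12);

The same few lines apply for any k once µ and Σ have been partitioned consistently; X_2 | X_1 = x_1 is then MN(mu_cond, Sigma_cond).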
Estimation of Multivariate Normal Distributions
The simplest and most common method of estimating a multivariate normal distribution is to take the sample mean vector and sample covariance matrix as our estimators of µ and Σ, respectively. It is easy to justify this choice since they are the maximum likelihood estimators. It is also common to take n/(n − 1) times the sample covariance matrix as an estimator of Σ (where n here denotes the number of observations), as this rescaled estimator is known to be unbiased.

Testing Normality and Multivariate Normality
There are many tests that can be employed for testing normality of random variables and vectors. These include standard univariate tests and tests based on QQ-plots, as well as omnibus moment tests based on whether the skewness and kurtosis of the data are consistent with a multivariate normal distribution. Section 3.1.4 of MFE should be consulted for details on these tests.

2.1 Generating Multivariate Normally Distributed Random Vectors

Suppose we wish to generate X = (X_1, ..., X_n) where X ∼ MN_n(0, Σ). Note that it is then easy to handle the case where E[X] ≠ 0. Let Z = (Z_1, ..., Z_n)^T where the Z_i's are IID N(0, 1) for i = 1, ..., n. If C is an (n × m) matrix then it follows that

    C^T Z ∼ MN(0, C^T C).

Our problem therefore reduces to finding C such that C^T C = Σ. We can use the Cholesky decomposition of Σ to find such a matrix, C.

The Cholesky Decomposition of a Symmetric Positive-Definite Matrix
A well-known fact from linear algebra is that any symmetric positive-definite matrix, M, may be written as

    M = U^T D U

where U is an upper triangular matrix and D is a diagonal matrix with positive diagonal elements. Since Σ is symmetric positive-definite, we can therefore write

    Σ = U^T D U = (U^T √D)(√D U) = (√D U)^T (√D U).

The matrix C = √D U therefore satisfies C^T C = Σ. It is called the Cholesky decomposition of Σ.

The Cholesky Decomposition in Matlab and R
It is easy to compute the Cholesky decomposition of a symmetric positive-definite matrix in Matlab and R using the chol command, and so it is also easy to simulate multivariate normal random vectors. As before, let Σ be an (n × n) variance-covariance matrix and let C be its Cholesky decomposition. If X ∼ MN(0, Σ) then we can generate random samples of X in Matlab as follows:

Sample Matlab Code

>> Sigma = [1.0 0.5 0.5; 0.5 2.0 0.3; 0.5 0.3 1.5];
>> C = chol(Sigma);
>> Z = randn(3,1000000);
>> X = C'*Z;
>> cov(X')

ans =

    0.9972    0.4969    0.4988
    0.4969    1.9999    0.2998
    0.4988    0.2998    1.4971

We must be very careful in Matlab and R to pre-multiply Z by C^T and not C. (We must also be careful that Σ is indeed a genuine variance-covariance matrix, and note that some languages take C^T rather than C to be the Cholesky decomposition, so you must always be aware of exactly what convention your programming language / package is using.) We have the following algorithm for generating multivariate random vectors, X.

Generating Correlated Normal Random Variables

    generate Z ∼ MN(0, I)
    /* now compute the Cholesky decomposition */
    compute C such that C^T C = Σ
    set X = C^T Z

3 Normal-Mixture Models

Normal-mixture models are a class of models generated by introducing randomness into the covariance matrix and / or the mean vector. Following the development of MFE we have the following definition of a normal variance mixture:

Definition 3 The random vector X has a normal variance mixture distribution if

    X ∼ µ + √W · A Z

where
(i) Z ∼ MN_k(0, I_k),
(ii) W ≥ 0 is a scalar random variable independent of Z, and
(iii) A ∈ R^{n×k} and µ ∈ R^n are a matrix and vector of constants, respectively.

Note that if we condition on W, then X is multivariate normally distributed. This observation also leads to an obvious simulation algorithm for generating samples of X: first simulate a value of W and then simulate X conditional on the generated value of W.

We are typically interested in the case where rank(A) = n ≤ k and Σ = A A^T is a full-rank positive definite matrix. In this case we obtain a non-singular normal variance mixture. Assuming W is integrable, i.e. E[W] < ∞, we immediately see that

    E[X] = µ   and   Cov(X) = E[W] Σ.

We refer to µ and Σ as the location vector and dispersion matrix of the distribution. It is also clear that the correlation matrices of X and AZ are the same as long as W is integrable. This means that if A = I_k then the components of X are uncorrelated, though they are not in general independent. The following result emphasizes this point.

Lemma 1 Let X = (X_1, X_2) have a normal mixture distribution with A = I_2, µ = 0 and E[W] < ∞ so that Cov(X_1, X_2) = 0.
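The two-step simulation idea mentioned above (first draw W, then draw X conditional on W) is easy to implement. In the following minimal Matlab sketch, W is taken to be exponentially distributed with mean 1 purely as an example; any non-negative W independent of Z could be substituted, and the choices of µ, A and the sample size are likewise arbitrary.

    % Simulate N samples of a normal variance mixture X = mu + sqrt(W) * A * Z,
    % drawing W independently of Z. Here W ~ Exp(1), purely for illustration.
    N     = 100000;
    mu    = [0; 0];
    A     = chol([1.0 0.4; 0.4 1.0])';    % any A with A * A' = Sigma will do
    n     = length(mu);

    W = -log(rand(1, N));                  % Exp(1) draws via the inverse transform
    Z = randn(n, N);                       % IID standard normal columns
    X = repmat(mu, 1, N) + (A * Z) .* repmat(sqrt(W), n, 1);   % scale column i by sqrt(W_i)

    % Sanity checks: the sample mean should be close to mu and the sample
    % covariance close to E[W] * Sigma, which equals Sigma here since E[W] = 1.
    mean(X, 2)
    cov(X')

Replacing the exponential draw with another non-negative distribution for W changes only one line and yields a different normal variance mixture.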