Distribution on the Sphere


Multivariate Distributions: A brief overview
(Spherical/Elliptical Distributions, Distributions on the Simplex & Copulas)

A. Charpentier (Université de Rennes 1 & UQàM)
Université de Rennes 1 Workshop, November 2015.
http://freakonometrics.hypotheses.org — @freakonometrics

Geometry in Rᵈ and Statistics

The standard inner product is ⟨x, y⟩ = xᵀy = Σᵢ xᵢyᵢ. Hence, x ⊥ y if ⟨x, y⟩ = 0.
The Euclidean norm is ‖x‖ = ⟨x, x⟩^{1/2} = (Σᵢ₌₁ⁿ xᵢ²)^{1/2}.
The unit sphere of Rᵈ is S_d = {x ∈ Rᵈ : ‖x‖ = 1}.
If x = (x₁, ..., xₙ), note that the empirical covariance is Cov(x, y) = ⟨x − x̄, y − ȳ⟩ and Var(x) = ‖x − x̄‖².
For the (multivariate) linear model, yᵢ = β₀ + β₁ᵀxᵢ + εᵢ, or equivalently, yᵢ = β₀ + ⟨β₁, xᵢ⟩ + εᵢ.

The d-dimensional Gaussian Random Vector

If Z ∼ N(0, I), then X = AZ + µ ∼ N(µ, Σ) where Σ = AAᵀ.
Conversely (Cholesky decomposition), if X ∼ N(µ, Σ), then X = LZ + µ for some lower triangular matrix L satisfying Σ = LLᵀ. Denote L = Σ^{1/2}.
With the Cholesky decomposition, we have the particular case (with a Gaussian distribution) of Rosenblatt (1952)'s chain,

  f(x₁, x₂, ..., x_d) = f₁(x₁) · f_{2|1}(x₂ | x₁) · f_{3|2,1}(x₃ | x₂, x₁) ··· f_{d|d−1,...,1}(x_d | x_{d−1}, ..., x₁).

The density is

  f(x; µ, Σ) = (2π)^{−d/2} |Σ|^{−1/2} exp( −½ (x − µ)ᵀΣ⁻¹(x − µ) ),  for all x ∈ Rᵈ,

where the quadratic form ‖x‖²_{µ,Σ} = (x − µ)ᵀΣ⁻¹(x − µ) is the (squared) Mahalanobis distance.
Define the ellipsoid E_{µ,Σ} = {x ∈ Rᵈ : ‖x‖²_{µ,Σ} = 1}.

If X = (X₁, X₂) ∼ N((µ₁, µ₂), (Σ₁₁, Σ₁₂; Σ₂₁, Σ₂₂)), then

  X₁ | X₂ = x₂ ∼ N( µ₁ + Σ₁₂Σ₂₂⁻¹(x₂ − µ₂), Σ₁₁ − Σ₁₂Σ₂₂⁻¹Σ₂₁ ),

and X₁ ⊥⊥ X₂ if and only if Σ₁₂ = 0. Further, if X ∼ N(µ, Σ), then AX + b ∼ N(Aµ + b, AΣAᵀ).
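The Cholesky construction above translates directly into simulation code. A minimal Python/NumPy sketch (the numbers µ and Σ are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(42)

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

# Lower-triangular Cholesky factor L, with Sigma = L L^T
L = np.linalg.cholesky(Sigma)

# X = mu + L Z with Z ~ N(0, I) gives X ~ N(mu, Sigma)
Z = rng.standard_normal((100_000, 2))
X = mu + Z @ L.T

print(X.mean(axis=0))               # close to mu
print(np.cov(X, rowvar=False))      # close to Sigma
```

The same factor L also drives Rosenblatt's chain: simulating Z coordinate by coordinate and applying L builds X₁, then X₂ given X₁, and so on.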
The Gaussian Distribution, as a Spherical Distribution

If X ∼ N(0, I), then X = R · U, where

  R² = ‖X‖² ∼ χ²(d)  and  U = X/‖X‖ ∼ U(S_d),

with R ⊥⊥ U.

The Gaussian Distribution, as an Elliptical Distribution

If X ∼ N(µ, Σ), then X = µ + R · Σ^{1/2} · U, where

  R² = (X − µ)ᵀΣ⁻¹(X − µ) ∼ χ²(d)  and  U ∼ U(S_d),

with R ⊥⊥ U.

Spherical Distributions

Let M denote an orthogonal matrix, MᵀM = MMᵀ = I. X has a spherical distribution if X =ᴸ MX. E.g. in R², for any rotation angle θ,

  (X₁, X₂) =ᴸ ( cos(θ)X₁ − sin(θ)X₂, sin(θ)X₁ + cos(θ)X₂ ).

For every a ∈ Rᵈ, aᵀX =ᴸ ‖a‖ Xᵢ for any i ∈ {1, ..., d}. Further, the characteristic function of X can be written

  E[e^{i tᵀX}] = ϕ(tᵀt) = ϕ(‖t‖²),  ∀t ∈ Rᵈ,

for some ϕ : R₊ → R₊.

Uniform Distribution on the Sphere

Actually, more complex than it seems... In spherical coordinates,

  x₁ = ρ sin ϕ cos θ,  x₂ = ρ sin ϕ sin θ,  x₃ = ρ cos ϕ,

with ρ > 0, θ ∈ [0, 2π] and ϕ ∈ [0, π]. If Θ ∼ U([0, 2π]) and Φ ∼ U([0, π]), we do not have a uniform distribution on the sphere...
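The failure of uniform angles is easy to check numerically. A sketch comparing the naive angular sampler with the standard recipe (normalise a Gaussian vector); the diagnostic uses the fact that for a truly uniform point on S² the third coordinate is uniform on [−1, 1], so E|z| = 1/2, whereas uniform angles give E|z| = 2/π ≈ 0.64:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Naive (non-uniform) sampling: theta ~ U[0, 2*pi), phi ~ U[0, pi]
theta = rng.uniform(0.0, 2.0 * np.pi, n)
phi = rng.uniform(0.0, np.pi, n)
naive = np.column_stack([np.sin(phi) * np.cos(theta),
                         np.sin(phi) * np.sin(theta),
                         np.cos(phi)])

# Correct sampling: U = X / ||X|| with X ~ N(0, I_3)
X = rng.standard_normal((n, 3))
uniform = X / np.linalg.norm(X, axis=1, keepdims=True)

print(np.abs(uniform[:, 2]).mean())  # about 0.5: uniform on the sphere
print(np.abs(naive[:, 2]).mean())    # about 0.64: too much mass at the poles
```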
see https://en.wikibooks.org/wiki/Mathematica/Uniform_Spherical_Distribution,
http://freakonometrics.hypotheses.org/10355

Spherical Distributions

A random vector X has a spherical distribution if

  X = R · U,

where R is a positive random variable and U is uniformly distributed on the unit sphere S_d of Rᵈ, with R ⊥⊥ U. E.g. X ∼ N(0, I).
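The stochastic representation X = R · U can be sketched directly; taking R² ∼ χ²(d) recovers the standard Gaussian example, while any other positive radius would give a different spherical law:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200_000, 3

# U uniform on the unit sphere S_d, R an independent positive radius
G = rng.standard_normal((n, d))
U = G / np.linalg.norm(G, axis=1, keepdims=True)

# With R^2 ~ chi^2(d), X = R * U is N(0, I); replacing R by, e.g., an
# exponential variable yields a heavier- or lighter-tailed spherical law
R = np.sqrt(rng.chisquare(d, n))
X = R[:, None] * U

print(np.cov(X, rowvar=False))  # close to the identity matrix
```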
Elliptical Distributions

A random vector X has an elliptical distribution if

  X = µ + R · A · U,

where A satisfies AAᵀ = Σ, U ∼ U(S_d), and R ⊥⊥ U. Denote Σ^{1/2} = A. E.g. X ∼ N(µ, Σ).

Equivalently, X = µ + R Σ^{1/2} U, where R is a positive random variable, U ∼ U(S_d), with U ⊥⊥ R. If R ∼ F_R, then X ∼ E(µ, Σ, F_R).

Remark. Instead of F_R it is more common to use ϕ such that

  E[e^{i tᵀX}] = e^{i tᵀµ} ϕ(tᵀΣt),  t ∈ Rᵈ.

Then E[X] = µ and Var[X] = −2ϕ′(0)Σ. The density satisfies

  f(x) ∝ |Σ|^{−1/2} f( √((x − µ)ᵀΣ⁻¹(x − µ)) ),

where f : R₊ → R₊ is called the radial density. Note that dF_R(r) ∝ r^{d−1} f(r) 1(r > 0).
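The affine construction X = µ + R · A · U can be sketched as follows; the exponential radius is an arbitrary illustrative choice of F_R, and the point of the check is that, whatever F_R, the elliptical contours (and hence the correlation structure) are those of Σ:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200_000, 2

mu = np.array([1.0, 0.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
A = np.linalg.cholesky(Sigma)  # any A with A A^T = Sigma works

# U uniform on the sphere, R a positive radius independent of U
G = rng.standard_normal((n, d))
U = G / np.linalg.norm(G, axis=1, keepdims=True)
R = rng.exponential(1.0, n)

X = mu + (R[:, None] * U) @ A.T

# Var[X] is proportional to Sigma, so the empirical correlation of X
# matches the correlation implied by Sigma: 0.6 / sqrt(2)
corr = np.corrcoef(X, rowvar=False)
print(corr[0, 1])
```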
Elliptical Distributions (Properties)

If X ∼ E(µ, Σ, F_R), then

  AX + b ∼ E(Aµ + b, AΣAᵀ, F_R).

If X = (X₁, X₂) ∼ E((µ₁, µ₂), (Σ₁₁, Σ₁₂; Σ₂₁, Σ₂₂), F_R), then

  X₁ | X₂ = x₂ ∼ E( µ₁ + Σ₁₂Σ₂₂⁻¹(x₂ − µ₂), Σ₁₁ − Σ₁₂Σ₂₂⁻¹Σ₂₁, F_{1|2} ),

where F_{1|2} is the c.d.f. of (R² − ?)^{1/2} given X₂ = x₂.

Mixtures of Normal Distributions

Let Z ∼ N(0, I), and let W denote a positive random variable with Z ⊥⊥ W. Set

  X = µ + √W Σ^{1/2} Z,  so that  X | W = w ∼ N(µ, wΣ).

Then E[X] = µ, Var[X] = E[W]Σ, and

  E[e^{i tᵀX}] = E[ e^{i tᵀµ − ½ W tᵀΣt} ],  t ∈ Rᵈ,

i.e. X ∼ E(µ, Σ, ϕ) where ϕ is the Laplace transform of W, i.e. ϕ(t) = E[e^{−tW}]. If W has an inverse Gamma distribution, W ∼ IG(ν/2, ν/2), then X has a multivariate t distribution with ν degrees of freedom.

Multivariate Student t

X ∼ t(µ, Σ, ν) if

  X = µ + Σ^{1/2} Z / √(W/ν),

where Z ∼ N(0, I) and W ∼ χ²(ν), with Z ⊥⊥ W. Note that

  Var[X] = ν/(ν − 2) · Σ  if ν > 2.

[Figure: bivariate Student t densities for (r = 0.1, ν = 4), (r = 0.9, ν = 4), (r = 0.5, ν = 4) and (r = 0.5, ν = 10).]

On Conditional Independence, de Finetti & Hewitt

Instead of X =ᴸ MX for any orthogonal matrix M, consider the equality for any permutation matrix M, i.e.

  (X₁, ..., X_d) =ᴸ (X_{σ(1)}, ..., X_{σ(d)})  for any permutation σ of {1, ..., d}.

E.g. X ∼ N(0, Σ) with Σᵢᵢ = 1 and Σᵢⱼ = ρ when i ≠ j. Note that necessarily

  ρ = Corr(Xᵢ, Xⱼ) ≥ −1/(d − 1).

From de Finetti (1931), X₁, ..., X_d, ... are exchangeable {0, 1} variables if and only if there is a c.d.f. Π on [0, 1] such that

  P[X = x] = ∫₀¹ θ^{xᵀ1} (1 − θ)^{d − xᵀ1} dΠ(θ),

i.e. X₁, ..., X_d, ... are (conditionally) independent given Θ ∼ Π.
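The χ² representation of the multivariate t gives a direct sampler, and the variance inflation ν/(ν − 2) is easy to verify empirically (µ, Σ and ν below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, nu = 400_000, 2, 5

mu = np.zeros(d)
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
L = np.linalg.cholesky(Sigma)

# X = mu + Sigma^{1/2} Z / sqrt(W / nu), with Z ~ N(0, I) and W ~ chi^2(nu)
Z = rng.standard_normal((n, d))
W = rng.chisquare(nu, n)
X = mu + (Z @ L.T) / np.sqrt(W / nu)[:, None]

# Var[X] = nu / (nu - 2) * Sigma for nu > 2, here (5/3) * Sigma
print(np.cov(X, rowvar=False))
```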
On Conditional Independence, de Finetti & Hewitt-Savage

More generally, from Hewitt & Savage (1955), random variables X₁, ..., X_d, ... are exchangeable if and only if there is a σ-field F such that X₁, ..., X_d, ... are (conditionally) independent given F.

E.g. popular shared frailty models. Consider lifetimes T₁, ..., T_d, with Cox-type proportional hazard µᵢ(t) = Θ · µᵢ,₀(t), so that

  P[Tᵢ > t | Θ = θ] = F̄ᵢ,₀(t)^θ.

Assume that lifetimes are (conditionally) independent given Θ.

The Simplex S_d ⊂ Rᵈ

  S_d = { x = (x₁, x₂, ..., x_d) ∈ Rᵈ : xᵢ > 0, i = 1, 2, ..., d; Σᵢ₌₁ᵈ xᵢ = 1 }.

Hence, the simplex here is the set of d-dimensional probability vectors. Note that

  S_d = { x ∈ Rᵈ₊ : ‖x‖_{ℓ1} = 1 }.

Remark. Sometimes the simplex is instead taken as

  S̃_{d−1} = { x = (x₁, x₂, ..., x_{d−1}) ∈ R^{d−1} : xᵢ > 0, i = 1, ..., d − 1; Σᵢ₌₁^{d−1} xᵢ ≤ 1 }.

Note that if x̃ ∈ S̃_{d−1}, then (x̃, 1 − x̃ᵀ1) ∈ S_d.

If h : Rᵈ₊ → R₊ is homogeneous of order 1, i.e. h(λx) = λ · h(x) for all λ > 0, then

  h(x) = ‖x‖_{ℓ1} · h( x/‖x‖_{ℓ1} ),  where x/‖x‖_{ℓ1} ∈ S_d.

Compositional Data and Geometry