On some properties of elliptical distributions

Fredrik Armerin∗

Abstract We look at a characterization of elliptical distributions in the case when finiteness of moments of the random vector is not assumed. Some additional results regarding elliptical distributions are also presented.

Keywords: Elliptical distributions, multivariate distributions.

JEL Classification: C10.

∗CEFIN, KTH, Sweden. Email: [email protected]

1 Introduction

Let the $n$-dimensional random vector $X$ have finite second moments and the property that the distribution of every random variable of the form $h^T X + a$, where $h \in \mathbb{R}^n$ and $a \in \mathbb{R}$, is determined by its mean and variance. Chamberlain [2] showed that if the covariance matrix of $X$ is positive definite, then this property is equivalent to $X$ being elliptically distributed. There are, however, elliptical distributions that do not even have finite first moments. In this note we show that a random vector is elliptically distributed if and only if it fulfils a condition generalizing the one above, a condition that is well defined even if the random vector does not have finite moments of any order.

In portfolio analysis, if $r$ is an $n$-dimensional random vector of returns and there is a risk-free rate $r_f$, then the expected utility of holding the portfolio $(w, w_0) \in \mathbb{R}^n \times \mathbb{R}$ is given by

$$E\left[u\left(w^T r + w_0 r_f\right)\right],$$

where $u$ is a utility function (we assume that this expected value is well defined). See e.g. Back [1], Cochrane [3] or Munk [5] for the underlying theory. If the distribution of $w^T r + w_0 r_f$ only depends on its mean and variance, then

$$E\left[u\left(w^T r + w_0 r_f\right)\right] = U\left(w^T \mu + w_0 r_f,\; w^T \Sigma w\right) \qquad (1)$$

for some function $U$, where $\mu = E[r]$ and $\Sigma = \mathrm{Var}(r)$ (this is one of the applications considered in Chamberlain [2]). If we only consider bounded utility functions, then the expected value is well defined even if $r$ does not have any finite moments. Below we show that if Equation (1) holds for every bounded and measurable function $u$, then this is in fact a defining property of elliptical distributions, i.e. $r$ must be elliptically distributed if Equation (1) holds for every bounded and measurable $u$.

The results presented in this note are previously known, or are basic generalizations of known results.
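Equation (1) can be illustrated numerically. The following Python sketch is a minimal Monte Carlo check, assuming normally distributed returns; the parameter values and the bounded utility $u = \tanh$ are illustrative choices, not taken from the text. Two portfolios with the same mean and variance of terminal wealth should give (approximately) the same expected utility.

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([0.05, 0.05])          # mean returns (assumed values)
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.04]])     # covariance of returns (assumed values)
rf = 0.02                            # risk-free rate (assumed value)
u = np.tanh                          # a bounded, measurable utility function

r = rng.multivariate_normal(mu, Sigma, size=1_000_000)

# Two portfolios with the same w^T mu + w0*rf and the same w^T Sigma w.
w1, w2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
w0 = 0.5

for w in (w1, w2):
    wealth = r @ w + w0 * rf
    print(wealth.mean(), wealth.var(), u(wealth).mean())
# The two rows agree up to Monte Carlo error: the expected utility is a
# function U of the mean and the variance only, as in Equation (1).
```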

2 Basic definitions

The general reference for this section is McNeil et al. [4].

Definition 2.1 An $n$-dimensional random vector $X$ has a spherical distribution if $UX \stackrel{d}{=} X$ for every orthogonal $n \times n$ matrix $U$, i.e. for every $n \times n$ matrix $U$ such that

$$UU^T = U^T U = I.$$
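As a quick numerical sanity check of Definition 2.1 (a sketch assuming $X \sim N(0, I)$, which is spherical), one can compare a linear functional of $X$ with the same functional of $UX$ for a randomly drawn orthogonal matrix $U$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 3
X = rng.standard_normal((100_000, n))   # X ~ N(0, I_n), a spherical sample

# A random orthogonal matrix from the QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
assert np.allclose(Q @ Q.T, np.eye(n))

h = np.array([1.0, -2.0, 0.5])
# Compare h^T X with h^T (QX) using a two-sample Kolmogorov-Smirnov test;
# a large p-value is consistent with QX having the same distribution as X.
print(stats.ks_2samp(X @ h, (X @ Q.T) @ h))
```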

The following is Theorem 3.19 in McNeil et al. [4].

Theorem 2.2 Let $X$ be an $n$-dimensional random vector. The following are equivalent.

(i) The random vector $X$ has a spherical distribution.

(ii) There exists a function $\psi$ of a scalar variable such that

$$E\left[e^{ih^T X}\right] = \psi(h^T h)$$

for every $h \in \mathbb{R}^n$.

(iii) For every $h \in \mathbb{R}^n$,

$$h^T X \stackrel{d}{=} \|h\| X_1,$$

where $\|h\|^2 = h^T h$.

We call $\psi$ in the theorem above the characteristic generator of $X$, and write $X \sim S_n(\psi)$ if $X$ is $n$-dimensional and has a spherical distribution with characteristic generator $\psi$.

For a strictly positive integer $n$ we let $(e_k)$ for $k = 1, \dots, n$ denote the vectors of the standard basis in $\mathbb{R}^n$. If $X \sim S_n(\psi)$ for some characteristic generator $\psi$, then by choosing first $h = e_k$ and then $h = -e_k$ we get from property (iii) in Theorem 2.2 that

$$-X_k \stackrel{d}{=} X_k,$$

i.e. each component of a random vector which has a spherical distribution is a symmetric random variable. By choosing first $h = e_k$ and then $h = e_\ell$ and again using (iii) in Theorem 2.2, we get

$$X_k \stackrel{d}{=} X_\ell$$

for every $k, \ell = 1, \dots, n$, i.e. every component of a spherically distributed random vector has the same distribution.
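Property (iii) of Theorem 2.2 is also easy to check by simulation. The sketch below uses the standard (spherical) multivariate $t$-distribution, constructed as a Gaussian vector divided by an independent $\chi^2$-based scale; the dimension and the degrees of freedom are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
N, n, nu = 200_000, 3, 2.5

# A spherical sample: standard multivariate t, built as G / sqrt(S/nu)
# with G ~ N(0, I_n) and S ~ chi^2(nu) independent of G.
G = rng.standard_normal((N, n))
S = rng.chisquare(nu, size=N)
X = G / np.sqrt(S / nu)[:, None]

h = np.array([2.0, 1.0, -1.0])
# Property (iii): h^T X and ||h|| X_1 should have the same distribution.
print(stats.ks_2samp(X @ h, np.linalg.norm(h) * X[:, 0]))
```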

Definition 2.3 An $n$-dimensional random vector $X$ is said to be elliptically distributed if

$$X \stackrel{d}{=} \mu + AY,$$

where $\mu \in \mathbb{R}^n$, $A$ is an $n \times k$ matrix and $Y \sim S_k(\psi)$. With $\Sigma = AA^T$ we write $X \sim E_n(\mu, \Sigma, \psi)$ in this case.
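Definition 2.3 translates directly into a sampling recipe: draw $Y \sim S_k(\psi)$ and set $X = \mu + AY$. The following sketch is an illustrative implementation; the function names and the choice of spherical generator are our own assumptions, not taken from the note.

```python
import numpy as np

def sample_elliptical(mu, A, spherical_sampler, size, rng):
    """Draw X = mu + A Y with Y ~ S_k(psi), as in Definition 2.3."""
    k = A.shape[1]
    Y = spherical_sampler(size, k, rng)   # (size, k) array of spherical draws
    return mu + Y @ A.T                   # each row is one sample of X

def spherical_t(size, k, rng, nu=3.0):
    """Spherical generator: the standard k-dimensional t with nu d.o.f."""
    G = rng.standard_normal((size, k))
    S = rng.chisquare(nu, size=size)
    return G / np.sqrt(S / nu)[:, None]

rng = np.random.default_rng(3)
mu = np.array([1.0, -1.0])
A = np.array([[1.0, 0.0],
              [0.5, 0.5]])               # dispersion matrix Sigma = A A^T
print(sample_elliptical(mu, A, spherical_t, 5, rng))
```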

The characteristic function of $X \sim E_n(\mu, \Sigma, \psi)$ is given by

$$E\left[e^{ih^T X}\right] = e^{ih^T \mu}\psi(h^T \Sigma h).$$

If $X$ has finite mean, then $\mu = E[X]$, and if $X$ has finite variance, then we can choose

$$\Sigma = \mathrm{Var}(X).$$

If $X \sim E_n(\mu, \Sigma, \psi)$, $B$ is a $k \times n$ matrix and $b$ is a $k \times 1$ vector, then

$$BX + b \sim E_k(B\mu + b,\; B\Sigma B^T,\; \psi).$$

Alternatively, if $X \sim E_n(\mu, \Sigma, \psi)$, then

$$BX + b \stackrel{d}{=} B\mu + b + BAY,$$

where $Y \sim S_n(\psi)$ and $AA^T = \Sigma$. Finally, when $\Sigma$ is a positive definite matrix we have the equivalence

$$X \sim E_n(\mu, \Sigma, \psi) \;\Leftrightarrow\; \Sigma^{-1/2}(X - \mu) \sim S_n(\psi).$$
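The last equivalence suggests a simple numerical check (a sketch assuming a Gaussian generator and an arbitrary positive definite $\Sigma$): standardize an elliptical sample by $\Sigma^{-1/2}$ and test a spherical property of the result via Theorem 2.2 (iii).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
mu = np.array([1.0, -1.0])
A = np.array([[1.0, 0.0],
              [0.5, 0.5]])
Sigma = A @ A.T                  # positive definite since A is invertible

# An elliptical sample with a Gaussian generator: X = mu + A G, G ~ N(0, I).
X = mu + rng.standard_normal((200_000, 2)) @ A.T

# The symmetric inverse square root of Sigma from its eigendecomposition.
vals, vecs = np.linalg.eigh(Sigma)
Sigma_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T

Z = (X - mu) @ Sigma_inv_sqrt.T
# If Z is spherical, then h^T Z =_d ||h|| Z_1 by Theorem 2.2 (iii).
h = np.array([1.0, 2.0])
print(stats.ks_2samp(Z @ h, np.linalg.norm(h) * Z[:, 0]))
```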

3 Characterizing elliptical distributions

The following proposition shows the structure of any elliptically distributed random vector.

Proposition 3.1 Let $\mu \in \mathbb{R}^n$ and let $\Sigma$ be an $n \times n$ symmetric and positive semidefinite matrix. For an $n$-dimensional random vector $X$ the following are equivalent.

(i) $X \sim E_n(\mu, \Sigma, \psi)$.

(ii) We have

$$h^T X \stackrel{d}{=} h^T \mu + \sqrt{h^T \Sigma h}\, Z$$

for any $h \in \mathbb{R}^n$, where $Z$ is a symmetric random variable with $E\left[e^{itZ}\right] = \psi(t^2)$.

Proof. (i) ⇒ (ii): If $X \sim E_n(\mu, \Sigma, \psi)$, then for every $h \in \mathbb{R}^n$ and some matrix $A$ such that $AA^T = \Sigma$,

$$h^T X \stackrel{d}{=} h^T \mu + h^T A Y = h^T \mu + (A^T h)^T Y \stackrel{d}{=} h^T \mu + \|A^T h\| Y_1 = h^T \mu + \sqrt{h^T A A^T h}\, Y_1 = h^T \mu + \sqrt{h^T \Sigma h}\, Y_1.$$

Since $Y$ has a spherical distribution, $Y_1$ is a symmetric random variable with characteristic function $E\left[e^{itY_1}\right] = \psi(t^2)$.

(ii) ⇒ (i): If $X$ has the property that

$$h^T X \stackrel{d}{=} h^T \mu + \sqrt{h^T \Sigma h}\, Z$$

for every $h \in \mathbb{R}^n$, where $E\left[e^{itZ}\right] = \psi(t^2)$, then

$$E\left[e^{ih^T X}\right] = e^{ih^T \mu} E\left[e^{i\sqrt{h^T \Sigma h}\, Z}\right] = e^{ih^T \mu}\psi(h^T \Sigma h),$$

i.e. $X \sim E_n(\mu, \Sigma, \psi)$. □

Note that the previous proposition is true even if $\Sigma$ is only a positive semidefinite matrix.

Remark 3.2 With the same notation as in Proposition 3.1, if the random vector $X$ has the property that for every $h \in \mathbb{R}^n$

$$h^T X \stackrel{d}{=} h^T \mu + \sqrt{h^T \Sigma h}\, Z$$

holds and $\Sigma$ has at least one non-zero diagonal element (the only case when this does not hold is when $\Sigma = 0$), then $Z$ must be symmetric. To see this we assume, without loss of generality, that $\Sigma_{11} > 0$. Now first choose $h = e_1$, and then $h = -e_1$. We get

$$X_1 \stackrel{d}{=} \mu_1 + \sqrt{\Sigma_{11}}\, Z \quad \text{and} \quad -X_1 \stackrel{d}{=} -\mu_1 + \sqrt{\Sigma_{11}}\, Z,$$

respectively, or

$$\frac{X_1 - \mu_1}{\sqrt{\Sigma_{11}}} \stackrel{d}{=} Z \quad \text{and} \quad -\frac{X_1 - \mu_1}{\sqrt{\Sigma_{11}}} \stackrel{d}{=} Z,$$

respectively. It follows that $Z \stackrel{d}{=} -Z$.

Using the representation

$$h^T X \stackrel{d}{=} h^T \mu + \sqrt{h^T \Sigma h}\, Z,$$

we see that finiteness of moments of the vector $X$ is equivalent to finiteness of the moments of the random variable $Z$. This representation is also a practical way of both defining new and understanding well-known elliptical distributions. When $Z \sim N(0, 1)$ we get the multivariate normal distribution, and when $Z \sim t(\nu)$ we get the multivariate $t$-distribution with $\nu > 0$ degrees of freedom. The multivariate $t$-distribution with $\nu \in (0, 1]$ is an example of an elliptical distribution which does not have finite mean.
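The following sketch illustrates the representation for the multivariate $t$-distribution (all parameter values are arbitrary choices). Note that with $\nu = 0.8 \le 1$ the mean does not exist, yet the representation, being a statement about distributions only, still holds.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
N, nu = 300_000, 0.8                 # nu <= 1: no finite mean
mu = np.array([0.0, 1.0])
A = np.array([[2.0, 0.0],
              [1.0, 1.0]])
Sigma = A @ A.T

# Multivariate t via Definition 2.3: X = mu + A * (G / sqrt(S/nu)).
G = rng.standard_normal((N, 2))
S = rng.chisquare(nu, size=N)
X = mu + (G / np.sqrt(S / nu)[:, None]) @ A.T

# Right-hand side of the representation, with one-dimensional Z ~ t(nu).
h = np.array([1.0, -1.0])
Z = stats.t.rvs(nu, size=N, random_state=rng)
lhs = X @ h
rhs = h @ mu + np.sqrt(h @ Sigma @ h) * Z
print(stats.ks_2samp(lhs, rhs))  # the KS test should not reject equality in law
```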

Now assume that the random vector $X$ has the property that the distribution of $h^T X + a$ is determined by its mean and variance for every $h \in \mathbb{R}^n$ and $a \in \mathbb{R}$. If we let $\mu = E[X]$ and $\Sigma = \mathrm{Var}(X)$, which we assume is a positive definite matrix, then Chamberlain [2] showed that $X$ must be elliptically distributed. Hence in this case, if

$$E\left[h_1^T X + a_1\right] = E\left[h_2^T X + a_2\right] \quad \text{and} \quad \mathrm{Var}(h_1^T X + a_1) = \mathrm{Var}(h_2^T X + a_2),$$

then we must have

$$h_1^T X + a_1 \stackrel{d}{=} h_2^T X + a_2.$$

This property can, with notation as above, be rewritten as follows: If

$$h_1^T \mu + a_1 = h_2^T \mu + a_2 \quad \text{and} \quad h_1^T \Sigma h_1 = h_2^T \Sigma h_2,$$

then

$$h_1^T X + a_1 \stackrel{d}{=} h_2^T X + a_2.$$

It turns out that this condition, which is well defined for any $X \sim E_n(\mu, \Sigma, \psi)$ even if no moments exist, is a defining property of elliptical distributions if $\Sigma$ is a positive definite matrix.

Proposition 3.3 Let $\mu \in \mathbb{R}^n$ and let $\Sigma$ be an $n \times n$ symmetric and positive definite matrix. For an $n$-dimensional random vector $X$ the following are equivalent.

(i) $X \sim E_n(\mu, \Sigma, \psi)$ for some characteristic generator $\psi$.

(ii) For any measurable and bounded $f: \mathbb{R} \to \mathbb{R}$ and any $h \in \mathbb{R}^n$ and $a \in \mathbb{R}$,

$$E\left[f(h^T X + a)\right] = F(h^T \mu + a,\; h^T \Sigma h)$$

for some function $F: \mathbb{R} \times \mathbb{R}_+ \to \mathbb{R}$.

(iii) If

$$h_1^T \mu + a_1 = h_2^T \mu + a_2 \quad \text{and} \quad h_1^T \Sigma h_1 = h_2^T \Sigma h_2$$

for $h_1, h_2 \in \mathbb{R}^n$ and $a_1, a_2 \in \mathbb{R}$, then

$$h_1^T X + a_1 \stackrel{d}{=} h_2^T X + a_2.$$

For a proof of this, see Section A.1.

It is possible to reformulate this proposition without using the constants $a$, $a_1$ and $a_2$.

Proposition 3.4 Let $\mu \in \mathbb{R}^n$ and let $\Sigma$ be an $n \times n$ symmetric and positive definite matrix. For an $n$-dimensional random vector $X$ the following are equivalent.

(i) $X \sim E_n(\mu, \Sigma, \psi)$ for some characteristic generator $\psi$.

(ii) For any measurable and bounded $g: \mathbb{R} \to \mathbb{R}$ and any $h \in \mathbb{R}^n$,

$$E\left[g(h^T (X - \mu))\right] = G(h^T \Sigma h)$$

for some function $G: \mathbb{R}_+ \to \mathbb{R}$.

(iii) If

$$h_1^T \Sigma h_1 = h_2^T \Sigma h_2$$

for $h_1, h_2 \in \mathbb{R}^n$, then

$$h_1^T (X - \mu) \stackrel{d}{=} h_2^T (X - \mu).$$

For a proof, see Section A.2.

In Propositions 3.3 and 3.4 we assumed that the matrix $\Sigma$ was positive definite. The implications (i) ⇒ (ii) and (ii) ⇒ (iii) in these propositions are still valid when $\Sigma$ is only positive semidefinite (and the general characterization of elliptical distributions in Proposition 3.1 also holds in this case). The implications (iii) ⇒ (i) in the propositions above are not true in general, as is seen in the following example.

Example 3.5 Let

$$\Sigma = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad X = \begin{pmatrix} U \\ 0 \end{pmatrix},$$

where $U \sim N(0, 1)$. In this case we have

$$h_1^T \Sigma h_1 = h_2^T \Sigma h_2 \;\Rightarrow\; h_1^T X \stackrel{d}{=} h_2^T X,$$

so $X$ has property (iii) from Proposition 3.4. But $X$ is not spherically distributed. This follows from the fact that every component of a spherically distributed random vector must have the same distribution. By letting $\mu = [0\ 0]^T$ it is possible to also construct a counterexample to the implication (iii) ⇒ (i) in Proposition 3.3. □
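A short simulation sketch of the example (the vectors $h_1$ and $h_2$ are arbitrary, chosen so that $h_1^T \Sigma h_1 = h_2^T \Sigma h_2$):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
U = rng.standard_normal(200_000)
X = np.column_stack([U, np.zeros_like(U)])   # X = (U, 0)^T
Sigma = np.array([[1.0, 0.0],
                  [0.0, 0.0]])

h1, h2 = np.array([1.0, 3.0]), np.array([-1.0, 7.0])
assert h1 @ Sigma @ h1 == h2 @ Sigma @ h2    # both quadratic forms equal 1
print(stats.ks_2samp(X @ h1, X @ h2))        # h1^T X and h2^T X agree in law

# The two components of X have different distributions (N(0,1) and the
# constant 0), so X is not spherically distributed.
```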

A Proofs

A.1 Proof of Proposition 3.3

(i) ⇒ (ii): We know that there exists a symmetric random variable $Z$ such that

$$h^T X \stackrel{d}{=} h^T \mu + \sqrt{h^T \Sigma h}\, Z$$

for any $h \in \mathbb{R}^n$. Hence for any measurable and bounded $f$,

$$E\left[f(h^T X + a)\right] = E\left[f\left(h^T \mu + \sqrt{h^T \Sigma h}\, Z + a\right)\right] = F(h^T \mu + a,\; h^T \Sigma h),$$

where

$$F(x, y) = E\left[f(x + \sqrt{y}\, Z)\right].$$

(ii) ⇒ (iii): Fix $t \in \mathbb{R}$ and let $f_1(x) = \cos(tx)$ and $f_2(x) = \sin(tx)$ (two bounded and measurable functions). Define $F_i$, $i = 1, 2$, by

$$E\left[f_i(h^T X + a)\right] = F_i(h^T \mu + a,\; h^T \Sigma h).$$

Now take any $h_1, h_2 \in \mathbb{R}^n$ and $a_1, a_2 \in \mathbb{R}$ such that

$$h_1^T \mu + a_1 = h_2^T \mu + a_2 \quad \text{and} \quad h_1^T \Sigma h_1 = h_2^T \Sigma h_2.$$

Then

$$\begin{aligned} E\left[e^{it(h_1^T X + a_1)}\right] &= E\left[\cos(t(h_1^T X + a_1)) + i\sin(t(h_1^T X + a_1))\right] \\ &= F_1(h_1^T \mu + a_1,\; h_1^T \Sigma h_1) + iF_2(h_1^T \mu + a_1,\; h_1^T \Sigma h_1) \\ &= F_1(h_2^T \mu + a_2,\; h_2^T \Sigma h_2) + iF_2(h_2^T \mu + a_2,\; h_2^T \Sigma h_2) \\ &= E\left[\cos(t(h_2^T X + a_2)) + i\sin(t(h_2^T X + a_2))\right] \\ &= E\left[e^{it(h_2^T X + a_2)}\right]. \end{aligned}$$

Since this holds for any $t \in \mathbb{R}$ we have

$$h_1^T X + a_1 \stackrel{d}{=} h_2^T X + a_2.$$

(iii) ⇒ (i): Take $h \in \mathbb{R}^n$ and let

$$h_1 = \Sigma^{-1/2} h, \quad a_1 = -h^T \Sigma^{-1/2} \mu \quad \text{and} \quad h_2 = \|h\| \Sigma^{-1/2} e_1, \quad a_2 = -\|h\| e_1^T \Sigma^{-1/2} \mu.$$

Then

$$h_1^T \Sigma h_1 = \|h\|^2 \quad \text{and} \quad h_2^T \Sigma h_2 = \|h\|^2.$$

We also have

$$h_1^T \mu + a_1 = h^T \Sigma^{-1/2} \mu + (-h^T \Sigma^{-1/2} \mu) = 0$$

and

$$h_2^T \mu + a_2 = \|h\| e_1^T \Sigma^{-1/2} \mu + (-\|h\| e_1^T \Sigma^{-1/2} \mu) = 0.$$

It follows that

$$h_1^T X + a_1 \stackrel{d}{=} h_2^T X + a_2 \;\Leftrightarrow\; h^T \Sigma^{-1/2}(X - \mu) \stackrel{d}{=} \|h\| e_1^T \Sigma^{-1/2}(X - \mu).$$

Since this holds for every $h \in \mathbb{R}^n$, property (iii) in Theorem 2.2 shows that

$$\Sigma^{-1/2}(X - \mu) \sim S_n(\psi),$$

which, since $\Sigma$ is a positive definite matrix, is equivalent to

$$X \sim E_n(\mu, \Sigma, \psi). \qquad \square$$



A.2 Proof of Proposition 3.4

(i) ⇒ (ii): There exists a symmetric random variable $Z$ such that

$$h^T X \stackrel{d}{=} h^T \mu + \sqrt{h^T \Sigma h}\, Z$$

for any $h \in \mathbb{R}^n$. It follows that for any bounded and measurable $g$,

$$E\left[g(h^T (X - \mu))\right] = E\left[g\left(\sqrt{h^T \Sigma h}\, Z\right)\right] = G(h^T \Sigma h),$$

where

$$G(x) = E\left[g(\sqrt{x}\, Z)\right].$$

(ii) ⇒ (iii): Fix $t \in \mathbb{R}$ and let $g_1(x) = \cos(tx)$ and $g_2(x) = \sin(tx)$ (two bounded and measurable functions). Define $G_i$, $i = 1, 2$, by

$$E\left[g_i(h^T (X - \mu))\right] = G_i(h^T \Sigma h).$$

Now take any $h_1, h_2$ such that

$$h_1^T \Sigma h_1 = h_2^T \Sigma h_2.$$

Then

$$\begin{aligned} E\left[e^{ith_1^T (X - \mu)}\right] &= E\left[\cos(th_1^T (X - \mu)) + i\sin(th_1^T (X - \mu))\right] \\ &= G_1(h_1^T \Sigma h_1) + iG_2(h_1^T \Sigma h_1) \\ &= G_1(h_2^T \Sigma h_2) + iG_2(h_2^T \Sigma h_2) \\ &= E\left[\cos(th_2^T (X - \mu)) + i\sin(th_2^T (X - \mu))\right] \\ &= E\left[e^{ith_2^T (X - \mu)}\right]. \end{aligned}$$

Since this holds for any $t \in \mathbb{R}$ we have $h_1^T (X - \mu) \stackrel{d}{=} h_2^T (X - \mu)$.

(iii) ⇒ (i): Take $h \in \mathbb{R}^n$ and let

$$h_1 = \Sigma^{-1/2} h \quad \text{and} \quad h_2 = \|h\| \Sigma^{-1/2} e_1.$$

Then

$$h_1^T \Sigma h_1 = \|h\|^2 \quad \text{and} \quad h_2^T \Sigma h_2 = \|h\|^2.$$

Hence $h_1^T \Sigma h_1 = h_2^T \Sigma h_2$, and it follows that

$$h_1^T (X - \mu) \stackrel{d}{=} h_2^T (X - \mu) \;\Leftrightarrow\; h^T \Sigma^{-1/2}(X - \mu) \stackrel{d}{=} \|h\| e_1^T \Sigma^{-1/2}(X - \mu).$$

Since $\Sigma$ is a positive definite matrix, this shows, as in the proof of Proposition 3.3, that $X \sim E_n(\mu, \Sigma, \psi)$. □

References

[1] Back, K. (2010), "Asset Pricing and Portfolio Choice Theory", Oxford University Press.

[2] Chamberlain, G. (1983), "A Characterization of the Distributions That Imply Mean-Variance Utility Functions", Journal of Economic Theory 29, pp. 185-201.

[3] Cochrane, J. H. (2001), "Asset Pricing", Princeton University Press.

[4] McNeil, A. J., Frey, R. & Embrechts, P. (2005), "Quantitative Risk Management", Princeton University Press.

[5] Munk, C. (2013), "Financial Asset Pricing Theory", Oxford University Press.
