D. Normal Mixture Models and Elliptical Models

1. Normal Variance Mixtures
2. Normal Mean-Variance Mixtures
3. Spherical Distributions
4. Elliptical Distributions

QRM 2010 74

D1. Multivariate Normal Mixture Distributions

Pros of the multivariate normal distribution:

• inference is "well known" and estimation is "easy".
• the distribution is fully specified by $\mu$ and $\Sigma$.
• linear combinations are normal ($\Rightarrow$ VaR and ES calculations are easy).
• conditional distributions are normal.
• For $(X_1, X_2)^\top \sim N_2(\mu, \Sigma)$: $\rho(X_1, X_2) = 0 \iff X_1$ and $X_2$ are independent.

Multivariate Normal Variance Mixtures

Cons of the multivariate normal distribution:

• tails are thin, meaning that extreme values are too scarce in the normal model.
• joint extremes in the multivariate model are also too scarce.
• the distribution has a strong form of symmetry, called elliptical symmetry.

How can we repair the drawbacks of the multivariate normal model?

The random vector $X$ has a (multivariate) normal variance mixture distribution if
\[
X \stackrel{d}{=} \mu + \sqrt{W}\, A Z, \tag{1}
\]
where

• $Z \sim N_k(0, I_k)$;
• $W \ge 0$ is a scalar random variable which is independent of $Z$; and
• $A \in \mathbb{R}^{d \times k}$ and $\mu \in \mathbb{R}^d$ are a matrix and a vector of constants, respectively.

Set $\Sigma := AA^\top$. Observe: $X \mid W = w \sim N_d(\mu, w\Sigma)$.

Assumption: $\operatorname{rank}(A) = d \le k$, so $\Sigma$ is a positive definite matrix.

If $E(W) < \infty$ then easy calculations give
\[
E(X) = \mu \quad \text{and} \quad \operatorname{cov}(X) = E(W)\,\Sigma.
\]
We call $\mu$ the location vector (or mean vector) and $\Sigma$ the dispersion matrix. The correlation matrices of $X$ and $AZ$ are identical:
\[
\operatorname{corr}(X) = \operatorname{corr}(AZ).
\]
Multivariate normal variance mixtures provide the most useful examples of elliptical distributions.

Properties of Multivariate Normal Variance Mixtures

1. Characteristic function.
\begin{align*}
\phi_X(t) = E\exp\{it^\top X\} &= E\bigl(E\bigl(\exp\{it^\top X\} \mid W\bigr)\bigr) \\
&= E\exp\bigl\{it^\top \mu - \tfrac{1}{2} W\, t^\top \Sigma t\bigr\}.
\end{align*}
Denote by $H$ the d.f. of $W$.
Define the Laplace-Stieltjes transform of $H$:
\[
\hat{H}(\theta) := E(e^{-\theta W}) = \int_0^\infty e^{-\theta u}\, dH(u).
\]
Then
\[
\phi_X(t) = \exp\{it^\top \mu\}\, \hat{H}\bigl(\tfrac{1}{2} t^\top \Sigma t\bigr).
\]
Based on this, we use the notation $X \sim M_d(\mu, \Sigma, \hat{H})$.

2. Linear operations. For $X \sim M_d(\mu, \Sigma, \hat{H})$ and $Y = BX + b$, where $B \in \mathbb{R}^{k \times d}$ and $b \in \mathbb{R}^k$, we have
\[
Y \sim M_k(B\mu + b,\; B\Sigma B^\top,\; \hat{H}).
\]
As a special case, if $a \in \mathbb{R}^d$,
\[
a^\top X \sim M_1(a^\top \mu,\; a^\top \Sigma a,\; \hat{H}).
\]
Proof:
\begin{align*}
\phi_Y(t) = E\bigl(e^{it^\top (BX + b)}\bigr) &= e^{it^\top b}\, \phi_X(B^\top t) \\
&= e^{it^\top (b + B\mu)}\, \hat{H}\bigl(\tfrac{1}{2} t^\top B\Sigma B^\top t\bigr).
\end{align*}

3. Density. If $P(W = 0) = 0$ then, as $X \mid W = w \sim N_d(\mu, w\Sigma)$,
\begin{align*}
f_X(x) &= \int_0^\infty f_{X \mid W}(x \mid w)\, dH(w) \\
&= \int_0^\infty \frac{w^{-d/2}}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\left(-\frac{(x-\mu)^\top \Sigma^{-1} (x-\mu)}{2w}\right) dH(w).
\end{align*}
The density depends on $x$ only through $(x-\mu)^\top \Sigma^{-1} (x-\mu)$.

4. Independence. If $\Sigma$ is diagonal, then the components of $X$ are uncorrelated. But, in general, they are not independent; e.g. for $X \sim M_2(\mu, I_2, \hat{H})$,
\[
\rho(X_1, X_2) = 0 \;\not\Rightarrow\; X_1 \text{ and } X_2 \text{ are independent}.
\]
Indeed, $X_1$ and $X_2$ are independent iff $W$ is a.s. constant, i.e. when $X = (X_1, X_2)^\top$ is multivariate normally distributed.

Examples of Multivariate Normal Variance Mixtures

Two-point mixture:
\[
W = \begin{cases} k_1 & \text{with probability } p, \\ k_2 & \text{with probability } 1 - p, \end{cases} \qquad k_1, k_2 > 0,\; k_1 \ne k_2.
\]
Could be used to model two regimes: ordinary and stress.

Multivariate t: $W$ has an inverse gamma distribution, $W \sim \operatorname{Ig}(\nu/2, \nu/2)$. Equivalently, $\nu/W \sim \chi^2_\nu$. This gives the multivariate t with $\nu$ degrees of freedom.

Symmetric generalised hyperbolic: $W$ has a GIG (generalised inverse Gaussian) distribution.

The Multivariate t Distribution

Density of the multivariate t:
\[
f(x) = k_{\Sigma,\nu,d} \left(1 + \frac{(x-\mu)^\top \Sigma^{-1} (x-\mu)}{\nu}\right)^{-\frac{\nu+d}{2}},
\]
where $\mu \in \mathbb{R}^d$, $\Sigma \in \mathbb{R}^{d \times d}$ is a positive definite matrix, $\nu$ is the degrees of freedom and $k_{\Sigma,\nu,d}$ is a normalizing constant.

• $E(X) = \mu$.
• As $E(W) = \frac{\nu}{\nu-2}$, we get $\operatorname{cov}(X) = \frac{\nu}{\nu-2}\Sigma$. For finite variances/correlations we need $\nu > 2$.

Notation: $X \sim t_d(\nu, \mu, \Sigma)$.
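The mixture construction can be checked numerically: the closed-form $t_d$ density above should equal the integral of the conditional normal density $N_d(\mu, w\Sigma)$ against the inverse-gamma mixing law from property 3. A minimal Python sketch, assuming NumPy/SciPy (the helper names `t_density` and `mixture_density` are ours, and $k_{\Sigma,\nu,d} = \Gamma(\frac{\nu+d}{2}) / (\Gamma(\frac{\nu}{2}) (\nu\pi)^{d/2} |\Sigma|^{1/2})$ is the standard normalizing constant):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad
from scipy.special import gammaln

def t_density(x, mu, Sigma, nu):
    """Closed-form t_d(nu, mu, Sigma) density, with the normalizing
    constant k_{Sigma,nu,d} written out explicitly in log form."""
    d = len(mu)
    q = (x - mu) @ np.linalg.solve(Sigma, x - mu)   # (x-mu)^T Sigma^{-1} (x-mu)
    log_k = (gammaln((nu + d) / 2.0) - gammaln(nu / 2.0)
             - (d / 2.0) * np.log(nu * np.pi)
             - 0.5 * np.linalg.slogdet(Sigma)[1])
    return np.exp(log_k - ((nu + d) / 2.0) * np.log1p(q / nu))

def mixture_density(x, mu, Sigma, nu):
    """Property 3: integrate the conditional N_d(mu, w*Sigma) density
    against the inverse-gamma mixing law W ~ Ig(nu/2, nu/2)."""
    H = stats.invgamma(a=nu / 2.0, scale=nu / 2.0)
    f = lambda w: stats.multivariate_normal(mu, w * Sigma).pdf(x) * H.pdf(w)
    val, _ = quad(f, 0.0, np.inf)
    return val

mu = np.zeros(2)
Sigma = np.array([[1.0, -0.7], [-0.7, 1.0]])
x = np.array([0.5, -1.0])
print(t_density(x, mu, Sigma, nu=3), mixture_density(x, mu, Sigma, nu=3))
```

The two numbers should agree to quadrature accuracy, illustrating that the $t$ density really is a normal variance mixture with $W \sim \operatorname{Ig}(\nu/2, \nu/2)$.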
Bivariate Normal and t

[Figure: scatter plots and density surfaces of two bivariate distributions.] Left plot is bivariate normal, right plot is bivariate t with $\nu = 3$. Mean is zero, all variances equal 1 and $\rho = -0.7$.

Fitted Normal and t3 Distributions

[Figure: scatter plots of simulated BMW vs. SIEMENS log-returns.] Simulated data (2000 points) from models fitted by maximum likelihood to BMW-Siemens data. Left plot is the fitted normal, right plot is the fitted $t_3$.

Simulating Normal Variance Mixture Distributions

To simulate $X \sim M_d(\mu, \Sigma, \hat{H})$:

1. Generate $Z \sim N_d(0, \Sigma)$, e.g. as $Z = A\tilde{Z}$ with $\tilde{Z} \sim N_d(0, I_d)$ and $\Sigma = AA^\top$.
2. Generate $W$ with d.f. $H$ (with Laplace-Stieltjes transform $\hat{H}$), independent of $Z$.
3. Set $X = \mu + \sqrt{W} Z$.

Example: t distribution. To simulate a vector $X \sim t_d(\nu, \mu, \Sigma)$:

1. Generate $Z \sim N_d(0, \Sigma)$ as above.
2. Generate $V \sim \chi^2_\nu$ and set $W = \nu/V$.
3. Set $X = \mu + \sqrt{W} Z$.

Symmetry in Normal Variance Mixture Distributions

Elliptical symmetry means the 1-dimensional margins are symmetric.

Observation for stock returns: negative returns (losses) have heavier tails than positive returns (gains).

We introduce asymmetry by mixing normal distributions with different means as well as different variances. This gives the class of multivariate normal mean-variance mixtures.

D2. Multivariate Normal Mean-Variance Mixtures

The random vector $X$ has a (multivariate) normal mean-variance mixture distribution if
\[
X \stackrel{d}{=} m(W) + \sqrt{W}\, A Z, \tag{2}
\]
where

• $Z \sim N_k(0, I_k)$;
• $W \ge 0$ is a scalar random variable which is independent of $Z$;
• $A \in \mathbb{R}^{d \times k}$ is a matrix of constants; and
• $m : [0, \infty) \to \mathbb{R}^d$ is a measurable function.

Normal mean-variance mixture distributions add asymmetry. In general, they are no longer elliptical and $\operatorname{corr}(X) \ne \operatorname{corr}(AZ)$.

Set $\Sigma := AA^\top$. Observe: $X \mid W = w \sim N_d(m(w), w\Sigma)$.

A concrete specification of $m(W)$ is $m(W) = \mu + W\gamma$ with $\mu, \gamma \in \mathbb{R}^d$.

Example: let $W$ have a generalised inverse Gaussian distribution to obtain a generalised hyperbolic $X$. Taking $\gamma = 0$ places us back in the (elliptical) normal variance mixture family.

D3. Spherical Distributions

Recall that a map $U \in \mathbb{R}^{d \times d}$ is orthogonal if $UU^\top = U^\top U = I_d$.
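The two simulation recipes above (general mixture and t) can be sketched in Python. This is a minimal sketch assuming NumPy; the function names `rnorm_variance_mixture` and `rmvt` are ours, and the Cholesky factor is just one valid choice of $A$ with $AA^\top = \Sigma$. Simulated $t_2(\nu, 0, \Sigma)$ data should show sample covariance close to $\frac{\nu}{\nu-2}\Sigma$:

```python
import numpy as np

rng = np.random.default_rng(2010)

def rnorm_variance_mixture(n, mu, Sigma, rW):
    """Steps 1-3: X = mu + sqrt(W) * Z with Z ~ N_d(0, Sigma)."""
    d = len(mu)
    A = np.linalg.cholesky(Sigma)             # one choice of A with A A^T = Sigma
    Z = rng.standard_normal((n, d)) @ A.T     # Z ~ N_d(0, Sigma)
    W = rW(n)                                 # mixing variable, independent of Z
    return mu + np.sqrt(W)[:, None] * Z

def rmvt(n, nu, mu, Sigma):
    """t recipe: V ~ chi^2_nu, W = nu / V."""
    return rnorm_variance_mixture(n, mu, Sigma,
                                  lambda m: nu / rng.chisquare(nu, m))

mu = np.zeros(2)
Sigma = np.array([[1.0, -0.7], [-0.7, 1.0]])
X = rmvt(200_000, nu=8, mu=mu, Sigma=Sigma)
print(np.cov(X, rowvar=False))   # close to (nu/(nu-2)) * Sigma = (4/3) * Sigma
```

Passing a different sampler as `rW` (e.g. a two-point mixture) gives the other examples in this section with no further changes.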
A random vector $Y = (Y_1, \dots, Y_d)^\top$ has a spherical distribution if for every orthogonal map $U \in \mathbb{R}^{d \times d}$,
\[
Y \stackrel{d}{=} UY.
\]
Use $\|\cdot\|$ to denote the Euclidean norm, i.e. for $t \in \mathbb{R}^d$, $\|t\| = (t_1^2 + \cdots + t_d^2)^{1/2}$.

THEOREM. The following are equivalent.

1. $Y$ is spherical.
2. There exists a function $\psi$ of a scalar variable such that
\[
\phi_Y(t) = E(e^{it^\top Y}) = \psi(\|t\|^2), \quad \forall t \in \mathbb{R}^d.
\]
3. For every $a \in \mathbb{R}^d$,
\[
a^\top Y \stackrel{d}{=} \|a\|\, Y_1.
\]

We call $\psi$ the characteristic generator of the spherical distribution. Notation: $Y \sim S_d(\psi)$.

Examples of Spherical Distributions

• $X \sim N_d(0, I_d)$ is spherical. The characteristic function is
\[
\phi_X(t) = E(e^{it^\top X}) = \exp\bigl(-\tfrac{1}{2} t^\top t\bigr).
\]
Then $X \sim S_d(\psi)$ with $\psi(t) = \exp(-t/2)$.

• $X \sim M_d(0, I_d, \hat{H})$ is spherical, i.e. $X \stackrel{d}{=} \sqrt{W} Z$. The characteristic function is
\[
\phi_X(t) = \hat{H}\bigl(\tfrac{1}{2} t^\top t\bigr).
\]
Then $X \sim S_d(\psi)$ with $\psi(t) = \hat{H}(t/2)$.

D4. Elliptical Distributions

A random vector $X = (X_1, \dots, X_d)^\top$ is called elliptical if it is an affine transform of a spherical random vector $Y = (Y_1, \dots, Y_k)^\top$, i.e.
\[
X \stackrel{d}{=} \mu + AY,
\]
where $Y \sim S_k(\psi)$ and $A \in \mathbb{R}^{d \times k}$, $\mu \in \mathbb{R}^d$ are a matrix and a vector of constants, respectively. Set $\Sigma = AA^\top$.

Example: multivariate normal variance mixture distributions, $X \stackrel{d}{=} \mu + \sqrt{W}\, AZ$.

Properties of Elliptical Distributions

1. Characteristic function. The characteristic function is
\[
\phi_X(t) = E(e^{it^\top X}) = E(e^{it^\top (\mu + AY)}) = e^{it^\top \mu}\, \psi(t^\top \Sigma t).
\]
Notation: $X \sim E_d(\mu, \Sigma, \psi)$. We call $\mu$ the location vector, $\Sigma$ the dispersion matrix and $\psi$ the characteristic generator.

Remark: $\mu$ is unique, but $\Sigma$ and $\psi$ are only unique up to a positive constant, since for any $c > 0$,
\[
X \sim E_d(\mu, \Sigma, \psi) \sim E_d\bigl(\mu,\; c\Sigma,\; \psi(\cdot/c)\bigr).
\]

2. Linear operations.
For $X \sim E_d(\mu, \Sigma, \psi)$ and $Y = BX + b$, where $B \in \mathbb{R}^{k \times d}$ and $b \in \mathbb{R}^k$, we have
\[
Y \sim E_k(B\mu + b,\; B\Sigma B^\top,\; \psi).
\]
As a special case, if $a \in \mathbb{R}^d$,
\[
a^\top X \sim E_1(a^\top \mu,\; a^\top \Sigma a,\; \psi).
\]
Proof:
\begin{align*}
\phi_Y(t) = E\bigl(e^{it^\top (BX + b)}\bigr) &= e^{it^\top b}\, \phi_X(B^\top t) \\
&= e^{it^\top (b + B\mu)}\, \psi\bigl(t^\top B\Sigma B^\top t\bigr).
\end{align*}

3.