
RS – 4 – Multivariate Distributions

Chapter 4 Multivariate Distributions (k ≥ 2)

Multivariate Distributions

All the results derived for the bivariate case can be generalized to k RVs.

The joint distribution of X1, X2, …, Xk will be described by:

p(x1, x2, …, xk), a joint probability function, when the RVs are discrete;

f(x1, x2, …, xk), a joint density function, when the RVs are continuous.


Joint Probability Function

Definition: Joint Probability Function

Let X1, X2, …, Xk denote k discrete random variables, then

p(x1, x2, …, xk )

is the joint probability function of X1, X2, …, Xk if

1. $0 \le p(x_1, \ldots, x_k) \le 1$

2. $\sum_{x_1} \cdots \sum_{x_k} p(x_1, \ldots, x_k) = 1$

3. $P\left[ (X_1, \ldots, X_k) \in A \right] = \sum_{(x_1, \ldots, x_k) \in A} p(x_1, \ldots, x_k)$

Joint Density Function

Definition: Joint density function

Let X1, X2, …, Xk denote k continuous random variables. Then

$f(x_1, x_2, \ldots, x_k) = \frac{\partial^k}{\partial x_1 \, \partial x_2 \cdots \partial x_k} F(x_1, x_2, \ldots, x_k)$

is the joint density function of X1, X2, …, Xk if

1. fx1 , , xn  0  2. fx ,, xdx ,, dx 1 11nn   A 3. P XXA ,, fxxdxdx ,, ,, 111nnn 


Example: The Multinomial distribution

Suppose that we observe an experiment that has k possible outcomes

{O1, O2, …, Ok } independently n times.

Let p1, p2, …, pk denote the probabilities of O1, O2, …, Ok, respectively.

Let Xi denote the number of times that Oi occurs in the n repetitions of the experiment.

Then the joint probability function of the random variables X1, X2, …, Xk is

$p(x_1, x_2, \ldots, x_k) = \frac{n!}{x_1! \, x_2! \cdots x_k!} \, p_1^{x_1} p_2^{x_2} \cdots p_k^{x_k}$

Example: The Multinomial distribution

Note: $p_1^{x_1} p_2^{x_2} \cdots p_k^{x_k}$ is the probability of a particular sequence of length n containing

x1 outcomes O1

x2 outcomes O2 …

xk outcomes Ok, and

$\frac{n!}{x_1! \, x_2! \cdots x_k!} = \binom{n}{x_1 \, x_2 \, \cdots \, x_k}$

is the number of ways of choosing the positions for the x1 outcomes O1, x2 outcomes O2, …, xk outcomes Ok.


Example: The Multinomial distribution

nnx 1 nx12 x xk   xx12 x3 xk n! nx!! nx x  112  xnx112123123!!! x nx x !! x nx x x ! n!  x12!!xx k !

n! xx12 xk px112,, xnk pp p xx12!! xk ! n xx12 xk  pp12 pk xx12 xk This is called the Multinomial distribution

Example: The Multinomial distribution

Suppose that an earnings announcement has three possible outcomes:

O1 – Positive stock price reaction (30% chance)
O2 – No stock price reaction (50% chance)
O3 – Negative stock price reaction (20% chance)

Hence p1 = 0.30, p2 = 0.50, p3 = 0.20. Suppose today 4 firms released earnings announcements (n = 4). Let X = the number that result in a positive stock price reaction, Y = the number that result in no reaction, and Z = the number that result in a negative reaction.

Find the distribution of X, Y and Z. Compute P[X + Y ≥ Z].

$p(x, y, z) = \frac{4!}{x! \, y! \, z!} \, (0.30)^x (0.50)^y (0.20)^z \quad \text{for } x + y + z = 4$


Table: p(x, y, z). Rows index (x, y), columns index z; entries are zero unless x + y + z = 4, and all-zero rows (those with x + y > 4) are omitted.

 x  y |  z=0     z=1     z=2     z=3     z=4
 0  0 |  0       0       0       0       0.0016
 0  1 |  0       0       0       0.0160  0
 0  2 |  0       0       0.0600  0       0
 0  3 |  0       0.1000  0       0       0
 0  4 |  0.0625  0       0       0       0
 1  0 |  0       0       0       0.0096  0
 1  1 |  0       0       0.0720  0       0
 1  2 |  0       0.1800  0       0       0
 1  3 |  0.1500  0       0       0       0
 2  0 |  0       0       0.0216  0       0
 2  1 |  0       0.1080  0       0       0
 2  2 |  0.1350  0       0       0       0
 3  0 |  0       0.0216  0       0       0
 3  1 |  0.0540  0       0       0       0
 4  0 |  0.0081  0       0       0       0

The event X + Y ≥ Z excludes only the outcomes with z > x + y, namely (0,0,4), (0,1,3) and (1,0,3). Summing the remaining entries of the table:

P[X + Y ≥ Z] = 1 − p(0,0,4) − p(0,1,3) − p(1,0,3) = 1 − 0.0016 − 0.0160 − 0.0096 = 0.9728
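For readers who want to reproduce these numbers, here is a minimal Python sketch (assuming scipy is installed; the loop structure and variable names are just illustrative):

```python
from itertools import product
from scipy.stats import multinomial

# n = 4 announcements, outcome probabilities p = (0.30, 0.50, 0.20)
n, p = 4, [0.30, 0.50, 0.20]

prob = 0.0
for x, y, z in product(range(n + 1), repeat=3):
    if x + y + z != n:
        continue                              # p(x, y, z) = 0 off the simplex
    pxyz = multinomial.pmf([x, y, z], n=n, p=p)
    if x + y >= z:
        prob += pxyz

print(round(prob, 4))                         # should print 0.9728
```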


Example: The Multivariate Normal distribution

Recall the normal distribution:

$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \, e^{-\frac{1}{2}\left( \frac{x - \mu}{\sigma} \right)^2}$

and the bivariate normal distribution:

$f(x, y) = \frac{1}{2\pi \sigma_x \sigma_y \sqrt{1 - \rho^2}} \exp\left\{ -\frac{1}{2(1 - \rho^2)} \left[ \left( \frac{x - \mu_x}{\sigma_x} \right)^2 - 2\rho \left( \frac{x - \mu_x}{\sigma_x} \right)\left( \frac{y - \mu_y}{\sigma_y} \right) + \left( \frac{y - \mu_y}{\sigma_y} \right)^2 \right] \right\}$

Example: The Multivariate Normal distribution

The k-variate Normal distribution is given by:

$f(x_1, \ldots, x_k) = \frac{1}{(2\pi)^{k/2} \, |\Sigma|^{1/2}} \exp\left\{ -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})' \, \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right\}$

where

$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}, \qquad \boldsymbol{\mu} = \begin{pmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_k \end{pmatrix}, \qquad \Sigma = \begin{pmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1k} \\ \sigma_{12} & \sigma_{22} & \cdots & \sigma_{2k} \\ \vdots & \vdots & & \vdots \\ \sigma_{1k} & \sigma_{2k} & \cdots & \sigma_{kk} \end{pmatrix}$


Marginal joint probability function

Definition: Marginal joint probability function

Let X1, X2, …, Xq, Xq+1 …, Xk denote k discrete random variables with joint probability function

p(x1, x2, …, xq, xq+1 …, xk )

then the marginal joint probability function of X1, X2, …, Xq is

$p_{12 \cdots q}(x_1, \ldots, x_q) = \sum_{x_{q+1}} \cdots \sum_{x_k} p(x_1, \ldots, x_k)$

When X1, X2, …, Xq, Xq+1, …, Xk are continuous, the marginal joint density function of X1, X2, …, Xq is

$f_{12 \cdots q}(x_1, \ldots, x_q) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(x_1, \ldots, x_k) \, dx_{q+1} \cdots dx_k$

Conditional joint probability function

Definition: Conditional joint probability function

Let X1, X2, …, Xq, Xq+1 …, Xk denote k discrete random variables with joint probability function

p(x1, x2, …, xq, xq+1 …, xk )

then the conditional joint probability function of X1, X2, …, Xq given Xq+1 = xq+1 , …, Xk = xk is

px 1 ,, xk  pxxxx11qq k 11,,qq ,, k  p xx,, qkq11  k

For the continuous case, we have:

$f_{1 \cdots q \mid q+1 \cdots k}(x_1, \ldots, x_q \mid x_{q+1}, \ldots, x_k) = \frac{f(x_1, \ldots, x_k)}{f_{q+1 \cdots k}(x_{q+1}, \ldots, x_k)}$


Independence

Definition: Independence of sets of vectors

Let X1, X2, …, Xq, Xq+1 …, Xk denote k continuous random variables with joint probability density function

f(x1, x2, …, xq, xq+1 …, xk )

then the variables X1, X2, …, Xq are independent of Xq+1, …, Xk if

$f(x_1, \ldots, x_k) = f_{1 \cdots q}(x_1, \ldots, x_q) \, f_{q+1 \cdots k}(x_{q+1}, \ldots, x_k)$

A similar definition for discrete random variables.

Mutual Independence

Definition: Mutual Independence

Let X1, X2, …, Xk denote k continuous random variables with joint probability density function

f(x1, x2, …, xk )

then the variables X1, X2, …, Xk are called mutually independent if

f xxfxfxfx11122,, kkk       A similar definition for discrete random variables.


Multivariate marginal pdfs - Example

Let X, Y, Z denote 3 jointly distributed random variables with joint density function

$f(x, y, z) = \begin{cases} K (x^2 + yz) & 0 \le x \le 1, \; 0 \le y \le 1, \; 0 \le z \le 1 \\ 0 & \text{otherwise} \end{cases}$

Find the value of K. Determine the marginal distributions of X, Y and Z. Determine the joint marginal distributions of (X, Y), (X, Z) and (Y, Z).

Multivariate marginal pdfs - Example

Solution: Determining the value of K.

 111 1,,  f x y z dxdydz  K x2 yz dxdydz    000 x1 11x3 111 K xyz dydz  K   yz dydz 0033x0 00

y1 11111y 2  KyzdzKzdz   0032 32 12 y0 if K  1 7 zz2 11 7 KKK 1 340  34 12


Multivariate marginal pdfs - Example

The marginal distribution of X:

$f_1(x) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} f(x, y, z)\, dy\, dz = \frac{12}{7} \int_0^1\!\!\int_0^1 (x^2 + yz)\, dy\, dz$

$= \frac{12}{7} \int_0^1 \left[ x^2 y + \frac{y^2 z}{2} \right]_{y=0}^{y=1} dz = \frac{12}{7} \int_0^1 \left( x^2 + \frac{z}{2} \right) dz$

$= \frac{12}{7} \left[ x^2 z + \frac{z^2}{4} \right]_{z=0}^{z=1} = \frac{12}{7} \left( x^2 + \frac{1}{4} \right) \quad \text{for } 0 \le x \le 1$

Multivariate marginal pdfs - Example

The marginal distribution of (X, Y):

$f_{12}(x, y) = \int_{-\infty}^{\infty} f(x, y, z)\, dz = \frac{12}{7} \int_0^1 (x^2 + yz)\, dz$

$= \frac{12}{7} \left[ x^2 z + \frac{y z^2}{2} \right]_{z=0}^{z=1} = \frac{12}{7} \left( x^2 + \frac{y}{2} \right) \quad \text{for } 0 \le x \le 1, \; 0 \le y \le 1$
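The normalization constant and the marginals above can be checked symbolically; the following is a small sketch using sympy (a tool assumption, not part of the original notes):

```python
import sympy as sp

x, y, z, K = sp.symbols('x y z K', positive=True)
f = K * (x**2 + y*z)

# Normalization: choose K so the density integrates to 1
total = sp.integrate(f, (x, 0, 1), (y, 0, 1), (z, 0, 1))
K_val = sp.solve(sp.Eq(total, 1), K)[0]
print(K_val)                                   # 12/7

f = f.subs(K, K_val)
print(sp.integrate(f, (y, 0, 1), (z, 0, 1)))   # f1(x)    = (12/7)(x^2 + 1/4)
print(sp.integrate(f, (z, 0, 1)))              # f12(x,y) = (12/7)(x^2 + y/2)
```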


Multivariate marginal pdfs - Example

Find the conditional distribution of:

1. Z given X = x, Y = y
2. Y given X = x, Z = z
3. X given Y = y, Z = z
4. Y, Z given X = x
5. X, Z given Y = y
6. X, Y given Z = z
7. Y given X = x
8. X given Y = y
9. X given Z = z
10. Z given X = x
11. Z given Y = y
12. Y given Z = z

Multivariate marginal pdfs - Example

The marginal distribution of (X, Y):

$f_{12}(x, y) = \frac{12}{7} \left( x^2 + \frac{y}{2} \right) \quad \text{for } 0 \le x \le 1, \; 0 \le y \le 1$

Thus the conditional distribution of Z given X = x, Y = y is

$f(z \mid x, y) = \frac{f(x, y, z)}{f_{12}(x, y)} = \frac{\frac{12}{7}(x^2 + yz)}{\frac{12}{7}\left( x^2 + \frac{y}{2} \right)} = \frac{x^2 + yz}{x^2 + \frac{y}{2}} \quad \text{for } 0 \le z \le 1$


Multivariate marginal pdfs - Example

The marginal distribution of X:

$f_1(x) = \frac{12}{7} \left( x^2 + \frac{1}{4} \right) \quad \text{for } 0 \le x \le 1$

Then, the conditional distribution of (Y, Z) given X = x is

$f(y, z \mid x) = \frac{f(x, y, z)}{f_1(x)} = \frac{\frac{12}{7}(x^2 + yz)}{\frac{12}{7}\left( x^2 + \frac{1}{4} \right)} = \frac{x^2 + yz}{x^2 + \frac{1}{4}} \quad \text{for } 0 \le y \le 1, \; 0 \le z \le 1$

Expectations for Multivariate Distributions

Definition: Expectation

Let X1, X2, …, Xn denote n jointly distributed random variables with joint density function

f(x1, x2, …, xn). Then

$E\left[ g(X_1, \ldots, X_n) \right] = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} g(x_1, \ldots, x_n) \, f(x_1, \ldots, x_n)\, dx_1 \cdots dx_n$


Expectations for Multivariate Distributions - Example

Let X, Y, Z denote 3 jointly distributed random variables with joint density function

$f(x, y, z) = \begin{cases} \frac{12}{7}(x^2 + yz) & 0 \le x \le 1, \; 0 \le y \le 1, \; 0 \le z \le 1 \\ 0 & \text{otherwise} \end{cases}$

Determine E[XYZ]. Solution:

111 12 E XYZ xyz x2 yz dxdydz 000 7

12 111 x 322yz xy z dxdydz 7 000

Expectations for Multivariate Distributions - Example

$E[XYZ] = \frac{12}{7} \int_0^1\!\!\int_0^1 \left[ \frac{x^4 yz}{4} + \frac{x^2 y^2 z^2}{2} \right]_{x=0}^{x=1} dy\, dz = \frac{12}{7} \int_0^1\!\!\int_0^1 \left( \frac{yz}{4} + \frac{y^2 z^2}{2} \right) dy\, dz$

$= \frac{12}{7} \int_0^1 \left[ \frac{y^2 z}{8} + \frac{y^3 z^2}{6} \right]_{y=0}^{y=1} dz = \frac{12}{7} \int_0^1 \left( \frac{z}{8} + \frac{z^2}{6} \right) dz$

$= \frac{12}{7} \left[ \frac{z^2}{16} + \frac{z^3}{18} \right]_0^1 = \frac{12}{7} \left( \frac{1}{16} + \frac{1}{18} \right) = \frac{12}{7} \cdot \frac{17}{144} = \frac{17}{84}$
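The integral can be verified symbolically, e.g. with sympy (an illustrative sketch):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Rational(12, 7) * (x**2 + y*z)

E_xyz = sp.integrate(x * y * z * f, (x, 0, 1), (y, 0, 1), (z, 0, 1))
print(E_xyz)    # 17/84
```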


Some Rules for Expectations – Rule 1

1. $E[X_i] = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} x_i \, f(x_1, \ldots, x_n)\, dx_1 \cdots dx_n = \int_{-\infty}^{\infty} x_i \, f_i(x_i)\, dx_i$

Thus you can calculate E[Xi] either from the joint distribution of

X1, … , Xn or the marginal distribution of Xi.

Proof:

$\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} x_i \, f(x_1, \ldots, x_n)\, dx_1 \cdots dx_n$

$= \int_{-\infty}^{\infty} x_i \left[ \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(x_1, \ldots, x_n)\, dx_1 \cdots dx_{i-1}\, dx_{i+1} \cdots dx_n \right] dx_i = \int_{-\infty}^{\infty} x_i \, f_i(x_i)\, dx_i$

Some Rules for Expectations – Rule 2

2. EaX11 aXnn  aEX 1 1  aEX n n This property is called the Linearity property.

Proof:  ax a x f x,, x dx dx 11nn 1  n 1 n     axfxxdxdx ,, 1111nn    axfxxdxdx,, nn11 n n  


Some Rules for Expectations – Rule 3

3. (The multiplicative property) Suppose X1, …, Xq are independent of Xq+1, …, Xk. Then

$E\left[ g(X_1, \ldots, X_q) \, h(X_{q+1}, \ldots, X_k) \right] = E\left[ g(X_1, \ldots, X_q) \right] E\left[ h(X_{q+1}, \ldots, X_k) \right]$

In the simple case when k = 2, and g(X) = X and h(Y) = Y:

$E[XY] = E[X] \, E[Y]$

if X and Y are independent

Some Rules for Expectations – Rule 3

Proof: By independence, $f(x_1, \ldots, x_k) = f_{1 \cdots q}(x_1, \ldots, x_q) \, f_{q+1 \cdots k}(x_{q+1}, \ldots, x_k)$, so

$E\left[ g(X_1, \ldots, X_q) \, h(X_{q+1}, \ldots, X_k) \right] = \int \cdots \int g(x_1, \ldots, x_q) \, h(x_{q+1}, \ldots, x_k) \, f(x_1, \ldots, x_k)\, dx_1 \cdots dx_k$

$= \left[ \int \cdots \int g(x_1, \ldots, x_q) \, f_{1 \cdots q}(x_1, \ldots, x_q)\, dx_1 \cdots dx_q \right] \left[ \int \cdots \int h(x_{q+1}, \ldots, x_k) \, f_{q+1 \cdots k}(x_{q+1}, \ldots, x_k)\, dx_{q+1} \cdots dx_k \right]$

$= E\left[ g(X_1, \ldots, X_q) \right] E\left[ h(X_{q+1}, \ldots, X_k) \right]$

Some Rules for Variance – Rule 1

1. Var X YXY Var   Var   2Cov XY , 

w here C ov XY , = E  XXY Y  Proof:

2 Var XY E XY    XY 

where  X  YXY EX  Y   Thus,

2 Var XY E XY    XY

EX222 X Y Y XXYY  VarX 2Cov XY ,  Var Y


Some Rules for Variance – Rule 1

Note: If X and Y are independent, then

$\mathrm{Cov}[X, Y] = E\left[ (X - \mu_X)(Y - \mu_Y) \right] = E[X - \mu_X] \, E[Y - \mu_Y] = 0 \cdot 0 = 0$

and $\mathrm{Var}[X + Y] = \mathrm{Var}[X] + \mathrm{Var}[Y]$.

Some Rules for Variance – Rule 1 - XY

Definition: Correlation coefficient For any two random variables X and Y then define the correlation coefficient

XY to be: Cov X ,YXY Cov ,   xy =  VarXY Var XY

Thus CovXY , = X YXY

22 and VarXY X YXYXY 2

22   X   Y if X and Y are independent.


Some Rules for Variance – Rule 1 - XY Cov X ,YXY Cov ,  Recall  xy =  VarXY Var XY

Property 1. If X and Y are independent, then XY =0. (Cov(X,Y)=0.)

The converse is not necessarily true. That is, XY = 0 does not imply that X and Y are independent. Example: y\x 6810f(y) y E(X)=8, E(Y)=2, E(XY)=16 1 .2 0 .2 .4 Cov(X,Y) =16 – 8*2 = 0 20.20.2 3 .2 0 .2 .4 P(X=6,Y=2)=0≠P(X=6)*P(Y=2)=.4* *.2=.08=> X&Y are not independent fx(x) .4 .2 .4 1

Some Rules for Variance – Rule 1 - XY

Property 2. 11 XY 

and  XY  1 if there exists a and b such that PY bX a 1

where XY = +1 if b > 0 and XY = -1 if b< 0

Proof: Let UX XY and VY  . gb E V bU2 0 for all b. Let    We will pick b to minimize g(b). 2 gb E V bU EV 2222 bVUbU      222 EV2 bEVU   bEU 


Some Rules for Variance – Rule 1 - XY

Taking first of g(b) w..t b gb E V bU2 222    EV 2 bEVU   bEU EVU gb 22 EVU  bEU2  0 => bb      min 2 EU

Since g(b) ≥ 0, then g(bmin) ≥ 0 222  gbmin EV2 bEVUbEU min   min  2 EVU  EVU 2 EV2 2 EVU E[U ]   22 EU  EU 2  EVU EV2 0  2 EU

Some Rules for Variance – Rule 1 - XY

2  EVU EV2 0  2 EU 2 EVU Thus,  1 22 EU EV

2 EX Y or XY 2   XY  1 EX22 EY XY

=> 11 XY 


Some Rules for Variance – Rule 1 - XY

 222 Note: gb min  EV 2 bEVUbEU min   min

EVbU2 0 min

2 If and only if  XY  1

This will be true if

PYYX bmin  X  01

PY bmin X a1 where a YX b min 

i.e., PV  bmin U01

Some Rules for Variance – Rule 1 - XY • Summary:

11 XY 

and  XY  1 if there exists a and b such that

PY bX a 1 EX  Y  where  XX bbmin EX  2 X

Cov XY ,   XYXY  Y  ==2  XY VarX  X  X

 Y and ab YXYXYXmin     X


Some Rules for Variance – Rule 2

2. VaraX bY a22 Var X  b Var Y  2 ab Cov X , Y

Proof 2 Var aX bY E aX  bY   aX bY 

with aX bYEaX bY a X b Y Thus, 2 Var aX bY E aX  bY a  b  XY Ea22 X222 abX Y b Y XXYY aXabXYbY22Var  2 Cov ,  Var  

Some Rules for Variance – Rule 3

3. Var aX11 aXnn 

22 aX11Var aXnn Var   

2Cov,aa12 X 1 X 2  2Cov, aa 1nn X 1 X 

2Cov,aa23 X 2 X 3  2Cov, aa 2nn X 2 X 

2Cov,aann11 X n X n n 2 aXiiVar 2 aaXX ijij Cov , i1 ij n 2   aXXXiiVar if 1 , , n are mutually independent i1


The mean and variance of a Binomial RV

We have already computed these by other methods:

1. Using the probability function p(x).
2. Using the moment generating function mX(t).

Now, we will apply the previous rules for means and variances.

Suppose that we have observed n independent repetitions of a random experiment.

Let X1, …, Xn be n mutually independent random variables, each having a Bernoulli distribution with parameter p and defined by

$X_i = \begin{cases} 1 & \text{if repetition } i \text{ is } S \text{ (prob } p\text{)} \\ 0 & \text{if repetition } i \text{ is } F \text{ (prob } q = 1 - p\text{)} \end{cases}$

The mean and variance of a Binomial RV

 E Xpqpi  10 2 2 2 2 2  Var[X i ]  (1 p) p  (0  p) q  (1 p) p  (0  p) (1 p)   (1 p) ( p  p2  p2 )  qp

• Now X = X1 + … + Xn has a Binomial distribution with parameters n and p. Then, X is the total number of successes in the n repetitions, and

$\mu_X = E[X] = E[X_1] + \cdots + E[X_n] = p + \cdots + p = np$

$\sigma_X^2 = \mathrm{Var}[X] = \mathrm{Var}[X_1] + \cdots + \mathrm{Var}[X_n] = pq + \cdots + pq = npq$


Conditional Expectation

Definition: Conditional Joint Probability Function

Let X1, X2, …, Xq, Xq+1 …, Xk denote k continuous random variables with joint probability density function

f(x1, x2, …, xq, xq+1 …, xk )

then the conditional joint density function of X1, X2, …, Xq given Xq+1 = xq+1, …, Xk = xk is

$f_{1 \cdots q \mid q+1 \cdots k}(x_1, \ldots, x_q \mid x_{q+1}, \ldots, x_k) = \frac{f(x_1, \ldots, x_k)}{f_{q+1 \cdots k}(x_{q+1}, \ldots, x_k)}$


Definition: Conditional Expectation

Let U = h(X1, X2, …, Xq, Xq+1, …, Xk). Then the conditional expectation of U given Xq+1 = xq+1, …, Xk = xk is

$E[U \mid x_{q+1}, \ldots, x_k] = \int \cdots \int h(x_1, \ldots, x_k) \, f_{1 \cdots q \mid q+1 \cdots k}(x_1, \ldots, x_q \mid x_{q+1}, \ldots, x_k)\, dx_1 \cdots dx_q$

Note: This will be a function of xq+1 , …, xk.

Conditional Expectation of a Function - Example

Let X, Y, Z denote 3 jointly distributed RVs with joint density function

$f(x, y, z) = \begin{cases} \frac{12}{7}(x^2 + yz) & 0 \le x \le 1, \; 0 \le y \le 1, \; 0 \le z \le 1 \\ 0 & \text{otherwise} \end{cases}$

Determine the conditional expectation of U = X² + Y + Z given X = x, Y = y.

Integrating over z gives the marginal distribution of (X, Y):

$f_{12}(x, y) = \frac{12}{7} \left( x^2 + \frac{y}{2} \right) \quad \text{for } 0 \le x \le 1, \; 0 \le y \le 1$


Conditional Expectation of a Function - Example

Then, the conditional distribution of Z given X = x, Y = y is

$f(z \mid x, y) = \frac{f(x, y, z)}{f_{12}(x, y)} = \frac{x^2 + yz}{x^2 + \frac{y}{2}} \quad \text{for } 0 \le z \le 1$

The conditional expectation of U = X² + Y + Z given X = x, Y = y is

$E[U \mid x, y] = \int_0^1 \left( x^2 + y + z \right) \frac{x^2 + yz}{x^2 + \frac{y}{2}}\, dz = \frac{1}{x^2 + \frac{y}{2}} \int_0^1 \left( x^2 + y + z \right)\left( x^2 + yz \right) dz$

$= \frac{1}{x^2 + \frac{y}{2}} \int_0^1 \left[ \left( x^4 + x^2 y \right) + \left( x^2 + x^2 y + y^2 \right) z + y z^2 \right] dz$

$= \frac{1}{x^2 + \frac{y}{2}} \left[ \left( x^4 + x^2 y \right) z + \left( x^2 + x^2 y + y^2 \right) \frac{z^2}{2} + y \frac{z^3}{3} \right]_{z=0}^{z=1} = \frac{x^4 + x^2 y + \frac{x^2 + x^2 y + y^2}{2} + \frac{y}{3}}{x^2 + \frac{y}{2}}$

Grouping terms in the numerator as $x^2\left( x^2 + \frac{y}{2} \right) + y\left( x^2 + \frac{y}{2} \right) + \frac{x^2}{2} + \frac{y}{3}$, the conditional expectation of U given X = x, Y = y is

$E[U \mid x, y] = x^2 + y + \frac{\frac{x^2}{2} + \frac{y}{3}}{x^2 + \frac{y}{2}}$

A Useful Tool: Iterated Expectations

Theorem

Let (x1, x2, …, xq, y1, y2, …, ym) = (x, y) denote q + m RVs.

Let U = g(x1, x2, …, xq, y1, y2, …, ym) = g(x, y). Then,

$E[U] = E_{\mathbf{y}}\big[ E[U \mid \mathbf{y}] \big]$

$\mathrm{Var}[U] = E_{\mathbf{y}}\big[ \mathrm{Var}[U \mid \mathbf{y}] \big] + \mathrm{Var}_{\mathbf{y}}\big[ E[U \mid \mathbf{y}] \big]$

The first result is commonly referred to as the Law of iterated expectations. The second result is commonly referred to as the Law of total variance, or the variance decomposition formula.


A Useful Tool: Iterated Expectations

Proof: (in the simple case of 2 variables X and Y) First, we prove the Law of iterated expectations.

Thus U = g(X, Y) and

$E[U] = \int\!\!\int g(x, y) \, f(x, y)\, dx\, dy$

Also,

$E[U \mid Y = y] = \int g(x, y) \, f_{X \mid Y}(x \mid y)\, dx = \int g(x, y) \, \frac{f(x, y)}{f_Y(y)}\, dx$

hence

$E_Y\big[ E[U \mid Y] \big] = \int E[U \mid y] \, f_Y(y)\, dy = \int \left[ \int g(x, y) \, \frac{f(x, y)}{f_Y(y)}\, dx \right] f_Y(y)\, dy$

$= \int\!\!\int g(x, y) \, f(x, y)\, dx\, dy = E[U]$


A Useful Tool: Iterated Expectations

Now, for the Law of total variance:

2 2 Var U E U E  U  2 EEUY2 EEUY   YY    2 2 EVarUY EUY  EEUY  YY     2 2 E VarUY E E  UY  E  E  UY YY   Y   E Var U Y Var E  U Y YY 

A Useful Tool: Iterated Expectations - Example

Example: Suppose that a rectangle is constructed by first choosing its length, X, and then choosing its width, Y. Its length X is selected from an exponential distribution with mean μ = 1/λ = 5. Once the length has been chosen, its width, Y, is selected from a uniform distribution from 0 to half its length. Find the mean and variance of the area of the rectangle, A = XY.


A Useful Tool: Iterated Expectations - Example

Solution:

$f_X(x) = \frac{1}{5} e^{-x/5} \quad \text{for } x \ge 0$

$f_{Y \mid X}(y \mid x) = \frac{1}{x/2} = \frac{2}{x} \quad \text{if } 0 \le y \le \frac{x}{2}$

$f(x, y) = f_X(x)\, f_{Y \mid X}(y \mid x) = \frac{2}{5x} e^{-x/5} \quad \text{if } 0 \le y \le \frac{x}{2}, \; x \ge 0$

We could compute the mean and variance of A = XY directly from this joint density f(x, y):

A Useful Tool: Iterated Expectations - Example

$E[A] = E[XY] = \int\!\!\int xy \, f(x, y)\, dx\, dy = \int_0^{\infty}\!\!\int_0^{x/2} xy \, \frac{2}{5x} e^{-x/5}\, dy\, dx = \int_0^{\infty}\!\!\int_0^{x/2} \frac{2y}{5} e^{-x/5}\, dy\, dx$

$E[A^2] = E[X^2 Y^2] = \int\!\!\int x^2 y^2 \, f(x, y)\, dx\, dy = \int_0^{\infty}\!\!\int_0^{x/2} \frac{2 x y^2}{5} e^{-x/5}\, dy\, dx$

and $\mathrm{Var}[A] = E[A^2] - \big( E[A] \big)^2$.


A Useful Tool: Iterated Expectations - Example

$E[A] = \int_0^{\infty} \frac{2}{5} e^{-x/5} \left[ \frac{y^2}{2} \right]_{y=0}^{y=x/2} dx = \frac{1}{20} \int_0^{\infty} x^2 e^{-x/5}\, dx = \frac{1}{20} \, \Gamma(3) \, 5^3 = \frac{2 \cdot 125}{20} = 12.5$

A Useful Tool: Iterated Expectations - Example

$E[A^2] = \int_0^{\infty} \frac{2x}{5} e^{-x/5} \left[ \frac{y^3}{3} \right]_{y=0}^{y=x/2} dx = \frac{1}{60} \int_0^{\infty} x^4 e^{-x/5}\, dx = \frac{1}{60} \, \Gamma(5) \, 5^5 = \frac{4! \cdot 5^5}{60} = 1250$

Thus $\mathrm{Var}[A] = E[A^2] - \big( E[A] \big)^2 = 1250 - 12.5^2 = 1093.75$.


A Useful Tool: Iterated Expectations - Example

Now, let's use the previous theorem. That is,

$E[A] = E[XY] = E_X\big[ E[XY \mid X] \big]$

and

$\mathrm{Var}[A] = \mathrm{Var}[XY] = E_X\big[ \mathrm{Var}[XY \mid X] \big] + \mathrm{Var}_X\big[ E[XY \mid X] \big]$

Now $E[XY \mid X] = X \, E[Y \mid X] = X \cdot \frac{X}{4} = \frac{X^2}{4}$

and $\mathrm{Var}[XY \mid X] = X^2 \, \mathrm{Var}[Y \mid X] = X^2 \cdot \frac{(X/2)^2}{12} = \frac{X^4}{48}$

This is because, given X, Y has a uniform distribution from 0 to X/2.

A Useful Tool: Iterated Expectations - Example

E A E XY E E XY X  Thus     X    EX1122 EX  1 XX44  42

where nd 1 2 2 moment for the exponential dist'n with 5 k! Note   for the exponential distn k  k 11225 Thus EA442 2 12.5 1 2 5

• The same answer as previously calculated!! And no integration needed!


A Useful Tool: Iterated Expectations - Example

Now $E[XY \mid X] = \frac{X^2}{4}$ and $\mathrm{Var}[XY \mid X] = \frac{X^4}{48}$. Also,

$\mathrm{Var}[A] = \mathrm{Var}[XY] = E_X\big[ \mathrm{Var}[XY \mid X] \big] + \mathrm{Var}_X\big[ E[XY \mid X] \big]$

$E_X\big[ \mathrm{Var}[XY \mid X] \big] = E\left[ \frac{X^4}{48} \right] = \frac{1}{48} \, 4! \, 5^4 = 312.5$

$\mathrm{Var}_X\big[ E[XY \mid X] \big] = \mathrm{Var}\left[ \frac{X^2}{4} \right] = \frac{1}{16} \, \mathrm{Var}[X^2] = \frac{1}{16} \left( E[X^4] - \big( E[X^2] \big)^2 \right)$

A Useful Tool: Iterated Expectations - Example

2 Var E XY X Var 11 X22 Var X XXX  44  2  2 4! 2! 4452 1 555 4 42 224! 2! 20 4 1144 55 Thus Var A  Var XY  E Var XY X Var E XY X XX 

45 5544 15 14 5   5  1093.75 24 24 8

• The same answer as previously calculated!! And no integration needed!
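Both answers can also be confirmed by simulation; here is a minimal numpy sketch (the seed and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n = 1_000_000

x = rng.exponential(scale=5.0, size=n)   # length X: exponential, mean 5
y = rng.uniform(0.0, x / 2.0)            # width Y | X: uniform on (0, X/2)
a = x * y

print(a.mean())   # ~ 12.5
print(a.var())    # ~ 1093.75
```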


The Multivariate MGF

Definition: Multivariate MGF

Let X1, X2, … , Xq be q random variables with a joint density function given by f(x1, x2, … , xq). The multivariate MGF is

$m_X(\mathbf{t}) = E_X\left[ \exp(\mathbf{t}' \mathbf{X}) \right]$

where t’= (t1, t2, … , tq) and X= (X1, X2, … , Xq )’.

If X1, X2, …, Xn are n independent random variables, then

$m_X(\mathbf{t}) = \prod_{i=1}^{n} m_{X_i}(t_i)$

The MGF of a Multivariate Normal

Definition: MGF for the Multivariate Normal

Let X1, X2, …, Xq be q jointly normal random variables. The multivariate normal MGF is

$m_X(\mathbf{t}) = E_X\left[ \exp(\mathbf{t}' \mathbf{X}) \right] = \exp\left( \mathbf{t}' \boldsymbol{\mu} + \tfrac{1}{2} \mathbf{t}' \Sigma \, \mathbf{t} \right)$

where t= (t1, t2, … , tq)’, X= (X1, X2, … , Xq )’ and μ= (μ1, μ2, … , μq )’.


Review: The Transformation Method

Theorem: Let X denote a random variable with probability density function f(x) and let U = h(X). Assume that h(x) is either strictly increasing or strictly decreasing. Then the probability density of U is:

$g(u) = f\big( h^{-1}(u) \big) \left| \frac{d\, h^{-1}(u)}{du} \right| = f(x) \left| \frac{dx}{du} \right|$

The Transformation Method (many variables)

Theorem

Let x1, x2,…, xn denote random variables with joint probability density function

f(x1, x2,…, xn )

Let u1 = h1(x1, x2,…, xn).

u2 = h2(x1, x2,…, xn).

⋮

un = hn(x1, x2,…, xn)

define an invertible transformation from the x's to the u's.


The Transformation Method (many variables)

Then the joint probability density function of u1, u2,…, un is given by:

dx 1 ,, xn  gu11,, unn f x ,, x du1 ,, un

 f  xxJ1 ,, n 

 dx11 dx   du du   1 n  dx 1 ,, xn  where J   det   du1 ,, un    dx dx  nn   du1 dun  Jacobian of the transformation

Example: Distribution of x+y and x-y

Suppose that x1, x2 are independent with density functions f1 (x1) and f2(x2)

Find the distribution of u1 = x1+ x2 and u2 = x1 - x2

Solution: Solving for x1 and x2, we get the inverse transformation:

$x_1 = \frac{u_1 + u_2}{2}, \qquad x_2 = \frac{u_1 - u_2}{2}$

The Jacobian of the transformation is

$J = \frac{\partial (x_1, x_2)}{\partial (u_1, u_2)} = \det \begin{pmatrix} \frac{\partial x_1}{\partial u_1} & \frac{\partial x_1}{\partial u_2} \\ \frac{\partial x_2}{\partial u_1} & \frac{\partial x_2}{\partial u_2} \end{pmatrix}$


Example: Distribution of x+y and x-y

11 dxx12, 22 11111 J  det  1122222 duu12,  22

The joint density of x1, x2 is

f(x1, x2) = f1 (x1) f2(x2)

Hence the joint density of u1 and u2 is:

g uu12,,  f xx 12 J

uu12 uu 121  ff12 222

Example: Distribution of x+y and x-y

uu12 uu 121 From guu12,  f 1 f 2 222

We can determine the distribution of u1= x1 + x2

 gu guudu, 11  12  2   uu uu1  f 12fdu 12  12 2  222

uu12 uu 12 dv 1 put vuv then 1 , 22du2 2


Example: Distribution of x+y and x-y

Hence

$g_1(u_1) = \int_{-\infty}^{\infty} f_1\!\left( \frac{u_1 + u_2}{2} \right) f_2\!\left( \frac{u_1 - u_2}{2} \right) \frac{1}{2}\, du_2 = \int_{-\infty}^{\infty} f_1(v) \, f_2(u_1 - v)\, dv$

This is called the convolution of the two densities f1 and f2.

Example (1): Convolution formula - The Gamma distribution

Let X and Y be two independent random variables such that X and Y each have an exponential distribution with parameter λ.

We will use the convolution formula to find the distribution of U = X + Y. (We already know the distribution of U: gamma.)

 u -(u-y) -y gU (u)   fU (u  y) fY ( y)dy    e  e dy  0 u   2 e -u dy  2 ue -u 0 This is the gamma distribution when α=2.


Example (2): The ex-Gaussian distribution

Let X and Y be two independent random variables such that:

1. X has an exponential distribution with parameter λ.

2. Y has a normal (Gaussian) distribution with mean μ and variance σ².

We will use the convolution formula to find the distribution of U = X + Y.

(This distribution is used in psychology as a model for response time to perform a task.)

Example (2): The ex-Gaussian distribution

Now

$f_1(x) = \begin{cases} \lambda e^{-\lambda x} & x \ge 0 \\ 0 & x < 0 \end{cases}$

$f_2(y) = \frac{1}{\sqrt{2\pi}\,\sigma} \, e^{-\frac{(y - \mu)^2}{2\sigma^2}}$

The density of U = X + Y is:

$g(u) = \int_{-\infty}^{\infty} f_1(v) \, f_2(u - v)\, dv = \int_0^{\infty} \lambda e^{-\lambda v} \, \frac{1}{\sqrt{2\pi}\,\sigma} \, e^{-\frac{(u - v - \mu)^2}{2\sigma^2}}\, dv$


Example (2): The ex-Gaussian distribution

$g(u) = \frac{\lambda}{\sqrt{2\pi}\,\sigma} \int_0^{\infty} \exp\left\{ -\lambda v - \frac{\big( v - (u - \mu) \big)^2}{2\sigma^2} \right\} dv = \frac{\lambda}{\sqrt{2\pi}\,\sigma} \int_0^{\infty} \exp\left\{ -\frac{\big( v - (u - \mu) \big)^2 + 2\sigma^2 \lambda v}{2\sigma^2} \right\} dv$

Completing the square in v:

$\big( v - (u - \mu) \big)^2 + 2\sigma^2 \lambda v = \big( v - (u - \mu - \lambda\sigma^2) \big)^2 + 2\lambda\sigma^2 (u - \mu) - \lambda^2 \sigma^4$

so

$g(u) = \lambda \, e^{\frac{\lambda^2 \sigma^2}{2} - \lambda (u - \mu)} \int_0^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left\{ -\frac{\big( v - (u - \mu - \lambda\sigma^2) \big)^2}{2\sigma^2} \right\} dv = \lambda \, e^{\frac{\lambda^2 \sigma^2}{2} - \lambda (u - \mu)} \, P[V \ge 0]$


Example (2): The ex-Gaussian distribution

where V has a Normal distribution with mean $\mu_V = u - \mu - \lambda\sigma^2$ and variance $\sigma^2$. That is,

$g(u) = \lambda \, e^{\frac{\lambda^2 \sigma^2}{2} - \lambda (u - \mu)} \left[ 1 - \Phi\!\left( \frac{\mu + \lambda\sigma^2 - u}{\sigma} \right) \right]$

where Φ(z) is the CDF of the standard Normal distribution.
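A small numerical sketch of this density (assuming numpy and scipy; the function name exgauss_pdf and the parameter values are illustrative, not part of the notes) checks that g integrates to one and that E[U] = 1/λ + μ:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def exgauss_pdf(u, lam, mu, sigma):
    """Density of U = X + Y, X ~ Exponential(lam), Y ~ N(mu, sigma^2)."""
    return (lam * np.exp(lam**2 * sigma**2 / 2 - lam * (u - mu))
            * norm.cdf((u - mu - lam * sigma**2) / sigma))

lam, mu, sigma = 0.2, 5.0, 2.0          # illustrative parameters

total, _ = quad(exgauss_pdf, -50, 150, args=(lam, mu, sigma))
print(total)                             # ~ 1.0: g is a valid density

mean, _ = quad(lambda t: t * exgauss_pdf(t, lam, mu, sigma), -50, 150)
print(mean, 1/lam + mu)                  # both ~ 10.0: E[U] = E[X] + E[Y]
```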

The ex-Gaussian distribution

[Figure: plot of the ex-Gaussian density g(u) for u between 0 and 30; the density peaks near 0.09.]


Distribution of Quadratic Forms

We will present several theorems for the case when the RVs are normal variables:

Theorem 7.1. If y ~ N(μy, Σy), then z = Ay ~N(Aμy, A Σy A′), where A is a matrix of constants.

Theorem 7.2. Let the n×1 vector y ~ N(0, In). Then $\mathbf{y}'\mathbf{y} \sim \chi^2_n$.

Theorem 7.3. Let the n×1 vector y ~ N(0, σ²In) and let M be a symmetric idempotent matrix of rank m. Then

$\frac{\mathbf{y}' M \mathbf{y}}{\sigma^2} \sim \chi^2_{\mathrm{tr}(M)}$

Proof: Since M is symmetric, it can be diagonalized with an orthogonal matrix Q. That is, Q′MQ = Λ (with Q′Q = I). Since M is idempotent, all of its characteristic roots are either zero or one. Thus,

$Q'MQ = \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix}$

Note: dim(I) = rank(M) (the number of non-zero roots is the rank of the matrix). Also, since $\sum_i \lambda_i = \mathrm{tr}(\Lambda) = \mathrm{tr}(M)$, dim(I) = tr(M).

Let v = Q′y. Then E(v) = Q′E(y) = 0 and

$\mathrm{Var}(\mathbf{v}) = E[\mathbf{v}\mathbf{v}'] = E[Q'\mathbf{y}\mathbf{y}'Q] = Q'(\sigma^2 I_n)Q = \sigma^2 I_n \quad \Rightarrow \quad \mathbf{v} \sim N(0, \sigma^2 I_n)$

Then,

$\frac{\mathbf{y}' M \mathbf{y}}{\sigma^2} = \frac{\mathbf{v}' Q' M Q \mathbf{v}}{\sigma^2} = \frac{1}{\sigma^2} \mathbf{v}' \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix} \mathbf{v} = \frac{1}{\sigma^2} \sum_{i=1}^{\mathrm{tr}(M)} v_i^2$

Thus, y′My/σ² is the sum of tr(M) squared N(0,1) variables. It follows that $\mathbf{y}' M \mathbf{y} / \sigma^2 \sim \chi^2_{\mathrm{tr}(M)}$.
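Theorem 7.3 can be illustrated numerically. In the sketch below (assuming numpy and scipy; building M as a projection matrix is just one convenient way to obtain a symmetric idempotent matrix), the simulated quadratic form matches a χ²(3) distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)
n, sigma = 6, 2.0

# A symmetric idempotent M: projection onto a random 3-dim column space
X = rng.normal(size=(n, 3))
M = X @ np.linalg.inv(X.T @ X) @ X.T        # M @ M = M, rank 3
m = int(round(np.trace(M)))                 # tr(M) = rank(M) = 3

y = rng.normal(0.0, sigma, size=(100_000, n))
q = np.einsum('ij,jk,ik->i', y, M, y) / sigma**2   # draws of y'My / sigma^2

# KS test against chi-square(3); a large p-value supports the theorem
print(m, stats.kstest(q, stats.chi2(df=m).cdf))
```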


Theorem 7.4. Let the n×1 vector y ~ N(μy, Σy). Then

$(\mathbf{y} - \boldsymbol{\mu}_y)' \, \Sigma_y^{-1} \, (\mathbf{y} - \boldsymbol{\mu}_y) \sim \chi^2_n$

Proof: Recall that there exists a non-singular matrix A such that AA′ = Σy. Let $\mathbf{v} = A^{-1}(\mathbf{y} - \boldsymbol{\mu}_y)$ (a linear combination of normal variables). Then v ~ N(0, In) and

$\mathbf{v}'\mathbf{v} = (\mathbf{y} - \boldsymbol{\mu}_y)'(AA')^{-1}(\mathbf{y} - \boldsymbol{\mu}_y) = (\mathbf{y} - \boldsymbol{\mu}_y)' \, \Sigma_y^{-1} \, (\mathbf{y} - \boldsymbol{\mu}_y) \sim \chi^2_n$

(using Theorem 7.3 with M = In and σ² = 1, so tr(M) = n).

Theorem 7.5. Let the n×1 vector y ~ N(0, I) and let M be an n×n matrix. Then the characteristic function of y′My is $|I - 2itM|^{-1/2}$.

Proof:

$\varphi_{\mathbf{y}'M\mathbf{y}}(t) = E_y\left[ e^{it\,\mathbf{y}'M\mathbf{y}} \right] = \frac{1}{(2\pi)^{n/2}} \int_{\mathbf{y}} e^{it\,\mathbf{y}'M\mathbf{y}} \, e^{-\mathbf{y}'\mathbf{y}/2}\, d\mathbf{y} = \frac{1}{(2\pi)^{n/2}} \int_{\mathbf{y}} e^{-\mathbf{y}'(I - 2itM)\mathbf{y}/2}\, d\mathbf{y}$

The integrand is the kernel of a normal density with $\Sigma^{-1} = I - 2itM$; integrating it out leaves exactly the missing normalizing constant, $|I - 2itM|^{-1/2}$.

Theorem 7.6. Let the n×1 vector y ~ N(0, I), let M be an n×n idempotent matrix of rank m, let L be an n×n idempotent matrix of rank s, and suppose ML = 0. Then y′My and y′Ly are independently distributed χ² variables.

Proof: By Theorem 7.3, both quadratic forms are distributed as χ² variables. We only need to prove independence. From Theorem 7.5, we have

$\varphi_{\mathbf{y}'M\mathbf{y}}(t) = E_y\left[ e^{it\,\mathbf{y}'M\mathbf{y}} \right] = |I - 2itM|^{-1/2}, \qquad \varphi_{\mathbf{y}'L\mathbf{y}}(t) = E_y\left[ e^{it\,\mathbf{y}'L\mathbf{y}} \right] = |I - 2itL|^{-1/2}$

The forms will be independently distributed if $\varphi_{\mathbf{y}'(M+L)\mathbf{y}} = \varphi_{\mathbf{y}'M\mathbf{y}} \, \varphi_{\mathbf{y}'L\mathbf{y}}$. That is,

$\varphi_{\mathbf{y}'(M+L)\mathbf{y}}(t) = E_y\left[ e^{it\,\mathbf{y}'(M+L)\mathbf{y}} \right] = |I - 2it(M + L)|^{-1/2} = |I - 2itM|^{-1/2} \, |I - 2itL|^{-1/2}$

Since $|I - 2itM| \, |I - 2itL| = |(I - 2itM)(I - 2itL)| = |I - 2it(M + L) - 4t^2 ML|$, the equality holds when ML = 0.
