
Properties of the expectation

Ignacio Cascos

2018

Outline

5.1 Expectation and variance of a linear combination of random variables
5.2 Conditional expectation
5.3 Conditional variance
5.4 Moments of a random variable
5.5 The moment generating function

Introduction

We will be using the law of iterated expectations and the law of conditional variances to compute the expectation and variance of the sum of a random number of independent random variables, and the expectation and variance of a mixture. Before that, we recall the formulas for the expectation and variance of a linear combination of random variables. The second part of the session is devoted to the moments of a random variable and to the moment generating function.

5.1 Expectation and variance of a linear combination of random variables

We recall from Session 4 that given d random variables X_1, X_2, ..., X_d and real numbers a_1, a_2, ..., a_d, then

E[∑_{i=1}^d a_i X_i] = ∑_{i=1}^d a_i E[X_i]

Var[∑_{i=1}^d a_i X_i] = ∑_{i=1}^d ∑_{j=1}^d a_i a_j Cov[X_i, X_j]
                       = ∑_{i=1}^d a_i^2 Var[X_i] + 2 ∑_{i<j} a_i a_j Cov[X_i, X_j]

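As a quick numerical check, here is a minimal simulation sketch in R; the variables and coefficients are illustrative choices, not part of the original slides.

set.seed(1)
x1 <- rnorm(1e5)
x2 <- x1 + rnorm(1e5)   # correlated with x1 by construction
a1 <- 2; a2 <- -3
var(a1 * x1 + a2 * x2)  # empirical variance of the linear combination
a1^2 * var(x1) + a2^2 * var(x2) + 2 * a1 * a2 * cov(x1, x2)  # covariance formula

Both values agree up to simulation error.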
5.2 Conditional expectation

For any X, Y random variables on the same probability space, the conditional expectation of X given that Y assumes the value y, written E[X|Y = y], is a number, which is computed as

X discrete: E[X|Y = y] = ∑_x x p_{X|Y}(x|y);
X continuous: E[X|Y = y] = ∫_{−∞}^{∞} x f_{X|Y}(x|y) dx.

In contrast, E[X|Y] is a random variable that depends on Y (it is a function of the random variable Y).

Law of iterated expectations

E[X] = E[E[X|Y]]

Discrete random variables

E[E[X|Y]] = ∑_y ∑_x x p_{X|Y}(x|y) p_Y(y)
          = ∑_x ∑_y x p_{X,Y}(x, y) = ∑_x x p_X(x) = E[X].

Continuous random variables

E[E[X|Y]] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x f_{X|Y}(x|y) f_Y(y) dx dy
          = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x f_{X,Y}(x, y) dy dx = ∫_{−∞}^{∞} x f_X(x) dx = E[X].
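A minimal simulation sketch in R; the joint model for X and Y is an illustrative assumption. With Y ∼ B(1, 0.3) and X|Y = y ∼ N(5y, 1), both sides of the law should approximate E[X] = 1.5.

set.seed(1)
y <- rbinom(1e5, size = 1, prob = 0.3)
x <- rnorm(1e5, mean = 5 * y, sd = 1)
mean(x)      # direct estimate of E[X]
mean(5 * y)  # estimate of E[E[X|Y]], since E[X|Y] = 5Y here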

Conditional expectation

Mixture distribution

If X ∼ F(x) = ∑_{i∈I} p_i F_i(x) and X_i ∼ F_i(x), that is, for some (discrete) r.v. Y it holds X|Y = i ∼ F_i(x), then

E[X] = ∑_{i∈I} p_i E[X|Y = i] = ∑_{i∈I} p_i E[X_i].

Example. If X_i ∼ N(µ_i, σ_i), then E[X] = ∑_{i∈I} p_i E[X_i] = ∑_{i∈I} p_i µ_i.

If X ∼ F(x) = ∫_A ω(a) F_a(x) da and X_a ∼ F_a(x), that is, for some (continuous) r.v. Y it holds X|Y = a ∼ F_a(x), then

E[X] = ∫_A ω(a) E[X_a] da.

Example. If X ∼ N(Y, σ) with Y ∼ U(0, 1), then E[X] = ∫_0^1 E[X|Y = y] dy = ∫_0^1 y dy = 1/2.
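The continuous example can be checked by simulation; a minimal sketch in R, where the value of sigma is an arbitrary illustrative choice:

set.seed(1)
sigma <- 2
y <- runif(1e5)                        # Y ~ U(0, 1)
x <- rnorm(1e5, mean = y, sd = sigma)  # X|Y = y ~ N(y, sigma)
mean(x)                                # approximately 1/2, as computed above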

Conditional expectation

Sum of a random number of independent random variables

Consider X_1, X_2, ... independent random variables, each with the distribution of X, and N a random natural number independent of X_1, X_2, ...; then

E[∑_{i=1}^N X_i] = E[E[∑_{i=1}^N X_i | N]] = E[N E[X]] = E[N] E[X].

Example. We play a game 10 times. Each time we play, the probability that we win is 0.5; the associated monetary prize is N(6, 1) for each game we win, and 0 when we lose. The number of victories is N ∼ B(n = 10, p = 1/2) and the prize at our i-th win is X_i ∼ N(6, 1). Our final earnings will be Y = ∑_{i=1}^N X_i with

E[Y] = E[N] E[X] = 5 × 6 = 30.
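A simulation sketch of the game in R; the number of replications is an arbitrary choice:

set.seed(1)
earnings <- replicate(1e4, {
  n <- rbinom(1, size = 10, prob = 0.5)  # number of victories
  sum(rnorm(n, mean = 6, sd = 1))        # total prize; 0 when n = 0
})
mean(earnings)   # approximately E[N]E[X] = 30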

5.3 Conditional variance

For any X, Y random variables on the same probability space, the conditional variance of X given that Y assumes the value y, written Var[X|Y = y], is a number. We can think of it as g(y). As a function of the random variable Y, the expression Var[X|Y], which could be written as g(Y), is a random variable that depends on Y.

Law of conditional variances

Var[X] = E[Var[X|Y]] + Var[E[X|Y]]

Var[X] = E[(X − E[X|Y] + E[X|Y] − E[X])^2]
       = E[(X − E[X|Y])^2] + E[(E[X|Y] − E[X])^2] + 0
       = E[E[(X − E[X|Y])^2 | Y]] + Var[E[X|Y]]
       = E[Var[X|Y]] + Var[E[X|Y]].
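A minimal simulation sketch in R, reusing the same illustrative model as for the law of iterated expectations (Y ∼ B(1, 0.3), X|Y = y ∼ N(5y, 1), so Var[X|Y] = 1 and E[X|Y] = 5Y):

set.seed(1)
y <- rbinom(1e5, size = 1, prob = 0.3)
x <- rnorm(1e5, mean = 5 * y, sd = 1)
var(x)          # direct estimate of Var[X]
1 + var(5 * y)  # E[Var[X|Y]] + Var[E[X|Y]]; both approximate 6.25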

Conditional variance

Mixture distribution (discrete)

Assume X ∼ F(x) = ∑_{i∈I} p_i F_i(x) and X_i ∼ F_i(x). This means that for some (discrete) r.v. Y it holds X|Y = i ∼ F_i(x), and then

Var[X] = ∑_{i∈I} p_i Var[X_i] + ∑_{i∈I} p_i (E[X_i] − E[X])^2 = ∑_{i∈I} p_i E[X_i^2] − E[X]^2.

Example. If X_i ∼ N(µ_i, σ_i), then Var[X] = ∑_{i∈I} p_i (µ_i^2 + σ_i^2) − µ^2.
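A simulation sketch in R for a two-component normal mixture; the weights and parameters are illustrative assumptions:

set.seed(1)
p <- c(0.4, 0.6); mu <- c(0, 3); s <- c(1, 2)
i <- sample(1:2, 1e5, replace = TRUE, prob = p)  # component labels
x <- rnorm(1e5, mean = mu[i], sd = s[i])         # the mixture
var(x)                                           # empirical variance
sum(p * (mu^2 + s^2)) - sum(p * mu)^2            # formula from the example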

Conditional variance

Mixture distribution (continuous)

Assume X ∼ F(x) = ∫_A ω(a) F_a(x) da and X_a ∼ F_a(x). This means that for some (continuous) r.v. Y it holds X|Y = a ∼ F_a(x), and then

Var[X] = ∫_A ω(a) Var[X_a] da + ∫_A ω(a) (E[X_a] − E[X])^2 da
       = ∫_A ω(a) E[X_a^2] da − E[X]^2.

Example. If X ∼ N(Y, σ) with Y ∼ U(0, 1), then Var[X] = ∫_0^1 (σ^2 + y^2) dy − (1/2)^2 = σ^2 + 1/12.
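The example can again be checked by simulation; a minimal sketch in R with an arbitrary sigma:

set.seed(1)
sigma <- 2
y <- runif(1e6)                        # Y ~ U(0, 1)
x <- rnorm(1e6, mean = y, sd = sigma)  # X|Y = y ~ N(y, sigma)
var(x)            # empirical variance
sigma^2 + 1/12    # formula: approximately 4.0833 here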

Conditional variance

Sum of a random number of independent random variables

Consider X_1, X_2, ... independent random variables, each with the distribution of X, and N a random natural number independent of X_1, X_2, ...; then

Var[∑_{i=1}^N X_i] = E[Var[∑_{i=1}^N X_i | N]] + Var[E[∑_{i=1}^N X_i | N]]
                   = E[N Var[X]] + Var[N E[X]]
                   = E[N] Var[X] + E[X]^2 Var[N].

Example. We play the game 10 times, as above. Our final earnings will be Y = ∑_{i=1}^N X_i, with variance

Var[Y] = E[N] Var[X] + E[X]^2 Var[N] = 5 × 1 + 6^2 × (10 × 0.5 × 0.5) = 95.
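A simulation sketch in R, reusing the game model from the expectation example:

set.seed(1)
earnings <- replicate(1e4, sum(rnorm(rbinom(1, size = 10, prob = 0.5), mean = 6, sd = 1)))
var(earnings)   # approximately 95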

5.4 Moments of a random variable

Moments and centred moments of a random variable

If X is a random variable and k a positive integer such that E|X|^k < ∞, then
the k-th moment of X is µ_k = E[X^k];
the k-th centred moment of X is m_k = E[(X − µ_1)^k].

First moment (mean): location

The first moment of an integrable random variable is its mean (location parameter),

µ_1 = µ = E[X],

while the first centred moment is 0,

m_1 = E[X − µ] = 0.

Second moment (variance): scatter

The second moment of a random variable with E|X|^2 < ∞ is

µ_2 = E[X^2],

while the second centred moment is its variance (scatter parameter),

m_2 = E[(X − µ)^2] = Var[X] = σ^2.

Third moment (skewness): symmetry

The third centred moment of a random variable can be used to obtain information about the asymmetry of its distribution,

m_3 = E[(X − µ)^3].

The coefficient of skewness is defined as

Skew_X = m_3 / σ_X^3 = E[(X − µ)^3] / E[(X − µ)^2]^{3/2}.
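The coefficient can be computed directly from sample centred moments; a minimal sketch in R, which should agree with skewness() from the moments package used in the following slides:

set.seed(1)
x <- rchisq(1000, df = 3)
m3 <- mean((x - mean(x))^3)   # sample third centred moment
m2 <- mean((x - mean(x))^2)   # sample second centred moment
m3 / m2^(3/2)                 # sample coefficient of skewness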

Third moment (skewness): symmetry

The skewness of a symmetric distribution is 0.

plot(dnorm, xlim = c(-3, 3))

[Figure: standard normal density curve, x from −3 to 3]

library(moments)
set.seed(1)
skewness(rnorm(1000))

## [1] -0.0191671

Third moment (skewness): positive skew

The skewness of a right-skewed distribution (the right tail is longer than the left tail) is positive.

dchi <- function(x) dchisq(x, df = 3)
plot(dchi, xlim = c(0, 10))

[Figure: chi-squared (df = 3) density curve, x from 0 to 10]

set.seed(1); skewness(rchisq(1000,df=3))

## [1] 1.496328

Third moment (skewness): negative skew

The skewness of a left-skewed distribution (the left tail is longer than the right tail) is negative.

dchineg <- function(x) dchisq(-x, df = 3)
plot(dchineg, xlim = c(-10, 0))

[Figure: mirrored chi-squared (df = 3) density curve, x from −10 to 0]

set.seed(1); skewness(-rchisq(1000,df=3))

## [1] -1.496328

Fourth moment (kurtosis): tails

The fourth centred moment of a random variable can be used to obtain information about how heavy the tails of its distribution are,

m_4 = E[(X − µ)^4].

The kurtosis is defined as

Kurt_X = m_4 / σ_X^4 = E[(X − µ)^4] / E[(X − µ)^2]^2.

set.seed(1)
kurtosis(rnorm(1000))

## [1] 2.998225

Fourth moment (kurtosis): tails

Excess kurtosis

The normal distribution is often taken as a gold standard, and the kurtosis of a random variable is compared with that of a normal random variable by means of the excess kurtosis,

EKurt_X = m_4 / σ_X^4 − 3.
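Analogously to the skewness sketch, the excess kurtosis can be computed from sample centred moments in R; this should agree with kurtosis(x) - 3 from the moments package used below:

set.seed(1)
x <- rnorm(1000)
m4 <- mean((x - mean(x))^4)   # sample fourth centred moment
m2 <- mean((x - mean(x))^2)   # sample second centred moment
m4 / m2^2 - 3                 # sample excess kurtosis, near 0 for normal data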

Fourth moment: zero excess kurtosis (mesokurtic)

A mesokurtic distribution has tails which are as heavy as those of a normal distribution and its excess kurtosis is zero.

set.seed(1)
x.binom <- rbinom(10000, size = 40, prob = 0.5)
hist(x.binom, probability = TRUE)

[Figure: histogram of x.binom on the density scale, x from 10 to 30]

kurtosis(x.binom)-3

## [1] -0.1317299

Fourth moment: positive excess kurtosis (leptokurtic)

A leptokurtic distribution has heavier tails than a normal distribution and its excess kurtosis is positive.

set.seed(1)
x.lap <- sample(c(-1, 1), 1000, replace = TRUE) * rexp(1000)
hist(x.lap, probability = TRUE)
kurtosis(x.lap) - 3

## [1] 2.371918

[Figure: histogram of x.lap on the density scale, x from −5 to 5]

Fourth moment: negative excess kurtosis (platykurtic)

A platykurtic distribution has thinner tails than a normal distribution and its excess kurtosis is negative.

set.seed(1)
x.unif <- runif(1000)
hist(x.unif, probability = TRUE)
kurtosis(x.unif) - 3

## [1] -1.184201

[Figure: histogram of x.unif on the density scale, x from 0 to 1]

5.5 The moment generating function

The moment generating function of a random variable X evaluated at t ∈ R is given by

M_X(t) = E[e^{tX}].

The moment generating function completely determines the distribution of the random variable X (inversion property).

Moment generating function

Moment generating function of some random variables

If Y = aX + b, then M_Y(t) = e^{tb} M_X(at).
If X and Y are independent, M_{X+Y}(t) = M_X(t) M_Y(t).
If X ∼ pF_{X_1} + (1 − p)F_{X_2}, then M_X(t) = p M_{X_1}(t) + (1 − p) M_{X_2}(t).
If X ∼ B(1, p), then M_X(t) = 1 − p + pe^t.
If X ∼ B(n, p), then M_X(t) = (1 − p + pe^t)^n.
If X ∼ P(λ), then M_X(t) = e^{λ(e^t − 1)}.
If X ∼ Exp(λ), then M_X(t) = λ/(λ − t) for t < λ.
If X ∼ N(µ, σ), then M_X(t) = e^{σ^2 t^2/2 + µt}.
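Any entry of this list can be checked by Monte Carlo; a minimal sketch in R for the Poisson case, with arbitrary illustrative values of λ and t:

set.seed(1)
lambda <- 2; t <- 0.5
x <- rpois(1e5, lambda)
mean(exp(t * x))             # empirical E[e^{tX}]
exp(lambda * (exp(t) - 1))   # closed form from the list above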

Moment generating function

Moment generating function and moments

The k-th derivative of the moment generating function evaluated at 0 equals the k-th moment of a random variable:

M_X'(t) = E[X e^{tX}] and M_X'(0) = E[X]
M_X''(t) = E[X^2 e^{tX}] and M_X''(0) = E[X^2]
M_X^{(k)}(t) = E[X^k e^{tX}] and M_X^{(k)}(0) = E[X^k]
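A numerical illustration in R: differentiating the exponential MGF at 0 by finite differences recovers the first two moments. The step size h is a tuning choice for this sketch.

lambda <- 2
M <- function(t) lambda / (lambda - t)   # MGF of Exp(lambda), for t < lambda
h <- 1e-5
(M(h) - M(-h)) / (2 * h)                 # central difference: E[X] = 1/lambda = 0.5
(M(h) - 2 * M(0) + M(-h)) / h^2          # second difference: E[X^2] = 2/lambda^2 = 0.5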
