

MOMENTS

Date: December 7, 2005.

1. POPULATION MOMENTS

1.1. Moments about the origin (raw moments). The rth moment about the origin of a random variable X, denoted by $\mu'_r$, is the expected value of $X^r$; symbolically,

$$\mu'_r = E(X^r) = \sum_x x^r f(x) \tag{1}$$
for $r = 0, 1, 2, \ldots$ when X is discrete and

$$\mu'_r = E(X^r) = \int_{-\infty}^{\infty} x^r f(x)\, dx \tag{2}$$
when X is continuous. The rth moment about the origin is only defined if $E(X^r)$ exists. A moment about the origin is sometimes called a raw moment. Note that $\mu'_1 = E(X) = \mu_X$, the mean of the distribution of X, or simply the mean of X. The rth moment is sometimes written as a function of $\theta$, where $\theta$ is a vector of parameters that characterize the distribution of X.
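As a quick aside (not in the original notes), the discrete formula in equation (1) is easy to evaluate directly. A minimal Python sketch, using a fair six-sided die as an assumed example distribution:

```python
# Raw moments of a discrete distribution via equation (1):
# mu'_r = sum over the support of x^r * f(x).
# Example distribution (an assumption for illustration): a fair die.
from fractions import Fraction

def raw_moment(support, pmf, r):
    """r-th moment about the origin of a discrete distribution."""
    return sum(x**r * pmf(x) for x in support)

die = range(1, 7)
f = lambda x: Fraction(1, 6)  # f(x) = 1/6 for x = 1, ..., 6

print(raw_moment(die, f, 1))  # mean mu'_1 = 7/2
print(raw_moment(die, f, 2))  # second raw moment mu'_2 = 91/6
```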

If there is a sequence of random variables, $X_1, X_2, \ldots, X_n$, we will call the rth population moment of the ith random variable $\mu'_{i,r}$ and define it as

$$\mu'_{i,r} = E(X_i^r) \tag{3}$$

1.2. Central moments. The rth moment about the mean of a random variable X, denoted by $\mu_r$, is the expected value of $(X - \mu_X)^r$; symbolically,

$$\mu_r = E\left[(X - \mu_X)^r\right] = \sum_x (x - \mu_X)^r f(x) \tag{4}$$
for $r = 0, 1, 2, \ldots$ when X is discrete and

$$\mu_r = E\left[(X - \mu_X)^r\right] = \int_{-\infty}^{\infty} (x - \mu_X)^r f(x)\, dx \tag{5}$$
when X is continuous. The rth moment about the mean is only defined if $E[(X - \mu_X)^r]$ exists. The rth moment about the mean of a random variable X is sometimes called the rth central moment of X. The rth central moment of X about any point a is defined as $E[(X - a)^r]$. If $a = \mu_X$, we have the rth central moment of X about $\mu_X$.

Note that


$$\begin{aligned} \mu_1 &= E[X - \mu_X] = \int_{-\infty}^{\infty} (x - \mu_X) f(x)\, dx = 0 \\ \mu_2 &= E\left[(X - \mu_X)^2\right] = \int_{-\infty}^{\infty} (x - \mu_X)^2 f(x)\, dx = \operatorname{Var}(X) = \sigma^2 \end{aligned} \tag{6}$$
Also note that all odd moments of X around its mean are zero for symmetrical distributions, provided such moments exist.
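As a numerical check of the symmetry claim (an illustration, not part of the original notes), the odd central moments of equation (4) vanish for a distribution symmetric about its mean. A sketch using an assumed uniform distribution on {−2, −1, 0, 1, 2}:

```python
# Central moments via equation (4) for a symmetric discrete distribution:
# all odd central moments come out exactly zero.
from fractions import Fraction

support = [-2, -1, 0, 1, 2]
f = lambda x: Fraction(1, 5)  # uniform pmf, symmetric about 0

mean = sum(x * f(x) for x in support)  # mu_X = 0

def central_moment(r):
    """r-th moment about the mean, per equation (4)."""
    return sum((x - mean)**r * f(x) for x in support)

print([str(central_moment(r)) for r in (1, 2, 3, 4, 5)])
# -> ['0', '2', '0', '34/5', '0']: the odd moments are zero
```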

If there is a sequence of random variables, $X_1, X_2, \ldots, X_n$, we will call the rth central population moment of the ith random variable $\mu_{i,r}$ and define it as

$$\mu_{i,r} = E\left[\left(X_i - \mu'_{i,1}\right)^r\right] \tag{7}$$
When the variables are identically distributed, we will drop the i subscript and write $\mu'_r$ and $\mu_r$.

2. SAMPLE MOMENTS

2.1. Definitions. Assume there is a sequence of random variables, $X_1, X_2, \ldots, X_n$. The first sample moment, usually called the average, is defined by

$$\bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i \tag{8}$$

Corresponding to this is its numerical value, $\bar{x}_n$, which is defined by

$$\bar{x}_n = \frac{1}{n} \sum_{i=1}^{n} x_i \tag{9}$$

where $x_i$ represents the observed value of $X_i$. The rth sample moment for any r is defined by

$$\bar{X}_n^r = \frac{1}{n} \sum_{i=1}^{n} X_i^r \tag{10}$$
This too has a numerical counterpart given by

$$\bar{x}_n^r = \frac{1}{n} \sum_{i=1}^{n} x_i^r \tag{11}$$
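These sample moments are simple to compute from data. A minimal sketch of equations (9) and (11), assuming numpy and a small made-up data set:

```python
# r-th sample moment, equation (11): the average of the r-th powers.
import numpy as np

def sample_moment(x, r):
    """(1/n) * sum of x_i^r."""
    x = np.asarray(x, dtype=float)
    return np.mean(x**r)

x = np.array([2.0, 4.0, 4.0, 5.0, 7.0])
print(sample_moment(x, 1))  # the sample average, 4.4
print(sample_moment(x, 2))  # the second sample moment, 22.0
```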

2.2. Properties of Sample Moments.

2.2.1. Expected value of $\bar{X}_n^r$. Taking the expected value of equation 10 we obtain

$$E\left(\bar{X}_n^r\right) = \frac{1}{n} \sum_{i=1}^{n} E\left(X_i^r\right) = \frac{1}{n} \sum_{i=1}^{n} \mu'_{i,r} \tag{12}$$
If the X's are identically distributed, then

$$E\left(\bar{X}_n^r\right) = \frac{1}{n} \sum_{i=1}^{n} \mu'_r = \mu'_r \tag{13}$$

2.2.2. Variance of $\bar{X}_n^r$. First consider the case where we have a sample $X_1, X_2, \ldots, X_n$.

$$\operatorname{Var}\left(\bar{X}_n^r\right) = \operatorname{Var}\left(\frac{1}{n} \sum_{i=1}^{n} X_i^r\right) = \frac{1}{n^2} \operatorname{Var}\left(\sum_{i=1}^{n} X_i^r\right) \tag{14}$$
If the X's are independent, then

$$\operatorname{Var}\left(\bar{X}_n^r\right) = \frac{1}{n^2} \sum_{i=1}^{n} \operatorname{Var}\left(X_i^r\right) \tag{15}$$
If the X's are independent and identically distributed, then

$$\operatorname{Var}\left(\bar{X}_n^r\right) = \frac{1}{n} \operatorname{Var}\left(X^r\right) \tag{16}$$
where X denotes any one of the random variables (because they are all identical). In the case where $r = 1$, we obtain

$$\operatorname{Var}\left(\bar{X}_n\right) = \frac{1}{n} \operatorname{Var}(X) = \frac{\sigma^2}{n} \tag{17}$$
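Equation (17) can be checked by simulation: draw many samples of size n, average each, and compare the empirical variance of the averages with $\sigma^2/n$. A sketch, assuming numpy and an exponential population with $\sigma^2 = 1$ (the population choice is arbitrary):

```python
# Monte Carlo check of equation (17): Var(X_bar) = sigma^2 / n.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 25, 200_000
draws = rng.exponential(scale=1.0, size=(reps, n))  # Var(X) = 1

xbar = draws.mean(axis=1)   # one sample average per replication
print(xbar.var())           # simulated Var(X_bar)
print(1.0 / n)              # theoretical value, 0.04
```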

3. SAMPLE CENTRAL MOMENTS

3.1. Definitions. Assume there is a sequence of random variables, $X_1, X_2, \ldots, X_n$. We define the sample central moments as

$$\begin{aligned} C_n^r &= \frac{1}{n} \sum_{i=1}^{n} \left(X_i - \mu'_{i,1}\right)^r, \qquad r = 1, 2, 3, \ldots \\ \Rightarrow C_n^1 &= \frac{1}{n} \sum_{i=1}^{n} \left(X_i - \mu'_{i,1}\right) \\ \Rightarrow C_n^2 &= \frac{1}{n} \sum_{i=1}^{n} \left(X_i - \mu'_{i,1}\right)^2 \end{aligned} \tag{18}$$
These are only defined if $\mu'_{i,1}$ is known.
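A sketch of the statistic $C_n^r$, assuming numpy and a Uniform(0, 1) population, chosen so that the required population mean $\mu'_{i,1} = 0.5$ is known:

```python
# Sample central moments C_n^r of equation (18), computed about the
# *known* population mean mu (here mu = 0.5 for Uniform(0, 1) draws).
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=100_000)
mu = 0.5  # known population mean

def C(x, mu, r):
    """r-th sample central moment about the known mean mu."""
    return np.mean((x - mu)**r)

print(C(x, mu, 1))  # near 0, consistent with equation (20)
print(C(x, mu, 2))  # near Var(Uniform(0, 1)) = 1/12 ~ 0.0833
```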

3.2. Properties of Sample Central Moments.

3.2.1. Expected value of $C_n^r$. The expected value of $C_n^r$ is given by

$$E\left(C_n^r\right) = \frac{1}{n} \sum_{i=1}^{n} E\left[\left(X_i - \mu'_{i,1}\right)^r\right] = \frac{1}{n} \sum_{i=1}^{n} \mu_{i,r} \tag{19}$$
The last equality follows from equation 7.

If the Xi are identically distributed, then

$$E\left(C_n^r\right) = \mu_r, \qquad E\left(C_n^1\right) = 0 \tag{20}$$

3.2.2. Variance of $C_n^r$. First consider the case where we have a sample $X_1, X_2, \ldots, X_n$.

$$\operatorname{Var}\left(C_n^r\right) = \operatorname{Var}\left(\frac{1}{n} \sum_{i=1}^{n} \left(X_i - \mu'_{i,1}\right)^r\right) = \frac{1}{n^2} \operatorname{Var}\left(\sum_{i=1}^{n} \left(X_i - \mu'_{i,1}\right)^r\right) \tag{21}$$
If the X's are independently distributed, then

$$\operatorname{Var}\left(C_n^r\right) = \frac{1}{n^2} \sum_{i=1}^{n} \operatorname{Var}\left[\left(X_i - \mu'_{i,1}\right)^r\right] \tag{22}$$
If the X's are independent and identically distributed, then

$$\operatorname{Var}\left(C_n^r\right) = \frac{1}{n} \operatorname{Var}\left[\left(X - \mu'_1\right)^r\right] \tag{23}$$
where X denotes any one of the random variables (because they are all identical). In the case where $r = 1$, we obtain

$$\begin{aligned} \operatorname{Var}\left(C_n^1\right) &= \frac{1}{n} \operatorname{Var}\left[X - \mu'_1\right] \\ &= \frac{1}{n} \operatorname{Var}[X - \mu] \\ &= \frac{1}{n}\left(\sigma^2 - 2 \operatorname{Cov}[X, \mu] + \operatorname{Var}[\mu]\right) \\ &= \frac{\sigma^2}{n} \end{aligned} \tag{24}$$
where the last step follows because $\mu$ is a constant, so $\operatorname{Cov}[X, \mu] = 0$ and $\operatorname{Var}[\mu] = 0$.

4. SAMPLE MOMENTS ABOUT THE AVERAGE

4.1. Definitions. Assume there is a sequence of random variables, $X_1, X_2, \ldots, X_n$. Define the rth sample moment about the average as

$$M_n^r = \frac{1}{n} \sum_{i=1}^{n} \left(X_i - \bar{X}_n\right)^r, \qquad r = 1, 2, 3, \ldots \tag{25}$$
This is clearly a statistic of which we can compute a numerical value. We denote the numerical value by $m_n^r$ and define it as

$$m_n^r = \frac{1}{n} \sum_{i=1}^{n} \left(x_i - \bar{x}_n\right)^r \tag{26}$$
In the special case where $r = 1$ we have

$$M_n^1 = \frac{1}{n} \sum_{i=1}^{n} \left(X_i - \bar{X}_n\right) = \frac{1}{n} \sum_{i=1}^{n} X_i - \bar{X}_n = \bar{X}_n - \bar{X}_n = 0 \tag{27}$$
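The statistic $M_n^r$ is easy to compute, and equation (27) can be confirmed numerically. A minimal sketch, assuming numpy:

```python
# Sample moments about the average, equation (25).
import numpy as np

def M(x, r):
    """r-th sample moment about the sample average."""
    x = np.asarray(x, dtype=float)
    return np.mean((x - x.mean())**r)

x = np.array([2.0, 4.0, 4.0, 5.0, 7.0])
print(M(x, 1))  # 0 up to floating-point rounding, per equation (27)
print(M(x, 2))  # 2.64, the uncorrected second moment about the average
```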

4.2. Properties of Sample Moments about the Average when r = 2.

4.2.1. Alternative ways to write $M_n^r$. We can write $M_n^2$ in an alternative, useful way by expanding the squared term and then simplifying as follows

$$\begin{aligned} M_n^r &= \frac{1}{n} \sum_{i=1}^{n} \left(X_i - \bar{X}_n\right)^r \\ \Rightarrow M_n^2 &= \frac{1}{n} \sum_{i=1}^{n} \left(X_i - \bar{X}_n\right)^2 \\ &= \frac{1}{n} \sum_{i=1}^{n} \left(X_i^2 - 2 X_i \bar{X}_n + \bar{X}_n^2\right) \\ &= \frac{1}{n} \sum_{i=1}^{n} X_i^2 - \frac{2 \bar{X}_n}{n} \sum_{i=1}^{n} X_i + \frac{1}{n} \sum_{i=1}^{n} \bar{X}_n^2 \\ &= \frac{1}{n} \sum_{i=1}^{n} X_i^2 - 2 \bar{X}_n^2 + \bar{X}_n^2 \\ &= \frac{1}{n} \sum_{i=1}^{n} X_i^2 - \bar{X}_n^2 \end{aligned} \tag{28}$$
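Since equation (28) is an algebraic identity, it must hold for any data set up to floating-point rounding. A quick numerical check, assuming numpy:

```python
# Check of equation (28): the mean squared deviation from the average
# equals the mean of the squares minus the squared mean.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=1000)

lhs = np.mean((x - x.mean())**2)
rhs = np.mean(x**2) - x.mean()**2
print(np.isclose(lhs, rhs))  # True
```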

4.2.2. Expected value of $M_n^2$. The expected value of $M_n^2$ is then given by

$$\begin{aligned} E\left(M_n^2\right) &= E\left[\frac{1}{n} \sum_{i=1}^{n} X_i^2\right] - E\left(\bar{X}_n^2\right) \\ &= \frac{1}{n} \sum_{i=1}^{n} E\left(X_i^2\right) - \left[E\left(\bar{X}_n\right)\right]^2 - \operatorname{Var}\left(\bar{X}_n\right) \\ &= \frac{1}{n} \sum_{i=1}^{n} \mu'_{i,2} - \left(\frac{1}{n} \sum_{i=1}^{n} \mu'_{i,1}\right)^2 - \operatorname{Var}\left(\bar{X}_n\right) \end{aligned} \tag{29}$$

The second line follows from the alternative definition of variance

$$\begin{aligned} \operatorname{Var}(X) &= E\left(X^2\right) - [E(X)]^2 \\ \Rightarrow E\left(X^2\right) &= [E(X)]^2 + \operatorname{Var}(X) \\ \Rightarrow E\left(\bar{X}_n^2\right) &= \left[E\left(\bar{X}_n\right)\right]^2 + \operatorname{Var}\left(\bar{X}_n\right) \end{aligned} \tag{30}$$

and the third line follows from equation 12. If the $X_i$ are independent and identically distributed, then

$$\begin{aligned} E\left(M_n^2\right) &= E\left[\frac{1}{n} \sum_{i=1}^{n} X_i^2\right] - E\left(\bar{X}_n^2\right) \\ &= \frac{1}{n} \sum_{i=1}^{n} \mu'_2 - \left(\frac{1}{n} \sum_{i=1}^{n} \mu'_1\right)^2 - \operatorname{Var}\left(\bar{X}_n\right) \\ &= \mu'_2 - \left(\mu'_1\right)^2 - \frac{\sigma^2}{n} \\ &= \sigma^2 - \frac{\sigma^2}{n} \\ &= \frac{n-1}{n}\, \sigma^2 \end{aligned} \tag{31}$$

where $\mu'_1$ and $\mu'_2$ are the first and second population moments, and $\mu_2$ is the second central population moment for the identically distributed variables. Note that this obviously implies

$$E\left[\sum_{i=1}^{n} \left(X_i - \bar{X}_n\right)^2\right] = n\, E\left(M_n^2\right) = n \left(\frac{n-1}{n}\right) \sigma^2 = (n-1)\, \sigma^2 \tag{32}$$

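Equation (31) says the uncorrected second moment about the average underestimates $\sigma^2$ by the factor $(n-1)/n$. A Monte Carlo sketch, assuming numpy (whose `var` with `ddof=0` computes exactly $M_n^2$):

```python
# Simulation of equation (31): E[M_n^2] = ((n-1)/n) * sigma^2.
import numpy as np

rng = np.random.default_rng(3)
n, reps, sigma2 = 10, 200_000, 4.0
draws = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

m2 = draws.var(axis=1, ddof=0)   # M_n^2 for each replication
print(m2.mean())                 # near 3.6
print((n - 1) / n * sigma2)      # theoretical value, 3.6
```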
4.2.3. Variance of $M_n^2$. By definition,

$$\operatorname{Var}\left(M_n^2\right) = E\left[\left(M_n^2\right)^2\right] - \left[E\left(M_n^2\right)\right]^2 \tag{33}$$

The second term on the right of equation 33 is easily obtained by squaring the result in equation 31.

$$\begin{aligned} E\left(M_n^2\right) &= \frac{n-1}{n}\, \sigma^2 \\ \Rightarrow \left[E\left(M_n^2\right)\right]^2 &= \frac{(n-1)^2}{n^2}\, \sigma^4 \end{aligned} \tag{34}$$

Now consider the first term on the right hand side of equation 33. Write it as

2  n !  2 2 1 ¯ 2 E hM  i =E X Xi − Xn (35) n  n  i =1

Now consider writing $\frac{1}{n} \sum_{i=1}^{n} \left(X_i - \bar{X}_n\right)^2$ as follows

$$\frac{1}{n} \sum_{i=1}^{n} \left(X_i - \bar{X}\right)^2 = \frac{1}{n} \sum_{i=1}^{n} \left[(X_i - \mu) - (\bar{X} - \mu)\right]^2 = \frac{1}{n} \sum_{i=1}^{n} \left(Y_i - \bar{Y}\right)^2 \tag{36}$$
where $Y_i = X_i - \mu$ and $\bar{Y} = \bar{X} - \mu$.

Obviously,

$$\sum_{i=1}^{n} \left(X_i - \bar{X}\right)^2 = \sum_{i=1}^{n} \left(Y_i - \bar{Y}\right)^2, \quad \text{where } Y_i = X_i - \mu,\ \bar{Y} = \bar{X} - \mu \tag{37}$$

Now consider the properties of the random variable $Y_i$, which is a transformation of $X_i$. First the expected value:

$$\begin{aligned} Y_i &= X_i - \mu \\ E(Y_i) &= E(X_i) - E(\mu) = \mu - \mu = 0 \end{aligned} \tag{38}$$

The variance of Yi is

$$\operatorname{Var}(Y_i) = \operatorname{Var}(X_i) = \sigma^2 \quad \text{if the } X_i \text{ are independently and identically distributed} \tag{39}$$

Also consider $E\left(Y_i^4\right)$. We can write this as

$$E\left(Y^4\right) = \int_{-\infty}^{\infty} y^4 f(x)\, dx = \int_{-\infty}^{\infty} (x - \mu)^4 f(x)\, dx = \mu_4 \tag{40}$$

Now write equation 35 as follows

2  n !  2 2 1 ¯ 2 E hM  i =E X Xi − Xn  (41a) n  n  i =1

2  n !  1 ¯ 2 =E X Xi − X (41b)  n  i =1

 n 2 1 2 ! =E X Yi − Y¯  (41c)  n  i =1

2  n !  1 ¯ 2 = E X Yi − Y  (41d) n2   i =1

Ignoring $\frac{1}{n^2}$ for now, expand equation 41 as follows

2 2  n !   n !  ¯ 2 2 ¯ ¯ 2 E X Yi − Y  =E X Y − 2 YiY + Y  (42a)    i  i =1 i =1

 n n n 2 2 2! = E X Y − 2 Y¯ X Yi + X Y¯ (42b)  i  i =1 i =1 i =1

2  ( n ) !  = E X Y 2 − 2 n Y¯ 2 + n Y¯ 2 (42c)  i  i =1

2  ( n ) !  = E X Y 2 − nY¯ 2 (42d)  i  i =1

2  n ! n  = E X Y 2 − 2 n Y¯ 2 X Y 2 + n2Y¯ 4 (42e)  i i  i =1 i =1

2  n !  " n # = E X Y 2 − 2 nE Y¯ 2 X Y 2 + n2 E Y¯ 4 (42f)  i  i i =1 i =1

Now consider the first term on the right of 42, which we can write as

2  n !   n n  E X Y 2 =E X Y 2 X Y 2 (43a)  i   i j  i =1 i =1 j =1

" n # =E X Y 4 + XX Y 2 Y 2 (43b) i i 6=j i j i =1 n = X EY4 + XX EY2 EY2 (43c) i i 6=j i j i =1 2 =nµ4 + n (n − 1)µ2 (43d)

4 =nµ4 + n (n − 1)σ (43e)

Now consider the second term on the right of 42 (ignoring $2n$ for now), which we can write as

$$\begin{aligned} E\left[\bar{Y}^2 \sum_{i=1}^{n} Y_i^2\right] &= \frac{1}{n^2}\, E\left[\sum_{j=1}^{n} Y_j \sum_{k=1}^{n} Y_k \sum_{i=1}^{n} Y_i^2\right] \quad (44a) \\ &= \frac{1}{n^2}\, E\left[\sum_{i=1}^{n} Y_i^4 + \mathop{\sum\sum}_{i \neq j} Y_i^2 Y_j^2 + \mathop{\sum\sum}_{\substack{j \neq k \\ i \neq j,\; i \neq k}} Y_j Y_k Y_i^2\right] \quad (44b) \\ &= \frac{1}{n^2} \left[\sum_{i=1}^{n} E\left(Y_i^4\right) + \mathop{\sum\sum}_{i \neq j} E\left(Y_i^2\right) E\left(Y_j^2\right) + \mathop{\sum\sum}_{\substack{j \neq k \\ i \neq j,\; i \neq k}} E(Y_j)\, E(Y_k)\, E\left(Y_i^2\right)\right] \quad (44c) \\ &= \frac{1}{n^2} \left[n \mu_4 + n(n-1)\, \mu_2^2 + 0\right] \quad (44d) \\ &= \frac{1}{n} \left[\mu_4 + (n-1)\, \sigma^4\right] \quad (44e) \end{aligned}$$

The last term on the penultimate line is zero because $E(Y_j) = E(Y_k) = E(Y_i) = 0$.

Now consider the third term on the right side of 42 (ignoring $n^2$ for now), which we can write as

 n n n n  ¯ 4 1 E Y  = E X Yi X Yj X Yk X Y` (45a) n4   i =1 j =1 k =1 ` =1

 n  1 4 2 2 2 2 2 2 = E X Yi + XX Yi Yk + XX Yi Yj + X Yi Yj + ··· (45b) n2  i 6=k i 6=j  i =1 i 6=j

where for the first double sum $(i = j \neq k = \ell)$, for the second $(i = k \neq j = \ell)$, and for the last $(i = \ell \neq j = k)$, and $\cdots$ indicates that all other terms include $Y_i$ in a non-squared form, the expected value of which will be zero. Given that the $Y_i$ are independently and identically distributed, the expected value of each of the double sums is the same, which gives

 n  ¯ 4 1 4 2 2 2 2 2 2 E Y  = E X Yi + XX Yi Yk + XX Yi Yj + X Yi Yj + ··· (46a) n4  i 6=k i 6=j  i =1 i 6=j

n 1 " 4 2 2 # = X EYi +3XX Yi Yj + terms containing EXi (46b) n4 i 6=j i =1 n 1 " 4 2 2# = X EYi +3XX Yi Yj (46c) n4 i 6=j i =1

1 2 = nµ4 +3n (n − 1) ( µ2 )  (46d) n4

1 4 = nµ4 +3n (n − 1) σ  (46e) n4

1 4 = µ4 +3(n − 1) σ  (46f) n3

Now combining the information in equations 43, 44, and 46, we obtain

2 2  n !   n !  ¯ 2 2 ¯ ¯ 2 E X Yi − Y  =E X Y − 2 YiY + Y  (47a)    i  i =1 i =1

2  n !  " n # =E X Y 2 − 2 nE Y¯ 2 X Y 2 + n2 E Y¯ 4  (47b)  i  i i =1 i =1

2  1 2  2  1 2  =nµ4 + n(n − 1)µ − 2n  µ4 +(n − 1)µ  + n  µ4 +3(n − 1)µ  2 n 2 n3 2 (47c)

2 2  1 2  =nµ4 + n (n − 1)µ − 2 µ4 +(n − 1) µ  + µ4 +3(n − 1) µ  2 2 n 2 (47d) n2 2 n 1 n2(n − 1) 2 n (n − 1) 3(n − 1) = µ − µ + µ + µ2 − µ2 + µ2 (47e) n 4 n 4 n 4 n 2 n 2 n 2 n2 − 2 n +1 (n − 1) (n2 − 2 n +3) = µ + µ2 (47f) n 4 n 2 n2 − 2 n +1 (n − 1) (n2 − 2 n +3) = µ + σ4 (47g) n 4 n

Now rewrite equation 41 including $\frac{1}{n^2}$ as follows

2  n !  2 2 1 ¯ 2 E h M  i = E X Yi − Y  (48a) n n2   i =1

1 n2 − 2 n +1 (n − 1) (n2 − 2 n +3)  = µ + σ (48b) n2 n 4 n 4 n2 − 2 n +1 (n − 1) (n2 − 2 n +3) = µ + σ (48c) n3 4 n3 4 ( n − 1)2 (n − 1) (n2 − 2 n +3) = µ + σ (48d) n3 4 n3 4

Now substitute equations 34 and 48 into equation 33 to obtain

$$\begin{aligned} \operatorname{Var}\left(M_n^2\right) &= E\left[\left(M_n^2\right)^2\right] - \left[E\left(M_n^2\right)\right]^2 \\ &= \frac{(n-1)^2}{n^3}\, \mu_4 + \frac{(n-1)\left(n^2 - 2n + 3\right)}{n^3}\, \sigma^4 - \frac{(n-1)^2}{n^2}\, \sigma^4 \end{aligned} \tag{49}$$

We can simplify this as

$$\begin{aligned} \operatorname{Var}\left(M_n^2\right) &= E\left[\left(M_n^2\right)^2\right] - \left[E\left(M_n^2\right)\right]^2 \quad (50a) \\ &= \frac{(n-1)^2}{n^3}\, \mu_4 + \frac{(n-1)\left(n^2 - 2n + 3\right)}{n^3}\, \sigma^4 - \frac{n(n-1)^2}{n^3}\, \sigma^4 \quad (50b) \\ &= \frac{\mu_4 (n-1)^2 + (n-1)\, \sigma^4 \left(n^2 - 2n + 3 - n(n-1)\right)}{n^3} \quad (50c) \\ &= \frac{\mu_4 (n-1)^2 + (n-1)\, \sigma^4 \left(n^2 - 2n + 3 - n^2 + n\right)}{n^3} \quad (50d) \\ &= \frac{\mu_4 (n-1)^2 + (n-1)\, \sigma^4 (3 - n)}{n^3} \\ &= \frac{\mu_4 (n-1)^2 - (n-1)\, \sigma^4 (n-3)}{n^3} \quad (50e) \\ &= \frac{(n-1)^2}{n^3}\, \mu_4 - \frac{(n-1)(n-3)}{n^3}\, \sigma^4 \quad (50f) \end{aligned}$$
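Equation (50) can be checked by simulation. For a normal population $\mu_4 = 3\sigma^4$ (derived in section 6 below), which makes the formula easy to evaluate. A sketch, assuming numpy:

```python
# Monte Carlo check of equation (50): Var(M_n^2) for i.i.d. normal data.
import numpy as np

rng = np.random.default_rng(4)
n, reps, sigma2 = 10, 400_000, 1.0
draws = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

m2 = draws.var(axis=1, ddof=0)         # M_n^2 per replication
mu4, s4 = 3.0 * sigma2**2, sigma2**2   # normal: mu_4 = 3 * sigma^4
formula = (n - 1)**2 / n**3 * mu4 - (n - 1) * (n - 3) / n**3 * s4
print(m2.var(), formula)               # both near 0.18
```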

5. SAMPLE VARIANCE

5.1. Definition of sample variance. The sample variance is defined as

$$S_n^2 = \frac{1}{n-1} \sum_{i=1}^{n} \left(X_i - \bar{X}_n\right)^2 \tag{51}$$
We can write this in terms of the moments about the average as

$$S_n^2 = \frac{1}{n-1} \sum_{i=1}^{n} \left(X_i - \bar{X}_n\right)^2 = \frac{n}{n-1}\, M_n^2, \qquad \text{where } M_n^2 = \frac{1}{n} \sum_{i=1}^{n} \left(X_i - \bar{X}_n\right)^2 \tag{52}$$

5.2. Expected value of $S_n^2$. We can compute the expected value of $S_n^2$ by substituting in from equation 31 as follows

$$E\left(S_n^2\right) = \frac{n}{n-1}\, E\left(M_n^2\right) = \frac{n}{n-1} \cdot \frac{n-1}{n}\, \sigma^2 = \sigma^2 \tag{53}$$

5.3. Variance of $S_n^2$. We can compute the variance of $S_n^2$ by substituting in from equation 50 as follows

$$\begin{aligned} \operatorname{Var}\left(S_n^2\right) &= \frac{n^2}{(n-1)^2}\, \operatorname{Var}\left(M_n^2\right) \\ &= \frac{n^2}{(n-1)^2} \left[\frac{(n-1)^2}{n^3}\, \mu_4 - \frac{(n-1)(n-3)}{n^3}\, \sigma^4\right] \\ &= \frac{\mu_4}{n} - \frac{(n-3)\, \sigma^4}{n(n-1)} \end{aligned} \tag{54}$$
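Equation (54) holds for any i.i.d. population with a finite fourth moment, not just the normal. A Monte Carlo sketch, assuming numpy and a unit exponential population, for which $\sigma^2 = 1$ and $\mu_4 = 9$:

```python
# Monte Carlo check of equation (54) with non-normal (exponential) data.
import numpy as np

rng = np.random.default_rng(5)
n, reps = 20, 400_000
draws = rng.exponential(scale=1.0, size=(reps, n))

s2 = draws.var(axis=1, ddof=1)   # the sample variance S^2
mu4, sigma4 = 9.0, 1.0           # central moments of Exp(1)
formula = mu4 / n - (n - 3) * sigma4 / (n * (n - 1))
print(s2.var(), formula)         # both near 0.405
```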

5.4. Definition of $\hat{\sigma}^2$. One possible estimate of the population variance is $\hat{\sigma}^2$, which is given by

$$\hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} \left(X_i - \bar{X}_n\right)^2 = M_n^2 \tag{55}$$

5.5. Expected value of $\hat{\sigma}^2$. We can compute the expected value of $\hat{\sigma}^2$ by substituting in from equation 31 as follows

$$E\left(\hat{\sigma}^2\right) = E\left(M_n^2\right) = \frac{n-1}{n}\, \sigma^2 \tag{56}$$

5.6. Variance of $\hat{\sigma}^2$. We can compute the variance of $\hat{\sigma}^2$ by substituting in from equation 50 as follows

$$\begin{aligned} \operatorname{Var}\left(\hat{\sigma}^2\right) &= \operatorname{Var}\left(M_n^2\right) = \frac{(n-1)^2}{n^3}\, \mu_4 - \frac{(n-1)(n-3)}{n^3}\, \sigma^4 \\ &= \frac{\mu_4 - \mu_2^2}{n} - \frac{2\left(\mu_4 - 2 \mu_2^2\right)}{n^2} + \frac{\mu_4 - 3 \mu_2^2}{n^3} \end{aligned} \tag{57}$$
We can also write this in an alternative fashion

$$\begin{aligned} \operatorname{Var}\left(\hat{\sigma}^2\right) &= \operatorname{Var}\left(M_n^2\right) \\ &= \frac{(n-1)^2}{n^3}\, \mu_4 - \frac{(n-1)(n-3)}{n^3}\, \sigma^4 \\ &= \frac{(n-1)^2}{n^3}\, \mu_4 - \frac{(n-1)(n-3)}{n^3}\, \mu_2^2 \\ &= \frac{n^2 \mu_4 - 2 n \mu_4 + \mu_4}{n^3} - \frac{n^2 \mu_2^2 - 4 n \mu_2^2 + 3 \mu_2^2}{n^3} \\ &= \frac{n^2\left(\mu_4 - \mu_2^2\right) - 2 n\left(\mu_4 - 2 \mu_2^2\right) + \mu_4 - 3 \mu_2^2}{n^3} \\ &= \frac{\mu_4 - \mu_2^2}{n} - \frac{2\left(\mu_4 - 2 \mu_2^2\right)}{n^2} + \frac{\mu_4 - 3 \mu_2^2}{n^3} \end{aligned} \tag{58}$$

6. NORMAL POPULATIONS

6.1. Central moments of the normal distribution. For a normal population we can obtain the central moments by differentiating the moment generating function. The moment generating function for the central moments is as follows

$$M_X(t) = e^{\frac{t^2 \sigma^2}{2}} \tag{59}$$

The moments are then as follows. The first central moment is

$$E(X - \mu) = \left.\frac{d}{dt}\, e^{\frac{t^2 \sigma^2}{2}}\right|_{t=0} = \left. t \sigma^2 e^{\frac{t^2 \sigma^2}{2}}\right|_{t=0} = 0 \tag{60}$$

The second central moment is

$$\begin{aligned} E\left[(X - \mu)^2\right] &= \left.\frac{d^2}{dt^2}\, e^{\frac{t^2 \sigma^2}{2}}\right|_{t=0} \\ &= \left.\frac{d}{dt}\left(t \sigma^2 e^{\frac{t^2 \sigma^2}{2}}\right)\right|_{t=0} \\ &= \left.\left(t^2 \sigma^4 e^{\frac{t^2 \sigma^2}{2}} + \sigma^2 e^{\frac{t^2 \sigma^2}{2}}\right)\right|_{t=0} \\ &= \sigma^2 \end{aligned} \tag{61}$$

The third central moment is

$$\begin{aligned} E\left[(X - \mu)^3\right] &= \left.\frac{d^3}{dt^3}\, e^{\frac{t^2 \sigma^2}{2}}\right|_{t=0} \\ &= \left.\frac{d}{dt}\left(t^2 \sigma^4 e^{\frac{t^2 \sigma^2}{2}} + \sigma^2 e^{\frac{t^2 \sigma^2}{2}}\right)\right|_{t=0} \\ &= \left.\left(t^3 \sigma^6 e^{\frac{t^2 \sigma^2}{2}} + 2 t \sigma^4 e^{\frac{t^2 \sigma^2}{2}} + t \sigma^4 e^{\frac{t^2 \sigma^2}{2}}\right)\right|_{t=0} \\ &= \left.\left(t^3 \sigma^6 e^{\frac{t^2 \sigma^2}{2}} + 3 t \sigma^4 e^{\frac{t^2 \sigma^2}{2}}\right)\right|_{t=0} \\ &= 0 \end{aligned} \tag{62}$$

The fourth central moment is

$$\begin{aligned} E\left[(X - \mu)^4\right] &= \left.\frac{d^4}{dt^4}\, e^{\frac{t^2 \sigma^2}{2}}\right|_{t=0} \\ &= \left.\frac{d}{dt}\left(t^3 \sigma^6 e^{\frac{t^2 \sigma^2}{2}} + 3 t \sigma^4 e^{\frac{t^2 \sigma^2}{2}}\right)\right|_{t=0} \\ &= \left.\left(t^4 \sigma^8 e^{\frac{t^2 \sigma^2}{2}} + 3 t^2 \sigma^6 e^{\frac{t^2 \sigma^2}{2}} + 3 t^2 \sigma^6 e^{\frac{t^2 \sigma^2}{2}} + 3 \sigma^4 e^{\frac{t^2 \sigma^2}{2}}\right)\right|_{t=0} \\ &= \left.\left(t^4 \sigma^8 e^{\frac{t^2 \sigma^2}{2}} + 6 t^2 \sigma^6 e^{\frac{t^2 \sigma^2}{2}} + 3 \sigma^4 e^{\frac{t^2 \sigma^2}{2}}\right)\right|_{t=0} \\ &= 3 \sigma^4 \end{aligned} \tag{63}$$
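The four derivatives above can be reproduced symbolically. A sketch, assuming sympy is available:

```python
# Central moments of the normal by differentiating the central moment
# generating function of equation (59) and evaluating at t = 0.
import sympy as sp

t, sigma = sp.symbols("t sigma", positive=True)
mgf = sp.exp(t**2 * sigma**2 / 2)

for r in range(1, 5):
    moment = sp.diff(mgf, t, r).subs(t, 0)
    print(r, sp.simplify(moment))
# 1 -> 0, 2 -> sigma**2, 3 -> 0, 4 -> 3*sigma**4
```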

6.2. Variance of $S_n^2$. Let $X_1, X_2, \ldots, X_n$ be a random sample from a normal population with mean $\mu$ and variance $\sigma^2 < \infty$.

We know from equation 54 that

$$\begin{aligned} \operatorname{Var}\left(S_n^2\right) &= \frac{n^2}{(n-1)^2}\, \operatorname{Var}\left(M_n^2\right) \\ &= \frac{n^2}{(n-1)^2} \left[\frac{(n-1)^2}{n^3}\, \mu_4 - \frac{(n-1)(n-3)}{n^3}\, \sigma^4\right] \\ &= \frac{\mu_4}{n} - \frac{(n-3)\, \sigma^4}{n(n-1)} \end{aligned} \tag{64}$$

If we substitute in for $\mu_4$ from equation 63, we obtain

$$\begin{aligned} \operatorname{Var}\left(S_n^2\right) &= \frac{\mu_4}{n} - \frac{(n-3)\, \sigma^4}{n(n-1)} \\ &= \frac{3 \sigma^4}{n} - \frac{(n-3)\, \sigma^4}{n(n-1)} \\ &= \frac{\left(3(n-1) - (n-3)\right) \sigma^4}{n(n-1)} \\ &= \frac{(3n - 3 - n + 3)\, \sigma^4}{n(n-1)} \\ &= \frac{2 n \sigma^4}{n(n-1)} \\ &= \frac{2 \sigma^4}{n-1} \end{aligned} \tag{65}$$

6.3. Variance of $\hat{\sigma}^2$. It is easy to show that

$$\operatorname{Var}\left(\hat{\sigma}^2\right) = \frac{2 \sigma^4 (n-1)}{n^2} \tag{66}$$
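Both normal-population results, equations (65) and (66), can be confirmed by simulation. A final sketch, assuming numpy:

```python
# Simulation of equations (65) and (66) for normal samples:
# Var(S^2) = 2*sigma^4/(n-1), Var(sigma_hat^2) = 2*sigma^4*(n-1)/n^2.
import numpy as np

rng = np.random.default_rng(6)
n, reps, sigma2 = 15, 400_000, 2.0
draws = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

s2 = draws.var(axis=1, ddof=1)         # unbiased sample variance S^2
sig_hat2 = draws.var(axis=1, ddof=0)   # sigma_hat^2 = M_n^2

print(s2.var(), 2 * sigma2**2 / (n - 1))                # both near 0.571
print(sig_hat2.var(), 2 * sigma2**2 * (n - 1) / n**2)   # both near 0.498
```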