SAMPLE MOMENTS

Date: December 7, 2005.

1. POPULATION MOMENTS

1.1. Moments about the origin (raw moments). The rth moment about the origin of a random variable X, denoted by \mu'_r, is the expected value of X^r; symbolically,

    \mu'_r = E(X^r) = \sum_x x^r f(x)    (1)

for r = 0, 1, 2, ... when X is discrete, and

    \mu'_r = E(X^r) = \int_{-\infty}^{\infty} x^r f(x)\, dx    (2)

when X is continuous. The rth moment about the origin is only defined if E[X^r] exists. A moment about the origin is sometimes called a raw moment. Note that \mu'_1 = E(X) = \mu_X, the mean of the distribution of X, or simply the mean of X. The rth moment is sometimes written as a function of \theta, where \theta is a vector of parameters that characterize the distribution of X.

If there is a sequence of random variables X_1, X_2, ..., X_n, we will call the rth population moment of the ith random variable \mu'_{i,r} and define it as

    \mu'_{i,r} = E(X_i^r)    (3)

1.2. Central moments. The rth moment about the mean of a random variable X, denoted by \mu_r, is the expected value of (X - \mu_X)^r; symbolically,

    \mu_r = E[(X - \mu_X)^r] = \sum_x (x - \mu_X)^r f(x)    (4)

for r = 0, 1, 2, ... when X is discrete, and

    \mu_r = E[(X - \mu_X)^r] = \int_{-\infty}^{\infty} (x - \mu_X)^r f(x)\, dx    (5)

when X is continuous. The rth moment about the mean is only defined if E[(X - \mu_X)^r] exists. The rth moment about the mean of a random variable X is sometimes called the rth central moment of X. More generally, the rth central moment of X about a is defined as E[(X - a)^r]; if a = \mu_X, we recover the rth central moment of X about \mu_X. Note that

    \mu_1 = E[X - \mu_X] = \int_{-\infty}^{\infty} (x - \mu_X) f(x)\, dx = 0
    \mu_2 = E[(X - \mu_X)^2] = \int_{-\infty}^{\infty} (x - \mu_X)^2 f(x)\, dx = Var(X) = \sigma^2    (6)

Also note that all odd moments of X around its mean are zero for symmetrical distributions, provided such moments exist.
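As a concrete illustration of equations 1 and 4 (not part of the original notes), the raw and central moments of a simple discrete distribution can be computed directly from the definitions. The sketch below uses a fair six-sided die; the function names are illustrative, not standard.

```python
# Raw and central moments of a discrete distribution (fair six-sided die).
# Illustrates equations (1) and (4); function names are illustrative only.

def raw_moment(support, pmf, r):
    """rth moment about the origin: E(X^r) = sum_x x^r f(x)."""
    return sum(x ** r * pmf(x) for x in support)

def central_moment(support, pmf, r):
    """rth moment about the mean: E[(X - mu_X)^r]."""
    mu = raw_moment(support, pmf, 1)
    return sum((x - mu) ** r * pmf(x) for x in support)

die = range(1, 7)
f = lambda x: 1 / 6  # uniform pmf

mean = raw_moment(die, f, 1)       # mu'_1 = 3.5 (up to float rounding)
mu1 = central_moment(die, f, 1)    # first central moment is 0, as in eq. (6)
var = central_moment(die, f, 2)    # mu_2 = Var(X) = 35/12
mu3 = central_moment(die, f, 3)    # odd central moment of a symmetric pmf: 0
```

Note that `mu3` vanishes because the die's pmf is symmetric about its mean, matching the remark above about odd moments of symmetric distributions.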
If there is a sequence of random variables X_1, X_2, ..., X_n, we will call the rth central population moment of the ith random variable \mu_{i,r} and define it as

    \mu_{i,r} = E[(X_i - \mu'_{i,1})^r]    (7)

When the variables are identically distributed, we will drop the i subscript and write \mu'_r and \mu_r.

2. SAMPLE MOMENTS

2.1. Definitions. Assume there is a sequence of random variables X_1, X_2, ..., X_n. The first sample moment, usually called the average, is defined by

    \bar{X}_n = \frac{1}{n} \sum_{i=1}^n X_i    (8)

Corresponding to this statistic is its numerical value, \bar{x}_n, which is defined by

    \bar{x}_n = \frac{1}{n} \sum_{i=1}^n x_i    (9)

where x_i represents the observed value of X_i. The rth sample moment for any r is defined by

    \bar{X}_n^r = \frac{1}{n} \sum_{i=1}^n X_i^r    (10)

This too has a numerical counterpart, given by

    \bar{x}_n^r = \frac{1}{n} \sum_{i=1}^n x_i^r    (11)

2.2. Properties of Sample Moments.

2.2.1. Expected value of \bar{X}_n^r. Taking the expected value of equation 10 we obtain

    E[\bar{X}_n^r] = \frac{1}{n} \sum_{i=1}^n E[X_i^r] = \frac{1}{n} \sum_{i=1}^n \mu'_{i,r}    (12)

If the X's are identically distributed, then

    E[\bar{X}_n^r] = \frac{1}{n} \sum_{i=1}^n \mu'_r = \mu'_r    (13)

2.2.2. Variance of \bar{X}_n^r. First consider the case where we have a sample X_1, X_2, ..., X_n.

    Var(\bar{X}_n^r) = Var\left( \frac{1}{n} \sum_{i=1}^n X_i^r \right) = \frac{1}{n^2} Var\left( \sum_{i=1}^n X_i^r \right)    (14)

If the X's are independent, then

    Var(\bar{X}_n^r) = \frac{1}{n^2} \sum_{i=1}^n Var(X_i^r)    (15)

If the X's are independent and identically distributed, then

    Var(\bar{X}_n^r) = \frac{1}{n} Var(X^r)    (16)

where X denotes any one of the random variables (because they are all identical). In the case where r = 1, we obtain

    Var(\bar{X}_n) = \frac{1}{n} Var(X) = \frac{\sigma^2}{n}    (17)

3. SAMPLE CENTRAL MOMENTS

3.1. Definitions. Assume there is a sequence of random variables X_1, X_2, ..., X_n. We define the rth sample central moment as

    C_n^r = \frac{1}{n} \sum_{i=1}^n (X_i - \mu'_{i,1})^r,   r = 1, 2, 3, ...
    \Rightarrow C_n^1 = \frac{1}{n} \sum_{i=1}^n (X_i - \mu'_{i,1})
    \Rightarrow C_n^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \mu'_{i,1})^2    (18)

These moments are only defined if \mu'_{i,1} is known.
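The sample moments of equation 10 and the sample central moments of equation 18 are both simple averages, so they are straightforward to compute from data. A minimal sketch (function names are my own; the known population mean required by C_n^r is passed in explicitly):

```python
# Sample moments (eq. 10) and sample central moments (eq. 18).
# C_n^r requires a *known* population mean mu; names are illustrative.

def sample_moment(xs, r):
    """rth sample moment: (1/n) * sum of x_i^r."""
    return sum(x ** r for x in xs) / len(xs)

def sample_central_moment(xs, r, mu):
    """rth sample central moment about the known population mean mu."""
    return sum((x - mu) ** r for x in xs) / len(xs)

data = [1.0, 2.0, 3.0, 4.0]

xbar = sample_moment(data, 1)             # the average, eq. (8): 2.5
m2 = sample_moment(data, 2)               # (1 + 4 + 9 + 16) / 4 = 7.5
c2 = sample_central_moment(data, 2, 2.5)  # (2.25 + 0.25 + 0.25 + 2.25) / 4 = 1.25
```

The distinction the notes draw between the statistic (a random variable) and its numerical value corresponds here to the function versus the value it returns on observed data.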
3.2. Properties of Sample Central Moments.

3.2.1. Expected value of C_n^r. The expected value of C_n^r is given by

    E(C_n^r) = \frac{1}{n} \sum_{i=1}^n E[(X_i - \mu'_{i,1})^r] = \frac{1}{n} \sum_{i=1}^n \mu_{i,r}    (19)

The last equality follows from equation 7. If the X_i are identically distributed, then

    E(C_n^r) = \mu_r
    E(C_n^1) = 0    (20)

3.2.2. Variance of C_n^r. First consider the case where we have a sample X_1, X_2, ..., X_n.

    Var(C_n^r) = Var\left( \frac{1}{n} \sum_{i=1}^n (X_i - \mu'_{i,1})^r \right) = \frac{1}{n^2} Var\left( \sum_{i=1}^n (X_i - \mu'_{i,1})^r \right)    (21)

If the X's are independently distributed, then

    Var(C_n^r) = \frac{1}{n^2} \sum_{i=1}^n Var[(X_i - \mu'_{i,1})^r]    (22)

If the X's are independent and identically distributed, then

    Var(C_n^r) = \frac{1}{n} Var[(X - \mu'_1)^r]    (23)

where X denotes any one of the random variables (because they are all identical). In the case where r = 1, we obtain

    Var(C_n^1) = \frac{1}{n} Var[X - \mu'_1]
               = \frac{1}{n} Var[X - \mu]
               = \frac{1}{n} \left( \sigma^2 - 2\, Cov[X, \mu] + Var[\mu] \right)
               = \frac{\sigma^2}{n}    (24)

where the last step follows because \mu is a constant, so Cov[X, \mu] = 0 and Var[\mu] = 0.

4. SAMPLE MOMENTS ABOUT THE AVERAGE

4.1. Definitions. Assume there is a sequence of random variables X_1, X_2, ..., X_n. Define the rth sample moment about the average as

    M_n^r = \frac{1}{n} \sum_{i=1}^n (X_i - \bar{X}_n)^r,   r = 1, 2, 3, ...    (25)

This is clearly a statistic of which we can compute a numerical value. We denote the numerical value by m_n^r and define it as

    m_n^r = \frac{1}{n} \sum_{i=1}^n (x_i - \bar{x}_n)^r    (26)

In the special case where r = 1 we have

    M_n^1 = \frac{1}{n} \sum_{i=1}^n (X_i - \bar{X}_n)
          = \frac{1}{n} \sum_{i=1}^n X_i - \bar{X}_n
          = \bar{X}_n - \bar{X}_n = 0    (27)

4.2. Properties of Sample Moments about the Average when r = 2.

4.2.1. Alternative ways to write M_n^2. We can write M_n^2 in an alternative and useful way by expanding the squared term and then simplifying as follows:

    M_n^r = \frac{1}{n} \sum_{i=1}^n (X_i - \bar{X}_n)^r
    \Rightarrow M_n^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \bar{X}_n)^2
                      = \frac{1}{n} \sum_{i=1}^n \left( X_i^2 - 2 X_i \bar{X}_n + \bar{X}_n^2 \right)
                      = \frac{1}{n} \sum_{i=1}^n X_i^2 - \frac{2 \bar{X}_n}{n} \sum_{i=1}^n X_i + \frac{1}{n} \sum_{i=1}^n \bar{X}_n^2
                      = \frac{1}{n} \sum_{i=1}^n X_i^2 - 2 \bar{X}_n^2 + \bar{X}_n^2
                      = \frac{1}{n} \sum_{i=1}^n X_i^2 - \bar{X}_n^2    (28)
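The algebraic identity in equation 28, M_n^2 = (1/n) sum of X_i^2 minus the square of the average, can be checked numerically on any data set. A quick sketch (the variable names are my own):

```python
# Check equation (28): the mean squared deviation from the average equals
# the mean of the squares minus the square of the mean.

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)
xbar = sum(data) / n  # the sample average, eq. (8)

m2_direct = sum((x - xbar) ** 2 for x in data) / n       # (1/n) sum (x_i - xbar)^2
m2_shortcut = sum(x ** 2 for x in data) / n - xbar ** 2  # (1/n) sum x_i^2 - xbar^2

print(m2_direct, m2_shortcut)  # both equal 4.0 for this data
```

The shortcut form is the one used in the expectation calculation of equation 29 below, since E of each term is easier to evaluate than E of the squared deviations directly.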
4.2.2. Expected value of M_n^2. The expected value of M_n^2 is then given by

    E[M_n^2] = E\left[ \frac{1}{n} \sum_{i=1}^n X_i^2 \right] - E[\bar{X}_n^2]
             = \frac{1}{n} \sum_{i=1}^n E[X_i^2] - [E(\bar{X}_n)]^2 - Var(\bar{X}_n)
             = \frac{1}{n} \sum_{i=1}^n \mu'_{i,2} - \left( \frac{1}{n} \sum_{i=1}^n \mu'_{i,1} \right)^2 - Var(\bar{X}_n)    (29)

The second line follows from the alternative definition of variance,

    Var(X) = E(X^2) - [E(X)]^2
    \Rightarrow E(X^2) = [E(X)]^2 + Var(X)    (30)
    \Rightarrow E[\bar{X}_n^2] = [E(\bar{X}_n)]^2 + Var(\bar{X}_n)

and the third line follows from equation 12. If the X_i are independent and identically distributed, then

    E[M_n^2] = E\left[ \frac{1}{n} \sum_{i=1}^n X_i^2 \right] - E[\bar{X}_n^2]
             = \frac{1}{n} \sum_{i=1}^n \mu'_{i,2} - \left( \frac{1}{n} \sum_{i=1}^n \mu'_{i,1} \right)^2 - Var(\bar{X}_n)
             = \mu'_2 - (\mu'_1)^2 - \frac{\sigma^2}{n}
             = \sigma^2 - \frac{\sigma^2}{n}
             = \frac{n-1}{n} \sigma^2    (31)

where \mu'_1 and \mu'_2 are the first and second population moments, and \mu_2 is the second central population moment for the identically distributed variables. Note that this obviously implies

    E\left[ \sum_{i=1}^n (X_i - \bar{X})^2 \right] = n\, E[M_n^2] = n \cdot \frac{n-1}{n} \sigma^2 = (n-1)\, \sigma^2    (32)

4.2.3. Variance of M_n^2. By definition,

    Var(M_n^2) = E[(M_n^2)^2] - (E[M_n^2])^2    (33)

The second term on the right of equation 33 is easily obtained by squaring the result in equation 31:

    E[M_n^2] = \frac{n-1}{n} \sigma^2
    \Rightarrow (E[M_n^2])^2 = \frac{(n-1)^2}{n^2} \sigma^4    (34)

Now consider the first term on the right-hand side of equation 33. Write it as

    E[(M_n^2)^2] = E\left[ \left( \frac{1}{n} \sum_{i=1}^n (X_i - \bar{X}_n)^2 \right)^2 \right]    (35)

Now consider writing \frac{1}{n} \sum_{i=1}^n (X_i - \bar{X}_n)^2 as follows:

    \frac{1}{n} \sum_{i=1}^n (X_i - \bar{X})^2 = \frac{1}{n} \sum_{i=1}^n \left[ (X_i - \mu) - (\bar{X} - \mu) \right]^2 = \frac{1}{n} \sum_{i=1}^n (Y_i - \bar{Y})^2    (36)

where Y_i = X_i - \mu and \bar{Y} = \bar{X} - \mu. Obviously,

    \sum_{i=1}^n (X_i - \bar{X})^2 = \sum_{i=1}^n (Y_i - \bar{Y})^2,  where  Y_i = X_i - \mu,  \bar{Y} = \bar{X} - \mu    (37)

Now consider the properties of the random variable Y_i, which is a transformation of X_i.
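The bias result in equation 31, E[M_n^2] = ((n-1)/n) sigma^2, can be checked by simulation. A sketch using standard-normal draws (so sigma^2 = 1); the sample size, trial count, and names are my own choices for illustration:

```python
# Monte Carlo check of equation (31): E[M_n^2] = ((n - 1) / n) * sigma^2.
# Standard-normal draws, so sigma^2 = 1 and the target value is (n - 1) / n.
import random

random.seed(0)

def m2(xs):
    """Second sample moment about the average, M_n^2 (eq. 25 with r = 2)."""
    xbar = sum(xs) / len(xs)
    return sum((x - xbar) ** 2 for x in xs) / len(xs)

n, trials = 5, 20000
avg_m2 = sum(m2([random.gauss(0.0, 1.0) for _ in range(n)])
             for _ in range(trials)) / trials

print(avg_m2)  # close to (n - 1) / n = 0.8, not to sigma^2 = 1
```

This is why the unbiased sample variance divides by n - 1 rather than n, as equation 32 suggests: multiplying M_n^2 by n/(n-1) removes the bias.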