SAMPLE MOMENTS

1. POPULATION MOMENTS

1.1. Moments about the origin (raw moments). The rth moment about the origin of a random variable X, denoted by $\mu'_r$, is the expected value of $X^r$; symbolically,

$\mu'_r = E(X^r)$   (1)

$\quad\;\; = \sum_x x^r f(x)$   (2)

for r = 0, 1, 2, ... when X is discrete, and

$\mu'_r = E(X^r) = \int_{-\infty}^{\infty} x^r f(x)\,dx$   (3)

when X is continuous. The rth moment about the origin is only defined if $E[X^r]$ exists. A moment about the origin is sometimes called a raw moment. Note that $\mu'_1 = E(X) = \mu_X$, the mean of the distribution of X, or simply the mean of X. The rth moment is sometimes written as a function of $\theta$, where $\theta$ is a vector of parameters that characterize the distribution of X.

If there is a sequence of random variables $X_1, X_2, \ldots, X_n$, we will call the rth population moment of the ith random variable $\mu'_{i,r}$ and define it as

$\mu'_{i,r} = E(X_i^r)$   (4)

1.2. Central moments. The rth moment about the mean of a random variable X, denoted by $\mu_r$, is the expected value of $(X - \mu_X)^r$; symbolically,

$\mu_r = E[(X - \mu_X)^r] = \sum_x (x - \mu_X)^r f(x)$   (5)

for r = 0, 1, 2, ... when X is discrete, and

$\mu_r = E[(X - \mu_X)^r] = \int_{-\infty}^{\infty} (x - \mu_X)^r f(x)\,dx$   (6)

when X is continuous. The rth moment about the mean is only defined if $E[(X - \mu_X)^r]$ exists. The rth moment about the mean of a random variable X is sometimes called the rth central moment of X. More generally, the rth central moment of X about a is defined as $E[(X - a)^r]$; if $a = \mu_X$, we have the rth central moment of X about $\mu_X$.

Note that $\mu_1 = E[X - \mu_X] = 0$ and $\mu_2 = E[(X - \mu_X)^2] = Var[X]$. Also note that all odd moments of X around its mean are zero for symmetrical distributions, provided such moments exist.

If there is a sequence of random variables $X_1, X_2, \ldots, X_n$, we will call the rth central population moment of the ith random variable $\mu_{i,r}$ and define it as

$\mu_{i,r} = E[(X_i - \mu'_{i,1})^r]$   (7)

When the variables are identically distributed, we will drop the i subscript and write $\mu'_r$ and $\mu_r$.

Date: August 9, 2004.

2. SAMPLE MOMENTS

2.1. Definitions.
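The raw- and central-moment definitions for a discrete distribution can be illustrated numerically. The sketch below (not part of the original notes; the helper names `raw_moment` and `central_moment` are my own) computes $\mu'_r = \sum_x x^r f(x)$ and $\mu_r = E[(X - \mu_X)^r]$ for a fair six-sided die, confirming that $\mu_1 = 0$, $\mu_2 = Var[X]$, and that an odd central moment of this symmetric distribution vanishes.

```python
# Illustrative sketch of raw and central moments for a discrete X:
# a fair six-sided die with f(x) = 1/6 for x = 1, ..., 6.

def raw_moment(values, probs, r):
    """mu'_r = E(X^r) = sum_x x^r f(x)."""
    return sum(x**r * p for x, p in zip(values, probs))

def central_moment(values, probs, r):
    """mu_r = E[(X - mu_X)^r], where mu_X = mu'_1."""
    mu = raw_moment(values, probs, 1)
    return sum((x - mu)**r * p for x, p in zip(values, probs))

values = [1, 2, 3, 4, 5, 6]
probs = [1/6] * 6

print(raw_moment(values, probs, 1))      # mean mu'_1 = 3.5
print(central_moment(values, probs, 1))  # mu_1 = 0
print(central_moment(values, probs, 2))  # mu_2 = Var[X] = 35/12
print(central_moment(values, probs, 3))  # odd central moment, symmetric f: 0
```

The die is symmetric about 3.5, so every odd central moment is zero, exactly as the notes state for symmetrical distributions.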
Assume there is a sequence of random variables $X_1, X_2, \ldots, X_n$. The first sample moment, usually called the average, is defined by

$\bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i$   (8)

Corresponding to this statistic is its numerical value, $\bar{x}_n$, which is defined by

$\bar{x}_n = \frac{1}{n} \sum_{i=1}^{n} x_i$   (9)

where $x_i$ represents the observed value of $X_i$. The rth sample moment for any r is defined by

$\bar{X}_n^r = \frac{1}{n} \sum_{i=1}^{n} X_i^r$   (10)

This too has a numerical counterpart, given by

$\bar{x}_n^r = \frac{1}{n} \sum_{i=1}^{n} x_i^r$   (11)

2.2. Properties of Sample Moments.

2.2.1. Expected value of $\bar{X}_n^r$. Taking the expected value of equation 10, we obtain

$E(\bar{X}_n^r) = \frac{1}{n} \sum_{i=1}^{n} E(X_i^r) = \frac{1}{n} \sum_{i=1}^{n} \mu'_{i,r}$   (12)

If the X's are identically distributed, then

$E(\bar{X}_n^r) = \frac{1}{n} \sum_{i=1}^{n} \mu'_r = \mu'_r$   (13)

2.2.2. Variance of $\bar{X}_n^r$. First consider the case where we have a sample $X_1, X_2, \ldots, X_n$.

$Var(\bar{X}_n^r) = Var\left(\frac{1}{n}\sum_{i=1}^{n} X_i^r\right) = \frac{1}{n^2}\, Var\left(\sum_{i=1}^{n} X_i^r\right)$   (15)

If the X's are independent, then

$Var(\bar{X}_n^r) = \frac{1}{n^2} \sum_{i=1}^{n} Var(X_i^r)$   (16)

If the X's are independent and identically distributed, then

$Var(\bar{X}_n^r) = \frac{1}{n}\, Var(X^r)$   (17)

where X denotes any one of the random variables (because they are all identical). In the case where r = 1, we obtain

$Var(\bar{X}_n) = \frac{1}{n}\, Var(X) = \frac{\sigma^2}{n}$   (18)

3. SAMPLE CENTRAL MOMENTS

3.1. Definitions. Assume there is a sequence of random variables $X_1, X_2, \ldots, X_n$. We define the sample central moments as

$C_n^r = \frac{1}{n} \sum_{i=1}^{n} (X_i - \mu'_{i,1})^r, \quad r = 1, 2, 3, \ldots$

so that, for example,

$C_n^1 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \mu'_{i,1}), \qquad C_n^2 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \mu'_{i,1})^2$   (19)

These are only defined if $\mu'_{i,1}$ is known.

3.2. Properties of Sample Central Moments.

3.2.1. Expected value of $C_n^r$. The expected value of $C_n^r$ is given by

$E(C_n^r) = \frac{1}{n} \sum_{i=1}^{n} E[(X_i - \mu'_{i,1})^r] = \frac{1}{n} \sum_{i=1}^{n} \mu_{i,r}$   (20)

The last equality follows from equation 7. If the $X_i$ are identically distributed, then

$E(C_n^r) = \mu_r \quad\text{and, in particular,}\quad E(C_n^1) = 0$   (21)
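The two key properties of the sample average, $E(\bar{X}_n) = \mu'_1$ and $Var(\bar{X}_n) = \sigma^2/n$, can be checked by simulation. The following sketch (an illustration of my own, not from the notes; the sample size, repetition count, and Uniform(0, 1) choice are arbitrary) draws many samples of size n and compares the empirical mean and variance of $\bar{X}_n$ with the theoretical values.

```python
# Simulation check of E(X-bar_n) = mu'_1 and Var(X-bar_n) = sigma^2 / n
# for i.i.d. Uniform(0, 1) draws, where mu'_1 = 1/2 and sigma^2 = 1/12.
import random

random.seed(0)

def rth_sample_moment(xs, r):
    """The rth sample moment: (1/n) * sum_i x_i^r."""
    return sum(x**r for x in xs) / len(xs)

n, reps = 10, 20000
means = [rth_sample_moment([random.random() for _ in range(n)], 1)
         for _ in range(reps)]

avg_mean = sum(means) / reps
var_mean = sum((m - avg_mean)**2 for m in means) / reps

print(avg_mean)  # approx mu'_1 = 0.5
print(var_mean)  # approx sigma^2 / n = (1/12)/10 ~ 0.00833
```

The variance of the average shrinks at rate 1/n, which is exactly why larger samples give more precise estimates of the mean.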
3.2.2. Variance of $C_n^r$. First consider the case where we have a sample $X_1, X_2, \ldots, X_n$.

$Var(C_n^r) = Var\left(\frac{1}{n}\sum_{i=1}^{n}(X_i - \mu'_{i,1})^r\right) = \frac{1}{n^2}\, Var\left(\sum_{i=1}^{n}(X_i - \mu'_{i,1})^r\right)$   (22)

If the X's are independently distributed, then

$Var(C_n^r) = \frac{1}{n^2}\sum_{i=1}^{n} Var\left[(X_i - \mu'_{i,1})^r\right]$   (23)

If the X's are independent and identically distributed, then

$Var(C_n^r) = \frac{1}{n}\, Var\left[(X - \mu'_1)^r\right]$   (24)

where X denotes any one of the random variables (because they are all identical). In the case where r = 1, we obtain

$Var(C_n^1) = \frac{1}{n}\, Var[X - \mu'_1] = \frac{1}{n}\, Var[X - \mu] = \frac{1}{n}\left(\sigma^2 - 2\,Cov[X, \mu] + Var[\mu]\right) = \frac{1}{n}\sigma^2$   (25)

since $\mu$ is a constant, so $Cov[X, \mu] = 0$ and $Var[\mu] = 0$.

4. SAMPLE MOMENTS ABOUT THE AVERAGE

4.1. Definitions. Assume there is a sequence of random variables $X_1, X_2, \ldots, X_n$. Define the rth sample moment about the average as

$M_n^r = \frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X}_n)^r, \quad r = 1, 2, 3, \ldots$   (26)

This is clearly a statistic of which we can compute a numerical value. We denote the numerical value by $m_n^r$ and define it as

$m_n^r = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x}_n)^r$   (27)

In the special case where r = 1 we have

$M_n^1 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X}_n) = \bar{X}_n - \bar{X}_n = 0$   (28)

4.2. Properties of Sample Moments about the Average when r = 2.

4.2.1. Alternative ways to write $M_n^2$. We can write $M_n^2$ in an alternative useful way by expanding the squared term and then simplifying as follows:

$M_n^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X}_n)^2$

$\quad\;\; = \frac{1}{n}\sum_{i=1}^{n}\left(X_i^2 - 2 X_i \bar{X}_n + \bar{X}_n^2\right)$

$\quad\;\; = \frac{1}{n}\sum_{i=1}^{n} X_i^2 - \frac{2\bar{X}_n}{n}\sum_{i=1}^{n} X_i + \frac{1}{n}\sum_{i=1}^{n}\bar{X}_n^2$

$\quad\;\; = \frac{1}{n}\sum_{i=1}^{n} X_i^2 - 2\bar{X}_n^2 + \bar{X}_n^2$

$\quad\;\; = \frac{1}{n}\sum_{i=1}^{n} X_i^2 - \bar{X}_n^2$   (29)

4.2.2. Expected value of $M_n^2$. The expected value of $M_n^2$ is then given by

$E(M_n^2) = \frac{1}{n}\sum_{i=1}^{n} E(X_i^2) - E(\bar{X}_n^2)$

$\quad\quad\;\; = \frac{1}{n}\sum_{i=1}^{n} E(X_i^2) - \left[E(\bar{X}_n)\right]^2 - Var(\bar{X}_n)$

$\quad\quad\;\; = \frac{1}{n}\sum_{i=1}^{n} \mu'_{i,2} - \left(\frac{1}{n}\sum_{i=1}^{n}\mu'_{i,1}\right)^2 - Var(\bar{X}_n)$   (30)

The second line follows from the alternative definition of variance

$Var(X) = E(X^2) - [E(X)]^2 \;\Rightarrow\; E(X^2) = [E(X)]^2 + Var(X) \;\Rightarrow\; E(\bar{X}_n^2) = \left[E(\bar{X}_n)\right]^2 + Var(\bar{X}_n)$   (31)

and the third line follows from equation 12.
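The algebraic identity derived for $M_n^2$, namely $M_n^2 = \frac{1}{n}\sum X_i^2 - \bar{X}_n^2$, together with $M_n^1 = 0$, is easy to verify on any data set. The quick check below is illustrative only (the sample of standard-normal draws is an arbitrary choice of mine).

```python
# Numerical check of two facts about moments taken about the average:
#   M_n^1 = (1/n) sum (X_i - X-bar) = 0, and
#   M_n^2 = (1/n) sum (X_i - X-bar)^2 = (1/n) sum X_i^2 - X-bar^2.
import random

random.seed(1)
xs = [random.gauss(0, 1) for _ in range(50)]
n = len(xs)
xbar = sum(xs) / n

m1 = sum(x - xbar for x in xs) / n              # first moment about the average
m2_direct = sum((x - xbar)**2 for x in xs) / n  # definition with r = 2
m2_short = sum(x**2 for x in xs) / n - xbar**2  # expanded-and-simplified form

print(m1)                          # 0 up to rounding
print(abs(m2_direct - m2_short))   # 0 up to rounding
```

Both quantities agree to floating-point precision, confirming that the deviations from the average always sum to zero and that the shortcut formula computes the same second moment.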
If the $X_i$ are independent and identically distributed, then

$E(M_n^2) = \frac{1}{n}\sum_{i=1}^{n}\mu'_{i,2} - \left(\frac{1}{n}\sum_{i=1}^{n}\mu'_{i,1}\right)^2 - Var(\bar{X}_n)$

$\quad\quad\;\; = \mu'_2 - (\mu'_1)^2 - \frac{\sigma^2}{n}$

$\quad\quad\;\; = \sigma^2 - \frac{1}{n}\sigma^2$

$\quad\quad\;\; = \frac{n-1}{n}\,\sigma^2$   (32)

where $\mu'_1$ and $\mu'_2$ are the first and second population moments, and $\mu_2 = \sigma^2$ is the second central population moment for the identically distributed variables. Note that this obviously implies

$E\left[\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^2\right] = n\,E(M_n^2) = n\,\frac{n-1}{n}\,\sigma^2 = (n-1)\,\sigma^2$   (33)

4.2.3. Variance of $M_n^2$. By definition,

$Var(M_n^2) = E\left[(M_n^2)^2\right] - \left[E(M_n^2)\right]^2$   (34)

The second term on the right of equation 34 is easily obtained by squaring the result in equation 32:

$E(M_n^2) = \frac{n-1}{n}\,\sigma^2 \;\Rightarrow\; \left[E(M_n^2)\right]^2 = \frac{(n-1)^2}{n^2}\,\sigma^4$   (35)

Now consider the first term on the right-hand side of equation 34. Write it as

$E\left[(M_n^2)^2\right] = E\left[\left(\frac{1}{n}\sum_{i=1}^{n}\left(X_i - \bar{X}_n\right)^2\right)^2\right]$   (36)

Now consider writing $\frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X}_n)^2$ as follows:

$\frac{1}{n}\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^2 = \frac{1}{n}\sum_{i=1}^{n}\left[(X_i - \mu) - (\bar{X} - \mu)\right]^2 = \frac{1}{n}\sum_{i=1}^{n}\left(Y_i - \bar{Y}\right)^2$   (37)

where $Y_i = X_i - \mu$ and $\bar{Y} = \bar{X} - \mu$. Obviously,

$\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^2 = \sum_{i=1}^{n}\left(Y_i - \bar{Y}\right)^2$   (38)

Now consider the properties of the random variable $Y_i$, which is a transformation of $X_i$.
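The result $E(M_n^2) = \frac{n-1}{n}\sigma^2$ says that $M_n^2$ is a biased estimator of the population variance; dividing by $n - 1$ instead of $n$ removes the bias. A simulation sketch (my own illustration, not from the notes; the sample size and normal draws are arbitrary) makes the bias visible.

```python
# Simulation of E(M_n^2) = ((n - 1)/n) * sigma^2 for i.i.d. samples:
# averaging M_n^2 over many samples of standard normal draws (sigma^2 = 1)
# should give roughly (n - 1)/n, not 1.
import random

random.seed(2)

def m2(xs):
    """Second sample moment about the average, M_n^2."""
    xbar = sum(xs) / len(xs)
    return sum((x - xbar)**2 for x in xs) / len(xs)

n, reps = 5, 40000
avg_m2 = sum(m2([random.gauss(0, 1) for _ in range(n)])
             for _ in range(reps)) / reps

print(avg_m2)  # approx (n - 1)/n * sigma^2 = 0.8, noticeably below sigma^2 = 1
```

This is the familiar reason the usual sample variance uses the divisor $n - 1$: by equation 33, $E[\sum (X_i - \bar{X})^2] = (n-1)\sigma^2$, so dividing the sum by $n - 1$ yields an unbiased estimator of $\sigma^2$.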
