Chapter 4 Sufficient Statistics

4.1 Statistics and Sampling Distributions

Suppose that the data $Y_1, \ldots, Y_n$ is drawn from some population. The observed data is $Y_1 = y_1, \ldots, Y_n = y_n$ where $y_1, \ldots, y_n$ are numbers. Let $\boldsymbol{y} = (y_1, \ldots, y_n)$. Real valued functions $T(y_1, \ldots, y_n) = T(\boldsymbol{y})$ are of interest, as are vector valued functions $\boldsymbol{T}(\boldsymbol{y}) = (T_1(\boldsymbol{y}), \ldots, T_k(\boldsymbol{y}))$. Sometimes the data $\boldsymbol{Y}_1, \ldots, \boldsymbol{Y}_n$ are random vectors. Again interest is in functions of the data. Typically the data has a joint pdf or pmf $f(y_1, \ldots, y_n \mid \boldsymbol{\theta})$ where the vector of unknown parameters is $\boldsymbol{\theta} = (\theta_1, \ldots, \theta_k)$. (In the joint pdf or pmf, the $y_1, \ldots, y_n$ are dummy variables, not the observed data.)

Definition 4.1. A statistic is a function of the data that does not depend on any unknown parameters. The probability distribution of the statistic is called the sampling distribution of the statistic.

Let the data $\boldsymbol{Y} = (Y_1, \ldots, Y_n)$ where the $Y_i$ are random variables. If $T(y_1, \ldots, y_n)$ is a real valued function whose domain includes the sample space $\mathcal{Y}$ of $\boldsymbol{Y}$, then $W = T(Y_1, \ldots, Y_n)$ is a statistic provided that $T$ does not depend on any unknown parameters. The data comes from some probability distribution, and the statistic is a random variable and hence also comes from some probability distribution. To avoid confusing the distribution of the statistic with the distribution of the data, the distribution of the statistic is called the sampling distribution of the statistic. If the observed data is $Y_1 = y_1, \ldots, Y_n = y_n$, then the observed value of the statistic is $W = w = T(y_1, \ldots, y_n)$. Similar remarks apply when the statistic $\boldsymbol{T}$ is vector valued and when the data $\boldsymbol{Y}_1, \ldots, \boldsymbol{Y}_n$ are random vectors.

Often $Y_1, \ldots, Y_n$ will be iid and statistics of the form

$$\sum_{i=1}^n a_i Y_i \quad \text{and} \quad \sum_{i=1}^n t(Y_i)$$

are especially important. Chapter 10 and Theorems 2.17, 2.18, 3.6 and 3.7 are useful for finding the sampling distributions of some of these statistics when the $Y_i$ are iid from a given brand name distribution that is usually an exponential family. The following example lists some important statistics.

Example 4.1. Let $Y_1, \ldots, Y_n$ be the data.

a) The sample mean

$$\overline{Y} = \frac{\sum_{i=1}^n Y_i}{n}. \tag{4.1}$$

b) The sample variance

$$S^2 \equiv S_n^2 = \frac{\sum_{i=1}^n (Y_i - \overline{Y})^2}{n-1} = \frac{\sum_{i=1}^n Y_i^2 - n(\overline{Y})^2}{n-1}. \tag{4.2}$$

c) The sample standard deviation $S \equiv S_n = \sqrt{S_n^2}$.

d) If the data $Y_1, \ldots, Y_n$ is arranged in ascending order from smallest to largest and written as $Y_{(1)} \leq \cdots \leq Y_{(n)}$, then $Y_{(i)}$ is the $i$th order statistic and the $Y_{(i)}$'s are called the order statistics.

e) The sample median

$$\text{MED}(n) = Y_{((n+1)/2)} \ \text{if $n$ is odd}, \qquad \text{MED}(n) = \frac{Y_{(n/2)} + Y_{((n/2)+1)}}{2} \ \text{if $n$ is even}. \tag{4.3}$$

f) The sample median absolute deviation or median deviation is

$$\text{MAD}(n) = \text{MED}(|Y_i - \text{MED}(n)|,\ i = 1, \ldots, n). \tag{4.4}$$

g) The sample maximum

$$\max(n) = Y_{(n)} \tag{4.5}$$

and the observed max $y_{(n)}$ is the largest value of the observed data.

h) The sample minimum

$$\min(n) = Y_{(1)} \tag{4.6}$$

and the observed min $y_{(1)}$ is the smallest value of the observed data.

Example 4.2. Usually the term "observed" is dropped. Hence below "data" is "observed data", "observed order statistics" is "order statistics" and "observed value of MED(n)" is "MED(n)". Let the data be 9, 2, 7, 4, 1, 6, 3, 8, 5 (so $Y_1 = y_1 = 9, \ldots, Y_9 = y_9 = 5$). Then the order statistics are 1, 2, 3, 4, 5, 6, 7, 8, 9. Then MED(n) = 5 and MAD(n) = 2 = MED$\{0, 1, 1, 2, 2, 3, 3, 4, 4\}$.

Example 4.3. Let $Y_1, \ldots, Y_n$ be iid $N(\mu, \sigma^2)$. Then

$$T_n = \frac{\sum_{i=1}^n (Y_i - \mu)^2}{n}$$

is a statistic iff $\mu$ is known.
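As a quick numerical check of Example 4.2, here is a minimal Python sketch (assuming NumPy is available; the variable names are illustrative) that computes the order statistics, MED(n) and MAD(n) for the data 9, 2, 7, 4, 1, 6, 3, 8, 5.

```python
import numpy as np

# Data from Example 4.2.
y = np.array([9, 2, 7, 4, 1, 6, 3, 8, 5])

# Order statistics Y_(1) <= ... <= Y_(n).
order_stats = np.sort(y)            # [1 2 3 4 5 6 7 8 9]

# Sample median MED(n): middle order statistic (n odd) or
# average of the two middle order statistics (n even).
med = np.median(y)                  # 5.0

# Sample median absolute deviation MAD(n) = MED(|Y_i - MED(n)|).
# This is the unscaled MAD of definition (4.4); note that some
# libraries multiply by 1.4826 for consistency at the normal model.
mad = np.median(np.abs(y - med))    # 2.0

print(order_stats, med, mad)
```

Running the sketch reproduces the values in Example 4.2: MED(n) = 5 and MAD(n) = 2.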
The following theorem is extremely important and the proof follows Rice (1988, pp. 171-173) closely.

Theorem 4.1. Let $Y_1, \ldots, Y_n$ be iid $N(\mu, \sigma^2)$.
a) The sample mean $\overline{Y} \sim N(\mu, \sigma^2/n)$.
b) $\overline{Y}$ and $S^2$ are independent.
c) $(n-1)S^2/\sigma^2 \sim \chi^2_{n-1}$. Hence $\sum_{i=1}^n (Y_i - \overline{Y})^2 \sim \sigma^2 \chi^2_{n-1}$.

Proof. a) follows from Theorem 2.17 e).

b) The moment generating function of $(\overline{Y}, Y_1 - \overline{Y}, \ldots, Y_n - \overline{Y})$ is

$$m(s, t_1, \ldots, t_n) = E(\exp[s\overline{Y} + t_1(Y_1 - \overline{Y}) + \cdots + t_n(Y_n - \overline{Y})]).$$

By Theorem 2.22, $\overline{Y}$ and $(Y_1 - \overline{Y}, \ldots, Y_n - \overline{Y})$ are independent if

$$m(s, t_1, \ldots, t_n) = m_{\overline{Y}}(s)\, m(t_1, \ldots, t_n)$$

where $m_{\overline{Y}}(s)$ is the mgf of $\overline{Y}$ and $m(t_1, \ldots, t_n)$ is the mgf of $(Y_1 - \overline{Y}, \ldots, Y_n - \overline{Y})$. Now

$$\sum_{i=1}^n t_i(Y_i - \overline{Y}) = \sum_{i=1}^n t_i Y_i - \overline{Y} n \overline{t} = \sum_{i=1}^n t_i Y_i - \sum_{i=1}^n \overline{t}\, Y_i$$

and thus

$$s\overline{Y} + \sum_{i=1}^n t_i(Y_i - \overline{Y}) = \sum_{i=1}^n \left[\frac{s}{n} + (t_i - \overline{t})\right] Y_i = \sum_{i=1}^n a_i Y_i.$$

Now $\sum_{i=1}^n a_i = \sum_{i=1}^n \left[\frac{s}{n} + (t_i - \overline{t})\right] = s$ and

$$\sum_{i=1}^n a_i^2 = \sum_{i=1}^n \left[\frac{s^2}{n^2} + 2\frac{s}{n}(t_i - \overline{t}) + (t_i - \overline{t})^2\right] = \frac{s^2}{n} + \sum_{i=1}^n (t_i - \overline{t})^2.$$

Hence

$$m(s, t_1, \ldots, t_n) = E\left(\exp\left[s\overline{Y} + \sum_{i=1}^n t_i(Y_i - \overline{Y})\right]\right) = E\left[\exp\left(\sum_{i=1}^n a_i Y_i\right)\right] = m_{Y_1,\ldots,Y_n}(a_1, \ldots, a_n) = \prod_{i=1}^n m_{Y_i}(a_i)$$

since the $Y_i$ are independent. Now

$$\prod_{i=1}^n m_{Y_i}(a_i) = \prod_{i=1}^n \exp\left(\mu a_i + \frac{\sigma^2}{2} a_i^2\right) = \exp\left(\mu \sum_{i=1}^n a_i + \frac{\sigma^2}{2} \sum_{i=1}^n a_i^2\right)$$

$$= \exp\left[\mu s + \frac{\sigma^2}{2}\frac{s^2}{n} + \frac{\sigma^2}{2} \sum_{i=1}^n (t_i - \overline{t})^2\right] = \exp\left[\mu s + \frac{\sigma^2}{2n} s^2\right] \exp\left[\frac{\sigma^2}{2} \sum_{i=1}^n (t_i - \overline{t})^2\right].$$

Now the first factor is the mgf of $\overline{Y}$ and the second factor is $m(t_1, \ldots, t_n) = m(0, t_1, \ldots, t_n)$ since the mgf of the marginal is found from the mgf of the joint distribution by setting all terms not in the marginal to 0 (i.e. set $s = 0$ in $m(s, t_1, \ldots, t_n)$ to find $m(t_1, \ldots, t_n)$). Hence the mgf factors and

$$\overline{Y} \perp (Y_1 - \overline{Y}, \ldots, Y_n - \overline{Y}).$$

Since $S^2$ is a function of $(Y_1 - \overline{Y}, \ldots, Y_n - \overline{Y})$, it is also true that $\overline{Y} \perp S^2$.

c) $(Y_i - \mu)/\sigma \sim N(0, 1)$, so $(Y_i - \mu)^2/\sigma^2 \sim \chi^2_1$ and

$$\frac{1}{\sigma^2} \sum_{i=1}^n (Y_i - \mu)^2 \sim \chi^2_n.$$

Now

$$\sum_{i=1}^n (Y_i - \mu)^2 = \sum_{i=1}^n (Y_i - \overline{Y} + \overline{Y} - \mu)^2 = \sum_{i=1}^n (Y_i - \overline{Y})^2 + n(\overline{Y} - \mu)^2.$$

Hence

$$W = \frac{1}{\sigma^2} \sum_{i=1}^n (Y_i - \mu)^2 = \frac{1}{\sigma^2} \sum_{i=1}^n (Y_i - \overline{Y})^2 + \left(\frac{\overline{Y} - \mu}{\sigma/\sqrt{n}}\right)^2 = U + V.$$

Since $U \perp V$ by b), $m_W(t) = m_U(t)\, m_V(t)$. Since $W \sim \chi^2_n$ and $V \sim \chi^2_1$,

$$m_U(t) = \frac{m_W(t)}{m_V(t)} = \frac{(1 - 2t)^{-n/2}}{(1 - 2t)^{-1/2}} = (1 - 2t)^{-(n-1)/2}$$

which is the mgf of a $\chi^2_{n-1}$ distribution. QED

Theorem 4.2. Let $Y_1, \ldots, Y_n$ be iid with cdf $F_Y$ and pdf $f_Y$.
a) The pdf of $T = Y_{(n)}$ is

$$f_{Y_{(n)}}(t) = n[F_Y(t)]^{n-1} f_Y(t).$$

b) The pdf of $T = Y_{(1)}$ is

$$f_{Y_{(1)}}(t) = n[1 - F_Y(t)]^{n-1} f_Y(t).$$

c) Let $2 \leq r \leq n$. Then the joint pdf of $Y_{(1)}, Y_{(2)}, \ldots, Y_{(r)}$ is

$$f_{Y_{(1)},\ldots,Y_{(r)}}(t_1, \ldots, t_r) = \frac{n!}{(n-r)!} [1 - F_Y(t_r)]^{n-r} \prod_{i=1}^r f_Y(t_i).$$

Proof of a) and b). a) The cdf of $Y_{(n)}$ is

$$F_{Y_{(n)}}(t) = P(Y_{(n)} \leq t) = P(Y_1 \leq t, \ldots, Y_n \leq t) = \prod_{i=1}^n P(Y_i \leq t) = [F_Y(t)]^n.$$

Hence the pdf of $Y_{(n)}$ is

$$\frac{d}{dt} F_{Y_{(n)}}(t) = \frac{d}{dt} [F_Y(t)]^n = n[F_Y(t)]^{n-1} f_Y(t).$$

b) The cdf of $Y_{(1)}$ is

$$F_{Y_{(1)}}(t) = P(Y_{(1)} \leq t) = 1 - P(Y_{(1)} > t) = 1 - P(Y_1 > t, \ldots, Y_n > t) = 1 - \prod_{i=1}^n P(Y_i > t) = 1 - [1 - F_Y(t)]^n.$$

Hence the pdf of $Y_{(1)}$ is

$$\frac{d}{dt} F_{Y_{(1)}}(t) = \frac{d}{dt}\left(1 - [1 - F_Y(t)]^n\right) = n[1 - F_Y(t)]^{n-1} f_Y(t). \quad \text{QED}$$

To see that c) may be true, consider the following argument adapted from Mann, Schafer and Singpurwalla (1974, p. 93). Let $\Delta t_i$ be a small positive number and notice that

$$P(E) \equiv P(t_1 < Y_{(1)} < t_1 + \Delta t_1,\ t_2 < Y_{(2)} < t_2 + \Delta t_2, \ldots, t_r < Y_{(r)} < t_r + \Delta t_r)$$

$$= \int_{t_r}^{t_r + \Delta t_r} \cdots \int_{t_1}^{t_1 + \Delta t_1} f_{Y_{(1)},\ldots,Y_{(r)}}(w_1, \ldots, w_r)\, dw_1 \cdots dw_r \approx f_{Y_{(1)},\ldots,Y_{(r)}}(t_1, \ldots, t_r) \prod_{i=1}^r \Delta t_i.$$
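Both theorems are easy to check by simulation. The sketch below (a minimal illustration, assuming NumPy and SciPy are available; the sample sizes and parameter values are arbitrary choices) draws Monte Carlo samples and compares $(n-1)S^2/\sigma^2$ against the $\chi^2_{n-1}$ distribution of Theorem 4.1 c), and the sample maximum of iid Uniform(0, 1) data against the Beta($n$, 1) distribution implied by Theorem 4.2 a), using Kolmogorov-Smirnov tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 10, 100_000

# Theorem 4.1 c): for Y_1,...,Y_n iid N(mu, sigma^2),
# (n-1) S^2 / sigma^2 ~ chi^2_{n-1}.
mu, sigma = 3.0, 2.0                              # illustrative values
y = rng.normal(mu, sigma, size=(reps, n))
u = (n - 1) * y.var(axis=1, ddof=1) / sigma**2    # ddof=1 gives S^2
print(stats.kstest(u, stats.chi2(n - 1).cdf))     # expect a large p-value

# Theorem 4.2 a): for Y_1,...,Y_n iid Uniform(0,1), F_Y(t) = t on (0,1),
# so the pdf of Y_(n) is n t^(n-1), i.e. Y_(n) ~ Beta(n, 1).
m = rng.uniform(size=(reps, n)).max(axis=1)
print(stats.kstest(m, stats.beta(n, 1).cdf))      # expect a large p-value
```

With 100,000 replications the empirical distributions should match the claimed ones closely; a tiny p-value would point to a coding error rather than to a failure of the theorems.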