
Unit 21
Student's t Distribution in Hypothesis Testing

Objectives:
• To understand the difference between the standard normal distribution and the Student's t distributions
• To understand the difference between a hypothesis test about a population proportion λ and a hypothesis test about a population mean µ

Our introduction to hypothesis testing has focused solely on hypothesis tests about a population proportion λ. The z test statistic we use in the hypothesis test H₀: λ = λ₀ vs. H₁: λ ≠ λ₀ is

    z = (p − λ₀) / √( λ₀(1 − λ₀) / n )

Let us now investigate whether we can use the same sort of test statistic in a hypothesis test about a population mean µ. As an illustration, suppose the manager at a certain manufacturing plant would like to see if there is any evidence that the mean amount of cereal per box from an assembly line is different from 12 oz., which is the advertised amount. He chooses a 0.05 significance level to perform a hypothesis test. The following amounts (in ounces) are recorded for a simple random sample of 16 boxes of cereal selected from the assembly line:

    11.660   11.550   12.045   11.616
    11.924   12.320   11.616   11.913
    12.078   11.836   11.847   11.946
    12.232   11.968   11.781   11.748

You can verify that the sample mean is x̄ = 11.880 oz., and that the sample standard deviation is s = 0.220 oz. We shall now attempt to perform the four steps of a hypothesis test.

The first step is to state the null and alternative hypotheses, and to choose a significance level. Remember that the null hypothesis is what we assume to be true unless there is sufficient evidence against it, and the alternative hypothesis is the statement we are looking for evidence to support; also, recall that a null hypothesis is often a statement involving equality, while the alternative hypothesis is a statement involving inequality. In this hypothesis test, we shall assume that the mean amount of cereal per box is 12 oz. unless we find sufficient evidence otherwise; in other words, 12 oz.
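The sample mean and sample standard deviation quoted above are easy to check with a few lines of Python (a sketch using only the standard library; the variable names are ours, not part of the text):

```python
from statistics import mean, stdev  # stdev divides by n - 1, i.e., the sample standard deviation

# The 16 recorded box weights, in ounces
weights = [11.660, 11.550, 12.045, 11.616, 11.924, 12.320, 11.616, 11.913,
           12.078, 11.836, 11.847, 11.946, 12.232, 11.968, 11.781, 11.748]

x_bar = mean(weights)
s = stdev(weights)
print(round(x_bar, 3), round(s, 3))  # 11.88 0.22, matching the values quoted in the text
```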
is our hypothesized value for µ. We complete the first step of the hypothesis test by writing the following:

    H₀: µ = 12 vs. H₁: µ ≠ 12   (α = 0.05)

The second step is to collect data and calculate the value of the test statistic. Recall that our test statistic in a hypothesis test about λ is the z-score of p calculated under the assumption that the hypothesized value of λ is correct. To use the same sort of test statistic in a hypothesis test about a population mean µ, we would calculate the z-score of x̄ under the assumption that the hypothesized value of µ was correct. Letting µ₀ represent our hypothesized value for µ, this z-score for x̄ is

    z = (x̄ − µ₀) / σ_x̄ = (x̄ − µ₀) / (σ / √n) .

Calculating this z-score for x̄ presents a problem, because we do not have the value of σ. We know the value of x̄ (since we calculated it from our sample), we know the value of µ₀ (since this represents our hypothesized mean), and we know the value of n (since this is of course the sample size of our data); however, we do not know the population standard deviation σ, and consequently we are not able to obtain the value of σ_x̄. When calculating the z-score of p, we do not have this problem, since the value of σ_p depends only on λ and n.

If it were possible for us to have access to the entire population in order to obtain the population standard deviation σ, then we could also obtain the population mean µ, and there would be no reason for us to be doing a hypothesis test! A very reasonable way to estimate the population standard deviation σ is to use the sample standard deviation s, whose value we can easily obtain from our sample. It is then tempting to simply replace σ by s in the z-score which would be our test statistic; in doing so, the denominator of our test statistic would have s / √n in place of σ / √n. For small sample sizes n, however, simply using s / √n in place of σ_x̄ = σ / √n and still treating this statistic as a z-score may not give accurate results.
This was demonstrated in a paper published in 1908 by W. S. Gosset. Gosset showed that instead of associating this test statistic with the standard normal distribution, the test statistic should be associated with the Student's t distribution with n – 1 degrees of freedom. (In order to prevent the competitors of Gosset's employers from realizing that they could benefit from more accurate statistical methods, Gosset's paper was published anonymously under the name of "Student," because Gosset was studying at the University of London at the time of his discovery.) In the hypothesis test H₀: µ = µ₀ vs. H₁: µ ≠ µ₀, our test statistic shall be

    t_{n–1} = (x̄ − µ₀) / (s / √n) ,

which we refer to as a t statistic with n – 1 degrees of freedom. Consequently, in order to perform hypothesis tests about µ, we must first study the t distributions.

Recall that Table A.2 provides us with information about the standard normal distribution. We shall now be concerned with Table A.3, which provides us with information about the t distributions. The figures displaying density curves at the top of Tables A.2 and A.3 indicate that some similarities exist between the standard normal distribution and the t distributions; specifically, these distributions are symmetric and bell-shaped. The difference is that the t distributions have more variation and a flatter bell-shape than the standard normal distribution. The figures at the top of Tables A.2 and A.3 look alike only because they are not drawn to scale. If a standard normal density curve and a Student's t density curve were to be graphed on the same scale, the Student's t density curve would look flatter and more "spread out." It is important to keep in mind that Table A.2 supplies information about the one and only standard normal density curve, but Table A.3 contains information about several different t distributions.
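For the cereal example, the value of this t statistic is quick to compute. The following Python sketch (our own illustration; the variable names are not from the text) plugs the sample results into the formula above:

```python
import math

# Sample results and hypothesized mean from the cereal example
x_bar, mu_0, s, n = 11.880, 12.0, 0.220, 16

# t statistic with n - 1 = 15 degrees of freedom
t = (x_bar - mu_0) / (s / math.sqrt(n))
print(round(t, 3))  # -2.182
```

This value would then be compared against the t distribution with 15 degrees of freedom rather than the standard normal distribution.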
Each different t distribution is identified by something we call degrees of freedom, which we shall abbreviate as df. We refer to our test statistic in a test about a population mean µ as t_{n–1}, where n – 1 is the corresponding df. It is difficult to explain the concept of df in detail in a general context without venturing beyond the mathematical scope of this text. While we could think of degrees of freedom (df) simply as a way of distinguishing between the different density curves for the various t distributions, there is an intuitive rule of thumb that helps explain degrees of freedom. In any situation where a statistical procedure is applied to data to obtain some information about population parameters, we can identify the number of parameters being estimated and the size of the sample(s) in the data. The rule of thumb that often applies is that the relevant degrees of freedom is equal to the total size of the sample(s) minus the number of parameters being estimated. If a random sample of size n is to be used in a hypothesis test about a population mean µ, then the rule of thumb leads us to believe that the degrees of freedom should be equal to the total sample size n minus the number of parameters being estimated, which is one (i.e., the one and only parameter is µ). This justifies the n – 1 degrees of freedom associated with the test statistic t_{n–1} defined previously.

Each row of Table A.3 is labeled by a value for df. The information contained in each row is exactly the same type of information contained at the bottom of Table A.2. At the bottom of Table A.2, you find values such as z_0.05, z_0.025, and z_0.005, where the amount of area above the z-score is indicated in the subscript. Each row of Table A.3 contains t-scores, where the column heading indicates the amount of area above each t-score under the corresponding t density curve.
The notation we shall use to represent the t-scores in each row of Table A.3 will be similar to the notation we use to represent the z-scores at the bottom of Table A.2. For instance, just as we use z_0.01 to represent the z-score above which lies 0.01 of the area under the standard normal density curve, we shall use t_5;0.01 to represent the t-score above which lies 0.01 of the area under the density curve for the Student's t distribution with df = 5. When designating t-scores in Table A.3, we include in the subscript the degrees of freedom followed by the area above the t-score, separated by a semicolon. Pick any column of Table A.3, and look at how the t-scores change from the row for df = 1 down through the successive rows. You should notice that as df increases, the corresponding t-scores in two successive rows become closer in value.
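This behavior can also be seen numerically. The sketch below (our own illustration, using only the Python standard library; it is not part of the text's tables) evaluates the Student's t density from its standard formula and integrates the upper tail, showing that the area above 2 under a t density curve shrinks toward the corresponding standard normal tail area as df grows:

```python
import math
from statistics import NormalDist

def t_pdf(x, df):
    """Density of the Student's t distribution with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_tail(t, df, upper=200.0, steps=200_000):
    """Approximate P(T > t) by trapezoidal integration over [t, upper];
    for df >= 2 the area beyond `upper` is negligible."""
    h = (upper - t) / steps
    total = 0.5 * (t_pdf(t, df) + t_pdf(upper, df))
    for i in range(1, steps):
        total += t_pdf(t + i * h, df)
    return total * h

z_tail = 1 - NormalDist().cdf(2.0)        # area above 2 under the standard normal curve
for df in (2, 5, 30, 100):
    print(df, round(t_tail(2.0, df), 4))  # tail area decreases toward z_tail as df grows
print("z:", round(z_tail, 4))
```

The t tail areas are all larger than the normal tail area (the t curves are flatter and more spread out), but the gap narrows steadily as the degrees of freedom increase.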