G*Power 3.1 Manual March 1, 2017

This manual is not yet complete. We will be adding help on more tests in the future. If you cannot find help for your test in this version of the manual, then please check the G*Power website to see if a more up-to-date version of the manual has been made available.

Contents

1 Introduction
2 The G*Power calculator
3 Exact: Correlation - Difference from constant (one sample case)
4 Exact: Proportion - difference from constant (one sample case)
5 Exact: Proportion - inequality, two dependent groups (McNemar)
6 Exact: Proportions - inequality of two independent groups (Fisher's exact test)
7 Exact test: Multiple Regression - random model
8 Exact: Proportion - sign test
9 Exact: Generic binomial test
10 F test: Fixed effects ANOVA - one way
11 F test: Fixed effects ANOVA - special, main effects and interactions
12 t test: Linear Regression (size of slope, one group)
13 F test: Multiple Regression - omnibus (deviation of R² from zero), fixed model
14 F test: Multiple Regression - special (increase of R²), fixed model
15 F test: Inequality of two Variances
16 t test: Correlation - point biserial model
17 t test: Linear Regression (two groups)
18 t test: Means - difference between two dependent means (matched pairs)
19 t test: Means - difference from constant (one sample case)
20 t test: Means - difference between two independent means (two groups)
21 Wilcoxon signed-rank test: Means - difference from constant (one sample case)
22 Wilcoxon signed-rank test: Means - difference between two dependent means (matched pairs)
23 Wilcoxon-Mann-Whitney test of a difference between two independent means
24 t test: Generic case
25 χ² test: Variance - difference from constant (one sample case)
26 z test: Correlation - inequality of two independent Pearson r's
27 z test: Correlation - inequality of two dependent Pearson r's
28 Z test: Multiple Logistic Regression
29 Z test: Poisson Regression
30 Z test: Tetrachoric Correlation
References

1 Introduction

G*Power (Fig. 1 shows the main window of the program) covers statistical power analyses for many different statistical tests of the

• F test,
• t test,
• χ² test, and
• z test

families and some

• exact tests.

G*Power provides effect size calculators and graphics options. G*Power supports both a distribution-based and a design-based input mode. It also contains a calculator that supports many central and noncentral probability distributions. G*Power is free software and available for Mac OS X and Windows XP/Vista/7/8.

1.1 Types of analysis

G*Power offers five different types of statistical power analysis:

1. A priori (sample size N is computed as a function of power level 1 − β, significance level α, and the to-be-detected population effect size)

2. Compromise (both α and 1 − β are computed as functions of effect size, N, and an error probability ratio q = β/α)

3. Criterion (α and the associated decision criterion are computed as a function of 1 − β, the effect size, and N)

4. Post-hoc (1 − β is computed as a function of α, the population effect size, and N)

5. Sensitivity (population effect size is computed as a function of α, 1 − β, and N)
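These five analysis types amount to solving the same underlying relation between α, 1 − β, effect size, and N for different unknowns. As a rough cross-check outside G*Power (a sketch, not part of the program), the Python statsmodels package solves for whichever of these quantities is left unspecified; the a priori, post-hoc, and sensitivity types map onto it directly, while the compromise and criterion types have no one-call equivalent there.

```python
# Sketch only: statsmodels solves for whichever parameter is left as None.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# A priori: solve for the per-group sample size given alpha, power, effect size
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                   power=0.95, alternative='larger')

# Post-hoc: solve for power given alpha, sample size, effect size
achieved_power = analysis.solve_power(effect_size=0.5, nobs1=88,
                                      alpha=0.05, alternative='larger')

# Sensitivity: solve for the detectable effect size given alpha, power, N
detectable_d = analysis.solve_power(nobs1=88, alpha=0.05, power=0.95,
                                    alternative='larger')

print(n_per_group, achieved_power, detectable_d)  # ≈ 87.26, ≈ 0.95, ≈ 0.50
```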
1.2 Program handling

Performing a power analysis using G*Power typically involves the following three steps:

1. Select the statistical test appropriate for your problem.

2. Choose one of the five types of power analysis available.

3. Provide the input parameters required for the analysis and click "Calculate".

Plot parameters In order to help you explore the parameter space relevant to your power analysis, one parameter (α, power (1 − β), effect size, or sample size) can be plotted as a function of another parameter.

1.2.1 Select the statistical test appropriate for your problem

In Step 1, the statistical test is chosen using the distribution-based or the design-based approach.

Distribution-based approach to test selection First select the family of the test statistic (i.e., exact, F, t, χ², or z test) using the Test family menu in the main window. The Statistical test menu adapts accordingly, showing a list of all the tests available for that test family.

Example: For the two groups t test, first select the test family based on the t distribution, then select the Means: Difference between two independent means (two groups) option in the Statistical test menu.

Design-based approach to test selection Alternatively, one might use the design-based approach. With the Tests pull-down menu in the top row it is possible to select

• the parameter class the statistical test refers to (i.e., correlations and regression coefficients, means, proportions, or variances), and

• the design of the study (e.g., number of groups, independent vs. dependent samples, etc.).

The design-based approach has the advantage that test options referring to the same parameter class (e.g., means) are located in close proximity, whereas they may be scattered across different distribution families in the distribution-based approach.

Example: In the Tests menu, select Means, then select Two independent groups to specify the two-groups t test.

Figure 1: The main window of G*Power

1.2.2 Choose one of the five types of power analysis available

In Step 2, the Type of power analysis menu in the center of the main window is used to choose the appropriate analysis type; the input and output parameters in the window change accordingly.

Example: If you choose the first item from the Type of power analysis menu, the main window will display input and output parameters appropriate for an a priori power analysis (for t tests for independent groups if you followed the example provided in Step 1).

In an a priori power analysis, sample size N is computed as a function of

• the required power level (1 − β),

• the pre-specified significance level α, and

• the population effect size to be detected with probability (1 − β).

In a criterion power analysis, α (and the associated decision criterion) is computed as a function of

• 1 − β,

• the effect size, and

• a given sample size.

In a compromise power analysis, both α and 1 − β are computed as functions of

• the effect size,

• N, and

• an error probability ratio q = β/α.

In a post-hoc power analysis, the power (1 − β) is computed as a function of

• α,

• the population effect size parameter, and

• the sample size(s) used in a study.

In a sensitivity power analysis, the critical population effect size is computed as a function of

• α,

• 1 − β, and

• N.
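Of these, the compromise analysis has no direct equivalent in common power libraries, but it can be emulated numerically. The sketch below is an illustration under stated assumptions, not G*Power's own algorithm: since β/α decreases monotonically as α grows, a root search finds the α that satisfies β/α = q for a two-sample t test. The helper name compromise_alpha and the example values are hypothetical.

```python
# Sketch: emulate a compromise power analysis for a two-sample t test.
from scipy.optimize import brentq
from statsmodels.stats.power import TTestIndPower

def compromise_alpha(effect_size, nobs1, q, alternative='larger'):
    """Find alpha such that beta/alpha == q (hypothetical helper)."""
    analysis = TTestIndPower()
    def ratio_gap(alpha):
        power = analysis.power(effect_size=effect_size, nobs1=nobs1,
                               alpha=alpha, alternative=alternative)
        return (1.0 - power) / alpha - q  # beta/alpha minus the target ratio
    # beta/alpha falls monotonically in alpha, so the root is unique
    return brentq(ratio_gap, 1e-6, 0.5)

# Example: d = 0.5, 50 observations per group, balanced error risks (q = 1)
alpha = compromise_alpha(0.5, 50, q=1.0)
print(alpha)  # at this significance level, alpha equals beta
```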
1.2.3 Provide the input parameters required for the analysis

In Step 3, you specify the power analysis input parameters in the lower left of the main window.

Example: An a priori power analysis for a two groups t test would require a decision between a one-tailed and a two-tailed test, a specification of Cohen's (1988) effect size measure d under H1, the significance level α, the required power (1 − β) of the test, and the preferred group size allocation ratio n2/n1. Let us specify input parameters for

• a one-tailed t test,

• a medium effect size of d = .5,

• α = .05,

• (1 − β) = .95, and

• an allocation ratio of n2/n1 = 1.

This would result in a total sample size of N = 176 (i.e., 88 observation units in each group). The noncentrality parameter δ defining the t distribution under H1, the decision criterion to be used (i.e., the critical value of the t statistic), the degrees of freedom of the t test, and the actual power value are also displayed. Note that the actual power will often be slightly larger than the pre-specified power in a priori power analyses. In addition to the numerical output, G*Power displays the central (H0) and the noncentral (H1) test statistic distributions.

Because Cohen's book on power analysis (Cohen, 1988) appears to be well known in the social and behavioral sciences, we made use of his effect size measures whenever possible. In addition, wherever available, G*Power provides his definitions of "small", "medium", and "large" effects as tool tips. The tool tips may be obtained by moving the cursor over the "effect size" input parameter field. However, note that these conventions may have different meanings for different tests.

Example: The tooltip showing Cohen's measures for the effect size d used in the two groups t test.

If you are not familiar with Cohen's measures, if you think they are inadequate for your test problem, or if you have more detailed information about the size of the to-be-expected effect (e.g., the results of similar prior studies), then you may want to compute Cohen's measures from more basic parameters. In this case, click on the Determine button to the left of the effect size input field. A drawer will open next to the main window and provide access to an effect size calculator tailored to the selected test.

Example: For the two-group t test users can, for instance, specify the means μ1, μ2 and the common standard deviation (σ = σ1 = σ2) in the populations underlying the groups to calculate Cohen's d = |μ1 − μ2| / σ. Clicking the Calculate and transfer to main window button copies the computed effect size to the appropriate field in the main window.
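As a cross-check of this walkthrough outside G*Power, the following sketch reproduces both computations with statsmodels: Cohen's d from a pair of means and a common standard deviation (the values 10, 12.5, and 5 are made up for illustration), and the a priori total sample size N = 176 given above.

```python
# Sketch: reproduce the manual's a priori example (one-tailed two-groups
# t test, d = 0.5, alpha = .05, power = .95, allocation ratio n2/n1 = 1).
import math
from statsmodels.stats.power import TTestIndPower

def cohens_d(mu1, mu2, sigma):
    """Cohen's d = |mu1 - mu2| / sigma for a common standard deviation."""
    return abs(mu1 - mu2) / sigma

d = cohens_d(10.0, 12.5, 5.0)  # illustrative values, giving d = 0.5

n1 = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.95,
                                 ratio=1.0, alternative='larger')
n_per_group = math.ceil(n1)    # 88 observation units per group
print(2 * n_per_group)         # total N = 176, matching the manual
```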