The Impotency of Post Hoc Power


By Emma Gunnarsson and Hugo Sebyhed
Department of Statistics, Uppsala University
Supervisor: Per Johansson
Autumn 2020

ABSTRACT

In this thesis, we hope to dispel some confusion regarding so-called post hoc power, i.e. power computed under the assumption that the estimated sample effect equals the population effect size. Previous research has shown that post hoc power is a function of the p-value, making it redundant as a tool of analysis. We go further, arguing that it should never be reported, since it is a source of confusion and potentially harmful incentives. We also conduct a Monte Carlo simulation to illustrate our points of view. Its results confirm the previous research.

KEY WORDS: Hypothesis testing, statistical inference, retrospective power, observed power, a posteriori power, a priori power, true power

Table of Contents

1. Introduction
2. Inference and Hypothesis Testing
   2.1 Fisher's Hypothesis Testing
   2.2 Neyman-Pearson Hypothesis Testing
3. Statistical Power
   3.1 A Priori Power
   3.2 Post Hoc Power
4. Simulation Study
   4.1 Simulation Setup
   4.2 Results
5. Discussion
References

1. Introduction

Hypothesis testing is widely used in many research fields where one aspires to make inferences. Oftentimes, the testing procedure is oversimplified and lacks a rigorous understanding of what can in fact be achieved. There is a wide array of aspects to consider in order to have a valid test, which taken together are far beyond the scope of this paper. One of them, however, statistical power, will be thoroughly discussed. As defined by Neyman and Pearson, power is the conditional probability of finding a significant effect, given that there is a specific true effect. Since effects are difficult to detect when power is low, the researcher must ensure high enough power before conducting a study, mainly by increasing the sample size.

Although power analysis has experienced an upsurge in most inference research (e.g., Bausell and Li, 2002; Murphy and Myors, 2004; Staffa and Zurakowski, 2020), there is a substantial literature suggesting that a large part of published research is underpowered (e.g., Ioannidis et al., 2017; Maxwell, 2004; Mone et al., 1996). Perhaps in response to this, it has been suggested that power be computed post hoc, using the estimated sample effect (e.g., Carle, 2020; Fagley, 1985; Hallahan and Rosenthal, 1996; Onwuegbuzie and Leech, 2004). Post hoc power has indeed been used in applied research. Bababekov and Chang (2019) use it to help interpret non-significant results in surgical science, while Gantner et al. (2018), for example, report it, arguing that it will ease the design of future studies in neurology. Moreover, Nadler et al. (2018) use it to argue that cancer medicine research is over-powered. These research fields share the trait of having difficulty obtaining large enough samples, making power discussions especially important.

Despite the noble intentions of its advocates, post hoc power nevertheless has several inherent flaws. These have been pointed out by statisticians ever since it was first promoted, and they are the main focus of this thesis. Here, we aim to dispel the confusion concerning post hoc power, partly through a thorough review of previous literature, and partly through a Monte Carlo simulation study.
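To make the redundancy point concrete before turning to the details: for a two-sided z-test, post hoc power can be computed from the p-value alone. The following minimal Python sketch is our illustration of that fact (the function name and the z-test setting are assumptions for the example, not anything taken from the thesis itself).

```python
from scipy.stats import norm

def post_hoc_power(p_value, alpha=0.05):
    """Post hoc power of a two-sided z-test, treating the observed
    effect (in standard-error units) as if it were the true effect.
    Everything is recovered from the p-value, so post hoc power is
    a deterministic function of p."""
    z_obs = norm.ppf(1 - p_value / 2)    # |z| implied by the p-value
    z_crit = norm.ppf(1 - alpha / 2)     # two-sided rejection threshold
    # Power when the test statistic is assumed to be N(z_obs, 1):
    return norm.sf(z_crit - z_obs) + norm.cdf(-z_crit - z_obs)

for p in (0.01, 0.05, 0.20):
    print(p, round(post_hoc_power(p), 3))
# Approximately: 0.01 -> 0.73, 0.05 -> 0.50, 0.20 -> 0.25.
# The same p-value always yields the same post hoc power.
```

In particular, a result sitting exactly at the 5% threshold always maps to a post hoc power of about one half, whatever the design or sample size.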
Since the research in which post hoc power has been proposed as useful mainly belongs to the genre of causal inference, that will be our main focus. We argue that post hoc power is not only redundant, but also confusing and a potential source of harmful incentives among researchers and journals. Therefore, we recommend that post hoc power be conclusively abandoned. Power considerations are indeed important in the hypothesis testing procedure, and they should be thoroughly discussed in any given experiment. However, such analysis must be based on hypothesised effect sizes. Researchers would do well to calculate several powers using different hypothesised true effects, thereby providing a range for the plausible power of the study. Furthermore, the practice of applying post hoc power to evaluate non-significant results is untrue to some of the fundamentals of inference: power, being a probability, is no longer relevant to discuss after the study. There are other tools better suited to evaluating the results found, namely the magnitude of the estimate and its confidence interval.

The paper is structured as follows. We begin with a description of hypothesis tests in general, covering both the original Fisherian approach and the later addendums by Neyman and Pearson, the inventors of power as defined above. Although Fisher strongly opposed power as so defined, we present both testing approaches in order to provide a deeper understanding of the disagreements. Thereafter, in Section 3.1, we discuss power as intended by Neyman and Pearson, which we denote "a priori power". Neyman and Pearson never used that exact terminology, but for the purposes of this study, where we wish to distinguish it from post hoc power, we find it suitable. Section 3.2 discusses post hoc power. Section 4 describes the simulation study in depth, and Section 5 concludes.

2. Inference and Hypothesis Testing

Statistical inference refers to methods by which one estimates population properties from observed samples, including the procedures of hypothesis testing. It could be for purely descriptive purposes, for example determining the population mean by studying the sample mean. Or, one could have intentions of determining the causal effects of some kind of "treatment". Power considerations are important in all sorts of hypothesis tests, but since the promoters of post hoc power have mainly been in the field of causal inference, that is our main focus. Of course, the reasoning we present in this paper can, with some minor modifications, be generalized to all inference research.

Before moving on to a discussion of power, we would like to briefly present the process of hypothesis testing. Only through a thorough understanding of this can one fully understand the concept of power. We will discuss both Fisher's testing approach and that of Neyman and Pearson (NP). In today's hypothesis testing, these two (together with some Bayesian elements) are combined into a hybrid, the so-called Null Hypothesis Significance Testing (NHST) (Denis, 2004). However, since the Fisher and NP approaches are in fact conceptually rather incompatible, we present them separately. By presenting the theories separately, it will also become clear that only in the latter approach is power, as we know it in NHST, considered useful. First, however, one must note that a statistical test alone cannot be the determinant of statistical inference, no matter the approach. First and foremost, one must have a sample that is representative of the population one is interested in.
For causal inference, the assumption of a comparable counterfactual must also be satisfied. Neither of these assumptions can be tested by any hypothesis test. Rather, one must use proper experimental designs and correct sampling techniques to argue for their fulfilment. However, once those requirements are met, hypothesis testing provides a powerful tool for evaluating results seen in the sample data. Moreover, for a statistical test to be valid, all necessary assumptions of the test in question must be fulfilled, regarding for example distributions or independence of observations. Although these are important issues, they will not be discussed further. Rather, we will assume that the statistical test at hand is performed in line with all necessary requirements, so that statistical inference is possible. In fact, it is only then that it becomes relevant to discuss the power of the test. Finally, one must also understand what the randomness of a sample entails. Different samples will never be exactly the same (at least for continuous data), even when sampled from the same population using impeccable sampling techniques. By pure chance, the resulting samples will differ. Due to this random sampling error, the sample estimates are considered random variables with a density distribution. Hypothesis testing is really a tool for evaluating these distributions in order to make inferences.

2.1 Fisher's Hypothesis Testing

Fisher is a widely known statistician, considered by many to be the father of statistics. He conducted the famous "Lady Tasting Tea" experiment, in which he tested whether a lady, as she claimed, could notice whether the tea or the milk had been poured into the cup first (Fisher, 1935). This marks the start of the hypothesis testing era.

Generally, in the Fisherian hypothesis testing approach, one formulates a single null hypothesis that will be tested (Christensen, 2005; Fisher, 1935). A test is then applied in order to evaluate the discrepancy between the observed data and the data expected under the null. There are many kinds of Fisherian tests, depending on the research question and the nature of the data. What they have in common is the computation of a p-value: the probability of getting a result as extreme as, or more extreme than, the one observed, given that the null is true. This p-value is the estimate which is to be evaluated. If the p-value is small, the null hypothesis does not seem to explain the data well. Thus, it is considered a continuous measure of evidence against the stated null: the lower the p-value, the greater the evidence against it. Since Fisherian testing is not the main focus of this paper, we confine ourselves to a brief discussion of the topic. Having said that, in order to guarantee a basic understanding of the Fisherian way, we present a simple example: Fisher's Exact Test.
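To make the example concrete, the Lady Tasting Tea design with eight cups, four of each preparation, can be evaluated as follows. This is our minimal sketch using scipy, not code from the thesis:

```python
from scipy.stats import fisher_exact, hypergeom

# Lady Tasting Tea: 8 cups, 4 milk-first and 4 tea-first; the lady
# must divide them into two groups of four. Suppose she classifies
# every cup correctly.
table = [[4, 0],   # milk-first cups: called milk-first / tea-first
         [0, 4]]   # tea-first cups:  called milk-first / tea-first
_, p_value = fisher_exact(table, alternative="greater")
print(p_value)                     # 1/70, about 0.0143

# Equivalently, under the null her 4 "milk-first" calls are a random
# draw of 4 cups out of 8, so the number of correct calls is
# hypergeometric; only 1 of the C(8,4) = 70 divisions is all-correct.
print(hypergeom.pmf(4, 8, 4, 4))   # 0.0142857...
```

A small p-value here means that a performance this good would be very unlikely if the lady were merely guessing, which is exactly the Fisherian measure of evidence against the null.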
Recommended publications
  • Tea Tasting and Pascal Triangles, by Paul Alper and Yongzhi Yang, University of St. Thomas
    Humanistic Mathematics Network Journal, Issue 21, Article 20, December 1999. Paul Alper (QMCS Department) and Yongzhi Yang (Mathematics Department), University of St. Thomas, St. Paul, MN 55105. Available at: http://scholarship.claremont.edu/hmnj/vol1/iss21/20

    INTRODUCTION TO THE PHYSICAL SITUATION: R. A. Fisher is generally considered to be the most famous statistician of all time. He is responsible for such familiar statistical ideas as analysis of variance, Fisher's exact test, maximum likelihood, design of experiments, and more. In "Mathematics of a Lady Tasting Tea," Fisher uses the unlikely example of a woman who claims to be able "to discriminate whether the milk or the tea infusion was first added to the cup" in order to elucidate which design principles are essential ...
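The connection between the tea-tasting null distribution and rows of Pascal's triangle can be sketched in a few lines. This is our illustration under the standard 8-cup design, not the authors' code:

```python
from math import comb

# Null distribution of K = number of milk-first cups correctly
# identified: P(K = k) = C(4,k) * C(4,4-k) / C(8,4).
# The coefficients C(4,k) are entries of row 4 of Pascal's triangle.
total = comb(8, 4)                         # 70 equally likely divisions
for k in range(5):
    ways = comb(4, k) * comb(4, 4 - k)     # 1, 16, 36, 16, 1
    print(f"P(K={k}) = {ways}/{total} = {ways / total:.4f}")
# Only a perfect score (K=4) has probability below 0.05: p = 1/70.
```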
  • Experimental Methods: Randomization Inference
    Lecture 2: Experimental Methods, Randomization Inference. Raymond Duch, Director, CESS Nuffield/Santiago. May 2, 2018.

    Lecture 2 covers: hypothesis testing, confidence bounds, block random assignment.

    Observed outcomes (local budget share, by sex of village head):

    Village    Share if head is male    Share if head is female
    1          ?                        15
    2          15                       ?
    3          20                       ?
    4          20                       ?
    5          10                       ?
    6          15                       ?
    7          ?                        30

    Potential outcomes (local budget share):

    Village    Share if head is male    Share if head is female    Treatment effect
    1          10                       15                          5
    2          15                       15                          0
    3          20                       30                         10
    4          20                       15                         -5
    5          10                       20                         10
    6          15                       15                          0
    7          15                       30                         15
    Average    15                       20                          5

    Table 3.1: Sampling distribution of estimated ATEs generated when two of the seven villages listed in Table 2.1 are assigned to treatment:

    Estimated ATE    Frequency with which an estimate occurs
    -1               2
    0                2
    0.5              1
    1                2
    1.5              2
    2.5              1
    6.5              1
    7.5              3
    8.5              3
    9                1
    9.5              1
    10               1
    16               1
    (Average 5, Total 21)

    Example: based on the numbers in Table 3.1, we calculate the standard error as follows.

    Sum of squared deviations = $(-1-5)^2 + (-1-5)^2 + (0-5)^2 + (0-5)^2 + (0.5-5)^2 + (1-5)^2 + (1-5)^2 + (1.5-5)^2 + (1.5-5)^2 + (2.5-5)^2 + (6.5-5)^2 + (7.5-5)^2 + (7.5-5)^2 + (7.5-5)^2 + (8.5-5)^2 + (8.5-5)^2 + (8.5-5)^2 + (9-5)^2 + (9.5-5)^2 + (10-5)^2 + (16-5)^2 = 445$

    Square root of the average squared deviation = $\sqrt{445/21} = 4.60$

    $$ SE(\widehat{ATE}) = \sqrt{\frac{1}{N-1}\left\{\frac{m\,\mathrm{Var}(Y_i(0))}{N-m} + \frac{(N-m)\,\mathrm{Var}(Y_i(1))}{m} + 2\,\mathrm{Cov}(Y_i(0), Y_i(1))\right\}} $$

    $$ SE(\widehat{ATE}) = \sqrt{\frac{1}{6}\left\{\frac{(2)(14.29)}{5} + \frac{(5)(42.86)}{2} + (2)(7.14)\right\}} = 4.60 $$
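The sampling distribution in Table 3.1 can be reproduced by brute-force enumeration. Here is a minimal sketch of ours, using the potential outcomes from the table above:

```python
from itertools import combinations
from statistics import mean, pstdev

# Potential outcomes for the seven villages: budget share with a male
# head (y0) and with a female head (y1), as in the table above.
y0 = [10, 15, 20, 20, 10, 15, 15]
y1 = [15, 15, 30, 15, 20, 15, 30]

# Enumerate all C(7,2) = 21 assignments of two villages to treatment
# and compute the difference-in-means estimate for each.
estimates = []
for treated in combinations(range(7), 2):
    t_mean = mean(y1[i] for i in treated)
    c_mean = mean(y0[i] for i in range(7) if i not in treated)
    estimates.append(t_mean - c_mean)

print(round(mean(estimates), 2))    # 5.0 -> unbiased for the true ATE
print(round(pstdev(estimates), 2))  # 4.6 -> the exact standard error
```

pstdev divides by the number of assignments (21), so it matches the sqrt(445/21) calculation above.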
  • Another Look at the Lady Tasting Tea and Differences Between Permutation Tests and Randomisation Tests
    International Statistical Review (2020), doi:10.1111/insr.12431. Jesse Hemerik (Biometris, Wageningen University & Research) and Jelle J. Goeman (Biomedical Data Sciences, Leiden University Medical Center).

    Summary: The statistical literature is known to be inconsistent in the use of the terms 'permutation test' and 'randomisation test'. Several authors successfully argue that these terms should be used to refer to two distinct classes of tests and that there are major conceptual differences between these classes. The present paper explains an important difference in mathematical reasoning between these classes: a permutation test fundamentally requires that the set of permutations has a group structure, in the algebraic sense; the reasoning behind a randomisation test is not based on such a group structure, and it is possible to use an experimental design that does not correspond to a group. In particular, we can use a randomisation scheme where the number of possible treatment patterns is larger than in standard experimental designs. This leads to exact p-values of improved resolution, providing increased power for very small significance levels, at the cost of decreased power for larger significance levels. We discuss applications in randomised trials and elsewhere. Further, we explain that Fisher's famous Lady Tasting Tea experiment, which is commonly referred to as the first permutation test, is in fact a randomisation test. This distinction is important to avoid confusion and invalid tests.
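The paper's distinction can be illustrated in code under our own simplified setup: a randomisation test draws re-assignments from the design actually used, not from an algebraic group of permutations. A rough Monte Carlo sketch with invented data:

```python
import numpy as np

rng = np.random.default_rng(0)

def randomization_p_value(y, treat, draws=10_000):
    """Two-sided randomisation-test p-value for a difference in means.
    Re-assignments are drawn from the actual design (a fixed number of
    treated units), mirroring how treatment was really assigned."""
    observed = y[treat].mean() - y[~treat].mean()
    m, n = int(treat.sum()), len(y)
    hits = 0
    for _ in range(draws):
        new = np.zeros(n, dtype=bool)
        new[rng.choice(n, size=m, replace=False)] = True
        stat = y[new].mean() - y[~new].mean()
        hits += abs(stat) >= abs(observed)
    return hits / draws

y = np.array([15.0, 15, 20, 20, 10, 15, 30])
treat = np.array([True, False, False, False, False, False, True])
print(randomization_p_value(y, treat))   # about 0.38 for this toy data
```

With only C(7,2) = 21 possible assignments one would enumerate them exactly, as in the sharp-null sketch later in this list; sampling is shown here because it is what one does when the design admits too many assignments to enumerate.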
  • Introduction to Resampling Methods
    Introduction to resampling methods. Vivien Rossi, CIRAD - UMR Ecologie des forêts de Guyane, [email protected]. Master 2 - Ecologie des Forêts Tropicale, AgroParisTech - Université Antilles-Guyane, Kourou, November 2010.

    Objectives of the course: (1) to present resampling techniques: randomisation tests, cross-validation, jackknife, bootstrap; (2) to apply them with R.

    Outline: 1. Principle and mechanism of the resampling methods; 2. Randomisation tests; 3. Cross-validation; 4. Jackknife; 5. Bootstrap; 6. Conclusion.

    Resampling in statistics. Description: a set of statistical inference methods based on new samples drawn from the initial sample. Implementation: computer simulation of these new samples, then analysing the new data to refine the inference. Classical uses: estimation/bias reduction of an estimate (jackknife, bootstrap); estimation of confidence intervals without a normality assumption (bootstrap); exact tests (permutation tests); model validation (cross-validation).
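As a taste of two of these techniques, here is a sketch in Python rather than the course's R, with made-up data:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(scale=2.0, size=50)   # a skewed sample

# Bootstrap: resample with replacement to approximate the sampling
# distribution of the mean without a normality assumption.
boot = np.array([rng.choice(x, size=x.size, replace=True).mean()
                 for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% CI for the mean: ({lo:.2f}, {hi:.2f})")

# Jackknife: recompute the estimate leaving one observation out at a
# time, here to estimate the standard error of the sample mean.
loo = np.array([np.delete(x, i).mean() for i in range(x.size)])
se_jack = np.sqrt((x.size - 1) / x.size * ((loo - loo.mean())**2).sum())
print(f"jackknife SE of the mean: {se_jack:.3f}")
```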
  • History of Statistics 8: Analysis of Variance and the Design of Experiments
    R. A. Fisher (1890-1962). In the first decades of the twentieth century, the Englishman Ronald Aylmer Fisher, who always published as R. A. Fisher, radically changed the use of statistics in research. He invented the technique called Analysis of Variance and founded the entire modern field called Design of Experiments. During his active years he was acknowledged as the greatest statistician in the world. He was knighted by Queen Elizabeth in 1952.

    Fisher's graduate work in mathematics focused on statistics in the physical sciences, including astronomy. That's not as surprising as it might seem. Almost all of modern statistical theory is based on fundamentals originally developed for use in astronomy. The progression from astronomy to the life sciences is a major thread in the history of statistics. Two major areas that were integral to the widening application of statistics were agriculture and genetics, and Fisher made major contributions to both.

    The same basic issue appears in all these areas of research: how to account for the variability in a set of observations. In astronomy the variability is caused mainly by measurement error, as when the position of a planet is obtained from different observers using different instruments of unequal reliability. It is assumed, for example, that a planet has a specific well-defined orbit; it's just that our observations are "off" for various reasons. In biology the variability in a set of observations is not the result of measurement difficulty but rather ...
  • Hypotheses and Its Testing
    Dr. Nilesh B. Gajjar, International Journal for Research in Education (IJRE), Vol. 2, Issue 5, May 2013, ISSN: 2320-091X. Assistant Professor, S. V. S. Education College (M.Ed.) P. G. Dept., Nagalpur, Mehsana, Gujarat (India). [email protected], http://www.raijmr.com

    Abstract: A statistical hypothesis test is a method of making decisions using data from a scientific study. In statistics, a result is called statistically significant if it has been predicted as unlikely to have occurred by chance alone, according to a pre-determined threshold probability, the significance level. The phrase "test of significance" was coined by statistician Ronald Fisher. These tests are used in determining what outcomes of a study would lead to a rejection of the null hypothesis for a pre-specified level of significance; this can help to decide whether results contain enough information to cast doubt on conventional wisdom, given that conventional wisdom has been used to establish the null hypothesis. The critical region of a hypothesis test is the set of all outcomes which cause the null hypothesis to be rejected in favor of the alternative hypothesis. Statistical hypothesis testing is sometimes called confirmatory data analysis, in contrast to exploratory data analysis, which may not have pre-specified hypotheses. Statistical hypothesis testing is a key technique of frequentist inference. Here the author of this article wants to introduce the testing of hypotheses.

    Keywords: Alternative hypotheses, Null hypotheses, One-tail & two-tail test, Statistics, Type I & II error
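The notions of significance level, critical region, and the two error types can be made concrete for a one-sided z-test. A small sketch of ours; the true standardized effect of 2.5 is an arbitrary choice:

```python
from scipy.stats import norm

alpha = 0.05                  # pre-specified significance level
z_crit = norm.ppf(1 - alpha)  # one-sided critical value, ~1.645

# Critical region for H0: mu = 0 vs H1: mu > 0:
# reject H0 whenever the test statistic exceeds z_crit.
print(f"reject H0 when z > {z_crit:.3f}")

# Type I error: rejecting although H0 is true (by construction, alpha).
print(f"P(type I error) = {norm.sf(z_crit):.3f}")

# Type II error: failing to reject when the true standardized effect
# is, say, 2.5; power is its complement.
effect = 2.5
beta = norm.cdf(z_crit - effect)
print(f"P(type II error) = {beta:.3f}, power = {1 - beta:.3f}")
```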
  • ECON 626 Lecture 8: Permutations and Bootstraps
    ECON 626: Applied Microeconomics. Lecture 8: Permutations and Bootstraps. Professors: Pamela Jakiela and Owen Ozier.

    Part I: Randomization Inference

    Randomization inference vs confidence intervals: see Imbens and Rubin, Causal Inference, first chapters. 100 years ago, Fisher was after a "sharp null," where Neyman and Gosset (Student) were concerned with average effects.

    Begin with the idea of a sharp null: Y1i = Y0i for all i (Gerber and Green, p. 62).
    • If Y1i = Y0i for all i, then if we observe either potential outcome, we have seen both.
    • All possible treatment arrangements would yield the same Y values.
    • We could then calculate all possible treatment effect estimates under the sharp null.
    • The distribution of these possible treatment effects allows us to compute p-values: the probability that, under the null, something this large or larger would occur at random. (For the two-sided test, "large" means in absolute value terms.)
    • This extends naturally to the case where treatment assignments are restricted in some way. Recall, for example, the Bruhn and McKenzie (2009) discussion of the many different restrictions that can be used to yield balanced randomization.

    Randomization inference: how can we do hypothesis testing without asymptotic approximations?
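Under the sharp null the exact p-value follows from enumerating every possible assignment. A minimal sketch with seven units, two of them treated; the data are hypothetical:

```python
from itertools import combinations
from statistics import mean

# Observed outcomes for seven units; units 0 and 6 were treated.
y = [15, 15, 20, 20, 10, 15, 30]
treated = {0, 6}

def diff_in_means(assign):
    t = mean(y[i] for i in assign)
    c = mean(y[i] for i in range(len(y)) if i not in assign)
    return t - c

observed = diff_in_means(treated)

# Under the sharp null Y1i = Y0i for all i, the observed y's are fixed
# whatever the assignment, so every counterfactual assignment yields a
# test statistic we can compute exactly.
stats = [diff_in_means(set(a)) for a in combinations(range(len(y)), 2)]

# Exact two-sided p-value: share of assignments at least as extreme.
p = sum(abs(s) >= abs(observed) for s in stats) / len(stats)
print(f"observed = {observed:.2f}, exact p-value = {p:.3f}")  # 6.50, 0.381
```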
  • Latin Squares: Some History, with an Emphasis on Their Use in Designed Experiments
    Latin squares: Some history, with an emphasis on their use in designed experiments. R. A. Bailey, University of St Andrews, QMUL (emerita). British Mathematical Colloquium, St Andrews, June 2018.

    Abstract: In the 1920s, R. A. Fisher, at Rothamsted Experimental Station in Harpenden, recommended Latin squares for agricultural crop experiments. At about the same time, Jerzy Neyman developed the same idea during his doctoral study at the University of Warsaw. However, there is evidence of their much earlier use in experiments. Euler had made his famous conjecture about Graeco-Latin squares in 1782; there was a spectacular refutation in 1960. I shall say something about the different uses of Latin squares in designed experiments. This needs methods of construction, of counting, and of randomization. Fisher and Neyman had a famous falling out over Latin squares in 1935 when Neyman proved that use of Latin squares in experiments gives biased results. A six-week international workshop in Boulder, Colorado in 1957 resolved this, but the misunderstanding surfaced again in a paper published in 2017.

    Definition: let n be a positive integer. A Latin square of order n is an n × n array of cells in which n symbols are placed, one per cell, in such a way that each symbol occurs once in each row and once in each column. The symbols may be letters, numbers, colours, ...

    A Latin square of order 6:

    E B F A C D
    B C D E F A
    A E C B D F
    F D E C A B
    D A B F E C
    C F A D B E

    (Further slides show a Latin square of order 8 and a stained glass window in Caius College, Cambridge, with its counterpart on the opposite side of the hall.)
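A small sketch of the construction and checking that such methods involve; ours, not from the talk:

```python
import random

def is_latin_square(sq):
    """Each symbol occurs exactly once in every row and every column."""
    n = len(sq)
    symbols = set(range(n))
    rows_ok = all(set(row) == symbols for row in sq)
    cols_ok = all({sq[r][c] for r in range(n)} == symbols for c in range(n))
    return rows_ok and cols_ok

# Cyclic construction: cell (r, c) holds symbol (r + c) mod n.
n = 6
square = [[(r + c) % n for c in range(n)] for r in range(n)]
print(is_latin_square(square))   # True

# For a designed experiment one would randomize before use, e.g. by
# permuting rows (and, analogously, columns and symbol labels); the
# Latin property is preserved.
random.shuffle(square)
print(is_latin_square(square))   # still True
```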
  • Observational Studies
    1 Observational Studies

    1.1 What Are Observational Studies?

    William G. Cochran first presented "observational studies" as a topic defined by principles and methods of statistics. Cochran had been an author of the 1964 United States Surgeon General's Advisory Committee Report, Smoking and Health, which reviewed a vast literature and concluded: "Cigarette smoking is causally related to lung cancer in men; the magnitude of the effect of cigarette smoking far outweighs all other factors. The data for women, though less extensive, point in the same direction (p. 37)." Though there had been some experiments confined to laboratory animals, the direct evidence linking smoking with human health came from observational or nonexperimental studies. In a later review, Cochran (1965) defined an observational study as an empiric investigation in which:

    ... the objective is to elucidate cause-and-effect relationships [in which] it is not feasible to use controlled experimentation, in the sense of being able to impose the procedures or treatments whose effects it is desired to discover, or to assign subjects at random to different procedures.

    Features of this definition deserve emphasis. An observational study concerns treatments, interventions, or policies and the effects they cause, and in this respect it resembles an experiment. A study without a treatment is neither an experiment nor an observational study. Most public opinion polls, most forecasting efforts, most studies of fairness and discrimination, and many other important empirical studies are neither experiments nor observational studies. In an experiment, the assignment of treatments to subjects is controlled by the experimenter, who ensures that subjects receiving different treatments are comparable.
  • Statistical Hypothesis Tests
    Statistical Hypothesis Tests. Kosuke Imai, Department of Politics, Princeton University. March 24, 2013.

    In this lecture note, we discuss the fundamentals of statistical hypothesis tests. Any statistical hypothesis test, no matter how complex it is, is based on the following logic of stochastic proof by contradiction. In mathematics, proof by contradiction is a proof technique where we begin by assuming the validity of a hypothesis we would like to disprove and then derive a contradiction under the same hypothesis. For example, here is a well-known proof that $\sqrt{2}$ is an irrational number, i.e., a number that cannot be expressed as a fraction.

    We begin by assuming that $\sqrt{2}$ is a rational number and therefore can be expressed as $\sqrt{2} = a/b$ where $a$ and $b$ are integers and their greatest common divisor is 1. Squaring both sides, we have $2 = a^2/b^2$, which implies $2b^2 = a^2$. This means that $a^2$ is an even number and therefore $a$ is also even (since if $a$ is odd, so is $a^2$). Since $a$ is even, we can write $a = 2c$ for a constant $c$. Substituting this into $2b^2 = a^2$, we have $b^2 = 2c^2$, which implies $b$ is also even. Therefore, $a$ and $b$ share a common divisor of two. However, this contradicts the assumption that the greatest common divisor of $a$ and $b$ is 1.

    That is, we begin by assuming the hypothesis that $\sqrt{2}$ is a rational number. We then show that under this hypothesis we are led to a contradiction and therefore conclude that the hypothesis must be wrong.
  • ECON 626 Lecture 1: Selection Bias and the Experimental Ideal
    ECON 626: Applied Microeconomics. Lecture 1: Selection Bias and the Experimental Ideal. Professors: Pamela Jakiela and Owen Ozier, Department of Economics, University of Maryland, College Park.

    Do hospitals make people healthier? Your health status is: excellent, very good, good, fair, or poor.

                     Hospital    No Hospital    Difference
    Health status    3.21        3.93           -0.72***
                     (0.014)     (0.003)
    Observations     7,774       90,049

    A simple comparison of means suggests that going to the hospital makes people worse off: those who had a hospital stay in the last 6 months are, on average, less healthy than those who were not admitted to the hospital. What's wrong with this picture?

    Potential outcomes. We are interested in the relationship between "treatment" and some outcome that may be impacted by the treatment (e.g., health). Outcome of interest: $Y$ = the outcome we are interested in studying (e.g., health); $Y_i$ = the value of the outcome of interest for individual $i$. For each individual, there are two potential outcomes: $Y_{0,i}$, $i$'s outcome if she doesn't receive treatment, and $Y_{1,i}$, $i$'s outcome if she does receive treatment.

    For any individual, we can only observe one potential outcome:

    $$ Y_i = \begin{cases} Y_{0i} & \text{if } D_i = 0 \\ Y_{1i} & \text{if } D_i = 1 \end{cases} $$

    where $D_i$ is a treatment indicator (equal to 1 if $i$ was treated). Each individual either participates in the program or not. The causal impact of the program ($D$) on $i$ is $Y_{1i} - Y_{0i}$. We observe $i$'s actual outcome:

    $$ Y_i = Y_{0i} + \underbrace{(Y_{1i} - Y_{0i})}_{\text{impact}} D_i $$

    Establishing causality: in an ideal world (research-wise), we could clone each treated individual and observe the impacts of the treatment on their lives vs. ...
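The selection-bias story in these slides can be reproduced with simulated potential outcomes. A sketch with invented numbers, not the slides' data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Potential health outcomes: everyone has Y0 (no hospital) and Y1
# (hospital); here a hospital stay truly improves health by +0.5.
y0 = rng.normal(3.5, 1.0, n)
y1 = y0 + 0.5

# Sicker people (low y0) select into the hospital.
d = (y0 < 3.0).astype(int)
y = y0 + (y1 - y0) * d          # observed outcome: Y = Y0 + (Y1 - Y0) * D

naive = y[d == 1].mean() - y[d == 0].mean()
print(f"true ATE = 0.50, naive difference = {naive:.2f}")
# The naive comparison comes out strongly negative: selection bias,
# not a harmful treatment, just as in the hospital table above.
```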