To Pretest Or Not to Pretest

Robert Pasnak. Biomed J Sci & Tech Res. Opinion, Open Access.
DOI: 10.26717/BJSTR.2018.05.001185. ISSN: 2574-1241.

Robert Pasnak*
Department of Psychology, George Mason University, USA
Received: April 23, 2018; Published: June 08, 2018
*Corresponding author: Robert Pasnak, George Mason University, Fairfax, VA 22030

Abstract

Pretests decrease the internal validity of experiments (i.e., render them incapable of completely proving what they are intended to prove). The loss of internal validity may be great in some cases and little or none in others, but just how much has been lost cannot be proven. Pretests are sometimes credited with demonstrating that randomization has produced reasonably equivalent groups, but that is indexed by p with equal accuracy whether or not pretests are employed. There are usually many sources of differences between subjects which could result in different outcomes on the dependent variable(s). The only valid index of whether an initial inequality of groups has produced a spurious result is p. This value specifies the probability that differences in the outcome for various levels of the independent variable(s) were produced by a failure of randomization to create equality in whatever pretests might measure and any other source of differences. Pretests can increase the power of experiments to detect small differences in outcomes, but only if pretest and posttest scores are strongly correlated. Conditions under which accepting the confound introduced by pretesting is most likely to be profitable are identified.

Keywords: Pretests; Internal Validity; External Validity; Power

Introduction

There are three great goals for all research: internal validity, external validity, and power. An unfortunate misunderstanding that has become increasingly prevalent is the conception that pretesting increases internal validity. In fact, pretesting has the opposite effect; it reduces internal validity. What pretesting does increase is power, if pretest and posttest scores are strongly correlated. Small effects may be detected if pretests are administered, although there are issues in the statistical analysis after pretesting. This may reflect the displacement of conventional methodology courses by courses in advanced correlational methods, which have great value in their own right and have engaged a generation of young scholars. This brief treatise is an effort to clarify the purpose and interpretation of pretests.

To make it easier to follow, the discussion will be in terms of one independent and one dependent variable, although good research often involves more than one of each, and the measurements taken for the dependent variable will be defined as participants' scores, although the same considerations hold for other measurements.

Internal Validity

Internal validity expresses the extent to which we can be certain that, within the conditions of the experiment, the effects attributed to the independent variable are indeed due to that variable and not some other. Any variable whose effects might be mistaken for those of the independent variable is termed a "confound" or confounding variable. Confounds are sometimes inherent in the design of an experiment, as when pretests are employed, or are sometimes produced when an experiment is not conducted properly. When unconfounded designs are used and experiments are conducted properly, there will be no confounds whose effects could be mistakenly attributed to the independent variable. A common cause of mistakes is the failure of random assignment of participants to levels of the independent variable to accomplish what it is supposed to accomplish: equality in all causal variables, known and unknown. The extent to which a failure of randomization could have produced the differences observed on the dependent variable is expressed by p. Pretests neither increase nor decrease the validity or accuracy of p as a measure of the failure of randomization to produce initial equality in the characteristics of participants in educational research. In essence, a p of .02 means exactly the same thing whether or not there were pretests.
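This role of p can be illustrated with a small simulation. The sketch below is purely illustrative and not taken from the article; the sample sizes, the score model, the SciPy t-tests, and the .05 criterion are all assumptions. It randomly assigns participants whose scores differ only through stable individual differences and chance, then counts how often a pretest "balance check" is significant and how often the posttest comparison itself is spuriously significant. Both rates sit near the nominal alpha, which is the sense in which p already indexes a failure of randomization whether or not a pretest was given.

```python
# Sketch: under random assignment with no treatment effect, "significant"
# pretest imbalances and spurious posttest differences both occur at about
# the nominal alpha rate. Sample sizes and distributions are illustrative
# assumptions, not values from the article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, alpha = 25, 4000, 0.05
flag_pre = flag_post = 0
for _ in range(reps):
    ability = rng.normal(size=2 * n)                      # stable individual differences
    pre = ability + rng.normal(scale=0.5, size=2 * n)     # pretest scores
    post = ability + rng.normal(scale=0.5, size=2 * n)    # posttest scores, no treatment effect
    group = rng.permutation(np.repeat([0, 1], n))         # random assignment to two levels
    if stats.ttest_ind(pre[group == 1], pre[group == 0]).pvalue < alpha:
        flag_pre += 1                                     # "failed" balance check
    if stats.ttest_ind(post[group == 1], post[group == 0]).pvalue < alpha:
        flag_post += 1                                    # spurious "effect" of the treatment
print(f"pretest imbalance rate:            {flag_pre / reps:.3f}")
print(f"spurious posttest difference rate: {flag_post / reps:.3f}")
# Both rates hover near alpha: the posttest p already quantifies the chance
# that randomization left the groups unequal, with or without a pretest.
```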
We can have equal confidence that there is a 2 per cent chance that the difference observed was due to an initial inequality of the participants, whether there was or whether there was not a pretest. The only gain for a researcher who administered a pretest is that p may be smaller than it would otherwise be, because differences between individuals can be removed. Absent the pretest, p might have been .03 or even >.05. In the latter case, the effect of the independent variable would not be recognized, by conventional standards. However, the statistical analysis required when pretests are administered is less powerful than when they are not. This is because the pretest and posttest scores are correlated, and accounting for the correlation, which is a statistical necessity, sacrifices degrees of freedom, typically one for each pair of correlated scores, rendering the statistical analysis less powerful and therefore less able to detect effects of the independent variable. This is not often understood, because the error term (the measure of chance differences) is smaller. This decrease may or may not be offset by the loss of degrees of freedom, depending on whether pretest and posttest scores are strongly or weakly correlated.
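A second sketch shows this trade-off. Everything in it is assumed for illustration (the group size, a half-standard-deviation effect, and an ordinary-least-squares implementation of the pretest-adjusted analysis, which spends one error degree of freedom on the covariate); it is not the author's analysis. With a weak pretest-posttest correlation the adjustment buys little or nothing, while with a strong correlation the smaller error term more than offsets the lost degree of freedom.

```python
# Sketch: power of a posttest-only t-test versus a pretest-adjusted OLS
# analysis, at a weak and a strong pretest-posttest correlation.
# All parameter values are illustrative assumptions.
import numpy as np
from scipy import stats

def estimate_power(r, n=30, effect=0.5, reps=2000, alpha=0.05, seed=1):
    """Estimate power of (a) a posttest-only t-test and (b) a pretest-adjusted
    regression, when pretest and posttest correlate at roughly r."""
    rng = np.random.default_rng(seed)
    hits_post = hits_adj = 0
    group = np.repeat([0, 1], n)                           # n participants per level
    for _ in range(reps):
        pre = rng.normal(size=2 * n)                       # pretest scores
        post = r * pre + np.sqrt(1 - r**2) * rng.normal(size=2 * n) + effect * group
        # (a) posttest only: error df = 2n - 2
        if stats.ttest_ind(post[group == 1], post[group == 0]).pvalue < alpha:
            hits_post += 1
        # (b) regress posttest on group and pretest: error df = 2n - 3
        X = np.column_stack([np.ones(2 * n), group, pre])
        beta, *_ = np.linalg.lstsq(X, post, rcond=None)
        resid = post - X @ beta
        df = 2 * n - X.shape[1]
        se = np.sqrt(resid @ resid / df * np.linalg.inv(X.T @ X)[1, 1])
        if 2 * stats.t.sf(abs(beta[1] / se), df) < alpha:
            hits_adj += 1
    return hits_post / reps, hits_adj / reps

for r in (0.2, 0.8):
    post_only, adjusted = estimate_power(r)
    print(f"r = {r}: posttest-only power = {post_only:.2f}, "
          f"pretest-adjusted power = {adjusted:.2f}")
```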
Note that for any data set, applying an analysis that ignores the correlation between correlated scores will produce a result that is more significant, but spurious. Note also that the advantage of pretesting, when there is one, is that effects of the independent variable are more likely to be detected, not that spurious findings are more likely to be avoided. This advantage in detection actually comes at the cost of an increased likelihood of spurious findings, as we shall see.

External Validity

External validity expresses the extent to which the results of an experiment can be generalized beyond the conditions of the experiment. This usually means to different populations measured under different conditions, sometimes with different operational definitions of the variables. It would logically be impossible to have higher external validity than internal validity. In fact, there is little or no scientific basis for estimating the external validity of the great majority of experiments. If a sample were randomly and independently drawn from a specified population, one could lawfully generalize results to that population, but such sampling is seldom done, except in survey research. (Random sampling is not to be confused with random assignment to experimental conditions.) Generalization to unmeasured populations and conditions is usually done on faith alone. Fortunately, it seems to work pretty well. If manipulations designed to improve health, for example, are basically similar, they usually produce effects in the same direction for 12-year-old city girls as for 12-year-old rural girls, although very often to a greater or lesser extent, and there are plenty of exceptions.

It is indeed fortunate that many experiments in many disciplines have sufficient, albeit unknown, external validity to be generally replicable with different populations and subtle or not so subtle differences in experimental conditions; otherwise, scientific research in these disciplines would have little value. High external validity is often the result of [1] robust experimental manipulations […] validity. However, such experiments may be useful for testing theory, or for identifying variables that affect scores, even if those effects are small. Pretesting, however, the main subject of this treatise, involves a different compromise. Pretests may increase power, but do so at the cost of reducing the internal validity of experiments. That is to say, pretests can improve our ability to detect small effects of the independent variable, but reduce our ability to be certain that the effects we identify are actually those of the independent variable, because they create a confound.

Pretest-treatment Interactions

A classic methodology text (Kerlinger 1964, p. 339) identified that confound long ago: the test-treatment or sensitization-treatment interaction. The pretest may cause the experimental manipulation to produce an effect it would otherwise not have produced. Or it may cause different levels of the independent variable to have effects greater or smaller than they would have had if not preceded by a pretest. This is not the effect of testing itself; if all participants are pretested, the effect of the pretest per se will on average be the same for all. Pretesting will be a constant like all the other things held constant in a good experiment, not a variable whose effects could be mistaken for those of the independent variable. The confound is the interaction of the pretest with the experimental manipulation (the different levels of the independent variable). It is not difficult to recognize that being pretested could make a participant more responsive or less responsive to a manipulation. Children pretested on mathematics, even if given no feedback, may recognize what they need to learn in order to do better next time, and consequently respond well to experimental instruction. "For instance, in education a pretest can be the impetus for learning the correct answers to items and thus increase the posttest level of performance" (Cook and Campbell 1979, p. 102). This is true whether or not the pretest has two different forms. Children in a control group, however, have no opportunity to apply the insight they gained from the pretest. Even though the pretest is a constant, it can potentiate responses at different levels of the independent variable differently, because those levels are different. That is the nature of an interaction between variables. The interaction could also be negative.
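The sensitization confound can be mocked up in a few lines. In the sketch below, the effect sizes, cell sizes, and data-generating rule are invented solely for illustration: instruction itself is worth 0.3 standard deviations, and being pretested adds a further 0.3, but only for children who then receive instruction. A pretested experiment attributes the whole 0.6 to the instruction; only a comparison with hypothetical unpretested groups, which a single pretested experiment does not have, exposes the confound.

```python
# Sketch of the sensitization confound: the pretest boosts only the
# pretested experimental group, so a pretested experiment overstates the
# effect of instruction. Effect sizes and cell sizes are illustrative
# assumptions, not data from the article.
import numpy as np

rng = np.random.default_rng(2)
n = 5000                                   # large cells so the bias is easy to see
true_effect, sensitization = 0.3, 0.3

def cell(treated, pretested):
    score = rng.normal(size=n)                       # baseline ability plus chance
    score += true_effect * treated                   # effect of instruction itself
    score += sensitization * treated * pretested     # pretest x treatment interaction
    return score

pretested_diff   = cell(1, 1).mean() - cell(0, 1).mean()   # usual pretest-posttest study
unpretested_diff = cell(1, 0).mean() - cell(0, 0).mean()   # same study without pretests
print(f"apparent effect with pretests:    {pretested_diff:.2f}")    # about 0.6
print(f"apparent effect without pretests: {unpretested_diff:.2f}")  # about 0.3
```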