A Primer on Experimental and Quasi-Experimental Design
16 pages · PDF · 1,020 KB
Recommended publications
-
UNIT 3 QUASI EXPERIMENTAL DESIGN Factorial Design
Structure
3.0 Introduction
3.1 Objectives
3.2 Meaning of Quasi Experimental Design
3.3 Difference Between Quasi Experimental Design and True Experimental Design
3.4 Types of Quasi Experimental Design
3.4.1 Non-Equivalent Group Posttest Only Design
3.4.2 Non-Equivalent Control Group Design
3.4.3 The Separate Pretest-Posttest Sample Design
3.4.4 The Double Pretest Design
3.4.5 The Switching Replications Design
3.4.6 Mixed Factorial Design
3.4.7 Interrupted Time Series Design
3.4.8 Multiple Time Series Design
3.4.9 Repeated Treatment Design
3.4.10 Counterbalanced Design
3.5 Advantages and Disadvantages of Quasi Experimental Design
3.6 Let Us Sum Up
3.7 Unit End Questions
3.8 Glossary
3.9 Suggested Readings

3.0 INTRODUCTION
For most of the history of scientific psychology, it has been accepted that experimental research, with its twin assets of random assignment and manipulation of the independent variable by the researcher, is the ideal method for psychological research. Some researchers believe this so strongly that they avoid studying important questions about human personality, sex differences in behaviour, and other subjects that do not lend themselves to experimental research. A few decades ago, researchers in psychology became interested in applied issues, conducting research on how students learn in school, how social factors influence the behaviour of an individual, how to motivate factory workers to perform at a higher level, and so on. These research questions cannot be answered by laboratory experiments; one has to go into the field, to real-life settings such as the classroom, to find answers to them. -
Internal and External Validity: Can You Apply Research Study Results to Your Patients? Cecilia Maria Patino, Juliana Carvalho Ferreira
J Bras Pneumol. 2018;44(3):183-183. CONTINUING EDUCATION: SCIENTIFIC METHODOLOGY. http://dx.doi.org/10.1590/S1806-37562018000000164

CLINICAL SCENARIO
In a multicenter study in France, investigators conducted a randomized controlled trial to test the effect of prone vs. supine positioning ventilation on mortality among patients with early, severe ARDS. They showed that prolonged prone-positioning ventilation decreased 28-day mortality [hazard ratio (HR) = 0.39; 95% CI: 0.25-0.63].(1)

STUDY VALIDITY
The validity of a research study refers to how well the results among the study participants represent true findings among similar individuals outside the study. This concept of validity applies to all types of clinical studies, including those about prevalence, associations, interventions, and diagnosis. The validity of a research study includes two domains: internal and external validity. External validity is the extent to which the results of a study are generalizable to patients in our daily practice, especially the population that the sample is thought to represent.

Lack of internal validity implies that the results of the study deviate from the truth, and, therefore, we cannot draw any conclusions; hence, if the results of a trial are not internally valid, external validity is irrelevant.(2) Lack of external validity implies that the results of the trial may not apply to patients who differ from the study population and, consequently, could lead to low adoption of the treatment tested in the trial by other clinicians.

INCREASING VALIDITY OF RESEARCH STUDIES
To increase internal validity, investigators should ensure -
Research Methods in Social Psychology 2 Antony S.R
KEY CONCEPTS
confederate · confounding · construct · construct validity · control group · convergent validity · cover story · debriefing · demand characteristics · dependent variable · discourse analysis · experiment · experimental group · experimental scenario · experimenter expectancy effects · external validity · factorial experiment · field experiment · Hawthorne effect · hypothesis · implicit measures · independent variable · interaction effect · interaction process analysis (IPA) · internal validity · Internet experiments · main effect · manipulation check · mediating variable · meta-analysis · one-shot case study · operationalization · participant observation · post-experimental enquiry · post-test only control group design · quasi-experiment · quota sample · random allocation · reactivity · reliability · sampling · simple random sample · social desirability · survey research · theory · triangulation · true randomized experiment · unobtrusive measures · validity · variable

CHAPTER OUTLINE
This chapter provides an overview of research methods in social psychology, from the development of theory to the collection of data. After describing three quantitative research strategies (survey research, experiments and quasi-experiments), the chapter briefly discusses qualitative approaches, focusing on discourse analysis. There follows a description of the key elements of experiments and of threats to validity in experimental research, and a discussion of problems with experimental research in social psychology. The final section of the chapter contains a description of three methods of data collection (observation, self-report and implicit measures).

Introduction
How do social psychologists develop their theories? How do social psychologists go about testing their theories? Methods provide a means of translating a researcher's ideas into actions.
These ideas usually revolve around one or more questions about a phenomenon. -
As an "Odds Ratio", the Ratio of the Odds of Being Exposed by a Case Divided by the Odds of Being Exposed by a Control
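The odds-ratio definition quoted above can be made concrete with a small worked example; the 2x2 counts below are hypothetical, invented purely for illustration.

```python
# Hypothetical case-control table:
#              exposed   unexposed
#   cases         40        60
#   controls      20        80
cases_exposed, cases_unexposed = 40, 60
controls_exposed, controls_unexposed = 20, 80

odds_cases = cases_exposed / cases_unexposed            # odds of exposure among cases
odds_controls = controls_exposed / controls_unexposed   # odds of exposure among controls

# Odds ratio: odds of exposure among cases divided by odds among controls
odds_ratio = odds_cases / odds_controls
print(round(odds_ratio, 2))  # 2.67
```

With these counts, the odds of exposure are about 0.67 among cases and 0.25 among controls, so the cases have roughly 2.67 times the odds of having been exposed.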
STUDY DESIGNS IN BIOMEDICAL RESEARCH: VALIDITY & SAMPLE SIZE

Validity is an important concept; it involves assessment against accepted absolute standards or, in a milder form, checking whether the evaluation appears to cover its intended target or targets.

INFERENCES & VALIDITIES
Two major levels of inference are involved in interpreting a study such as a clinical trial. The first level concerns internal validity: the degree to which the investigator draws the correct conclusions about what actually happened in the study. The second level concerns external validity (also referred to as generalizability or inference): the degree to which these conclusions can be appropriately applied to people and events outside the study. Schematically: the research question expresses truth in the universe, the study plan expresses truth in the study, and the study data yield the findings in the study; external validity links truth in the universe to truth in the study, while internal validity links truth in the study to the findings in the study.

A Simple Example: An experiment on the effect of Vitamin C on the prevention of colds could be conducted as follows. A number n of children (the sample size) are randomized; half are each given a 1,000-mg tablet of Vitamin C daily during the test period and form the "experimental group". The remaining half, who make up the "control group", receive a "placebo" - an identical tablet containing no Vitamin C - also on a daily basis. At the end, the "number of colds per child" could be chosen as the outcome/response variable, and the means of the two groups are compared. Assignment of the treatments (factor levels: Vitamin C or placebo) to the experimental units (children) was performed using a process called "randomization". The purpose of randomization is to "balance" the characteristics of the children in each treatment group, so that any difference in the response variable, the number of cold episodes per child, can rightly be attributed to the effect of the predictor - the difference between Vitamin C and placebo.
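The Vitamin C example above can be sketched in a few lines of Python; the random group assignment mirrors the description, but the cold counts are hypothetical numbers chosen only to illustrate comparing group means.

```python
import random
from statistics import mean

def randomize(units, seed=42):
    """Randomly split experimental units into two equal-sized treatment groups."""
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

children = list(range(20))  # 20 hypothetical children (the sample size n)
vitamin_c_group, placebo_group = randomize(children)

# Outcome/response variable: number of colds per child (hypothetical data)
colds_vitamin_c = [1, 0, 2, 1, 1, 0, 2, 1, 0, 1]
colds_placebo = [2, 3, 1, 2, 2, 3, 1, 2, 2, 2]

# Because randomization balances the groups' characteristics, the difference
# in means can be attributed to the treatment rather than to pre-existing
# differences between the groups
difference = mean(colds_placebo) - mean(colds_vitamin_c)
print(len(vitamin_c_group), len(placebo_group), round(difference, 2))  # 10 10 1.1
```

A sketch only: in practice the comparison would be accompanied by a significance test and a sample-size calculation, which is the point of the excerpt's title.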
Introduction to Difference in Differences (DID) Analysis
Hsueh-Sheng Wu, CFDR Workshop Series, June 15, 2020

Outline of Presentation
• What Difference-in-Differences (DID) analysis is
• Threats to internal and external validity
• Compare and contrast three different research designs
• Graphic presentation of the DID analysis
• Link between regression and DID
• Stata -diff- module
• Sample Stata code
• Conclusions

What Is Difference-in-Differences Analysis?
• Difference-in-Differences (DID) analysis is a statistical technique that analyzes data from a non-equivalent control group design and makes a causal inference about the effect of an independent variable (e.g., an event, treatment, or policy) on an outcome variable
• A non-equivalent control group design establishes the temporal order of the independent variable and the dependent variable, so it establishes which variable is the cause and which one is the effect
• A non-equivalent control group design does not randomly assign respondents to the treatment or control group, so the treatment and control groups may not be equivalent in their characteristics and reactions to the treatment
• DID is commonly used to evaluate the outcomes of policies or natural events (such as Covid-19)

Internal and External Validity
• When designing an experiment, researchers need to consider how extraneous variables may threaten the internal validity and external validity of the experiment
• Internal validity refers to the extent to which an experiment can establish the causal relation between the independent variable and -
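The DID estimate itself is just arithmetic on four group means. A minimal sketch, with hypothetical outcome data for a non-equivalent control group design:

```python
from statistics import mean

# Hypothetical outcomes measured before (pre) and after (post) an
# event/treatment/policy, for a treatment group and a non-equivalent
# control group
treat_pre = [10, 12, 11, 9]
treat_post = [15, 17, 16, 14]
ctrl_pre = [10, 11, 10, 9]
ctrl_post = [12, 13, 12, 11]

# DID estimate: the change in the treatment group minus the change in the
# control group; the control group's change serves as the counterfactual
# trend the treatment group would have followed without the treatment
did_estimate = (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))
print(did_estimate)  # 3.0
```

This same quantity is what the coefficient on the treatment-by-period interaction would be in a regression of the outcome on a treatment indicator, a post-period indicator, and their product, which is the link between regression and DID listed in the outline.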
Randomized Controlled Trials, Development Economics and Policy Making in Developing Countries
Esther Duflo, Department of Economics, MIT; Co-Director, J-PAL
[Joint work with Abhijit Banerjee and Michael Kremer]

Randomized controlled trials have greatly expanded in the last two decades
• Randomized controlled trials were progressively accepted as a tool for policy evaluation in the US through many battles from the 1970s to the 1990s.
• In development, the rapid growth starts after the mid 1990s
– Kremer et al., studies on Kenya (1994)
– PROGRESA experiment (1997)
• Since 2000, the growth has been very rapid.

J-PAL | THE ROLE OF RANDOMIZED EVALUATIONS IN INFORMING POLICY

Cameron et al. (2016): RCTs in development
[Figure 1: Number of published RCTs per year, 1975-2015]

BREAD affiliates doing RCTs
[Figure 4: Fraction of BREAD affiliates and fellows with one or more RCTs, by PhD year; total number of fellows and affiliates is 166]

Top Journals

Many sectors, many countries

Why have RCTs had so much impact?
• Focus on identification of causal effects (across the board)
• Assessing external validity
• Observing unobservables
• Data collection
• Iterative experimentation
• Unpacking impacts

Focus on Identification… across the board!
• The key advantage of RCTs was perceived to be a clear identification advantage
• With an RCT, since those who receive the treatment are randomly selected within a relevant sample, any difference between treatment and control must be due to the treatment
• Most criticisms of experiments also focus on limits to identification (imperfect randomization, attrition, etc. -
Psychology and Its History
CHAPTER 1: INTRODUCING PSYCHOLOGY'S HISTORY
• Distinguish between primary and secondary sources of historical information and describe the kinds of primary source information typically found by historians in archives
• Explain how the process of doing history can produce some degree of confidence that truth has been attained

PSYCHOLOGY AND ITS HISTORY
One hundred is a nice round number, and a one-hundredth anniversary is ample cause for celebration. In recent years, psychologists with a sense of history have celebrated often. The festivities began back in 1979, with the centennial of the founding of Wilhelm Wundt's laboratory at Leipzig, Germany. In 1992, the American Psychological Association (APA) created a yearlong series of events to commemorate the centennial of its founding in G. Stanley Hall's study at Clark University on July 8, 1892. During the centennial year, historical articles appeared in all of the APA's journals and a special issue of American Psychologist focused on history; several books dealing with APA's history were commissioned (e.g., Evans, Sexton, & Cadwallader, 1992); regional conventions had historical themes; and the annual convention in Washington featured events ranging from the usual symposia and invited addresses on history to a fancy dress ball at Union Station featuring period (1892) costumes and a huge APA birthday cake. Interest in psychology's history has not been limited to centennial celebrations, of course. Histories of psychology were written soon after psychology itself appeared on the academic scene (e.g., Baldwin, 1913), and at least two of psychology's most famous books, E. G. -
Psychological Research
Center for Open Access in Science
Open Journal for Psychological Research
2019 ● Volume 3 ● Number 1
https://doi.org/10.32591/coas.ojpr.0301
ISSN (Online) 2560-5372

Publisher: Center for Open Access in Science (COAS), Belgrade, SERBIA
https://www.centerprode.com/ojpr.html ● [email protected]

Editorial Board:
Nikolaos Makris (PhD), Professor, Demokritus University of Thrace, School of Education, Alexandroupolis, GREECE
Nikolina Kenig (PhD), Professor, Ss. Cyril and Methodius University of Skopje, Faculty of Philosophy, NORTH MACEDONIA
Ljupčo Kevereski (PhD), Professor, University “St. Kliment Ohridski” - Bitola, Faculty of Education, NORTH MACEDONIA
Marius Drugas (PhD), Associate Professor, University of Oradea, Faculty of Social and Humanistic Sciences, ROMANIA
Teuta Danuza (MA), Professor, University of Prishtina, Faculty of Education, KOSOVO
Valbona Habili Sauku (PhD), Senior Lecturer, University of Tirana, Faculty of Social Sciences, ALBANIA
Silva Ibrahimi (PhD), Lecturer, Albanian University, Faculty of Social Sciences, Tirana, ALBANIA

Responsible Editor: Goran Pešić, Center for Open Access in Science, Belgrade

CONTENTS, 2019, 3(1), 1-30:
1 Correlates of Job Satisfaction Among Civil Servants Close to Retirement in Lagos State (Adedoyin-Alao Olusola & Ajibola Abdulrahamon Ishola)
11 Nonattachment, Trait Mindfulness, Meditation Practice and Life Satisfaction: Testing the Buddhism Psychology Model in Indonesia (Yohanes Budiarto)
23 Importance of Sex Education from the Adolescents’ Perspective: A Study in Indonesia (Siti Maimunah)
-
The Research Process
2 THE RESEARCH PROCESS

Chapter contents: Learning Objectives: Fast Facts · Testing Before Learning · Can Money Buy You Happiness? · Research Approach · Research Strategies · Research Toolbox · Randomized “True” Experiment · Quasi-Experiments · Correlational Research · Sampling and Survey Design · Research in the News: Building Psychological Knowledge With Survey Research · Performance-Based Measures · Qualitative Research · Literature Review · Evaluating and Critiquing Research · Scientific Skepticism · Peer Review · Reliability of Measurements · Validity and Generalizability · Multicultural Analysis · Stat Corner: Sample Size Matters · Conclusion

LEARNING OBJECTIVES: FAST FACTS
2.1 The two major research approaches - nonexperimental and experimental - offer different yet complementary methods to investigate a question, to evaluate a theory, and to test a hypothesis.
2.2 Both approaches can call upon a toolbox of methods and study designs, and all are evaluated on the basis of reliability and validity.
2.3 Science focuses only on peer-reviewed research, which is evaluated using principles of reliability and validity.
2.4 Cross-cultural research reinforces the critical importance of reliability and validity in evaluating evidence and in optimizing the generalizability of psychological knowledge.

TESTING BEFORE LEARNING
Here is the answer. You come up with the question.
1. One or more independent variables are deliberately manipulated to assess their effects on one or more dependent variables.
a. What is a survey study?
b.
4. The best research design to test cause-and-effect relationships.
a. What is a quasi-experiment?
-
Research Methods in Social Psychology
By Rajiv Jhangiani, Kwantlen Polytechnic University

Social psychologists are interested in the ways that other people affect thought, emotion, and behavior. Exploring these concepts requires special research methods. Following a brief overview of traditional research designs, this module introduces how complex experimental designs, field experiments, naturalistic observation, experience sampling techniques, survey research, subtle and nonconscious techniques such as priming, and archival research and the use of big data may each be adapted to address social psychological questions. This module also discusses the importance of obtaining a representative sample, along with some ethical considerations that social psychologists face.

Learning Objectives
• Describe the key features of basic and complex experimental designs.
• Describe the key features of field experiments, naturalistic observation, and experience sampling techniques.
• Describe survey research and explain the importance of obtaining a representative sample.
• Describe the implicit association test and the use of priming.
• Describe the use of archival research techniques.
• Explain five principles of ethical research that most concern social psychologists.

Introduction
Are you passionate about cycling? Norman Triplett certainly was. At the turn of the last century he studied the lap times of cycling races and noticed a striking fact: riding in competitive races appeared to improve riders' times by about 20-30 seconds per mile compared with when they rode the same courses alone. Triplett suspected that the riders' enhanced performance could not be explained simply by the slipstream caused by other cyclists blocking the wind. To test his hunch, he designed what is widely described as the first experimental study in social psychology (published in 1898!) - in this case, having children reel in a length of fishing line as fast as they could. -
Why Internal Validity Is Not Prior to External Validity
Contributed Paper, PSA 2012 (draft)
Johannes Persson & Annika Wallin, Lund University, Sweden
Corresponding author: [email protected]

[Abstract: We show that the common claim that internal validity should be understood as prior to external validity has, at least, three epistemologically problematic aspects: experimental artefacts, the implications of causal relations, and how the mechanism is measured. Each aspect demonstrates how important external validity is for the internal validity of the experimental result.]

1) Internal and external validity: perceived tension and claimed priority
Donald T. Campbell introduced the concepts of internal and external validity in the 1950s. Originally designed for research related to personality and personality change, the conceptual pair was soon extended to educational and social research, and has since spread to many more disciplines. Without a doubt, the concepts capture two features of research that scientists are aware of in their daily practice. Researchers aim to make correct inferences both about that which is actually studied (internal validity), for instance in an experiment, and about what the results "generalize to" (external validity). Whether or not the language of internal and external validity is used in their disciplines, the tension between these two kinds of inference is often experienced. In addition, it is often claimed that one of the two is prior to the other, and the sense in which internal validity is claimed to be prior to external validity is, at least, both temporal and epistemic. For instance, Francisco Guala claims that: "Problems of internal validity are chronologically and epistemically antecedent to problems of external validity: it does not make much sense to ask whether a result is valid outside the experimental circumstances unless we are confident that it does therein" (Guala, 2003, 1198).