Common Methodological Problems in Randomized Controlled Trials of Preventive Interventions

Total Pages: 16

File Type: PDF, Size: 1020 KB

Common Methodological Problems in Randomized Controlled Trials of Preventive Interventions

Christine M. Steeger†, PhD, [email protected]
Pamela R. Buckley†, PhD, [email protected]
Fred C. Pampel, PhD, [email protected]
Charleen J. Gust, MA, [email protected]
Karl G. Hill, PhD, [email protected]

Institute of Behavioral Science, University of Colorado Boulder, 1440 15th St., Boulder, CO 80309
† These authors contributed equally to this work.

Correspondence concerning this article should be addressed to Christine Steeger, Institute of Behavioral Science, University of Colorado Boulder, 1440 15th St., Boulder, CO 80309; phone: 303-735-7146; [email protected]

Acknowledgements: The authors would like to thank Abigail Fagan, Delbert Elliott, Denise Gottfredson, and Amanda Ladika for their comments and critical read of the manuscript, Sharon Mihalic for paper concepts, and Jennifer Balliet for participating in data entry and data coding.

Declarations
Funding: This study was funded by Arnold Ventures.
Conflicts of interest/competing interests: The authors declare that they are members of the Blueprints for Healthy Youth Development staff and that they have no financial or other conflict of interest with respect to any of the specific interventions, policies, or procedures discussed in this article.
Ethics approval/consent: This paper does not contain research with human participants or animals.
Data: Available from authors upon request.
Materials and/or Code availability: n/a
Author contributions: Concepts and design (CS; PB; FP); data entry, coding, management, and analysis (CS; PB; FP; CG); drafting of manuscript (CS; PB; FP); intellectual contributions, reviewing, and critical editing of manuscript content (CS; PB; FP; CG; KH). All authors have read and approved the final manuscript.

Abstract

Objective. Randomized controlled trials (RCTs) are often considered the gold standard in evaluating whether intervention results are in line with causal claims of beneficial effects. However, given that poor design and incorrect analysis may lead to biased outcomes, simply employing an RCT is not enough to say an intervention "works." This paper applies a subset of the Society for Prevention Research (SPR) Standards of Evidence for Efficacy, Effectiveness, and Scale-up Research, with a focus on internal validity (making causal inferences), to determine the degree to which RCTs of preventive interventions are well-designed and analyzed, and whether authors provide a clear description of the methods used to report their study findings.

Methods. We conducted a descriptive analysis of 851 RCTs published from 2010-2020 and reviewed by the Blueprints for Healthy Youth Development web-based registry of scientifically proven and scalable interventions. We used Blueprints' evaluation criteria that correspond to a subset of SPR's standards of evidence.

Results. Only 22% of the sample satisfied important criteria for minimizing biases that threaten internal validity. Overall, we identified an average of 1-2 methodological weaknesses per RCT. The most frequent sources of bias were problems related to baseline non-equivalence (i.e., differences between conditions at randomization) or differential attrition (i.e., differences between completers versus attritors, or differences between study conditions, that may compromise the randomization).
Additionally, over half the sample (51%) had missing or incomplete tests to rule out these potential sources of bias.

Conclusions. Most preventive intervention RCTs need improvement in rigor to permit causal inference claims that an intervention is effective. Researchers also must improve reporting of methods and results so that methodological quality can be fully assessed. These advancements will increase the usefulness of preventive interventions by ensuring the credibility and usability of RCT findings.

Keywords: Randomized controlled trial, RCT, preventive interventions, internal validity, CONSORT

Introduction

Randomized controlled trials (RCTs) are often considered the gold standard for determining experimental validity and the causal effects of preventive interventions (Shadish, Cook, & Campbell, 2002; West & Thoemmes, 2010). With high-quality implementation, RCTs allow for causal inferences and estimates of average treatment effects that are more reliable and credible than those from other empirical methods (Deaton & Cartwright, 2018). Despite the strength and appropriateness of the RCT for evaluating an intervention (i.e., program, practice, or policy), simply using an RCT design and reporting results is not sufficient to determine whether an intervention "works." Given that poorly implemented RCTs may produce biased outcomes (Schulz, Altman, Moher, & the CONSORT Group, 2010), an RCT must be correctly designed, implemented, and analyzed in order to make causal inferences and claim beneficial effects of an intervention. That is, RCTs must be internally valid to minimize several sources of bias (systematic error).

When feasible and appropriate, randomization is necessary to ensure sound causal conclusions of positive intervention effects, which inform policy and practice decisions for communities (Montgomery et al., 2018). Social and psychological interventions, however, are often complex and contextually dependent upon the difficult-to-control environments in which they are delivered (e.g., schools, correctional facilities, health care settings; Bonell, 2002; Grant, Montgomery, et al., 2013; Grant, Mayo-Wilson, Melendez-Torres, & Montgomery, 2013). Understanding RCTs therefore requires a detailed, transparent description of the interventions tested and the methods used to evaluate them (Grant, Montgomery, et al., 2013). Transparent reporting is crucial for assessing the validity and efficacy or effectiveness of intervention studies used to inform evidence-based decision making and policymaking.

To guide social science researchers in the requisite methodological criteria for establishing efficacy (i.e., the extent to which an intervention does more good than harm when delivered under optimal conditions) and effectiveness (i.e., intervention effects when delivered in real-world conditions; Flay et al., 2005), two seminal papers on methodological standards of evidence were developed by prevention scientists and endorsed by the Society for Prevention Research (SPR). With a goal of increasing consistency in reviews of prevention research, these standards originated in Flay et al. (2005) and were updated in Gottfredson et al. (2015) as the prevention science field progressed in the number and quality of preventive interventions for reducing youth problem behaviors. How well researchers apply and report the SPR standards of evidence criteria for high-quality RCTs in the prevention science field, however, is unknown.
This paper uses a subset of the SPR standards of evidence that must be met for preventive interventions to be judged "tested and efficacious" or "tested and effective" (Flay et al., 2005; Gottfredson et al., 2015). We focus on threats to internal validity to determine whether RCTs of preventive interventions are well-implemented and well-reported – and if not, what are the most common design and analysis flaws? And what information on methods is missing? This study's larger goal is to improve the design, analysis, and reporting of potential threats to internal validity in intervention research that uses an experimental design.

To answer these research questions, we present findings from a large-scale descriptive analysis of RCTs testing intervention program efficacy or effectiveness using the Blueprints for Healthy Youth Development online clearinghouse database. Blueprints identifies scientifically proven and scalable interventions that prevent or reduce the likelihood of antisocial behavior and promote a healthy course of youth development (Buckley, Fagan, Pampel, & Hill, 2020; Fagan & Buchanan, 2016; Mihalic & Elliott, 2015).

The Status of RCTs in the Prevention Science Field

Increased public and private funder investments in experimental studies of social programs have led to a higher volume of preventive intervention research over the past several decades (Bastian, Glasziou, & Chalmers, 2010). Along with a greater number of published intervention studies, there is some evidence (up to 2010) that the methodological rigor in designing and evaluating RCTs has improved over time for trials in the medical and child health fields (Falagas, Grigori, & Ioannidou, 2009; Thomson et al., 2010). Still, more recent publications have highlighted that many RCTs in the social sciences have design and/or analysis flaws (Ioannidis, 2018), which contribute to sources of bias that weaken internal validity and call causal claims of intervention effectiveness into question. In addition, the use of RCT findings for informing policy and practice decisions is hindered by poor or incomplete reporting of study design, procedures, and analysis (Montgomery et al., 2018; Walleser, Hill, & Bero, 2011).

Grant, Mayo-Wilson, and colleagues (2013) were the first (to our knowledge) to conduct a comprehensive review of the reporting quality of social programs. They identified a sample of 239 RCTs among 40 high-impact-factor academic journals publishing complex interventions in 2010 in the fields of clinical psychology, criminology, education, and social work. Findings revealed that many standards concerning randomization procedures were poorly reported, such as participant allocation to conditions,
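The two sources of bias named in the abstract (baseline non-equivalence and differential attrition) lend themselves to simple statistical checks. Below is a minimal, illustrative sketch of such checks in Python, not the authors' actual procedure; the DataFrame `df`, the "condition" and "completed" columns, and the covariate names are all hypothetical.

```python
# Hypothetical sketch of baseline-equivalence and differential-attrition
# checks; `df`, "condition", "completed", and covariate names are invented.
import pandas as pd
from scipy import stats

def baseline_equivalence(df: pd.DataFrame, covariates: list) -> pd.DataFrame:
    """Welch t-test comparing treatment vs. control on each baseline covariate."""
    rows = []
    for cov in covariates:
        treat = df.loc[df["condition"] == "treatment", cov].dropna()
        ctrl = df.loc[df["condition"] == "control", cov].dropna()
        t, p = stats.ttest_ind(treat, ctrl, equal_var=False)
        rows.append({"covariate": cov, "t": t, "p": p})
    return pd.DataFrame(rows)

def differential_attrition(df: pd.DataFrame):
    """Chi-square test of whether dropout rates differ by condition."""
    table = pd.crosstab(df["condition"], df["completed"])
    chi2, p, dof, expected = stats.chi2_contingency(table)
    return chi2, p
```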
Recommended publications
  • Internal Validity Is About Causal Interpretability
    Internal Validity is about Causal Interpretability. Before we can discuss internal validity, we have to discuss different types of variables and review causal research hypotheses (RH:s) and the evidence needed to support them. Topics covered:
    • Measured & Manipulated Variables & Constants
    • Causes, Effects, Controls & Confounds
    • Components of Internal Validity
    • "Creating" initial equivalence
    • "Maintaining" ongoing equivalence
    • Interrelationships between Internal Validity & External Validity
    Every behavior/measure used in a research study is either a Constant -- all the participants in the study have the same value on that behavior/measure -- or a Variable -- at least some of the participants in the study have different values on that behavior/measure. And every behavior/measure is either Measured -- the value is obtained by observation or self-report of the participant (often called a "subject constant/variable") -- or Manipulated -- the value is controlled, delivered, or determined by the researcher (often called a "procedural constant/variable"). So, every behavior/measure in any study is one of four types: a measured (subject) constant, a measured (subject) variable, a manipulated (procedural) constant, or a manipulated (procedural) variable.
    Identify each of the following (as one of the four types above):
    • Participants reported practicing between 3 and 10 times
    • All participants were given the same set of words to memorize
    • Each participant reported they were a Psyc major
    • Each participant was given either the "homicide" or the "self-defense" vignette to read
    From before: circle the manipulated/causal and underline the measured/effect variable in each. Causal RH: -- differences in the amount or kind of one behavior cause/produce/create/change/etc.
  • Validity and Reliability of the Questionnaire for Compliance with Standard Precaution for Nurses
    Marília Duarte Valim, Maria Helena Palucci Marziale, Miyeko Hayashida, Fernanda Ludmilla Rossi Rocha, Jair Lício Ferreira Santos. Validity and reliability of the Questionnaire for Compliance with Standard Precaution. Rev Saúde Pública 2015;49:87. Original Articles. DOI: 10.1590/S0034-8910.2015049005975
    ABSTRACT
    OBJECTIVE: To evaluate the validity and reliability of the Questionnaire for Compliance with Standard Precaution for nurses.
    METHODS: This methodological study was conducted with 121 nurses from health care facilities in São Paulo's countryside, represented by two high-complexity and three average-complexity health care facilities. Internal consistency was calculated using Cronbach's alpha, and stability was calculated by the intraclass correlation coefficient through test-retest. Convergent, discriminant, and known-groups construct validity techniques were conducted.
    RESULTS: The questionnaire was found to be reliable (Cronbach's alpha: 0.80; intraclass correlation coefficient: 0.97). In regard to convergent and discriminant construct validity, strong correlations were found between compliance with standard precautions, the perception of a safe environment, and a smaller perception of obstacles to following such precautions (r = 0.614 and r = 0.537, respectively). The nurses who were trained on the standard precautions and worked in the health care facilities of higher complexity were shown to comply more (p = 0.028 and p = 0.006, respectively).
    CONCLUSIONS: The Brazilian version of the Questionnaire for Compliance with Standard Precaution was shown to be valid and reliable.
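As a rough illustration of the internal-consistency statistic reported above, here is a sketch of Cronbach's alpha in Python; the item data are invented for the example and this is not the authors' code.

```python
# Minimal sketch of Cronbach's alpha:
# alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: one column per questionnaire item, one row per respondent."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented example: 5 respondents answering 4 Likert-type items
items = pd.DataFrame({
    "q1": [4, 5, 3, 4, 2],
    "q2": [4, 4, 3, 5, 2],
    "q3": [3, 5, 2, 4, 1],
    "q4": [4, 5, 3, 5, 2],
})
print(round(cronbach_alpha(items), 2))
```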
  • Comorbidity Scores
    Bias: Introduction of issue and background papers. Sebastian Schneeweiss, MD, ScD, Professor of Medicine and Epidemiology, Division of Pharmacoepidemiology and Pharmacoeconomics, Dept of Medicine, Brigham & Women's Hospital / Harvard Medical School.
    Potential conflicts of interest: PI, Brigham & Women's Hospital DEcIDE Center for Comparative Effectiveness Research (AHRQ); PI, DEcIDE Methods Center (AHRQ); Co-Chair, Methods Core of the Mini-Sentinel System (FDA); Member, national PCORI Methods Committee. No paid consulting or speaker fees from pharmaceutical manufacturers. Consulting in past year: WHISCON LLC, Booz & Co, Aetion. Investigator-initiated research grants to the Brigham from Pfizer, Novartis, Boehringer-Ingelheim. Multiple grants from NIH.
    Objective of Comparative Effectiveness Research: efficacy asks "Can it work?" while effectiveness asks "Does it work in routine care?" Most RCTs for drug approval use a placebo (or usual care) comparison; the goal of CER is active (head-to-head) comparison. Effectiveness = Efficacy × Adherence × Subgroup effects (+/-) -- the RCT versus the reality of routine care (Cochrane A., Nuffield Provincial Trust, 1972). CER may use baseline randomization or non-randomized primary or secondary data.
    Challenges of observational research:
    • Measurement/surveillance-related biases: informative missingness/misclassification
    • Selection-related biases: confounding; informative treatment changes/discontinuations
    • Time-related biases: immortal time bias; temporality; effect window
    • (Multiple comparisons)
    Informative missingness:
  • Statistical Analysis 8: Two-Way Analysis of Variance (ANOVA)
    Statistical Analysis 8: Two-way analysis of variance (ANOVA)
    Research question type: Explaining a continuous variable with 2 categorical variables.
    What kind of variables? A continuous (scale/interval/ratio) variable and 2 independent categorical variables (factors).
    Common applications: Comparing means of a single variable at different levels of two conditions (factors) in scientific experiments.
    Example: The effective life (in hours) of batteries is compared by material type (1, 2, or 3) and operating temperature: Low (-10˚C), Medium (20˚C), or High (45˚C). Twelve batteries are randomly selected from each material type and are then randomly allocated to each temperature level. The resulting life of all 36 batteries is shown below:

    Table 1: Life (in hours) of batteries by material type and temperature
    Material type | Low (-10˚C)        | Medium (20˚C)      | High (45˚C)
    1             | 130, 155, 74, 180  | 34, 40, 80, 75     | 20, 70, 82, 58
    2             | 150, 188, 159, 126 | 136, 122, 106, 115 | 25, 70, 58, 45
    3             | 138, 110, 168, 160 | 174, 120, 150, 139 | 96, 104, 82, 60
    Source: Montgomery (2001)

    Research question: Is there a difference in mean life of the batteries for differing material types and operating temperature levels?
    In analysis of variance we compare the variability between the groups (how far apart are the means?) to the variability within the groups (how much natural variation is there in our measurements?). This is why it is called analysis of variance, abbreviated to ANOVA. This example has two factors (material type and temperature), each with 3 levels.
    Hypotheses: The null hypothesis might be:
    H0: There is no difference in mean battery life for different combinations of material type and temperature level.
    And an alternative hypothesis might be:
    H1: There is a difference in mean battery life for different combinations of material type and temperature level.
    If the alternative hypothesis is accepted, further analysis is performed to explore where the individual differences are.
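For readers who want to reproduce the example, here is a sketch of this two-way ANOVA in Python with statsmodels, a substitute for whatever software the original handout used; the column names are my own, and the data are transcribed from Table 1.

```python
# Two-way ANOVA with interaction for the battery-life example (Table 1).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

life = [130, 155, 74, 180, 34, 40, 80, 75, 20, 70, 82, 58,      # material 1
        150, 188, 159, 126, 136, 122, 106, 115, 25, 70, 58, 45,  # material 2
        138, 110, 168, 160, 174, 120, 150, 139, 96, 104, 82, 60] # material 3
material = [m for m in (1, 2, 3) for _ in range(12)]
temp = (["low"] * 4 + ["medium"] * 4 + ["high"] * 4) * 3

df = pd.DataFrame({"life": life, "material": material, "temp": temp})
model = ols("life ~ C(material) * C(temp)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction
```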
  • Validity and Reliability in Quantitative Studies
    Research made simple: Validity and reliability in quantitative studies. Roberta Heale (School of Nursing, Laurentian University, Sudbury, Ontario, Canada), Alison Twycross (Faculty of Health and Social Care, London South Bank University, London, UK). Evid Based Nurs, first published 15 May 2015. DOI: 10.1136/eb-2015-102129
    Evidence-based practice includes, in part, implementation of the findings of well-conducted quality research studies. So being able to critique quantitative research is an important skill for nurses. Consideration must be given not only to the results of the study but also to the rigour of the research. Rigour refers to the extent to which the researchers worked to enhance the quality of the studies. In quantitative research, this is achieved through measurement of validity and reliability.
    Validity is defined as the extent to which a concept is accurately measured in a quantitative study. For example, a survey designed to explore depression but which actually measures anxiety would not be considered valid. In another example, a test of knowledge of medications that requires dosage calculations may instead be testing maths knowledge. There are three types of evidence that can be used to demonstrate a research instrument has construct validity:
    1. Homogeneity -- meaning that the instrument measures one construct.
    2. Convergence -- this occurs when the instrument measures concepts similar to those of other instruments (although if there are no similar instruments available this will not be possible to do).
    3. Theory evidence -- this is evident when behaviour is similar to theoretical propositions of the construct
  • Experimental Design
    Experimental design. Sebastian Jentschke, University of Bergen.
    Agenda:
    • experiments and causal inference
    • validity
    • internal validity
    • statistical conclusion validity
    • construct validity
    • external validity
    • tradeoffs and priorities
    Experiments and causal inference -- some definitions: a hypothesis often concerns a cause-effect relationship.
    Scientific revolution: discovery of America → French revolution; renaissance and enlightenment (Copernicus, Galilei, Newton). Empiricism: use observation to correct errors in theory. Scientific experimentation: taking a deliberate action [manipulation, varying something] followed by systematic observation of what occurred afterwards [effect], while controlling extraneous influences that might limit or bias observation: random assignment, control groups. Mathematization, institutionalization.
    Causal relationships -- definitions and some philosophy:
    • Causal relationships are recognized intuitively by most people in their daily lives.
    • Locke: "A cause is which makes any other thing, either simple idea, substance or mode, begin to be; and an effect is that, which had its beginning from some other thing" (1975, p. 325).
    • Stuart Mill: A causal relationship exists if (a) the cause preceded the effect, (b) the cause was related to the effect, and (c) we can find no plausible alternative explanation for the effect other than the cause. Experiments: (a) manipulate the presumed cause, (b) assess whether variation in the
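A toy sketch of the random-assignment idea described above, with all names invented: shuffle the participant list, then split it into treatment and control groups so that extraneous participant characteristics are distributed by chance rather than by choice.

```python
# Illustrative random assignment of 20 hypothetical participants.
import numpy as np

rng = np.random.default_rng(seed=42)           # fixed seed for reproducibility
participants = [f"P{i:02d}" for i in range(1, 21)]
shuffled = rng.permutation(participants)
groups = {"treatment": list(shuffled[:10]), "control": list(shuffled[10:])}
print(groups)
```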
  • Understanding Replication of Experiments in Software Engineering: A Classification
    Understanding replication of experiments in software engineering: A classification. Omar S. Gómez (Facultad de Matemáticas, Universidad Autónoma de Yucatán, 97119 Mérida, Yucatán, Mexico), Natalia Juristo (Facultad de Informática, Universidad Politécnica de Madrid, 28660 Boadilla del Monte, Madrid, Spain; Department of Information Processing Science, University of Oulu, Oulu, Finland), Sira Vegas (Facultad de Informática, Universidad Politécnica de Madrid, Spain).
    Abstract.
    Context: Replication plays an important role in experimental disciplines. There are still many uncertainties about how to proceed with replications of SE experiments. Should replicators reuse the baseline experiment materials? How much liaison should there be among the original and replicating experimenters, if any? What elements of the experimental configuration can be changed for the experiment to be considered a replication rather than a new experiment?
    Objective: To improve our understanding of SE experiment replication, in this work we propose a classification which is intended to provide experimenters with guidance about what types of replication they can perform.
    Method: The research approach is structured according to the following activities: (1) a literature review of experiment replication in SE and in other disciplines, (2) identification of typical elements that compose an experimental configuration, (3) identification of different replication purposes, and (4) development of a classification of experiment replications for SE.
    Results: We propose a classification of replications which provides experimenters in SE with guidance about what changes they can make in a replication and, based on these, what verification purposes such a replication can serve. The proposed classification helped to accommodate opposing views within a broader framework, and it is capable of accounting for replications ranging from less similar to more similar relative to the baseline experiment.
  • Introduction to Difference in Differences (DID) Analysis
    Introduction to Difference in Differences (DID) Analysis. Hsueh-Sheng Wu, CFDR Workshop Series, June 15, 2020.
    Outline of presentation:
    • What is Difference-in-Differences (DID) analysis
    • Threats to internal and external validity
    • Compare and contrast three different research designs
    • Graphic presentation of the DID analysis
    • Link between regression and DID
    • Stata -diff- module
    • Sample Stata codes
    • Conclusions
    What is Difference-in-Differences analysis?
    • Difference-in-Differences (DID) analysis is a statistical technique that analyzes data from a nonequivalent control group design and makes a causal inference about the effect of an independent variable (e.g., an event, treatment, or policy) on an outcome variable.
    • A nonequivalent control group design establishes the temporal order of the independent variable and the dependent variable, so it establishes which variable is the cause and which one is the effect.
    • A nonequivalent control group design does not randomly assign respondents to the treatment or control group, so treatment and control groups may not be equivalent in their characteristics and reactions to the treatment.
    • DID is commonly used to evaluate the outcomes of policies or natural events (such as Covid-19).
    Internal and external validity:
    • When designing an experiment, researchers need to consider how extraneous variables may threaten the internal validity and external validity of an experiment.
    • Internal validity refers to the extent to which an experiment can establish the causal relation between the independent variable and
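The "link between regression and DID" in the outline can be shown compactly: with a treatment-group dummy, a post-period dummy, and their interaction, the coefficient on the interaction is the DID estimate. Below is a minimal sketch in Python using statsmodels rather than the Stata -diff- module mentioned above; the toy panel data are invented.

```python
# DID as a regression: the treated:post coefficient is the DID estimate.
import pandas as pd
import statsmodels.formula.api as smf

# Invented panel: outcome y before/after for treated and control units
df = pd.DataFrame({
    "y":       [10, 11, 12, 13, 10, 12, 15, 17],
    "treated": [0, 0, 0, 0, 1, 1, 1, 1],
    "post":    [0, 0, 1, 1, 0, 0, 1, 1],
})
model = smf.ols("y ~ treated * post", data=df).fit()
print(model.params["treated:post"])  # (treated post - pre) - (control post - pre)
```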
  • Experimental Design and Data Analysis in Computer Simulation Studies in the Behavioral Sciences
    Experimental Design and Data Analysis in Computer Simulation Studies in the Behavioral Sciences. Michael Harwell ([email protected]), Nidhi Kohli, Yadira Peralta (University of Minnesota - Twin Cities, Minneapolis, MN). Journal of Modern Applied Statistical Methods, November 2017, Vol. 16, No. 2, 3-28. ISSN 1538-9472. DOI: 10.22237/jmasm/1509494520
    Recommended citation: Harwell, M., Kohli, N., & Peralta, Y. (2017). Experimental design and data analysis in computer simulation studies in the behavioral sciences. Journal of Modern Applied Statistical Methods, 16(2), 3-28. doi: 10.22237/jmasm/1509494520
    Abstract: Treating computer simulation studies as statistical sampling experiments subject to established principles of experimental design and data analysis should further enhance their ability to inform statistical practice and a program of statistical research. Latin hypercube designs to enhance generalizability and meta-analytic methods to analyze simulation results are presented.
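As a rough illustration of the Latin hypercube designs the abstract mentions, here is a sketch using scipy.stats.qmc (available in SciPy 1.7+); the two simulation factors (per-cell sample size and effect size) and their ranges are assumptions for the example, not taken from the article.

```python
# Latin hypercube design for a hypothetical two-factor simulation study.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=123)
unit_sample = sampler.random(n=8)  # 8 design points in the unit square
# Scale to assumed factor ranges: n in [20, 500], effect size d in [0.1, 0.8]
design = qmc.scale(unit_sample, l_bounds=[20, 0.1], u_bounds=[500, 0.8])
print(design)  # each row: (sample size, effect size) for one simulation cell
```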
  • Relations Between Inductive Reasoning and Deductive Reasoning
    Relations Between Inductive Reasoning and Deductive Reasoning. Evan Heit (University of California, Merced), Caren M. Rotello (University of Massachusetts Amherst). Journal of Experimental Psychology: Learning, Memory, and Cognition, 2010, Vol. 36, No. 3, 805-812. DOI: 10.1037/a0018784
    Abstract: One of the most important open questions in reasoning research is how inductive reasoning and deductive reasoning are related. In an effort to address this question, we applied methods and concepts from memory research. We used 2 experiments to examine the effects of logical validity and premise-conclusion similarity on evaluation of arguments. Experiment 1 showed 2 dissociations: For a common set of arguments, deduction judgments were more affected by validity, and induction judgments were more affected by similarity. Moreover, Experiment 2 showed that fast deduction judgments were like induction judgments -- in terms of being more influenced by similarity and less influenced by validity, compared with slow deduction judgments. These novel results pose challenges for a 1-process account of reasoning and are interpreted in terms of a 2-process account of reasoning, which was implemented as a multidimensional signal detection model and applied to receiver operating characteristic data.
    Keywords: reasoning, similarity, mathematical modeling
    An important open question in reasoning research concerns the relation between induction and deduction. Typically, individual studies of reasoning have focused on only one task, rather than examining how the two are connected (Heit, 2007). […] arguments (Rips, 2001). This technique can highlight similarities or differences between induction and deduction that are not confounded by the use of different materials (Heit, 2007). It also
  • Single-Case Designs Technical Documentation
    What Works Clearinghouse: Single-Case Design Technical Documentation. Developed for the What Works Clearinghouse by the following panel: Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. June 2010.
    Recommended citation: Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. Retrieved from What Works Clearinghouse website: http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf
    In an effort to expand the pool of scientific evidence available for review, the What Works Clearinghouse (WWC) assembled a panel of national experts in single-case design (SCD) and analysis to draft SCD Standards. In this paper, the panel provides an overview of SCDs, specifies the types of questions that SCDs are designed to answer, and discusses the internal validity of SCDs. The panel then proposes SCD Standards to be implemented by the WWC. The Standards are bifurcated into Design and Evidence Standards (see Figure 1). The Design Standards evaluate the internal validity of the design. Reviewers assign the categories of Meets Standards, Meets Standards with Reservations, and Does Not Meet Standards to each study based on the Design Standards. Reviewers trained in visual analysis will then apply the Evidence Standards to studies that meet standards (with or without reservations), resulting in the categorization of each outcome variable as demonstrating Strong Evidence, Moderate Evidence, or No Evidence.
    A. Overview of Single-Case Designs. SCDs are adaptations of interrupted time-series designs and can provide a rigorous experimental evaluation of intervention effects (Horner & Spaulding, in press; Kazdin, 1982, in press; Kratochwill, 1978; Kratochwill & Levin, 1992; Shadish, Cook, & Campbell, 2002).
  • Questionnaire Validity and Reliability
    Questionnaire Validity and Reliability. Department of Social and Preventive Medicine, Faculty of Medicine.
    Outline:
    • Introduction
    • What is validity and reliability?
    • Types of validity and reliability
    • How do you measure them?
    • Types of sampling methods
    • Sample size calculation
    • G*Power (power analysis)
    Research is the systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions. In the broadest sense of the word, research includes gathering data in order to generate information and establish facts for the advancement of knowledge.
    • Step 1: Define the research problem
    • Step 2: Develop a research plan and research design
    • Step 3: Define the variables and instrument (validity and reliability)
    • Step 4: Sample and collect data
    • Step 5: Analyse data
    • Step 6: Present the findings
    A questionnaire is:
    • A technique for collecting data in which a respondent provides answers to a series of questions.
    • The vehicle used to pose the questions that the researcher wants respondents to answer.
    The validity of the results depends on the quality of these instruments. Good questionnaires are difficult to construct; bad questionnaires are difficult to analyze.
    Identify the goal of your questionnaire: What kind of information do you want to gather? What is your main objective? Is a questionnaire the best way to go about collecting this information?
    How to obtain valid information (see the sketch after this list for the sample-size step):
    • Ask purposeful questions
    • Ask concrete questions
    • Use time periods
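Where the outline lists sample size calculation and G*Power, an equivalent computation can be sketched in Python with statsmodels; the scenario (a two-sample t-test with a medium effect size d = 0.5, alpha = 0.05, and power = 0.80) is an assumption for illustration, not taken from the slides.

```python
# Sample size per group for an independent-samples t-test,
# mirroring a typical G*Power a priori power analysis.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                   alternative="two-sided")
print(round(n_per_group))  # approximately 64 participants per group
```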