Introduction to Adaptive Interventions and SMART Study Design Principles


Daniel Almirall (1,2), Linda M. Collins (2,3,5), Susan A. Murphy (1,2,4)
1 Institute for Social Research, University of Michigan
2 The Methodology Center, Penn State University
3 Human Development and Family Studies, Pennsylvania State University
4 Department of Statistics, University of Michigan
5 Department of Statistics, Pennsylvania State University

COPTR Workshop on Adaptive Interventions; June 29, 2011
Almirall, Collins, Murphy: Building Adaptive Interventions (1 / 42)

Outline
- Adaptive Interventions: What? Why?
- Evaluating versus Building an Adaptive Intervention?
- Sequential Multiple Assignment Randomized Trials (SMARTs): What are SMARTs?
- SMART Design Principles: Keep It Simple; Choosing Primary and Secondary Hypotheses
- Discussion

ADAPTIVE INTERVENTIONS

Definition: An Adaptive Intervention is
- a sequence of individually tailored decision rules
- that specify whether, how, and/or when
- to alter the intensity, type, dosage, or delivery of treatment
- at critical decision points in the course of care.

Adaptive Interventions operationalize sequential decision making with the aim of improving clinical practice.
Concrete Example of an Adaptive Intervention: Pediatric Anxiety (SAD, GAD, SoP)
- First-line treatment: CBT
- Tailoring variable: response status
- Second-line treatment: Responders maintain CBT; non-responders add treatment (CBT + MED)
- The goal is to minimize the child's symptom profile/trajectory.

What makes up an Adaptive Intervention?
1. Critical decisions: treatment options and more
2. Tailoring variables: used to decide how to adapt treatment
3. Decision rules: input the tailoring variable, output one or more recommended treatments

Adaptive interventions are also known as: dynamic treatment regimes, adaptive treatment strategies, treatment algorithms, structured treatment interruptions, practice parameters, ASAM criteria...

Why Adaptive Interventions? They are necessary because of...
- The chronic nature of substance use/mental health disorders
  - Waxing and waning course (multiple relapses, recurrence)
  - Genetic and non-genetic factors influence the course
  - Co-occurring disorders may arise
- High patient heterogeneity in response to treatment
  - Within-person (over time) differential response to treatment
  - Between-person differential response to treatment

All of these require sequences of treatment decisions.
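A decision rule of this kind is simply a function from the tailoring variable to a recommended treatment. A minimal sketch in Python of the pediatric anxiety rule above (the function name and treatment strings are our own labels, not from the slides):

```python
def second_line_treatment(responder: bool) -> str:
    """Decision rule from the pediatric anxiety example: the input is the
    tailoring variable (response to first-line CBT) and the output is the
    recommended second-line treatment."""
    if responder:
        return "maintain CBT"            # responders continue first-line CBT
    return "add treatment: CBT + MED"    # non-responders get MED added
```

In practice the tailoring variable need not be binary; it can be any low-dimensional summary of the patient's status at the decision point.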
EVALUATING VERSUS BUILDING VERSUS GENERATING HYPOTHESES ABOUT ADAPTIVE INTERVENTIONS?

3 Different Research Questions/Aims = 3 Different Research Designs
- Aim 1: Evaluating a particular Adaptive Intervention. Example question: Does the AI have a statistically significant effect compared to a control intervention?
- Aim 2: Building an Adaptive Intervention. Example question: What are the best tailoring variables and/or decision rules?
- Aim 3: Generating hypotheses to build an Adaptive Intervention. Example question: Does augmenting treatment for non-responders (as observed in a previous trial) correlate with better outcomes?

Which design suits which aim (✓ = appropriate, ~ = partially appropriate, X = not appropriate):

  Question/Aim           RCT (Experimental)   SMART (Experimental)   Analysis of Previous RCT (Observational)
  1 Evaluating           ✓                    X                      X
  2 Building             X                    ✓                      ~
  3 Hypothesis Gen.      X                    X                      ✓
SEQUENTIAL MULTIPLE ASSIGNMENT RANDOMIZED TRIALS (SMARTs)

What is a Sequential Multiple Assignment Randomized Trial (SMART)?
- Multi-stage trials; the same participants throughout
- Each stage corresponds to a critical decision point
- At each stage, subjects are randomized to a set of treatment options
- The goal of a SMART is to inform the development of adaptive interventions.

I will give you an example SMART, but first...

Motivation for an Example SMART: the Child-Adolescent Anxiety Multimodal Study (CAMS)
- CAMS was an acute-phase efficacy RCT for child anxiety
- Result: CBT + MED > MED ≈ CBT > Placebo
- However, some families and clinicians remain concerned about the use of MED in this population
- So an important next question for clinical practice is "Can we delay the use of MED?"
- Some children may do fine with CBT only and never need MED.
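The defining feature of a SMART, re-randomizing the same participants at each stage among options that depend on their earlier assignment and observed response, can be sketched in a few lines. This is a toy simulation under our own assumptions (names, option sets, and the 50% response probability are all hypothetical), not any real trial's procedure:

```python
import random

def smart_assign(participants, stage1_options, stage2_options, seed=0):
    """Toy two-stage SMART: the SAME participants are randomized at stage 1,
    a tailoring variable (response) is observed, and each participant is then
    re-randomized at stage 2 among the options appropriate to their stage-1
    assignment and response status."""
    rng = random.Random(seed)
    assignments = []
    for pid in participants:
        a1 = rng.choice(stage1_options)                   # stage-1 randomization
        responder = rng.random() < 0.5                    # simulated response
        a2 = rng.choice(stage2_options[(a1, responder)])  # stage-2 re-randomization
        assignments.append({"id": pid, "stage1": a1,
                            "responder": responder, "stage2": a2})
    return assignments
```

Passing a singleton list for a given (stage-1 treatment, response) pair means that subgroup is not re-randomized, which is exactly how some subgroups are handled in the example SMART that follows.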
Concrete Example of a SMART: Pediatric Anxiety
(Courtesy of Scott N. Compton, Duke University Medical Center)

[Diagram: baseline observation O1; first-line treatment; tailoring variable (response); second-line treatment; follow-up observation O2 and primary outcome Y.]

- First-line treatment (randomized): CBT + MED or CBT
- For children starting on CBT + MED:
  - Responders are re-randomized: Maintain (CBT + MED) or Step down (CBT only)
  - Non-responders: Add treatment (CBT + MED + FT)
- For children starting on CBT:
  - Responders: Maintain (CBT)
  - Non-responders are re-randomized: Add treatment (CBT + MED) or Switch treatment (MED)

4 Embedded Adaptive Interventions in this SMART:
1. First-line CBT + MED; responders step down (CBT boost); non-responders add treatment (CBT + MED + FT)
2. First-line CBT + MED; responders maintain (CBT + MED); non-responders add treatment (CBT + MED + FT)
3. First-line CBT; responders maintain (CBT boost); non-responders add treatment (CBT + MED)
4. First-line CBT; responders maintain (CBT boost); non-responders switch treatment (MED)

SMART DESIGN PRINCIPLES
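The four adaptive interventions embedded in the example SMART can be written down as data plus a lookup. This is a hypothetical encoding (the dictionary layout, function name, and label strings are ours); it shows how each embedded AI is just a first-line treatment paired with one decision rule:

```python
# Each embedded adaptive intervention from the example SMART, written as
# (first-line treatment, rule for responders, rule for non-responders).
EMBEDDED_AIS = {
    1: ("CBT + MED", "step down: CBT (boost)", "add treatment: CBT + MED + FT"),
    2: ("CBT + MED", "maintain: CBT + MED",    "add treatment: CBT + MED + FT"),
    3: ("CBT",       "maintain: CBT (boost)",  "add treatment: CBT + MED"),
    4: ("CBT",       "maintain: CBT (boost)",  "switch treatment: MED"),
}

def recommend(ai: int, responder: bool) -> str:
    """Second-line treatment recommended by embedded adaptive intervention `ai`."""
    _first_line, if_responder, if_non_responder = EMBEDDED_AIS[ai]
    return if_responder if responder else if_non_responder
```

Note that AIs 1 and 2 share a first-line treatment and a non-responder rule, differing only in what responders receive; comparing them isolates the effect of stepping down versus maintaining.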
SMART Design Principles
- KISS Principle: Keep It Simple, Straightforward
- Power for simple, important primary hypotheses
- Take appropriate steps to develop a more deeply individualized Adaptive Intervention

Keep It Simple, Straightforward: the Overarching Principle
At each stage, or critical decision point,...
- Use a low-dimensional summary to