Chapter 8 Threats to Your Experiment

Planning to avoid criticism.

One of the main goals of this book is to encourage you to think from the point of view of an experimenter, because other points of view, such as that of a reader of scientific articles or a consumer of scientific ideas, are easy to switch to after the experimenter's point of view is understood, but the reverse is often not true. In other words, to enhance the usability of what you learn, you should pretend that you are a researcher, even if that is not your ultimate goal. As a researcher, one of the key skills you should be developing is to try, in advance, to think of all of the possible criticisms of your experiment that may arise from the reviewer of an article you write or the reader of an article you publish. This chapter discusses possible complaints about internal validity, external validity, construct validity, Type 1 error, and power. We are using "threats" to mean things that will reduce the impact of your study results on science, particularly those things that we have some control over.

8.1 Internal validity

In a well-constructed experiment, in its simplest form, we manipulate variable X and observe the effects on variable Y. For example, outcome Y could be the number of people who purchase a particular item in a store over a certain week, and X could be some characteristic of the display for that item, such as the use of pictures of people of different "status" in an in-store advertisement (e.g., a celebrity vs. an unknown model). Internal validity is the degree to which we can appropriately conclude that the changes in X caused the changes in Y.

The study of causality goes back thousands of years, but there has been a resurgence of interest recently. For our purposes we can define causality as the state of nature in which an active change in one variable directly changes the probability distribution of another variable. It does not mean that a particular "treatment" is always followed by a particular outcome, but rather that some probability is changed, e.g., a higher outcome is more likely with a particular treatment compared to without.

A few ideas about causality are worth thinking about now. First, association, which is equivalent to non-zero correlation (see section 3.6.1) in statistical terms, means that we observe that when one variable changes, another one tends to change. We cannot have causation without association, but just finding an association is not enough to justify a claim of causation.

Association does not necessarily imply causation.

If variables X and Y (e.g., the number of televisions (X) in various countries and the infant mortality rate (Y) of those countries) are found to be associated, then there are three basic possibilities. First, X could be causing Y (televisions lead to more health awareness, which leads to better prenatal care), or Y could be causing X (high infant mortality leads to attraction of funds from richer countries, which leads to more televisions), or an unknown factor Z could be causing both X and Y (higher wealth in a country leads to more televisions and more prenatal care clinics). It is worth memorizing these three cases, because they should always be considered when association is found in an observational study as opposed to a randomized experiment. (It is also possible that X and Y are related in more complicated ways, including in large networks of variables with feedback loops.)
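The third case is easy to make concrete with a small simulation. The sketch below is our own illustration, not part of the text: it assumes Python with numpy, and the variable names and coefficients are arbitrary. A lurking variable (wealth) drives both "televisions" and "infant mortality", producing a strong association even though neither variable affects the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical lurking variable Z: a country's wealth (standardized units).
wealth = rng.normal(size=n)

# Televisions rise with wealth; infant mortality falls with wealth.
# Neither outcome has any direct effect on the other.
televisions = 2.0 * wealth + rng.normal(size=n)
infant_mortality = -1.5 * wealth + rng.normal(size=n)

# X and Y are strongly (negatively) correlated anyway, purely because
# wealth drives both.
r = np.corrcoef(televisions, infant_mortality)[0, 1]
print(f"correlation between X and Y: {r:.2f}")  # about -0.75
```

In this made-up example, adjusting for wealth (for instance by including it in a regression) would leave essentially no remaining association between televisions and infant mortality, which is the signature of confounding rather than causation.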
Causation ("X causes Y") can be logically claimed if X and Y are associated, and X precedes Y, and no plausible alternative explanations can be found, particularly those of the form "X just happens to vary along with some real cause of changes in Y" (called confounding).

Returning to the advertisement example, one stupid thing to do is to place all of the high status pictures in only the wealthiest neighborhoods or the largest stores, while the low status pictures are only shown in impoverished neighborhoods or those with smaller stores. In that case a higher average number of items purchased for the stores with high status ads may be due either to the effect of socio-economic status, or store size, or the perceived status of the ad. When more than one thing is different on average between the groups to be compared, the problem is called confounding, and confounding is a fatal threat to internal validity.

Notice that the definition of confounding mentions "different on average". This is because it is practically impossible to have no differences between the subjects in different groups (beyond the differences in treatment). So our realistic goal is to have no difference on average. For example, if we are studying both males and females, we would like the gender ratio to be the same in each treatment group. For the store example, we want the average pre-treatment total sales to be the same in each treatment group. And we want the distance from competitors to be the same, and the socio-economic status (SES) of the neighborhood, and the racial makeup, and the age distribution of the neighborhood, etc., etc. Even worse, we want all of the unmeasured variables, both those that we thought of and those we didn't think of, to be similar in each treatment group.

The sine qua non of internal validity is random assignment of treatment to experimental units (different stores in our ad example). Random treatment assignment (also called randomization) is usually the best way to assure that all of the potential confounding variables are equal on average (also called balanced) among the treatment groups. Non-random assignment will usually lead to either consciously or unconsciously unbalanced groups. If one or a few variables, such as gender or SES, are known to be critical factors affecting the outcome, a good alternative is block randomization, in which randomization among treatments is performed separately for each level of the critical (non-manipulated) explanatory factor. This helps to assure that the level of this explanatory factor is balanced (not confounded) across the levels of the treatment variable.

In current practice, randomization is normally done using computerized random number generators. Ideally, all subjects are identified before the experiment begins and assigned numbers from 1 to N (the total number of subjects), and then a computer's random number generator is used to assign treatments to the subjects via these numbers. For block randomization this can be done separately for each block.
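As a concrete sketch of these two procedures (our own illustration in Python using numpy; the function names and treatment labels are made up for the example), simple randomization can be done by shuffling a list of treatment labels, and block randomization simply repeats that shuffle separately within each level of the blocking factor:

```python
import numpy as np

def randomize(n_subjects, treatments, rng):
    """Randomly assign treatments to n_subjects, as evenly as possible."""
    # Repeat the treatment labels until there are n_subjects of them, then
    # shuffle; subject i receives assignment[i].
    assignment = np.resize(treatments, n_subjects)
    rng.shuffle(assignment)
    return assignment

def block_randomize(blocks, treatments, rng):
    """Randomize separately within each level of a blocking factor (e.g., gender)."""
    blocks = np.asarray(blocks)
    assignment = np.empty(len(blocks), dtype=object)
    for level in np.unique(blocks):
        idx = np.where(blocks == level)[0]
        assignment[idx] = randomize(len(idx), treatments, rng)
    return assignment

rng = np.random.default_rng(2024)
print(randomize(10, ["celebrity ad", "unknown-model ad"], rng))

gender = ["F", "F", "M", "F", "M", "M", "F", "M"]
print(block_randomize(gender, ["drug", "placebo"], rng))
```

Recording the seed (2024 here) makes the assignment reproducible, so it can later be verified that the allocation was generated once rather than re-run until a preferred result appeared.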
If all subjects cannot be identified before the experiment begins, some way must be devised to assure that each subject has an equal chance of getting each treatment (if equal assignment is desired). One way to do this is as follows: if there are k levels of treatment, collect subjects until k (or 2k, or 3k, etc.) are available, then use the computer to randomly assign treatments among the available subjects. It is also acceptable to have the computer individually generate a random number from 1 to k for each subject, but it must be assured that the subject and/or researcher cannot re-run the process if they don't like the assignment.

Confounding can occur because we purposefully, but stupidly, design our experiment such that two or more things differ at once, or because we assign treatments non-randomly, or because the randomization "failed". As an example of designed confounding, consider the treatments "drug plus psychotherapy" vs. "placebo" for treating depression. If a difference is found, then we will not know whether the success of the treatment is due to the drug, the psychotherapy, or the combination. If no difference is found, then that may be due to the effect of the drug canceling out the effect of the psychotherapy. If the drug and the psychotherapy are known to individually help patients with depression and we really do want to study the combination, it would probably be better to have a study with the three treatment arms of drug, psychotherapy, and combination (with or without the placebo), so that we could assess the specific, important questions of whether the drug adds a benefit to psychotherapy and vice versa. As another example, consider a test of the effects of a mixed herbal supplement on memory. Again, a success tells us that something in the mix helps memory, but a follow-up trial is needed to see if all of the components are necessary. And again we have the possibility that one component would cancel another out, causing a "no effect" outcome when one component really is helpful. But we must also consider that the mix itself may be effective while the individual components are not, so this might be a good experiment.

In terms of non-random assignment of treatment, this should only be done when necessary, and it should be recognized that it strongly, often fatally, harms the internal validity of the experiment. If you assign treatment in some pseudo-random way, e.g., alternating treatment levels, you or the subjects may purposely or inadvertently introduce confounding factors into your experiment.

Finally, it must be stated that although randomization cannot perfectly balance all possible explanatory factors, it is the best way to attempt this, particularly for unmeasured or unimagined factors that might affect the outcome. Although there is always a small chance that important factors are out of balance after random treatment assignment (i.e., failed randomization), the degree of imbalance is generally small, and it gets smaller as the sample size gets larger.
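That last point can be checked by simulation. The sketch below is again our own illustration with arbitrary numbers (Python and numpy assumed): it repeatedly splits n subjects into two groups at random and records the typical difference in the group means of a baseline covariate.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_imbalance(n, n_sims=2000):
    """Average |difference in group means| of a baseline covariate after
    purely random assignment of n subjects to two equal-sized groups."""
    diffs = []
    for _ in range(n_sims):
        covariate = rng.normal(size=n)        # e.g., standardized pre-treatment sales
        groups = np.repeat([0, 1], n // 2)    # half the subjects to each treatment
        rng.shuffle(groups)
        diffs.append(abs(covariate[groups == 0].mean() -
                         covariate[groups == 1].mean()))
    return np.mean(diffs)

for n in (10, 40, 160, 640):
    print(n, round(mean_imbalance(n), 3))
# The imbalance is never exactly zero, but it shrinks roughly like 1/sqrt(n).
```

The covariate never plays any role in the assignment, yet its average imbalance between the two groups falls steadily as the sample size grows, which is exactly the sense in which randomization balances measured and unmeasured factors "on average".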