Design and Statistical Analysis of Some Confounded Factorial Experiments Jerome C
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Fractional Factorial Designs
Statistics 514, Lecture 12: The 2^(k−p) Fractional Factorial Design (Montgomery, Chapter 8), Fall 2005.

Fundamental Principles Regarding Factorial Effects. Suppose there are k factors (A, B, ..., J, K) in an experiment. All possible factorial effects include effects of order 1: A, B, ..., K (main effects); effects of order 2: AB, AC, ..., JK (2-factor interactions); and so on.
• Hierarchical Ordering Principle – Lower-order effects are more likely to be important than higher-order effects; effects of the same order are equally likely to be important.
• Effect Sparsity Principle (Pareto principle) – The number of relatively important effects in a factorial experiment is small.
• Effect Heredity Principle – For an interaction to be significant, at least one of its parent factors should be significant.

Fractional Factorials.
• You may not have the resources (time, money, etc.) for a full factorial design.
• The number of runs required for a full factorial grows quickly. Consider a 2^k design: if k = 7, 128 runs are required and 127 effects can be estimated, but only 7 degrees of freedom go to main effects and 21 to 2-factor interactions; the remaining 99 df go to interactions of order ≥ 3.
• Often only lower-order effects are important, so by the hierarchical ordering and effect sparsity principles a full factorial design may not be necessary.
• A fraction of the full factorial design (i.e., a subset of all possible level combinations) is sufficient: a fractional factorial design.

Example 1: Suppose you were designing a new car and wanted to consider the following nine factors, each with 2 levels …
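The degrees-of-freedom bookkeeping quoted in this excerpt (7 + 21 + 99 = 127 for k = 7) is just a sum of binomial coefficients. A minimal sketch of that count, not taken from the lecture notes:

```python
from math import comb

def effect_counts(k):
    """Number of factorial effects of each order in a full 2^k design."""
    # Order-j effects correspond to choosing j of the k factors: C(k, j).
    return {order: comb(k, order) for order in range(1, k + 1)}

counts = effect_counts(7)
print(counts[1], counts[2], sum(counts[j] for j in range(3, 8)))  # 7 21 99
print(sum(counts.values()))                                       # 127 = 2^7 - 1
```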
Removing Hidden Confounding by Experimental Grounding
Nathan Kallus (Cornell University and Cornell Tech, New York, NY), Aahlad Manas Puli (New York University, New York, NY), Uri Shalit (Technion, Haifa, Israel).

Abstract: Observational data is increasingly used as a means for making individual-level causal predictions and intervention recommendations. The foremost challenge of causal inference from observational data is hidden confounding, whose presence cannot be tested in data and can invalidate any causal conclusion. Experimental data does not suffer from confounding but is usually limited in both scope and scale. We introduce a novel method of using limited experimental data to correct the hidden confounding in causal effect models trained on larger observational data, even if the observational data does not fully overlap with the experimental data. Our method makes strictly weaker assumptions than existing approaches, and we prove conditions under which it yields a consistent estimator. We demonstrate our method's efficacy using real-world data from a large educational experiment.

1 Introduction. In domains such as healthcare, education, and marketing there is growing interest in using observational data to draw causal conclusions about individual-level effects; for example, using electronic healthcare records to determine which patients should get what treatments, using school records to optimize educational policy interventions, or using past advertising campaign data to refine targeting and maximize lift. Observational datasets, due to their often very large number of samples and exhaustive scope (many measured covariates) in comparison to experimental datasets, offer a unique opportunity to uncover fine-grained effects that may apply to many target populations.
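The excerpt describes the idea only at a high level: fit an effect model on the large observational sample, then use the small randomized sample to correct for hidden confounding. The snippet below is a rough illustration of that general recipe under assumptions of my own (simulated data, a T-learner effect estimate, a linear correction term); it is not the paper's algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def simulate(n, randomized, rng):
    """Toy data: x is observed, u is hidden; u drives treatment choice only in
    the observational regime, which biases naive effect estimates there."""
    x = rng.normal(size=(n, 1))
    u = rng.normal(size=n)                                 # hidden confounder
    if randomized:
        t = rng.binomial(1, 0.5, size=n)                   # experiment: P(t=1) = 0.5
    else:
        t = rng.binomial(1, 1 / (1 + np.exp(-2 * u)))      # confounded assignment
    tau = 1.0 + x[:, 0]                                    # true conditional effect
    y = x[:, 0] + 2.0 * u + tau * t + rng.normal(scale=0.5, size=n)
    return x, t, y

# Large observational sample, small experimental sample
x_o, t_o, y_o = simulate(20000, randomized=False, rng=rng)
x_e, t_e, y_e = simulate(500, randomized=True, rng=rng)

# Step 1: (biased) effect estimate from observational data via a T-learner
m1 = RandomForestRegressor(min_samples_leaf=50, random_state=0).fit(x_o[t_o == 1], y_o[t_o == 1])
m0 = RandomForestRegressor(min_samples_leaf=50, random_state=0).fit(x_o[t_o == 0], y_o[t_o == 0])

def tau_obs(x):
    return m1.predict(x) - m0.predict(x)

# Step 2: in the experiment, the IPW-transformed outcome is unbiased for the
# conditional effect, so regress its residual on x to learn a correction term.
pseudo = y_e * (t_e / 0.5 - (1 - t_e) / 0.5)
correction = LinearRegression().fit(x_e, pseudo - tau_obs(x_e))

def tau_corrected(x):
    return tau_obs(x) + correction.predict(x)

x_test = np.linspace(-2, 2, 5).reshape(-1, 1)
print("true effect:   ", 1.0 + x_test[:, 0])
print("observational: ", tau_obs(x_test).round(2))       # biased by the hidden u
print("corrected:     ", tau_corrected(x_test).round(2))  # close to the truth
```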
Single-Factor Experiments
D.G. Bonett (8/2018), Module 3: One-factor Experiments.

A between-subjects treatment factor is an independent variable with a ≥ 2 levels in which participants are randomized into a groups. It is common, but not necessary, to have an equal number of participants in each group. Each group receives one of the a levels of the independent variable, with participants being treated identically in every other respect. The two-group experiment considered previously is a special case of this type of design.

In a one-factor experiment with a levels of the independent variable (also called a completely randomized design), the population parameters are μ1, μ2, …, μa, where μj (j = 1 to a) is the population mean of the response variable if all members of the study population had received level j of the independent variable. One way to assess the differences among the a population means is to compute confidence intervals for all possible pairs of differences. For example, with a = 3 levels the following pairwise comparisons of population means could be examined: μ1 − μ2, μ1 − μ3, μ2 − μ3.

In a one-factor experiment with a levels there are a(a − 1)/2 pairwise comparisons. Confidence intervals for any of the two-group measures of effect size (e.g., mean difference, standardized mean difference, mean ratio, median difference, median ratio) described in Module 2 can be used to analyze any pair of groups. For any single 100(1 − α)% confidence interval, we can be 100(1 − α)% confident that the confidence interval has captured the population parameter, and if v 100(1 − α)% confidence intervals are computed, we can be at least 100(1 − vα)% confident that all v confidence intervals have captured their population parameters.
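As a concrete illustration of the logic in this excerpt (not part of Bonett's module): with a = 3 groups there are a(a − 1)/2 = 3 pairwise mean differences, and dividing α by the number of intervals v keeps the family-wise confidence level at or above 100(1 − α)%. A minimal sketch in Python, using made-up data and Welch-type intervals:

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Hypothetical responses for a = 3 groups of a completely randomized design
groups = {
    "g1": np.array([4.1, 5.0, 4.6, 5.3, 4.8]),
    "g2": np.array([5.9, 6.4, 5.7, 6.8, 6.1]),
    "g3": np.array([5.2, 5.5, 4.9, 5.8, 5.4]),
}

alpha = 0.05
pairs = list(combinations(groups, 2))        # a(a-1)/2 = 3 pairwise comparisons
alpha_adj = alpha / len(pairs)               # Bonferroni adjustment

for name1, name2 in pairs:
    y1, y2 = groups[name1], groups[name2]
    diff = y1.mean() - y2.mean()
    se = np.sqrt(y1.var(ddof=1) / len(y1) + y2.var(ddof=1) / len(y2))
    # Welch-Satterthwaite degrees of freedom
    df = se**4 / ((y1.var(ddof=1) / len(y1))**2 / (len(y1) - 1)
                  + (y2.var(ddof=1) / len(y2))**2 / (len(y2) - 1))
    tcrit = stats.t.ppf(1 - alpha_adj / 2, df)
    print(f"{name1} - {name2}: {diff:.2f} +/- {tcrit * se:.2f}")
```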
Confounding Bias, Part I
ERIC NOTEBOOK SERIES, Second Edition. Authors: Lorraine K. Alexander, DrPH; Brettania Lopes, MPH; Kristen Ricchetti-Masterson, MSPH; Karin B. Yeatts, PhD, MS.

Confounding is one type of systematic error that can occur in epidemiologic studies. Other types of systematic error, such as information bias or selection bias, are discussed in other ERIC notebook issues. Confounding is an important concept in epidemiology because, if present, it can cause an over- or under-estimate of the observed association between exposure and health outcome. The distortion introduced by a confounding factor can be large, and it can even change the apparent direction of an effect. However, unlike selection and information bias, it can be adjusted for in the analysis.

Confounding is also a form of bias, because it can result in a distortion in the measure of association between an exposure and health outcome. Confounding may be present in any study design (i.e., cohort, case-control, observational, ecological), primarily because it is not a result of the study design. Of all study designs, however, ecological studies are the most susceptible to confounding, because it is more difficult to control for confounders at the aggregate level of data. In all other cases, as long as there are available data on potential confounders, they can be adjusted for during analysis.

What is confounding? Confounding is the distortion of the association between an exposure and health outcome by an … Confounding should be of concern under the following conditions: 1. Evaluating an exposure-health outcome association.
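To make the over/under-estimation point concrete, here is a small simulation of my own (not from the ERIC notebook): the exposure and outcome share a common cause, the crude comparison is distorted, and adjusting for the confounder in the analysis recovers the true (null) exposure effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

c = rng.binomial(1, 0.5, n)                 # confounder (e.g., age group)
e = rng.binomial(1, 0.2 + 0.6 * c)          # exposure is more common when c = 1
y = 2.0 * c + 0.0 * e + rng.normal(size=n)  # outcome depends on c, NOT on e

# Crude (unadjusted) comparison: distorted by confounding
crude = y[e == 1].mean() - y[e == 0].mean()

# Adjusted comparison: ordinary least squares of y on exposure and confounder
X = np.column_stack([np.ones(n), e, c])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"crude exposure effect:    {crude:.2f}")    # roughly +1.2, an overestimate
print(f"adjusted exposure effect: {beta[1]:.2f}")  # close to the true value, 0
```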
Understanding Replication of Experiments in Software Engineering: a Classification Omar S
Omar S. Gómez (a), Natalia Juristo (b, c), Sira Vegas (b). (a) Facultad de Matemáticas, Universidad Autónoma de Yucatán, 97119 Mérida, Yucatán, Mexico; (b) Facultad de Informática, Universidad Politécnica de Madrid, 28660 Boadilla del Monte, Madrid, Spain; (c) Department of Information Processing Science, University of Oulu, Oulu, Finland.

Abstract.
Context: Replication plays an important role in experimental disciplines. There are still many uncertainties about how to proceed with replications of SE experiments. Should replicators reuse the baseline experiment materials? How much liaison should there be among the original and replicating experimenters, if any? What elements of the experimental configuration can be changed for the experiment to be considered a replication rather than a new experiment?
Objective: To improve our understanding of SE experiment replication, in this work we propose a classification which is intended to provide experimenters with guidance about what types of replication they can perform.
Method: The research approach followed is structured according to the following activities: (1) a literature review of experiment replication in SE and in other disciplines, (2) identification of typical elements that compose an experimental configuration, (3) identification of different replication purposes, and (4) development of a classification of experiment replications for SE.
Results: We propose a classification of replications which provides experimenters in SE with guidance about what changes they can make in a replication and, based on these, what verification purposes such a replication can serve. The proposed classification helped to accommodate opposing views within a broader framework; it is capable of accounting for less similar replications to more similar ones regarding the baseline experiment.
Beyond Controlling for Confounding: Design Strategies to Avoid Selection Bias and Improve Efficiency in Observational Studies
September 27, 2018 – A Case Study of Screening Colonoscopy.

Our Team: Xabier Garcia de Albeniz Martinez (+34 93.3624732) and Bradley Layton (919.541.8885).

Poll question 1 (ice breaker): Which of the following study designs is best to evaluate the causal effect of a medical intervention? Cross-sectional study; case series; case-control study; prospective cohort study; randomized, controlled clinical trial.

We All Trust RCTs… Why?
• Obvious reasons – no confusion (i.e., exchangeability).
• Not so obvious reasons – exposure represented at all levels of potential confounders (i.e., positivity); therapeutic intervention is well defined (i.e., consistency); and because of the alignment of eligibility, exposure assignment, and the start of follow-up (we'll soon see why this is important). (RCT = randomized clinical trial.)

Poll question 2: We just used the C-word, "causal" effect. Which of the following is true? In pharmacoepidemiology, we work to ensure that drugs are effective and safe for the population; in pharmacoepidemiology, we want to know if a drug causes an undesired toxicity; causal inference from observational data can be questionable, but being explicit about the causal goal and the validity conditions helps inform a scientific discussion; all of the above.

In Pharmacoepidemiology, We Try to Infer Causes. These are good times to be an epidemiologist …
Comparison of Response Surface Designs in a Spherical Region
World Academy of Science, Engineering and Technology, International Journal of Mathematical and Computational Sciences, Vol. 6, No. 5, 2012. Boonorm Chomtee, John J. Borkowski.

Abstract – The objective of the research is to study and compare response surface designs: Central composite designs (CCD), Box-Behnken designs (BBD), Small composite designs (SCD), Hybrid designs, and Uniform shell designs (USD) over sets of reduced models when the design is in a spherical region for 3 and 4 design variables. The two optimality criteria (D and G) are considered, for which larger values imply a better design. The comparison of design optimality criteria of the response surface designs across the full second-order model and sets of reduced models for 3 and 4 factors based on the two criteria is presented.
Keywords – design optimality criteria, reduced models, response surface design, spherical design region

… where X is the design matrix, p is the number of model parameters, N is the design size, and σ²_max is the maximum of f′(x)(X′X)⁻¹f(x) approximated over the set of candidate points. The values of the two criteria were calculated using Matlab software [1].

B. Reduced Models. The set of reduced models is consistent with the definition of weak heredity given in Chipman [2]. That is, (i) a quadratic x_i² term is in the model only if the x_i term is also in the model, and (ii) an interaction x_i·x_j term is in the model only if the x_i or x_j or both terms are also in the model.
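The D and G criteria referenced in this excerpt are standard response-surface quantities built from the model matrix and the prediction variance f′(x)(X′X)⁻¹f(x). The sketch below is my own illustration, using common textbook scalings that may differ from the paper's: it expands a small design into the full second-order model and computes D- and G-efficiencies over a spherical candidate set.

```python
import numpy as np
from itertools import combinations

def second_order_model_matrix(points):
    """Expand k design variables into the full second-order model:
    intercept, linear, two-factor interaction, and quadratic terms."""
    points = np.asarray(points, dtype=float)
    n, k = points.shape
    cols = [np.ones(n)]
    cols += [points[:, i] for i in range(k)]
    cols += [points[:, i] * points[:, j] for i, j in combinations(range(k), 2)]
    cols += [points[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

def d_and_g_efficiency(design, candidates):
    X = second_order_model_matrix(design)
    N, p = X.shape
    xtx_inv = np.linalg.inv(X.T @ X)
    # D-efficiency: scaled determinant of the information matrix
    d_eff = 100 * np.linalg.det(X.T @ X) ** (1 / p) / N
    # G-efficiency: based on the maximum scaled prediction variance
    F = second_order_model_matrix(candidates)
    spv = N * np.einsum("ij,jk,ik->i", F, xtx_inv, F)   # N * f'(x) (X'X)^-1 f(x)
    g_eff = 100 * p / spv.max()
    return d_eff, g_eff

# Example: a central-composite-style design for k = 2 in a spherical region
axial = np.sqrt(2.0)
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                   [-axial, 0], [axial, 0], [0, -axial], [0, axial],
                   [0, 0], [0, 0], [0, 0]])
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
candidates = np.vstack([np.column_stack([r * np.cos(theta), r * np.sin(theta)])
                        for r in (0.0, 0.5, 1.0, axial)])
print(d_and_g_efficiency(design, candidates))
```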
Study Designs in Biomedical Research
INTRODUCTION TO CLINICAL TRIALS

SOME TERMINOLOGIES. Research Designs: methods for data collection. Clinical Studies: the class of all scientific approaches to evaluate disease prevention, diagnostics, and treatments. Clinical Trials: the subset of clinical studies that evaluates investigational drugs; they are in prospective/longitudinal form (the basic nature of trials is prospective).

TYPICAL CLINICAL TRIAL. [Timeline: study initiation at time 0, enrollment period up to π1 (e.g., three years), follow-up period from π1 to study termination at π2 (e.g., two years); no subjects enrolled after π1.] OPERATION: Patients come sequentially; each is enrolled and randomized to receive one of two or several treatments, and followed for a varying amount of time between π1 and π2.

In clinical trials, investigators apply an "intervention" and observe the effect on outcomes. The major advantage is the ability to demonstrate causality; in particular: (1) randomly assigning subjects to the intervention helps to reduce or eliminate the influence of confounders, and (2) blinding its administration helps to reduce or eliminate the effect of biases from ascertainment of the outcome.

Clinical trials form a subset of cohort studies, but not all cohort studies are clinical trials because not every research question is amenable to the clinical trial design. For example: (1) for ethical reasons, we cannot assign subjects to smoking in a trial in order to learn about its harmful effects, and (2) it is not feasible to study whether drug treatment of high LDL-cholesterol in children will prevent heart attacks many decades later. In addition, clinical trials are generally expensive, time consuming, address narrow clinical questions, and sometimes expose participants to potential harm.
Design of Engineering Experiments Blocking & Confounding in the 2^k
• Text reference: Chapter 7.
• Blocking is a technique for dealing with controllable nuisance variables.
• Two cases are considered: replicated designs and unreplicated designs.

Blocking a Replicated Design
• This is the same scenario discussed previously in Chapter 5.
• If there are n replicates of the design, then each replicate is a block.
• Each replicate is run in one of the blocks (time periods, batches of raw material, etc.).
• Runs within the block are randomized.

Consider the example from Section 6-2 (next slide): k = 2 factors, n = 3 replicates. This is the "usual" method for calculating a block sum of squares:
SS_Blocks = Σ_{i=1}^{3} B_i²/4 − y_..²/12 = 6.50

6-2: The Simplest Case: The 2². Chemical process example with treatment combinations (1), (a), (b), (ab); A = reactant concentration, B = catalyst amount, y = recovery. ANOVA for the blocked design (text page 305).

Confounding in Blocks
• Confounding is a design technique for arranging a complete factorial experiment in blocks, where the block size is smaller than the number of treatment combinations in one replicate.
• Now consider the unreplicated case.
• Clearly the previous discussion does not apply, since there …
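The block sum of squares quoted above is simple arithmetic on the block (replicate) totals. A short sketch follows; the observations are not shown in the excerpt, so treat the values below as illustrative (they are chosen to reproduce the slide's SS_Blocks = 6.50 but may not match Montgomery's actual table).

```python
import numpy as np

# Illustrative recoveries for a 2^2 design in n = 3 replicates (blocks);
# rows = blocks, columns = treatment combinations (1), a, b, ab.
y = np.array([[28.0, 36.0, 18.0, 31.0],
              [25.0, 32.0, 19.0, 30.0],
              [27.0, 32.0, 23.0, 29.0]])

n_blocks, n_trt = y.shape          # 3 blocks, 2^2 = 4 runs per block
block_totals = y.sum(axis=1)       # B_i
grand_total = y.sum()              # y_..

# SS_Blocks = sum_i B_i^2 / (runs per block) - y_..^2 / N
ss_blocks = (block_totals**2).sum() / n_trt - grand_total**2 / y.size
print(round(ss_blocks, 2))         # 6.5
```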
Chapter 5 Experiments, Good and Bad
The point of both observational studies and designed experiments is to identify a variable or set of variables, called explanatory variables, which are thought to predict an outcome or response variable. Confounding between explanatory variables occurs when two or more explanatory variables are not separated, so it is not clear how much each explanatory variable contributes to the prediction of the response variable. A lurking variable is an explanatory variable not considered in the study but confounded with one or more explanatory variables in the study.

Confounding with lurking variables is effectively reduced in randomized comparative experiments, where subjects are assigned to treatments at random. Confounding with a lurking variable (only one at a time) is reduced in observational studies by controlling for it, that is, by comparing matched groups. Consequently, experiments are much more effective than observational studies at detecting which explanatory variables cause differences in response. In both cases, statistically significant observed differences in average responses imply the differences are "real", i.e., they did not occur by chance alone.

Exercise 5.1 (Experiments, Good and Bad)
1. Randomized comparative experiment: effect of temperature on mice rate of oxygen consumption. For example, mice rate of oxygen consumption is 10.3 mL/sec when subjected to 10° F.
   temperature (°F): 0    10    20    30
   ROC (mL/sec):     9.7  10.3  11.2  14.0
   (a) The explanatory variable considered in the study is (choose one): i. temperature, ii. rate of oxygen consumption, iii. mice, iv. mouse weight.
   (b) The response is (choose one): i. temperature, ii. rate of oxygen consumption, iii. …
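As a side illustration (not part of the chapter): random assignment tends to balance even unmeasured lurking variables across treatment groups, which is why it reduces confounding. A quick check with a simulated lurking variable:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

lurking = rng.normal(size=n)                              # unmeasured trait
treatment = rng.permutation(np.repeat([0, 1], n // 2))    # random assignment

# With random assignment the group means of the lurking variable are nearly
# equal, so it cannot explain a difference in response between the groups.
print(lurking[treatment == 1].mean(), lurking[treatment == 0].mean())
```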
The Impact of Other Factors: Confounding, Mediation, and Effect Modification Amy Yang
Amy Yang, Senior Statistical Analyst, Biostatistics Collaboration Center. Oct. 14, 2016.

BCC: Biostatistics Collaboration Center – Who We Are. Leah J. Welty, PhD (Assoc. Professor, BCC Director); Joan S. Chmiel, PhD (Professor); Jody D. Ciolino, PhD (Asst. Professor); Kwang-Youn A. Kim, PhD (Asst. Professor); Alfred W. Rademaker, PhD (Professor); Masha Kocherginsky, PhD (Assoc. Professor); Mary J. Kwasny, ScD (Assoc. Professor); Julia Lee, PhD, MPH (Assoc. Professor); Hannah L. Palac, MS (Senior Stat. Analyst); Gerald W. Rouleau, MS (Stat. Analyst); Amy Yang, MS (Senior Stat. Analyst). Not pictured: David A. Aaby, MS (Senior Stat. Analyst); Tameka L. Brannon (Financial | Research Administrator). Biostatistics Collaboration Center, 680 N. Lake Shore Drive, Suite 1400, Chicago, IL 60611.

What We Do. Our mission is to support FSM investigators in the conduct of high-quality, innovative health-related research by providing expertise in biostatistics, statistical programming, and data management.

How We Do It. Every investigator is provided a FREE initial consultation of up to 2 hours with BCC faculty or staff. Statistical support for cancer-related or Lurie Children's projects should be triaged through their available resources. For grants, the BCC recommends requesting support at least 6-8 weeks before the submission deadline; BCC faculty serve as Co-Investigators and analysts serve as Biostatisticians (Recharge Model, hourly rate, for short-term collaboration; Subscription Model, salary support, for long-term collaboration). We provide: study design, analysis plan, power and sample size.

How can you contact us?
• Request an Appointment: http://www.feinberg.northwestern.edu/sites/bcc/contact-us/request-form.html
• General Inquiries: 312.503.2288
• Visit Our Website: http://www.feinberg.northwestern.edu/sites/bcc/index.html
Relationships Between Sleep Duration and Adolescent Depression: a Conceptual Replication
HHS Public Access author manuscript. Sleep Health. Author manuscript; available in PMC 2020 February 25. Published in final edited form as: Sleep Health. 2019 April; 5(2): 175–179. doi:10.1016/j.sleh.2018.12.003.

AT Berger (1), KL Wahlstrom (2), R Widome (1). (1) Division of Epidemiology and Community Health, University of Minnesota School of Public Health, 1300 South 2nd St, Suite #300, Minneapolis, MN 55454. (2) Department of Organizational Leadership, Policy and Development, 210D Burton Hall, College of Education and Human Development, University of Minnesota, Minneapolis, MN 55455.

Abstract.
Objective: Given the growing concern about research reproducibility, we conceptually replicated a previous analysis of the relationships between adolescent sleep and mental well-being, using a new dataset.
Methods: We conceptually reproduced an earlier analysis (Sleep Health, June 2017) using baseline data from the START Study. START is a longitudinal research study designed to evaluate a natural experiment in delaying high school start times, examining the impact of sleep duration on weight change in adolescents. In both START and the previous study, school-day bedtime, wake-up time, and answers to a six-item depression subscale were self-reported using a survey administered during the school day. Logistic regression models were used to compute the association and 95% confidence intervals (CI) between the sleep variables (sleep duration, wake-up time, and bedtime) and a range of outcomes.
Results: In both analyses, greater sleep duration was associated with lower odds (P < 0.0001) of all six indicators of depressive mood.
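The abstract describes the analysis only in outline: logistic regressions of depressive-mood indicators on sleep variables, reported as odds with confidence intervals. Below is a generic sketch of that kind of model on simulated data; the variable names and effect sizes are invented for illustration and are not the paper's data or results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000

# Simulated survey data: school-night sleep hours and a binary depressive-mood indicator
sleep_hours = rng.normal(7.5, 1.0, n).clip(4, 11)
logit = 1.5 - 0.4 * sleep_hours                      # invented: longer sleep -> lower odds
depressed = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"sleep_hours": sleep_hours, "depressed": depressed})
X = sm.add_constant(df[["sleep_hours"]])
fit = sm.Logit(df["depressed"], X).fit(disp=0)

# Odds ratio per additional hour of sleep, with a 95% confidence interval
or_per_hour = np.exp(fit.params["sleep_hours"])
ci_low, ci_high = np.exp(fit.conf_int().loc["sleep_hours"])
print(f"OR per hour of sleep: {or_per_hour:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```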