Design and Analysis of Experiments Course Notes for STAT 568


Adam B Kashlak
Mathematical & Statistical Sciences
University of Alberta
Edmonton, Canada, T6G 2G1

March 27, 2019

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/.

Contents

Preface

1 One-Way ANOVA
  1.0.1 Terminology
  1.1 Analysis of Variance
  1.1.1 Sample size computation
  1.1.2 Contrasts
  1.2 Multiple Comparisons
  1.3 Random Effects
  1.3.1 Derivation of the F statistic
  1.4 Cochran's Theorem

2 Multiple Factors
  2.1 Randomized Block Design
  2.1.1 Paired vs Unpaired Data
  2.1.2 Tukey's One DoF Test
  2.2 Two-Way Layout
  2.2.1 Fixed Effects
  2.3 Latin Squares
  2.3.1 Graeco-Latin Squares
  2.4 Balanced Incomplete Block Designs
  2.5 Split-Plot Designs
  2.6 Analysis of Covariance

3 Multiple Testing
  3.1 Family-wise Error Rate
  3.1.1 Bonferroni's Method
  3.1.2 Sidak's Method
  3.1.3 Holm's Method
  3.1.4 Stepwise Methods
  3.2 False Discovery Rate
  3.2.1 Benjamini-Hochberg Method

4 Factorial Design
  4.1 Full Factorial Design
  4.1.1 Estimating effects with regression
  4.1.2 Lenth's Method
  4.1.3 Key Concepts
  4.1.4 Dispersion and Variance Homogeneity
  4.1.5 Blocking with Factorial Design
  4.2 Fractional Factorial Design
  4.2.1 How to choose a design
  4.3 3^k Factorial Designs
  4.3.1 Linear and Quadratic Contrasts
  4.3.2 3^(k−q) Fractional Designs
  4.3.3 Agricultural Example

5 Response Surface Methodology
  5.1 First and Second Order
  5.2 Some Response Surface Designs
  5.2.1 Central Composite Design
  5.2.2 Box-Behnken Design
  5.2.3 Uniform Shell Design
  5.3 Search and Optimization
  5.3.1 Ascent via First Order Designs
  5.4 Chemical Reaction Data Example

6 Nonregular, Nonnormal, and other Designs
  6.1 Prime Level Factorial Designs
  6.1.1 5 level designs
  6.1.2 7 level designs
  6.1.3 Example of a 25-run design
  6.2 Mixed Level Designs
  6.2.1 2^n 4^m Designs
  6.2.2 36-Run Designs
  6.3 Nonregular Designs
  6.3.1 Plackett-Burman Designs
  6.3.2 Aliasing and Correlation
  6.3.3 Simulation Example
  6.A Paley's Construction of H_N

Preface

It's the random factors. You can't be sure, ever. All of time and space for them to happen in. How can we know where to look or turn? Do we have to fight our way through every possible happening to get the thing we want?
— Time and Again, Clifford D. Simak (1951)

The following are lecture notes originally produced for a graduate course on experimental design at the University of Alberta in the winter of 2018. The goal of these notes is to cover the classical theory of design as born from some of the founding fathers of statistics. The proper approach is to begin with a hypothesis to test, design an experiment to test that hypothesis, collect data as required by the design, and run the test. These days, data is collected en masse and often subjected to many tests in an exploratory search. Still, understanding how to design experiments remains critical for determining which factors affect the observations.

These notes were produced by consolidating two sources. One is the text of Wu and Hamada, Experiments: Planning, Analysis, and Optimization. The second is lecture notes and lecture slides from Dr. Doug Wiens and Dr. Linglong Kong, respectively.

Adam B Kashlak
Edmonton, Canada
January 2018

Additional notes on multiple testing were included based on the text Large-Scale Inference by Bradley Efron, which is quite relevant to the many hypothesis tests considered in factorial designs.

ABK, Jan 2019

Chapter 1
One-Way ANOVA

Introduction

We begin by considering an experiment in which k groups are compared. The primary goal is to determine whether or not there is a significant difference among all of the groups. The secondary goal is then to determine which specific pairs of groups differ the most. One example would be sampling n residents from each of the k = 10 Canadian provinces and comparing their heights, perhaps to test whether or not stature is affected by province. Here, the province is the single factor in the experiment with 10 different factor levels. Another example would be contrasting the heights of k = 3 groups of flowers where group one is given just water, group two is given water and nutrients, and group three is given water and vinegar.
In this case, the factor is the liquid given to the flowers. It is also often referred to as a treatment. When more than one factor is considered, the treatment refers to a specific combination of levels of all factors.

1.0.1 Terminology

In the design of experiments literature, there is much terminology to consider. The following is a list of some of the common terms:

• Size or level of a test: the probability of a false positive. That is, the probability of falsely rejecting the null hypothesis.
• Power of a test: the probability of a true positive. That is, the probability of correctly rejecting the null hypothesis.
• Response: the dependent variable or output of the model. It is what we are interested in modelling.
• Factor: an explanatory variable or an input into the model. Often controlled by the experimenter.
• Factor Level: the different values that a factor can take. Often these are categorical.
• Treatment: the overall combination of many factors and levels.
• Blocking: grouping subjects by type in order to understand the variation between the blocks versus the variation within the blocks.
• Fixed effects: when a factor is chosen by the experimenter, it is considered fixed.
• Random effects: when a factor is not controlled by the experimenter, it is considered random.

One example comes from the Rabbit dataset in the MASS library in R. Here, five rabbits are given a drug (MDL) and a placebo in different dosages, and the effect on their blood pressure is recorded. The blood pressure is the response. The factors are drug, dosage, and rabbit, where the factor levels for drug are {MDL, placebo}, the levels for dosage are {6.25, 12.5, 25, 50, 100, 200}, and the levels for rabbit are {R1, ..., R5}. A specific treatment could be rabbit R3 with a dosage of 25 of placebo.

1.1 Analysis of Variance

In statistics in general, analysis of variance or ANOVA is concerned with decomposing the total variation of the data by the factors.
That is, it determines how much of the variation can be explained by each factor and how much is left to random noise. We begin with the setting of one-way fixed effects. Consider a sample of size N = nk and k different treatments. Thus, we have k different groups of size n. Each group is given a different treatment, and measurements $y_{ij}$ for $i = 1, \ldots, k$ and $j = 1, \ldots, n$ are collected. The one-way ANOVA is concerned with comparing the between group variation to the within group variation, that is, the variation explained by the treatments vs the unexplained variation. (See Wu & Hamada, Section 2.1.)

Remark 1.1.1 (Randomization). In the fixed effects setting in practice, the N subjects are randomly assigned to one of the k treatment groups. However, this is not always possible for a given experiment.

The model for the observations $y_{ij}$ is
$$ y_{ij} = \mu + \tau_i + \varepsilon_{ij} $$
where $\mu$ is the global mean and $\tau_i$ is the effect of the ith category or treatment. The $\varepsilon_{ij}$ are random noise variables generally assumed to be iid $N(0, \sigma^2)$ with $\sigma^2$ unknown. Based on this model, we can rewrite the observation $y_{ij}$ as
$$ y_{ij} = \hat{\mu} + \hat{\tau}_i + r_{ij} \tag{1.1.1} $$
where
$$ \hat{\mu} = \bar{y}_{\cdot\cdot} = \frac{1}{N}\sum_{i=1}^{k}\sum_{j=1}^{n} y_{ij}, \qquad \hat{\tau}_i = \bar{y}_{i\cdot} - \bar{y}_{\cdot\cdot} = \frac{1}{n}\sum_{j=1}^{n} y_{ij} - \frac{1}{N}\sum_{l=1}^{k}\sum_{j=1}^{n} y_{lj}, \qquad r_{ij} = y_{ij} - \bar{y}_{i\cdot}. $$
Equation 1.1.1 can be rearranged into
$$ y_{ij} - \bar{y}_{\cdot\cdot} = (\bar{y}_{i\cdot} - \bar{y}_{\cdot\cdot}) + (y_{ij} - \bar{y}_{i\cdot}). $$
This, in turn, can be squared and summed to get
$$ \sum_{i=1}^{k}\sum_{j=1}^{n} (y_{ij} - \bar{y}_{\cdot\cdot})^2 = \sum_{i=1}^{k} n(\bar{y}_{i\cdot} - \bar{y}_{\cdot\cdot})^2 + \sum_{i=1}^{k}\sum_{j=1}^{n} (y_{ij} - \bar{y}_{i\cdot})^2, $$
which is just the total sum of squares, $SS_{tot}$, decomposed into the sum of the treatment sum of squares, $SS_{tr}$, and the error sum of squares, $SS_{err}$. Under the assumption that the errors $\varepsilon_{ij}$ are normally distributed, the usual F statistic can be derived to test the hypothesis
$$ H_0: \tau_1 = \cdots = \tau_k \quad \text{vs} \quad H_1: \exists\, i_1, i_2 \text{ s.t. } \tau_{i_1} \neq \tau_{i_2}. $$
Indeed, under this model, it can be shown that $SS_{err}/\sigma^2 \sim \chi^2(N-k)$ and that, under the null hypothesis, $SS_{tr}/\sigma^2 \sim \chi^2(k-1)$.
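The sum-of-squares decomposition above can be verified numerically. The following is a minimal sketch, not from the notes, using simulated data; the group sizes, treatment effects, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 10                        # k treatment groups of n observations each
N = k * n
tau = np.array([0.0, 1.0, -0.5])    # hypothetical treatment effects tau_i

# y_ij = mu + tau_i + eps_ij with iid N(0, sigma^2) noise
y = 5.0 + tau[:, None] + rng.normal(0.0, 1.0, size=(k, n))

grand_mean = y.mean()               # mu-hat = y-bar..
group_means = y.mean(axis=1)        # y-bar_i.

ss_tot = ((y - grand_mean) ** 2).sum()               # total sum of squares
ss_tr = (n * (group_means - grand_mean) ** 2).sum()  # treatment sum of squares
ss_err = ((y - group_means[:, None]) ** 2).sum()     # error sum of squares

# The identity SS_tot = SS_tr + SS_err holds exactly (up to floating point).
assert np.isclose(ss_tot, ss_tr + ss_err)
```

The identity is algebraic, so it holds for any data set, not just this simulated one.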
Hence, the test statistic is
$$ F = \frac{SS_{tr}/(k-1)}{SS_{err}/(N-k)} \sim F(k-1,\, N-k). $$
Often, for example in R, all of these terms from the one-way ANOVA experiment are represented in a table as follows:

              DoF     Sum Squares   Mean Squares      F value   p-value
  Treatment   k − 1   SS_tr         SS_tr/(k − 1)     F         P(> F)
  Residuals   N − k   SS_err        SS_err/(N − k)

Remark 1.1.2 (Degrees of Freedom).
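The entries of the table above can be computed directly from the sums of squares. This is a sketch with simulated data under the null hypothesis (names and simulation settings are illustrative), cross-checked against SciPy's built-in one-way ANOVA:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
k, n = 3, 10
N = k * n
# Simulate under H0 (all tau_i equal), so F should follow F(k-1, N-k)
y = rng.normal(5.0, 1.0, size=(k, n))

grand_mean = y.mean()
group_means = y.mean(axis=1)
ss_tr = (n * (group_means - grand_mean) ** 2).sum()
ss_err = ((y - group_means[:, None]) ** 2).sum()

ms_tr = ss_tr / (k - 1)             # treatment mean square
ms_err = ss_err / (N - k)           # residual mean square
F = ms_tr / ms_err                  # F value
p = stats.f.sf(F, k - 1, N - k)     # p-value: P(F(k-1, N-k) > F)

# Cross-check against SciPy's one-way ANOVA on the same k groups
F_ref, p_ref = stats.f_oneway(*y)
assert np.isclose(F, F_ref) and np.isclose(p, p_ref)
```

In R, the same table is produced by `anova(aov(y ~ treatment))`.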