Single-Factor Experiments

D.G. Bonett (8/2018)

Module 3: One-Factor Experiments

A between-subjects treatment factor is an independent variable with a ≥ 2 levels in which participants are randomized into a groups. It is common, but not necessary, to have an equal number of participants in each group. Each group receives one of the a levels of the independent variable, with participants being treated identically in every other respect. The two-group experiment considered previously is a special case of this type of design. In a one-factor experiment with a levels of the independent variable (also called a completely randomized design), the population parameters are 휇1, 휇2, …, 휇푎, where 휇푗 (j = 1 to a) is the population mean of the response variable if all members of the study population had received level j of the independent variable.

One way to assess the differences among the a population means is to compute confidence intervals for all possible pairwise differences. For example, with a = 3 levels the following pairwise comparisons of population means could be examined: 휇1 – 휇2, 휇1 – 휇3, and 휇2 – 휇3. In a one-factor experiment with a levels there are a(a – 1)/2 pairwise comparisons. Confidence intervals for any of the two-group measures of effect size (e.g., mean difference, standardized mean difference, mean ratio, median difference, median ratio) described in Module 2 can be used to analyze any pair of groups.

For any single 100(1 − 훼)% confidence interval, we can be 100(1 − 훼)% confident that the confidence interval has captured the population parameter, and if v 100(1 − 훼)% confidence intervals are computed, we can be at least 100(1 − 푣훼)% confident that all v confidence intervals have captured their population parameters. For example, if 95% confidence intervals for 휇1 – 휇2, 휇1 – 휇3, and 휇2 – 휇3 are computed, we can be at least 100(1 − 3훼)% = 100(1 – .15)% = 85% confident that all three confidence intervals have captured the three population mean differences.

When considering v confidence intervals for some measure of effect size, the researcher would like to be at least 100(1 − 훼)% confident, rather than at least 100(1 − 푣훼)% confident, that all v confidence intervals will capture the v population effect-size values. One simple way to achieve this is to use a Bonferroni adjustment 훼* = 훼/v rather than 훼 in the critical t-value or critical z-value for each confidence interval.

When examining all possible pairwise differences, the Tukey-Kramer method yields narrower confidence intervals than the Bonferroni method. The classical Tukey-Kramer method for comparing all possible pairs of means assumes equal population variances, but a version of the Tukey-Kramer method that does not require equal population variances is available. SPSS provides an option to compute Games-Howell confidence intervals for all pairwise comparisons of means, which are the same as the unequal-variance version of the Tukey-Kramer confidence intervals. The Tukey-Kramer and Games-Howell methods are used only when the researcher is interested in examining all possible pairwise differences. A Bonferroni confidence interval will be narrower than a Tukey-Kramer or Games-Howell confidence interval if, prior to an examination of the sample results, the researcher is interested in only u < v of the v = a(a – 1)/2 possible pairwise comparisons. For u planned comparisons, the Bonferroni adjustment is 훼* = 훼/u.
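To make the Bonferroni-adjusted pairwise analysis concrete, here is a minimal Python sketch (an illustration added to these notes, not Bonett's own code; it assumes NumPy and SciPy are available, and the three groups in the usage example are simulated). It computes unequal-variance (Welch) confidence intervals for all a(a – 1)/2 pairwise mean differences using 훼* = 훼/v.

    import numpy as np
    from scipy import stats

    def bonferroni_pairwise_ci(groups, alpha=0.05):
        """Unequal-variance (Welch) CIs for all pairwise mean differences,
        using the Bonferroni adjustment alpha* = alpha/v over the v pairs."""
        a = len(groups)
        v = a * (a - 1) // 2
        alpha_star = alpha / v                      # Bonferroni-adjusted alpha
        results = []
        for j in range(a):
            for k in range(j + 1, a):
                y1 = np.asarray(groups[j], float)
                y2 = np.asarray(groups[k], float)
                n1, n2 = len(y1), len(y2)
                v1, v2 = y1.var(ddof=1), y2.var(ddof=1)
                se = np.sqrt(v1 / n1 + v2 / n2)
                # Satterthwaite degrees of freedom for the Welch interval
                df = se**4 / ((v1 / n1)**2 / (n1 - 1) + (v2 / n2)**2 / (n2 - 1))
                t_crit = stats.t.ppf(1 - alpha_star / 2, df)
                diff = y1.mean() - y2.mean()
                results.append((j + 1, k + 1, diff - t_crit * se, diff + t_crit * se))
        return results

    # Usage example with three simulated groups
    rng = np.random.default_rng(1)
    groups = [rng.normal(25, 5, 30), rng.normal(29, 6, 30), rng.normal(27, 5, 30)]
    for g1, g2, lo, hi in bonferroni_pairwise_ci(groups):
        print(f"mu{g1} - mu{g2}: [{lo:.2f}, {hi:.2f}]")

Replacing the Bonferroni-adjusted t critical value with a studentized-range critical value would give Tukey-Kramer or Games-Howell style simultaneous intervals instead.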
However, if u of the v possible pairwise comparisons appear interesting after an examination of the sample results, it is necessary to use 훼* = 훼/v and not 훼* = 훼/u.

Example 3.1. There is considerable variability in measures of intellectual ability among college students. One psychologist believes that some of this variability can be explained by differences in how students expect to perform on these tests. Ninety undergraduates were randomly selected from a list of about 5,400 undergraduates. The 90 students were randomly divided into three groups of equal size and all 90 students were given a nonverbal intelligence test (Raven’s Progressive Matrices) under identical testing conditions. The raw scores for this test range from 0 to 60. The students in group 1 were told that they were taking a very difficult intelligence test. The students in group 2 were told that they were taking an interesting “puzzle”. The students in group 3 were not told anything. Simultaneous Tukey-Kramer confidence intervals for all pairwise comparisons of population means are given below.

Comparison     95% Lower Limit     95% Upper Limit
휇1 – 휇2             -5.4                -3.1
휇1 – 휇3             -3.2                -1.4
휇2 – 휇3              1.2                 3.5

The researcher is 95% confident that the mean intelligence score would be 3.1 to 5.4 greater if all 5,400 undergraduates had been told that the test was a puzzle instead of a difficult IQ test, 1.4 to 3.2 greater if they all had been told nothing instead of being told that the test is a difficult IQ test, and 1.2 to 3.5 greater if they all had been told the test was a puzzle instead of being told nothing. The simultaneous confidence intervals allow the researcher to be 95% confident regarding all three conclusions.

Linear Contrasts

Some research questions can be expressed in terms of a linear contrast of population means, $\sum_{j=1}^{a} c_j\mu_j$, where $c_j$ is called a contrast coefficient. For example, in an experiment that compares two costly treatments (Treatments 1 and 2) with a new inexpensive treatment (Treatment 3), a confidence interval for (휇1 + 휇2)/2 – 휇3 may provide valuable information regarding the relative costs and benefits of the new treatment. Statistical packages and various statistical formulas require linear contrasts to be expressed as $\sum_{j=1}^{a} c_j\mu_j$, which requires the specification of the contrast coefficients. For example, (휇1 + 휇2)/2 – 휇3 can be expressed as (½)휇1 + (½)휇2 + (-1)휇3 so that 푐1 = .5, 푐2 = .5, and 푐3 = -1. Consider another example where Treatment 1 is delivered to groups 1 and 2 by experimenters A and B and Treatment 2 is delivered to groups 3 and 4 by experimenters C and D. In this study we may want to estimate (휇1 + 휇2)/2 – (휇3 + 휇4)/2, which can be expressed as (½)휇1 + (½)휇2 + (-½)휇3 + (-½)휇4 so that 푐1 = .5, 푐2 = .5, 푐3 = -.5, and 푐4 = -.5.

A 100(1 − 훼)% unequal-variance confidence interval for $\sum_{j=1}^{a} c_j\mu_j$ is

$\sum_{j=1}^{a} c_j\hat{\mu}_j \pm t_{\alpha/2;\,df}\sqrt{\sum_{j=1}^{a} c_j^2\hat{\sigma}_j^2/n_j}$    (3.1)

where $df = \left[\sum_{j=1}^{a} c_j^2\hat{\sigma}_j^2/n_j\right]^2 \Big/ \left[\sum_{j=1}^{a} c_j^4\hat{\sigma}_j^4/\{n_j^2(n_j-1)\}\right]$. When examining v linear contrasts, 훼 can be replaced with 훼* = 훼/v in Equation 3.1 to give a set of Bonferroni simultaneous confidence intervals.

If the sample sizes are approximately equal and there is convincing evidence from previous research that the population variances are not highly dissimilar, then the unequal-variance standard error in Equation 3.1 could be replaced with an equal-variance standard error $\sqrt{\hat{\sigma}_p^2\sum_{j=1}^{a} c_j^2/n_j}$, where $\hat{\sigma}_p^2 = \left[\sum_{j=1}^{a}(n_j-1)\hat{\sigma}_j^2\right]/df$ and $df = \left(\sum_{j=1}^{a} n_j\right) - a$.
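As a companion to Equation 3.1, the following Python sketch (again an illustration with assumed function and variable names, not code from the notes) computes the unequal-variance confidence interval for a linear contrast, including the Satterthwaite degrees of freedom; the data in the usage example are simulated.

    import numpy as np
    from scipy import stats

    def contrast_ci(groups, c, alpha=0.05):
        """Unequal-variance 100(1 - alpha)% CI for the contrast sum_j c_j * mu_j
        (Equation 3.1), with Satterthwaite degrees of freedom."""
        means = np.array([np.mean(g) for g in groups])
        vars_ = np.array([np.var(g, ddof=1) for g in groups])
        n = np.array([len(g) for g in groups])
        c = np.asarray(c, float)

        est = np.sum(c * means)                        # sum_j c_j * muhat_j
        se = np.sqrt(np.sum(c**2 * vars_ / n))         # unequal-variance standard error
        df = np.sum(c**2 * vars_ / n)**2 / np.sum(c**4 * vars_**2 / (n**2 * (n - 1)))
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        return est - t_crit * se, est + t_crit * se

    # Usage example: (mu1 + mu2)/2 - mu3 with c = (.5, .5, -1), simulated data
    rng = np.random.default_rng(2)
    groups = [rng.normal(50, 10, 25), rng.normal(52, 12, 25), rng.normal(45, 9, 25)]
    print(contrast_ci(groups, [0.5, 0.5, -1.0]))

For v planned contrasts, passing alpha/v in place of alpha gives the Bonferroni simultaneous intervals described above.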
Standardized Linear Contrasts

In applications where the intended audience may be unfamiliar with the metric of the response variable, it could be helpful to report a confidence interval for a standardized linear contrast of population means, which is defined as

$\varphi = \dfrac{\sum_{j=1}^{a} c_j\mu_j}{\sqrt{\sum_{j=1}^{a}\sigma_j^2/a}}$

and is a generalization of the standardized mean difference defined in Module 2. The denominator of 휑 is called the standardizer. Some alternative standardizers have been proposed for linear contrasts. One alternative standardizer averages the variances across only those groups that have a non-zero contrast coefficient. Another standardizer uses only the variance from a control group. Although not recommended for routine use, the most popular standardizer is the square root of $\hat{\sigma}_p^2$ defined above, which can be justified only when the population variances are approximately equal or the sample sizes are equal.

An approximate equal-variance 100(1 − 훼)% confidence interval for 휑 is

$\hat{\varphi} \pm z_{\alpha/2}\,SE_{\hat{\varphi}}$    (3.2)

where $\hat{\varphi} = \sum_{j=1}^{a} c_j\hat{\mu}_j \Big/ \sqrt{\left(\sum_{j=1}^{a}\hat{\sigma}_j^2\right)/a}$ and $SE_{\hat{\varphi}} = \sqrt{\dfrac{\hat{\varphi}^2}{2a^2}\sum_{j=1}^{a}\dfrac{1}{n_j-1} + \sum_{j=1}^{a} c_j^2/n_j}$.

An unequal-variance confidence interval for 휑 is available and is recommended in studies with unequal sample sizes. When examining v linear contrasts, 훼 can be replaced with 훼* = 훼/v in Equation 3.2 to give a set of Bonferroni simultaneous confidence intervals.

Example 3.2. Ninety students were randomly selected from a research participant pool and randomized into three groups. All three groups were given the same set of boring tasks for 20 minutes. Then all students listened to an audio recording that listed, in random order, the names of 40 people who will be attending a party and the names of 20 people who will not be attending the party. The participants were told to simply write down the names of the people who will attend the party as they hear them. In group 1, the participants were asked to draw copies of complex geometric figures while they were listening to the audio recording and writing. In group 2, the participants were not told to draw anything while listening and writing. In group 3, the participants were told to draw squares while listening and writing. The number of correctly recorded attendees was obtained from each participant.
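The equal-variance interval in Equation 3.2 can be sketched in the same way. The Python code below is illustrative only: the function name and the simulated data are assumptions, and the contrast shown — comparing the average of the two drawing groups with the no-drawing group — is just one contrast a researcher might examine in a design like Example 3.2.

    import numpy as np
    from scipy import stats

    def std_contrast_ci(groups, c, alpha=0.05):
        """Approximate equal-variance 100(1 - alpha)% CI for a standardized
        linear contrast of means (Equation 3.2)."""
        means = np.array([np.mean(g) for g in groups])
        vars_ = np.array([np.var(g, ddof=1) for g in groups])
        n = np.array([len(g) for g in groups])
        c = np.asarray(c, float)
        a = len(groups)

        # Estimated standardized contrast and its approximate standard error
        phi_hat = np.sum(c * means) / np.sqrt(np.sum(vars_) / a)
        se = np.sqrt(phi_hat**2 / (2 * a**2) * np.sum(1 / (n - 1)) + np.sum(c**2 / n))
        z = stats.norm.ppf(1 - alpha / 2)
        return phi_hat, phi_hat - z * se, phi_hat + z * se

    # Usage example: standardized (mu1 + mu3)/2 - mu2 with simulated recall scores
    rng = np.random.default_rng(3)
    groups = [rng.normal(24, 6, 30), rng.normal(31, 6, 30), rng.normal(27, 6, 30)]
    print(std_contrast_ci(groups, [0.5, -1.0, 0.5]))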