Chapter 13. Experimental Design: Multiple Independent Variables
Recommended publications
Application of the Taguchi Method to Sensitivity Analysis of a Middle-Ear Finite-Element Model
Li Qi, Chadia S. Mikhael and W. Robert J. Funnell, Department of BioMedical Engineering and Department of Otolaryngology, McGill University, Montréal, QC, Canada H3A 2B4

ABSTRACT: Sensitivity analysis of a model is the investigation of how outputs vary with changes of input parameters, in order to identify the relative importance of parameters and to help in optimization of the model. The one-factor-at-a-time (OFAT) method has been widely used for sensitivity analysis of middle-ear models. The results of OFAT, however, are unreliable if there are significant interactions among parameters. This paper incorporates the Taguchi method into the sensitivity analysis of a middle-ear finite-element model. Two outputs, tympanic-membrane volume displacement and stapes footplate displacement, are measured. Nine input parameters and four possible interactions are investigated for two model outputs.

The difference in the model output due to the change in the input variable is referred to as the sensitivity. The relative importance of parameters is judged based on the magnitude of the calculated sensitivity. The OFAT method does not, however, take into account the possibility of interactions among parameters. Such interactions mean that the model sensitivity to one parameter can change depending on the values of other parameters. Alternatively, the full-factorial method permits the analysis of parameter interactions, but generally requires a very large number of simulations. This can be impractical when individual simulations are time-consuming. A more practical approach is the Taguchi method, which is commonly used in industry.
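The OFAT procedure contrasted with the Taguchi method above can be sketched in a few lines. The model function and parameter values here are hypothetical stand-ins, not the paper's finite-element model; the toy model's deliberate interaction term shows exactly the failure mode the abstract describes, since the apparent sensitivity to one parameter depends on the level of the other.

```python
# One-factor-at-a-time (OFAT) sensitivity sketch for a toy model.
# The model and its parameter values are hypothetical, chosen only to
# illustrate the method; they are not the middle-ear model from the paper.

def model(params):
    # Toy response with a deliberate a*b interaction term.
    return 2.0 * params["a"] + 0.5 * params["b"] + 1.5 * params["a"] * params["b"]

def ofat_sensitivity(f, baseline, delta=0.1):
    """Change one parameter at a time by a relative step; others stay at baseline."""
    base_out = f(baseline)
    sens = {}
    for name, value in baseline.items():
        perturbed = dict(baseline, **{name: value * (1 + delta)})
        sens[name] = f(perturbed) - base_out
    return sens

low_b = ofat_sensitivity(model, {"a": 1.0, "b": 2.0})
high_b = ofat_sensitivity(model, {"a": 1.0, "b": 4.0})
# Because of the interaction, the apparent sensitivity to "a" changes
# with the level of "b" -- which a single OFAT pass cannot reveal.
print(low_b["a"], high_b["a"])
```

Running the same OFAT pass at two baselines makes the interaction visible; a single pass, as the paper notes, would not.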
Two Social Lives: How Differences Between Online and Offline Interaction Influence Social Outcomes
Available online at www.sciencedirect.com (ScienceDirect). Alicea Lieberman and Juliana Schroeder.

For hundreds of thousands of years, humans only communicated in person, but in just the past fifty years they have started also communicating online. Today, people communicate more online than offline. What does this shift mean for human social life? We identify four structural differences between online (versus offline) interaction: (1) fewer nonverbal cues, (2) greater anonymity, (3) more opportunity to form new social ties and bolster weak ties, and (4) wider dissemination of information. Each of these differences underlies systematic psychological and behavioral consequences. Online and offline lives often intersect; we thus further review how online engagement can (1) disrupt or (2) enhance offline interaction.

... Facebook users, 75% of whom report checking the platform daily [2]. Among teenagers, 95% report using smartphones and 45% report being online "constantly" [2]. This shift from offline to online socializing has meaningful and measurable consequences for every aspect of human interaction, from how people form impressions of one another, to how they treat each other, to the breadth and depth of their connection. The current article proposes a new framework to identify, understand, and study these consequences, highlighting promising avenues for future research.

Structural differences between online and offline interaction
Fractional Factorial Designs
Statistics 514: Fractional Factorial Designs. Lecture 12: 2^(k−p) Fractional Factorial Design. Montgomery: Chapter 8. Fall 2005.

Fundamental Principles Regarding Factorial Effects

Suppose there are k factors (A, B, ..., J, K) in an experiment. All possible factorial effects include effects of order 1: A, B, ..., K (main effects); effects of order 2: AB, AC, ..., JK (2-factor interactions); and so on.

• Hierarchical Ordering Principle
– Lower-order effects are more likely to be important than higher-order effects.
– Effects of the same order are equally likely to be important.
• Effect Sparsity Principle (Pareto principle)
– The number of relatively important effects in a factorial experiment is small.
• Effect Heredity Principle
– In order for an interaction to be significant, at least one of its parent factors should be significant.

Fractional Factorials

• We may not have the resources (time, money, etc.) for a full factorial design.
• The number of runs required for a full factorial grows quickly.
– Consider a 2^k design. If k = 7, 128 runs are required.
– Those runs can estimate 127 effects, yet only 7 df go to main effects and 21 to 2-factor interactions; the remaining 99 df are for interactions of order ≥ 3.
• Often only lower-order effects are important.
• A full factorial design may not be necessary according to:
– the Hierarchical Ordering Principle
– the Effect Sparsity Principle
• A fraction of the full factorial design (i.e., a subset of all possible level combinations) is sufficient.

Example 1
• Suppose you were designing a new car.
• You wanted to consider the following nine factors, each with 2 levels.
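The generator-column construction behind a 2^(k−p) fraction can be shown concretely for the smallest case, a 2^(3−1) half fraction with defining relation I = ABC. This is the generic textbook construction (not tied to the lecture's nine-factor car example): run the full 2^2 in A and B, then alias C to the AB column.

```python
from itertools import product

# Build a 2^(3-1) half-fraction: full 2^2 design in A and B, then
# alias C = AB. Levels are coded -1/+1, the usual two-level coding.
def half_fraction():
    runs = []
    for a, b in product((-1, 1), repeat=2):
        c = a * b          # generator C = AB, so the defining relation is I = ABC
        runs.append((a, b, c))
    return runs

design = half_fraction()
for run in design:
    print(run)
```

Only 4 of the 8 possible level combinations are run; the price is that C is confounded with the AB interaction, which is acceptable under the effect sparsity and hierarchical ordering principles above.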
ARIADNE (Axion Resonant Interaction Detection Experiment): an NMR-Based Axion Search, Snowmass LOI
ARIADNE (Axion Resonant InterAction Detection Experiment): an NMR-based Axion Search, Snowmass LOI. Andrew A. Geraci, Aharon Kapitulnik, William Snow, Joshua C. Long, Chen-Yu Liu, Yannis Semertzidis, Yun Shin, and Asimina Arvanitaki (Dated: August 31, 2020)

The Axion Resonant InterAction Detection Experiment (ARIADNE) is a collaborative effort to search for the QCD axion using techniques based on nuclear magnetic resonance. In the experiment, axions or axion-like particles would mediate short-range spin-dependent interactions between a laser-polarized 3He gas and a rotating (unpolarized) tungsten source mass, acting as a tiny, fictitious magnetic field. The experiment has the potential to probe deep within the theoretically interesting regime for the QCD axion in the mass range of 0.1–10 meV, independently of cosmological assumptions. In this Snowmass LOI, we briefly describe this technique, which covers a wide range of axion masses, as well as discuss future prospects for improvements. Taken together with other existing and planned axion experiments, ARIADNE has the potential to completely explore the allowed parameter space for the QCD axion.

A well-motivated example of a light pseudoscalar boson that can mediate new interactions or a "fifth force" is the axion. The axion is a hypothetical particle that arises as a consequence of the Peccei-Quinn (PQ) mechanism to solve the strong CP problem of quantum chromodynamics (QCD) [1]. Axions have also been well motivated as a dark matter candidate due to their "invisible" nature [2]. Axions can couple to fundamental fermions through a scalar vertex and a pseudoscalar vertex with very weak coupling strength.
Single-Factor Experiments
D.G. Bonett (8/2018)

Module 3: One-Factor Experiments

A between-subjects treatment factor is an independent variable with a ≥ 2 levels in which participants are randomized into the a groups. It is common, but not necessary, to have an equal number of participants in each group. Each group receives one of the a levels of the independent variable, with participants being treated identically in every other respect. The two-group experiment considered previously is a special case of this type of design.

In a one-factor experiment with a levels of the independent variable (also called a completely randomized design), the population parameters are μ1, μ2, ..., μa, where μj (j = 1 to a) is the population mean of the response variable if all members of the study population had received level j of the independent variable. One way to assess the differences among the a population means is to compute confidence intervals for all possible pairs of differences. For example, with a = 3 levels the following pairwise comparisons of population means could be examined:

μ1 – μ2
μ1 – μ3
μ2 – μ3

In a one-factor experiment with a levels there are a(a – 1)/2 pairwise comparisons. Confidence intervals for any of the two-group measures of effect size (e.g., mean difference, standardized mean difference, mean ratio, median difference, median ratio) described in Module 2 can be used to analyze any pair of groups. For any single 100(1 − α)% confidence interval, we can be 100(1 − α)% confident that the confidence interval has captured the population parameter, and if v 100(1 − α)% confidence intervals are computed, we can be at least 100(1 − vα)% confident that all v confidence intervals have captured their population parameters.
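The pairwise-comparison bookkeeping above is easy to sketch. The group labels and means below are invented for illustration; the last step just inverts the 100(1 − vα)% bound from the text to get a per-interval α that keeps the family confidence at 95%.

```python
from itertools import combinations

# Hypothetical sample means for a one-factor experiment with a = 3 groups.
means = {"g1": 10.0, "g2": 12.5, "g3": 9.0}

pairs = list(combinations(means, 2))
a = len(means)
assert len(pairs) == a * (a - 1) // 2   # a(a-1)/2 pairwise comparisons

# Bonferroni logic from the text: v intervals, each at 100(1-alpha)%,
# give simultaneous confidence of at least 100(1 - v*alpha)%.
alpha = 0.05 / len(pairs)               # per-interval alpha for >= 95% family confidence
for g1, g2 in pairs:
    print(f"{g1} - {g2}: difference = {means[g1] - means[g2]:+.2f}, per-interval alpha = {alpha:.4f}")
```

With a = 3 this yields the same three comparisons listed above; each interval would be computed at the 100(1 − 0.05/3)% level.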
Conditional Process Analysis in Two-Instance Repeated-Measures Designs
Conditional Process Analysis in Two-Instance Repeated-Measures Designs. Dissertation presented in partial fulfillment of the requirements for the degree Doctor of Philosophy in the Graduate School of The Ohio State University. By Amanda Kay Montoya, M.S., M.A. Graduate Program in Department of Psychology, The Ohio State University, 2018. Dissertation Committee: Andrew F. Hayes, Advisor; Jolynn Pek; Paul de Boeck. © Copyright by Amanda Kay Montoya, 2018.

Abstract

Conditional process models are commonly used in many areas of psychology research as well as research in other academic fields (e.g., marketing, communication, and education). Conditional process models combine mediation analysis and moderation analysis. Mediation analysis, sometimes called process analysis, investigates if an independent variable influences an outcome variable through a specific intermediary variable, sometimes called a mediator. Moderation analysis investigates if the relationship between two variables depends on another. Conditional process models are very popular because they allow us to better understand how the processes we are interested in might vary depending on characteristics of different individuals, situations, and other moderating variables. Methodological developments in conditional process analysis have primarily focused on the analysis of data collected using between-subjects experimental designs or cross-sectional designs. However, another very common design is the two-instance repeated-measures design. A two-instance repeated-measures design is one where each subject is measured twice; once in each of two instances. In the analysis discussed in this dissertation, the factor that differentiates the two repeated measurements is the independent variable of interest. Research on how to statistically test mediation, moderation, and conditional process models in these designs has been minimal.
The Efficacy of Within-Subject Measurement
Meltzer, Richard J. The Efficacy of Within-Subject Measurement. Ph.D. dissertation, Wayne State University, 1986. University Microfilms International, 300 N. Zeeb Road, Ann Arbor, MI 48106. Copyright 1986 by Richard J. Meltzer. All rights reserved.
M×M Cross-Over Designs
PASS Sample Size Software (NCSS.com), Chapter 568: M×M Cross-Over Designs

Introduction

This module calculates the power for an M×M cross-over design in which each subject receives a sequence of M treatments and is measured at M periods (or time points). The 2×2 cross-over design, the simplest and most common case, is well covered by other PASS procedures. We anticipate that users will use this procedure when the number of treatments is greater than or equal to three.

M×M cross-over designs are often created from Latin squares by letting groups of subjects follow different columns of the square. The subjects are randomly assigned to different treatment sequences. The overall design is arranged so that there is balance, in that each treatment occurs before and after every other treatment an equal number of times. We refer you to any book on cross-over designs or Latin squares for more details.

The power calculations used here ignore the complexity of specifying sequence terms in the analysis. Therefore, the data may be thought of as a one-way, repeated-measures design. Combining subjects with different treatment sequences makes it more reasonable to assume a simple, circular variance-covariance pattern. Hence, usually, you will assume that all correlations are equal.

This procedure computes power for both the univariate (F test and F test with Geisser-Greenhouse correction) and multivariate (Wilks' lambda, Pillai-Bartlett trace, and Hotelling-Lawley trace) approaches.

Impact of Treatment Order, Unequal Variances, and Unequal Correlations

Treatment Order: It is important to understand under what conditions the order of treatment application matters, since the design will usually include subjects with different treatment arrangements, although each subject will eventually receive all treatments.
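The Latin-square construction mentioned above can be sketched as follows. This builds a simple cyclic square, which guarantees only that each treatment appears once in every sequence (row) and once in every period (column); the stronger carryover balance described in the text, where each treatment precedes every other equally often, generally requires a Williams design and is not attempted here.

```python
# A cyclic M x M Latin square: row s is the treatment sequence followed by
# subject group s, so every treatment occurs exactly once in each period.
# Treatments are labeled 0..M-1 for simplicity.

def cyclic_latin_square(m):
    return [[(row + col) % m for col in range(m)] for row in range(m)]

square = cyclic_latin_square(3)
for row in square:
    print(row)
```

Groups of subjects would then be randomized to the rows, as the chapter describes.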
Interaction of Quantitative Variables
Regression Including the Interaction Between Quantitative Variables

The purpose of the study was to examine the inter-relationships among social skills, the complexity of the social situation, and performance in a social situation. Each participant considered their most recent interaction in a group of 10 or larger that included at least 50% strangers, and rated their social performance (perf) and the complexity of the situation (sitcom). Then, each participant completed a social skills inventory that provided a single index of this construct (soskil). The researcher wanted to determine the contribution of the "person" and "situation" variables to social performance, as well as to consider their interaction.

Descriptive Statistics
                     N    Minimum   Maximum   Mean      Std. Deviation
  SITCOM             60   .00       37.00     19.7000   8.5300
  SOSKIL             60   27.00     65.00     49.4700   8.2600
  Valid N (listwise) 60

As before, we will want to center our quantitative variables by subtracting the mean from each person's scores. We also need to compute an interaction term as the product of the two centered variables. Some prefer using a "full model" approach, others a "hierarchical model" approach; remember, they produce the same results. Using the hierarchical approach, the centered quantitative variables (main effects) are entered on the first step and the interaction is added on the second step. Be sure to check "R-squared change" on the Statistics window.

SPSS output: Model 1 is the main-effects model and Model 2 is the full model. The Model Summary table reports, for each model: R, R Square, Adjusted R Square, Std. Error of the Estimate, and the change statistics (R Square Change, F Change, df1, df2, Sig.).
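The centering-and-product steps above can be sketched numerically. The data here are simulated with made-up coefficients (the study's actual data are not reproduced), and the fit uses ordinary least squares directly rather than SPSS's two-step hierarchical entry; the coefficients correspond to what SPSS would report for the full model (Model 2).

```python
import numpy as np

# Simulated stand-in data: means and SDs echo the descriptive table above,
# and the 0.02 interaction coefficient is an arbitrary illustration value.
rng = np.random.default_rng(0)
n = 60
sitcom = rng.normal(19.7, 8.53, n)      # situation complexity (simulated)
soskil = rng.normal(49.47, 8.26, n)     # social skills (simulated)
perf = 2.0 + 0.3 * sitcom + 0.5 * soskil + 0.02 * sitcom * soskil + rng.normal(0, 1, n)

# Center each quantitative predictor, then form the interaction term as
# the product of the two centered variables.
sitcom_c = sitcom - sitcom.mean()
soskil_c = soskil - soskil.mean()
inter = sitcom_c * soskil_c

# Full model: intercept + centered main effects + interaction.
X = np.column_stack([np.ones(n), sitcom_c, soskil_c, inter])
coef, *_ = np.linalg.lstsq(X, perf, rcond=None)
print(dict(zip(["intercept", "sitcom_c", "soskil_c", "interaction"], coef.round(3))))
```

With centered predictors, the recovered interaction coefficient sits near the simulated 0.02, and the main-effect coefficients are interpretable as effects at the mean of the other variable.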
Interaction Graphs for a Two-Level Combined Array Experiment Design by Dr
Journal of Industrial Technology • Volume 18, Number 4 • August 2002 to October 2002 • www.nait.org

Interaction Graphs for a Two-Level Combined Array Experiment Design. By Dr. M.L. Aggarwal, Dr. B.C. Gupta, Dr. S. Roy Chaudhury and Dr. H.F. Walker. Keyword search: Research, Statistical Methods. Reviewed Article. The Official Electronic Publication of the National Association of Industrial Technology. © 2002.

Dr. Bisham Gupta is a professor of Statistics in the Department of Mathematics and Statistics at the University of Southern Maine. Bisham developed the undergraduate and graduate programs in Statistics and has taught a variety of courses in statistics to Industrial Technologists, Engineers, Science, and Business majors. Specific courses include Design of Experiments (DOE), Quality Control, Regression Analysis, and Biostatistics. Dr. Gupta's research interests are in DOE and sampling.

Abstract

In planning a 2^(k−p) fractional factorial experiment, prior knowledge may enable an experimenter to pinpoint interactions which should be estimated free of the main effects and any other desired interactions. Taguchi (1987) gave a graph-aided method known as ...

... interactions among those variables. To pinpoint these most significant variables and their interactions, the ITs, engineers, and management team members who serve in the role of experimenters rely on the Design of Experiments (DOE) as the primary tool of their trade. Within the branch of DOE known ...
Theory and Experiment in the Analysis of Strategic Interaction
CHAPTER 7. Theory and experiment in the analysis of strategic interaction. Vincent P. Crawford

"One cannot, without empirical evidence, deduce what understandings can be perceived in a nonzero-sum game of maneuver any more than one can prove, by purely formal deduction, that a particular joke is bound to be funny." - Thomas Schelling, The Strategy of Conflict

1 INTRODUCTION

Much of economics has to do with the coordination of independent decisions, and such questions - with some well-known exceptions - are inherently game-theoretic. Yet when the Econometric Society held its First World Congress in 1965, economic theory was still almost entirely non-strategic and game theory remained largely a branch of mathematics, whose applications in economics were the work of a few pioneers. As recently as the early 1970s, the profession's view of game-theoretic modeling was typified by Paul Samuelson's customarily vivid phrase, "the swamp of n-person game theory"; and even students to whom the swamp seemed a fascinating place thought carefully before descending from the high ground of perfect competition and monopoly. The game-theoretic revolution that ensued altered the landscape in ways that would have been difficult to imagine in 1965, adding so much to our understanding that many questions whose strategic aspects once made them seem intractable are now considered fit for textbook treatment. This process was driven by a fruitful dialogue between game theory and economics, in which game theory supplied a rich language for describing strategic interactions and a set of tools for predicting their outcomes, and economics contributed questions and intuitions about strategic behavior against which game theory's methods could be tested and honed.

Cambridge Collections Online © Cambridge University Press, 2006
Repeated Measures ANOVA
Repeated Measures ANOVA. Rick Balkin, Ph.D., LPC-S, NCC. Department of Counseling, Texas A&M University-Commerce. [email protected]

What is a repeated measures ANOVA?

As with any ANOVA, repeated measures ANOVA tests the equality of means. However, repeated measures ANOVA is used when all members of a random sample are measured under a number of different conditions. As the sample is exposed to each condition in turn, the measurement of the dependent variable is repeated. Using a standard ANOVA in this case is not appropriate because it fails to model the correlation between the repeated measures: the data violate the ANOVA assumption of independence. Keep in mind that some ANOVA designs combine repeated-measures factors and nonrepeated factors. If any repeated factor is present, then repeated measures ANOVA should be used.

This approach is used for several reasons. First, some research hypotheses require repeated measures. Longitudinal research, for example, measures each sample member at each of several ages; in this case, age would be a repeated factor. Second, in cases where there is a great deal of variation between sample members, error-variance estimates from standard ANOVAs are large. Repeated measures of each sample member provide a way of accounting for this variance, thus reducing error variance. Third, when sample members are difficult to recruit, repeated measures designs are economical because each member is measured under all conditions. Repeated measures ANOVA can also be used when sample members have been matched according to some important characteristic.
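The variance accounting described above can be computed by hand for a small example. The scores below are invented for illustration; the key step is that between-subject variation is partitioned out of the total sum of squares before the error term is formed, which is exactly how repeated measures ANOVA "accounts for" subject variability.

```python
# Hand computation of a one-way repeated-measures ANOVA F statistic.
# Made-up scores: 4 subjects, each measured under 3 conditions.
data = [
    [8.0, 7.0, 6.0],   # subject 1 across the three conditions
    [9.0, 8.0, 6.0],
    [6.0, 5.0, 3.0],
    [7.0, 7.0, 5.0],
]
n = len(data)            # subjects
k = len(data[0])         # conditions (repeated factor levels)

grand = sum(sum(row) for row in data) / (n * k)
cond_means = [sum(row[j] for row in data) / n for j in range(k)]
subj_means = [sum(row) / k for row in data]

ss_cond = n * sum((m - grand) ** 2 for m in cond_means)    # treatment SS
ss_subj = k * sum((m - grand) ** 2 for m in subj_means)    # between-subject SS, removed from error
ss_total = sum((x - grand) ** 2 for row in data for x in row)
ss_error = ss_total - ss_cond - ss_subj                    # residual (condition x subject) SS

df_cond = k - 1
df_error = (n - 1) * (k - 1)
F = (ss_cond / df_cond) / (ss_error / df_error)
print(round(F, 3))
```

A standard one-way ANOVA on the same scores would fold the large subject-to-subject differences into the error term and give a much smaller F.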