A Review of Robust Optimal Design and Its Application in Dynamics

Total Pages: 16
File Type: PDF, Size: 1020 KB

A review of robust optimal design and its application in dynamics

Computers and Structures 83 (2005) 315–326
www.elsevier.com/locate/compstruc

C. Zang (a), M.I. Friswell (a,*), J.E. Mottershead (b)

(a) Department of Aerospace Engineering, University of Bristol, Queen's Building, Bristol BS8 1TR, United Kingdom
(b) Department of Engineering, University of Liverpool, Brownlow Hill, Liverpool L69 3GH, United Kingdom

Received 13 November 2003; accepted 9 October 2004

(*) Corresponding author. Fax: +44 117 927 2771. E-mail address: [email protected] (M.I. Friswell).

doi:10.1016/j.compstruc.2004.10.007

Abstract

The objective of robust design is to optimise the mean and minimize the variability that results from uncertainty represented by noise factors. The various objective functions and analysis techniques used for the Taguchi-based approaches and optimisation methods are reviewed. Most applications of robust design have been concerned with static performance in mechanical engineering and process systems, and applications in structural dynamics are rare. The robust design of a vibration absorber with mass and stiffness uncertainty in the main system is used to demonstrate the robust design approach in dynamics. The results show a significant improvement in performance compared with the conventional solution. © 2004 Elsevier Ltd. All rights reserved.

Keywords: Robust design; Stochastic optimization; Taguchi; Vibration absorber

1. Introduction

Successful manufactured products rely on the best possible design and performance. They are usually produced using the tools of engineering design optimisation in order to meet design targets. However, conventional design optimisation may not always satisfy the desired targets, due to the significant uncertainty that exists in material and geometrical parameters such as modulus, thickness, density and residual strain, as well as in joints and component assembly. Crucial problems in noise and vibration, caused by the variance in the structural dynamics, are rarely considered in design optimisation. Therefore, ways to minimize the effect of uncertainty on product design and performance are of paramount concern to researchers and practitioners.

The earliest approach to reducing the output variation was to use the Six Sigma Quality strategy [1,2], so that ±6 standard deviations lie between the mean and the nearest specification limit. Six Sigma as a measurement standard in product variation can be traced back to the 1920s, when Walter Shewhart showed that three sigma from the mean is the point where a process requires correction. In the early and mid-1980s, Motorola developed this new standard and documented more than $16 billion in savings as a result of its Six Sigma efforts.

In the last twenty years, various non-deterministic methods have been developed to deal with design uncertainties. These methods can be classified into two approaches, namely reliability-based methods and robust design based methods. The reliability methods estimate the probability distribution of the system's response based on the known probability distributions of the random parameters, and are predominantly used for risk analysis by computing the probability of failure of a system. However, the variation is not minimized in the reliability approaches [3], which concentrate on the rare events at the tails of the probability distribution [4]. Robust design improves the quality of a product by minimizing the effect of the causes of variation without eliminating these causes. The objective is different from that of the reliability approach: it is to optimise the mean performance and minimise its variation, while maintaining feasibility with probabilistic constraints. This is achieved by optimising the product and process design to make the performance minimally sensitive to the various causes of variation. Hence robust design concentrates on the probability distribution near to the mean values.

In this paper, robust optimal design methods are reviewed extensively and an example of the robust design of a passive vibration absorber is used to demonstrate the application of these techniques to vibration analysis.

2. The concept of robust design

The robust design method is essential to improving engineering productivity, and early work can be traced back to the early 1920s, when Fisher and Yates [5] developed the statistical design of experiments (DOE) approach to improve the yield of agricultural crops in England. In the 1950s and early 1960s, Taguchi developed the foundations of robust design to meet the challenge of producing high-quality products. In 1980, he applied his methods in the American telecommunications industry, and since then the Taguchi robust design method has been successfully applied to various industrial fields such as electronics, automotive products, photography, and telecommunications [6–8].

The fundamental definition of robust design is described as: "A product or process is said to be robust when it is insensitive to the effects of sources of variability, even though the sources themselves have not been eliminated" [9]. In the design process, a number of parameters can affect the quality characteristic or performance of the product. Parameters within the system may be classified as signal factors, noise factors and control factors. Signal factors are parameters that determine the range of configurations to be considered by the robust design. Noise factors are parameters that cannot be controlled by the designer, or are difficult and expensive to control, and constitute the source of variability in the system. Control factors are the specified parameters that the designer has to optimise to give the least sensitivity of the response to the effect of the noise factors. For example, in an automotive body in white, uncertainty may arise in the panel thicknesses, given as the noise factors. The geometry is then optimised through control factors describing the panel shape, for the different configurations to be analysed, determined by the signal factors, such as different applied loads or the response at different frequencies.

A P-diagram [6] may be used to represent the different types of parameters and their relationships. Fig. 1 shows the different types of performance variations, where the large circles denote the target and the response distribution is indicated by the dots and the associated probability density function. The aim of robust design is to make the system response close to the target with low variation, without eliminating the noise factors in the system, as illustrated in Fig. 1(d).

[Fig. 1. Different types of performance variations.]

Suppose that y = f(s, z, x) denotes the vector of responses for a particular set of factors, where s, z and x are vectors of the signal, noise and control factors. The noise factors are uncertain and generally specified probabilistically; thus z, and hence f, are random variables. The range of configurations is specified by a set of signal factors, V, and thus s ∈ V. One possible mathematical description of robust design is then

    \hat{x} = \arg\min_{x} \max_{s \in V} E_z \left[ \| f(s, z, x) - t \|^2 \right]    (1)

subject to the constraints

    g_j(s, z, x) \le 0 \quad \text{for } j = 1, \ldots, m,    (2)

where E_z denotes the expected value (over the uncertainty due to z) and t is the vector of target responses, which may depend on the signal factors, s. Note that x is the parameter vector that is adjusted to obtain the optimal solution, given by Eq. (1) as the solution with the smallest worst-case response over the range of configurations, or signal factors, in the set V. The key step in the robust design problem is the specification of the objective function; once this has been done, the tools of statistics (such as the analysis of variance) and the design of experiments (such as orthogonal arrays) may be used to calculate the solution.

For convenience, the robust design approaches will be classified into three categories, namely Taguchi, optimisation and stochastic optimisation methods. Further details are given in the following sections. The stochastic optimisation methods include those that propagate the uncertainty through the model and hence use accurate statistics in the optimisation.

In Taguchi's methodology the design process has three stages: system design, in which the basic product configuration is developed; parameter design, sometimes called robust design, which identifies factors that reduce the system sensitivity to noise, thereby enhancing the system's robustness; and tolerance design, which specifies the allowable deviations in the parameter values, loosening tolerances if possible and tightening tolerances if necessary [9].

Taguchi's objective functions for robust design arise from quality measures using quadratic loss functions. In the extension of this definition to design optimisation, Taguchi suggested the signal-to-noise ratio (SNR), -10 \log_{10}(\mathrm{MSD}), as a measure of the mean squared deviation (MSD) in the performance. The use of the SNR in system analysis provides a quantitative value for comparing response variation: maximizing the SNR minimizes the response variation, and more robust system performance is obtained. Suppose we have only one response variable, y, and only one configuration of the system (so the signal factor may be neglected). Then for any set of control factors, x, the noise factors are represented by n sets of parameters, leading to the n responses, y_i. Although there are many possible SNRs, only two will be considered here.

The first is the "target is best" SNR. This SNR quantifies the deviation of the response from the target, t, and is

    \mathrm{SNR} = -10 \log_{10}(\mathrm{MSD}) = -10 \log_{10}\left( \frac{1}{n} \sum_i (y_i - t)^2 \right) = -10 \log_{10}\left( S^2 + (\bar{y} - t)^2 \right)    (3)

where S is the population standard deviation and \bar{y} the mean response. Eq. (3) is essentially a sampled version of the general optimisation criterion given in Eq. (1). Note that the second form indicates that the MSD is the summation of the population variance and the squared deviation of the mean response from the target.
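The "target is best" SNR of Eq. (3) is straightforward to check numerically. The sketch below (not from the paper; the response values and target are purely illustrative) computes the SNR from n responses observed under different noise-factor settings, and verifies that the MSD equals the population variance plus the squared offset of the mean from the target:

```python
import numpy as np

def snr_target_is_best(y, t):
    """Taguchi 'target is best' signal-to-noise ratio, Eq. (3).

    y : the n responses y_i observed under n noise-factor settings
    t : the target value
    Returns the SNR in dB; a larger SNR means less deviation from target.
    """
    y = np.asarray(y, dtype=float)
    msd = np.mean((y - t) ** 2)        # mean squared deviation
    return -10.0 * np.log10(msd)

# Illustrative responses for one setting of the control factors.
y = [9.8, 10.4, 10.1, 9.6, 10.3]
t = 10.0

# Second form of Eq. (3): MSD = S^2 + (ybar - t)^2, where S is the
# *population* standard deviation (np.var divides by n by default).
msd = np.mean((np.asarray(y) - t) ** 2)
assert np.isclose(msd, np.var(y) + (np.mean(y) - t) ** 2)

print(f"SNR = {snr_target_is_best(y, t):.2f} dB")  # larger is more robust
```

In a Taguchi study this SNR would be evaluated once per row of the control-factor orthogonal array, and the control-factor levels that maximize it would be selected, since maximizing the SNR minimizes both the variance and the mean offset simultaneously.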