Robust Design


16.888 – Multidisciplinary System Design Optimization
Prof. Dan Frey, Mechanical Engineering and Engineering Systems

Plan for the Session
• Basic concepts in probability and statistics
• Review design of experiments
• Basics of Robust Design
• Research topics
  – Model-based assessment of RD methods
  – Faster computer-based robust design
  – Robust invention

Ball and Ramp
[Figure: a ball is released down a ramp and swirls inside a funnel]
• Response = the time the ball remains in the funnel
• Causes of experimental error = ?

Probability Measure
• Axioms
  – For any event A, $P(A) \ge 0$
  – $P(U) = 1$, where U is the universal event
  – If $A \cap B = \emptyset$, then $P(A \cup B) = P(A) + P(B)$

Continuous Random Variables
• Can take values anywhere within continuous ranges
• Probability density function
  – $P(a \le x \le b) = \int_a^b f_x(x)\,dx$
  – $f_x(x) \ge 0$ for all $x$
  – $\int_{-\infty}^{\infty} f_x(x)\,dx = 1$

Histograms
• A graph of continuous data
• Approximates a pdf in the limit of large n
[Figure: histogram of crankpin diameters, frequency vs. diameter, pin #1]

Measures of Central Tendency
• Expected value $E(g(x)) = \int_S g(x)\,f_x(x)\,dx$
• Mean $\mu = E(x)$
• Arithmetic average $\frac{1}{n}\sum_{i=1}^{n} x_i$

Measures of Dispersion
• Variance $\mathrm{VAR}(x) = \sigma^2 = E\big((x - E(x))^2\big)$
• Standard deviation $\sigma = \sqrt{E\big((x - E(x))^2\big)}$
• Sample variance $S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2$
• nth central moment $E\big((x - E(x))^n\big)$
• nth moment about m: $E\big((x - m)^n\big)$

Sums of Random Variables
• Average of the sum is the sum of the averages (regardless of distribution and independence): $E(x + y) = E(x) + E(y)$
• Variance also sums iff independent: $\sigma^2(x + y) = \sigma(x)^2 + \sigma(y)^2$
• This is the origin of the RSS rule
  – Beware of the independence restriction!

Concept Test
• A bracket holds a component as shown. The dimensions are independent random variables with standard deviations as noted ($\sigma$ = 0.01" and $\sigma$ = 0.001"). Approximately what is the standard deviation of the gap?
  A) 0.011"
  B) 0.01"
  C) 0.001"

Expectation Shift
• Under utility theory, $S = E(y(x)) - y(E(x))$
• S is the only difference between probabilistic and deterministic design
[Figure: a nonlinear response y(x) maps the input density $f_x(x)$ into an output density $f_y(y(x))$, so $E(y(x))$ differs from $y(E(x))$]

Probability Distribution of Sums
• If z is the sum of two random variables x and y, $z = x + y$
• Then the probability density function of z can be computed by convolution:
  $p_z(z) = \int_{-\infty}^{\infty} f_x(z - \zeta)\, f_y(\zeta)\, d\zeta$

Central Limit Theorem
The mean of a sequence of n iid random variables with
  – finite $\mu$
  – $E\big(|x_i - E(x_i)|^{2+\delta}\big) < \infty$ for some $\delta > 0$
approximates a normal distribution in the limit of large n.

Normal Distribution
$f_x(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$
[Figure: normal pdf with $\pm 1\sigma$, $\pm 3\sigma$, $\pm 6\sigma$ marked; 68.3% falls within $\pm 1\sigma$, 99.7% within $\pm 3\sigma$, and only about 1–2 ppb beyond $\pm 6\sigma$]

Engineering Tolerances
• Tolerance – the total amount by which a specified dimension is permitted to vary (ANSI Y14.5M)
• Every component within spec adds to the yield (Y)
[Figure: probability density p(q) of a quality characteristic q with lower and upper specification limits L and U]

Process Capability Indices
• Process capability index $C_p \equiv \dfrac{(U-L)/2}{3\sigma}$
• Bias factor $k \equiv \dfrac{|\mu - (U+L)/2|}{(U-L)/2}$
• Performance index $C_{pk} \equiv C_p(1-k)$
[Figure: density p(q) with limits L and U and midpoint (U+L)/2 marked]

Concept Test
• Motorola's "6 sigma" programs suggest that we should strive for a $C_p$ of 2.0. If this is achieved but the mean is off target so that $k = 0.5$, estimate the process yield.
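A quick way to check a number like this is to convert $C_p$ and $k$ into distances (in units of $\sigma$) from the mean to each specification limit and evaluate the normal CDF. The short sketch below is not from the slides: it assumes a normally distributed quality characteristic, and the function names are illustrative.

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def yield_fraction(cp, k):
    """Fraction of output inside [L, U] for a normally distributed characteristic.

    Cp = (U - L) / (6 sigma), so each limit is 3*Cp sigma from the target
    (U + L)/2; a bias factor k shifts the mean by k * (U - L)/2 toward one limit.
    """
    z_near = 3.0 * cp * (1.0 - k)  # sigmas from the mean to the nearer limit
    z_far = 3.0 * cp * (1.0 + k)   # sigmas from the mean to the farther limit
    return normal_cdf(z_near) + normal_cdf(z_far) - 1.0

# Concept test: Cp = 2.0, but the mean is off target so that k = 0.5
y = yield_fraction(2.0, 0.5)
print(f"yield = {y:.6f}  ({(1.0 - y) * 1e6:.0f} defects per million)")
```

With $C_p = 2.0$ and $k = 0.5$ the nearer limit sits only $3\sigma$ from the mean ($C_{pk} = 1.0$), so the yield is roughly 99.87%, about 1350 defects per million.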
Plan for the Session
• Basic concepts in probability and statistics
• Review design of experiments
• Basics of Robust Design
• Research topics
  – Model-based assessment of RD methods
  – Faster computer-based robust design
  – Robust invention

Pop Quiz
• Assume we wish to estimate the effect of ball position on the ramp on swirl time. The experimental error causes $\sigma$ = 1 sec in the response. We run the experiment 4 times. What is the error in our estimate of swirl time?
  A) $\sigma$ = 1 sec
  B) $\sigma$ = 1/2 sec
  C) $\sigma$ = 1/4 sec

History of DoE
• 1926 – R. A. Fisher introduced the idea of factorial design
• 1950–70 – Response surface methods
• 1987 – G. Taguchi, System of Experimental Design

Full Factorial Design
• This is the $2^4$ design
• All main effects and interactions can be resolved
• Scales very poorly with number of factors

   A   B   C   D   Response
  +1  +1  +1  +1
  +1  +1  +1  -1
  +1  +1  -1  +1
  +1  +1  -1  -1
  +1  -1  +1  +1
  +1  -1  +1  -1
  +1  -1  -1  +1
  +1  -1  -1  -1
  -1  +1  +1  +1
  -1  +1  +1  -1
  -1  +1  -1  +1
  -1  +1  -1  -1
  -1  -1  +1  +1
  -1  -1  +1  -1
  -1  -1  -1  +1
  -1  -1  -1  -1

Replication and Precision
• In the $2^4$ design above, the average of trials 1 through 8 has a $\sigma$ of $1/\sqrt{8}$ that of each individual trial
• "the same precision as if the whole … had been devoted to one single component" – Fisher

Resolution and Aliasing
$2^{7-4}$ design (aka "orthogonal array L8"), Resolution III

  Trial   A   B   C   D   E   F   G     FG = -A
    1    -1  -1  -1  -1  -1  -1  -1       +1
    2    -1  -1  -1  +1  +1  +1  +1       +1
    3    -1  +1  +1  -1  -1  +1  +1       +1
    4    -1  +1  +1  +1  +1  -1  -1       +1
    5    +1  -1  +1  -1  +1  -1  +1       -1
    6    +1  -1  +1  +1  -1  +1  -1       -1
    7    +1  +1  -1  -1  +1  +1  -1       -1
    8    +1  +1  -1  +1  -1  -1  +1       -1

Projective Property
[Figure: runs of the resolution III design projected onto the A–B–C cube]
• Considered important for exploiting sparsity of effects.

DOE – Key Assumptions
• Pure experimental error – error in observations is random & independent
• Hierarchy – lower order effects are more likely to be significant than higher order effects
• Sparsity of effects – there are few important effects
• Effect heredity – for an interaction to be significant, at least one parent should be significant

Sparsity of Effects
• An experimenter may list several factors
• They usually affect the response to greatly varying degrees
• The drop-off is surprisingly steep (~$1/n^2$)
• Not sparse if prior knowledge is used or if factors are screened
[Figure: factor effects vs. Pareto-ordered factors 1–7]

Hierarchy
• Main effects are usually more important than two-factor interactions
• Two-factor interactions are usually more important than three-factor interactions
• And so on
• Taylor's series seems to support the idea: $f(x) = \sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(x-a)^n$
[Figure: effect hierarchy for four factors – A, B, C, D; AB, AC, AD, BC, BD, CD; ABC, ABD, ACD, BCD; ABCD]

Inheritance
• Two-factor interactions are most likely when both participating factors (parents?) are strong
• Two-way interactions are least likely when neither parent is strong
• And so on

Resolution
• II – Main effects are aliased with main effects
• III – Main effects are clear of other main effects but aliased with two-factor interactions
• IV – Main effects are clear of other main effects and clear of two-factor interactions, but main effects are aliased with three-factor interactions and two-factor interactions are aliased with other two-factor interactions
• V – Two-factor interactions are clear of other two-factor interactions but are aliased with three-factor interactions…

Discussion Point
• What are the four most important factors affecting swirl time?
• If you want to have sparsity of effects and hierarchy, how would you formulate the variables?
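To make the aliasing in the $2^{7-4}$ table concrete, here is a short sketch that is not part of the original slides: it generates the design from base columns A, B, D with generators chosen to reproduce the table above, then confirms that the FG interaction column is identically $-A$ and that every pair of columns is orthogonal.

```python
from itertools import product

# 2^(7-4) resolution III design (orthogonal array L8) built from base columns
# A, B, D; the generators below are chosen to reproduce the table above:
#   C = -A*B,  E = -A*D,  F = -B*D,  G = A*B*D
design = []
for a, b, d in product((-1, +1), repeat=3):
    row = (a, b, -a * b, d, -a * d, -b * d, a * b * d)  # (A, B, C, D, E, F, G)
    design.append(row)

# The FG interaction column is identically -A: FG is aliased with -A.
fg_column = [row[5] * row[6] for row in design]
print("FG == -A:", fg_column == [-row[0] for row in design])

# Orthogonality: every pair of columns is balanced (inner product of zero).
orthogonal = all(
    sum(row[i] * row[j] for row in design) == 0
    for i in range(7) for j in range(i + 1, 7)
)
print("all column pairs orthogonal:", orthogonal)
```

Because FG and A are the same column up to sign, a resolution III analysis cannot distinguish the FG interaction from the main effect of A, which is exactly the aliasing the table illustrates.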
Important Concepts in DOE
• Resolution – the ability of an experiment to provide estimates of effects that are clear of other effects
• Sparsity of Effects – factor effects are few
• Hierarchy – interactions are generally less significant than main effects
• Inheritance – if an interaction is significant, at least one of its "parents" is usually significant
• Efficiency – ability of an experiment to estimate effects with small error variance

Plan for the Session
• Basic concepts in probability and statistics
• Review design of experiments
• Basics of Robust Design
• Research topics
  – Model-based assessment of RD methods
  – Faster computer-based robust design
  – Robust invention

Major Concepts of Taguchi Method
• Variation causes quality loss
• Two-step optimization
• Parameter design via orthogonal arrays
• Inducing noise (outer arrays)
• Interactions and confirmation

Loss Function Concept
• Quantify the economic consequences of performance degradation due to variation
• What should the function $L(y)$ be?

Fraction Defective Fallacy
• ANSI seems to imply a "goalpost" mentality
• But, what is the difference between
  – 1 and 2?
  – 2 and 3?
• Isn't a continuous function more appropriate?
[Figure: "goalpost" loss function – zero between $m - \Delta$ and $m + \Delta$, cost $A_o$ outside – with three values of y labeled 1, 2, 3 near a specification limit]

A Generic Loss Function
• Desired properties
  – Zero at nominal value
  – Equal to cost $A_o$ at the specification limit
  – C1 continuous
• Taylor series: $f(x) \approx \sum_{n=0}^{\infty}\frac{1}{n!}(x-a)^n f^{(n)}(a)$

Nominal-the-best
• Defined as $L(y) = \frac{A_o}{\Delta_o^2}(y-m)^2$
• Average loss is proportional to the 2nd moment about m
[Figure: quadratic quality loss function compared with the "goal post" loss function]

Average Quality Loss
$E[L(y)] = \frac{A_o}{\Delta_o^2}\big[\sigma^2 + (\mu - m)^2\big]$
[Figure: quadratic quality loss function overlaid with the probability density function of y]

Other Loss Functions
• Smaller-the-better: $L(y) = \frac{A_o}{\Delta_o^2}\,y^2$
• Larger-the-better: $L(y) = A_o \Delta_o^2\,\frac{1}{y^2}$
• Asymmetric: $L(y) = \frac{A_o}{\Delta_{Upper}^2}(y-m)^2$ if $y > m$; $L(y) = \frac{A_o}{\Delta_{Lower}^2}(y-m)^2$ if $y \le m$

Who is the better target shooter?
[Figure: Sam's shots are tightly clustered but off target; John's shots are centered but widely scattered]
• Sam can just adjust his sights
• John requires lengthy training

The "P" Diagram
[Diagram: Control Factors and Noise Factors feed into the Product / Process, which produces the Response]
• There are usually more control factors than responses

Exploiting Non-linearity
[Figure: nonlinear relationship between a control factor and the response]
• Use your extra "degrees of freedom" and search for robust set points.
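As a numerical check on the average quality loss result $E[L(y)] = \frac{A_o}{\Delta_o^2}\big[\sigma^2 + (\mu - m)^2\big]$, the sketch below (mine, with made-up numbers, not from the course) compares the sample-average nominal-the-best loss of an on-target and an off-target process against the closed form built from the sample mean and variance.

```python
import random
import statistics

def average_loss(y_values, m, a_o, delta_o):
    """Sample average of the nominal-the-best loss L(y) = A_o/Delta_o^2 * (y - m)^2."""
    return sum(a_o / delta_o**2 * (y - m) ** 2 for y in y_values) / len(y_values)

# Hypothetical numbers: target m = 10.0, loss of $5 at the limits m +/- 0.5
m, a_o, delta_o = 10.0, 5.0, 0.5

random.seed(1)
processes = {
    "on target":  [random.gauss(10.0, 0.1) for _ in range(10_000)],
    "off target": [random.gauss(10.2, 0.1) for _ in range(10_000)],
}

for name, ys in processes.items():
    mu, var = statistics.fmean(ys), statistics.pvariance(ys)
    closed_form = a_o / delta_o**2 * (var + (mu - m) ** 2)
    print(f"{name:>10}: sample-average loss = {average_loss(ys, m, a_o, delta_o):.3f}, "
          f"closed form = {closed_form:.3f}")
```

With the same spread, shifting the mean 0.2 units off target raises the expected loss roughly fivefold, which is why the two-step optimization first reduces variance and then moves the mean back on target with an adjustment factor.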