ESD.77 – Multidisciplinary System Design Optimization: Robust Design

Dan Frey
Associate Professor of Mechanical Engineering and Engineering Systems

Research Overview

[Research overview diagram: concept design, methodology validation, complex systems, outreach to K-12]

Adaptive Experimentation and Robust Design

main effects: A B C D

two-factor interactions AB AC AD BC BD CD

three-factor interactions ABC ABD ACD BCD

four-factor interaction: ABCD

Outline
• Introduction
  – History
  – Motivation
• Recent research
  – Adaptive experimentation
  – Robust design

“An experiment is simply a question put to nature … The chief requirement is simplicity: only one question should be asked at a time.”

Russell, E. J., 1926, “Field Experiments: How They Are Made and What They Are,” Journal of the Ministry of Agriculture 32: 989-1001.

“To call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of.”

- Fisher, R. A., Indian Statistical Congress, Sankhya, 1938. Estimation of Factor Effects

Say the standard deviation of independent experimental error of the observations (1), (a), (ab), et cetera is σ_ε.

[Cube plot of the 2³ full factorial: corners (1), (a), (b), (ab), (c), (ac), (bc), (abc) on axes A, B, C]

We define the main effect estimate A to be

A ≡ (1/4)[ (a) + (ab) + (ac) + (abc) − (1) − (b) − (c) − (bc) ]

The standard deviation of the estimate is

σ_A = (1/4)·√8·σ_ε = (1/√2)·σ_ε

— a factor of two improvement in efficiency as compared to “single question methods”.

Fractional Factorial Experiments

“It will sometimes be advantageous deliberately to sacrifice all possibility of obtaining information on some points, these being confidently believed to be unimportant … These comparisons to be sacrificed will be deliberately confounded with certain elements of the soil heterogeneity… Some additional care should, however, be taken…”

Fisher, R. A., 1926, “The Arrangement of Field Experiments,” Journal of the Ministry of Agriculture of Great Britain, 33: 503-513. Fractional Factorial Experiments

[Cube plot: the four runs of the 2^(3−1)_III fractional factorial at alternating corners of the A-B-C cube]

Fractional Factorial Experiments

Trial   A    B    C    D    E    F    G    FG=−A
1      −1   −1   −1   −1   −1   −1   −1    +1
2      −1   −1   −1   +1   +1   +1   +1    +1
3      −1   +1   +1   −1   −1   +1   +1    +1
4      −1   +1   +1   +1   +1   −1   −1    +1
5      +1   −1   +1   −1   +1   −1   +1    −1
6      +1   −1   +1   +1   −1   +1   −1    −1
7      +1   +1   −1   −1   +1   +1   −1    −1
8      +1   +1   −1   +1   −1   −1   +1    −1
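A design like the table above can be generated from three basic columns. A minimal sketch, assuming the conventional generators D = AB, E = AC, F = BC, G = ABC (this gives an equivalent 2^(7−4)_III design; the slide’s table uses a different but equivalent column assignment, e.g. its extra column satisfies FG = −A):

```python
from itertools import product

# Basic factors A, B, C span a full 2^3 factorial; the other four columns
# are formed from interaction columns (assumed generators).
rows = []
for a, b, c in product([-1, 1], repeat=3):
    d, e, f, g = a * b, a * c, b * c, a * b * c   # assumed generators
    rows.append((a, b, c, d, e, f, g))

# Balance: each of the 7 factors is at each level in exactly 4 of the 8 runs
assert all(sum(r[col] for r in rows) == 0 for col in range(7))

# Resolution III: main effects are aliased with two-factor interactions,
# e.g. column F is identical to the product of columns D and E
assert all(r[5] == r[3] * r[4] for r in rows)
print(len(rows), "runs")
```

The aliasing assertion makes the “sacrifice” in Fisher’s sense concrete: the main effect of F cannot be distinguished from the D×E interaction in this design.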

2^(7−4) design (aka the “L8”). Every factor is at each level an equal number of times (balance). Having each level replicated in many runs provides precision in effect estimation. Resolution III.

Robust Parameter Design

Robust Parameter Design … is a statistical / engineering methodology that aims at reducing the performance variation of a system (i.e. a product or process) by choosing the setting of its control factors to make it less sensitive to noise variation.

Wu, C. F. J., and M. Hamada, 2000, Experiments: Planning, Analysis, and Parameter Design Optimization, John Wiley & Sons, NY.

Cross (or Product) Arrays

Outer array — noise factors, 2^(3−1)_III:

a:  −1  −1  +1  +1
b:  −1  +1  −1  +1
c:  −1  +1  +1  −1

Inner array — control factors, 2^(7−4)_III:

Trial   A    B    C    D    E    F    G
1      −1   −1   −1   −1   −1   −1   −1
2      −1   −1   −1   +1   +1   +1   +1
3      −1   +1   +1   −1   −1   +1   +1
4      −1   +1   +1   +1   +1   −1   −1
5      +1   −1   +1   −1   +1   −1   +1
6      +1   −1   +1   +1   −1   +1   −1
7      +1   +1   −1   −1   +1   +1   −1
8      +1   +1   −1   +1   −1   −1   +1

Each of the 8 control-factor settings is run at each of the 4 noise-factor settings.

Taguchi, G., 1976, System of Experimental Design.

Step 1: Identify Project and Team
• Form a cross functional team of experts.
• Clearly define the project objective.
• Define roles and responsibilities of team members.
• Translate customer intent from non-technical terms into technical terms.
• Identify product quality issues.
• Isolate the boundary conditions and describe the system in terms of its inputs and outputs.

Step 5: Assign Noise Factors to Outer Array
• Determine noise factors and levels.
• Determine the noise strategy:
  – Surrogate noise
  – Compound noise
  – Treat noise individually
• Establish the outer noise array matrix.

Step 2: Formulate Engineered System: Ideal Function / Quality Characteristic(s)
• Select response function(s).
• Select signal parameter(s).
• Determine if the problem is static or dynamic. Note: a static problem has a single input and one or more responses; a dynamic problem has multiple input signals and multiple output signals.
• Determine the S/N function (e.g. nominal-the-best).

Step 6: Conduct Experiment and Collect Data
• Develop a step-by-step plan to carry out the logistical activities necessary for successful completion of the data-collection phase of the optimization experiment.
• Identify facility constraints.
• Determine logistical/run order.
• Determine control factor levels.
• Calculate the DOF.

• Determine if there are any interactions.

Step 3: Formulate Engineered System: Parameters
• Select control factor(s).
• Rank control factors.
• Select noise factor(s).
• Select the appropriate orthogonal array.

Step 7: Analyze Data and Select Optimal Design
• Calculate the following values: mean, variance, signal-to-noise ratio, ANOVA, factor effects.
• Interpret the results.
• Select factor levels providing the largest improvement in S/N.
• Use the mean effect values to predict the new mean of the improved system.
• If needed, select a value for the scaling factor.
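The core Step 7 calculations (mean, variance, S/N) can be sketched for the replicates of a single inner-array run. The data values below are hypothetical; the S/N forms are the standard nominal-the-best and smaller-the-better definitions:

```python
import math

def sn_nominal_the_best(y):
    """Nominal-the-best S/N ratio: 10*log10(mean^2 / variance), in dB."""
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / (n - 1)
    return 10 * math.log10(mean ** 2 / var)

def sn_smaller_the_better(y):
    """Smaller-the-better S/N ratio: -10*log10(mean of y^2), in dB."""
    return -10 * math.log10(sum(v * v for v in y) / len(y))

# Hypothetical responses of one control-factor setting across the outer
# (noise) array; a higher S/N means less sensitivity to noise
y = [10.1, 9.8, 10.3, 9.9]
sn_ntb = sn_nominal_the_best(y)
sn_stb = sn_smaller_the_better(y)
print(round(sn_ntb, 2), round(sn_stb, 2))
```

The factor levels that maximize the chosen S/N across the inner array would then be selected, with a scaling factor used afterward, if available, to put the mean back on target.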

Step 4: Assign Control Factors to Inner Array
• Determine control factor levels.
• Calculate the DOF.
• Determine if there are any interactions between control factors.
• Select the appropriate orthogonal array.

Step 8: Predict and Confirm
• TBD

Majority View on “One at a Time”

One way of thinking of the great advances of the science of experimentation in this century is as the final demise of the “one factor at a time” method, although it should be said that there are still organizations which have never heard of factorial experimentation and use up many man hours wandering a crooked path.

Logothetis, N., and Wynn, H.P., 1994, Quality Through Design: Experimental Design, Off-line Quality Control and Taguchi’s Contributions, Clarendon Press, Oxford. Minority Views on “One at a Time”

“…the factorial design has certain deficiencies … It devotes observations to exploring regions that may be of no interest…These deficiencies … suggest that an efficient design for the present purpose ought to be sequential; that is, ought to adjust the experimental program at each stage in light of the results of prior stages.” Friedman, Milton, and L. J. Savage, 1947, “Planning Experiments Seeking Maxima”, in Techniques of Statistical Analysis, pp. 365-372. “Some scientists do their experimental work in single steps. They hope to learn something from each run … they see and react to data more rapidly …If he has in fact found out a good deal by his methods, it must be true that the effects are at least three or four times his average random error per trial.” Cuthbert Daniel, 1973, “One-at-a-Time Plans”, Journal of the American Statistical Association, vol. 68, no. 342, pp. 353-360. My Observations of Industry

• Farming equipment company has reliability problems
• Large blocks of robustness experiments had been planned at the outset of the design work
• More than 50% were not finished
• Reasons given:

– Unforeseen changes
– Resource pressure
– Satisficing

“Well, in the third experiment, we found a solution that met all our needs, so we cancelled the rest of the experiments and moved on to other tasks…”

More Observations of Industry

• Time for design (concept to market) is going down • Fewer physical experiments are being conducted • Greater reliance on computation / CAE • Poor answers in computer modeling are common – Right model → Inaccurate answer – Right model → No answer whatsoever – Not-so right model → Inaccurate answer • Unmodeled effects • Bugs in coding the model Human Subjects Experiment

• Hypothesis: Engineers using a flawed simulation are more likely to detect the flaw while using OFAT than while using a more complex design.

• Method: Between-subjects experiment with human subjects (engineers) performing parameter design with OFAT vs. designed experiment. Treatment: Design Space Sampling Method

• Adaptive OFAT
  – One factor changes in each trial
  – Later cells are filled in as required to adapt

Trial   A   B   C   D   E   F   G
1       −   −   −   −   −   −   −
2       +   −   −   −   −   −   −
…       (each subsequent trial toggles the next factor, holding earlier factors at their retained levels)

• Plackett-Burman L8
  – Four factors change between any two trials

Trial   A   B   C   D   E   F   G
1       −   −   −   −   −   −   −
2       +   −   +   −   −   +   +
3       +   +   +   −   +   −   −
4       −   +   −   −   +   +   +
5       +   −   −   +   +   −   +
6       −   −   +   +   +   +   −
7       −   +   +   +   −   −   +
8       +   +   −   +   −   +   −

Using a 2^7 system for both treatments avoids a possible confound with the number of trials. Increasing the number of factors likely increases the discrepancy in detection rates. Larger effect sizes require fewer test subjects for given Type I and Type II error rates.

Parameter Design of Catapult to Hit Target

• Modeled on XPultTM

• Commonly used in DOE demonstrations

• Extended to a 2^7 system by introducing:
  – Arm material
  – Air relative humidity
  – Air temperature

Control Factors and Settings:

Control Factor         Nominal Setting           Alternate Setting
Relative Humidity      25%                       75%
Pullback               30 degrees                40 degrees
Type of Ball           Orange (large-ball TT)    White (regulation TT)
Arm Material           Magnesium                 Aluminum
Launch Angle           60 degrees                45 degrees
Rubber Bands           3                         2
Ambient Temperature    72 F                      32 F

• One control factor is tied directly to the simulation mistake.
• Arm material was selected for its moderate effect size.

• The computer simulation equations are “correct”, but the intentional mistake is that the arm material properties are reversed.
• Control factor ordering is not random, to prevent variance due to a learning effect.
• The “bad” control factor is placed in the 4th column in both designs.

[Pareto chart of effect sizes with cumulative percent; factors: Pullback, No. of Rubber Bands, Type of Ball, Arm Material, Launch Angle, Relative Humidity, Ambient Temperature]

Results of Human Subjects Experiment
• Pilot with N = 8
• Study with N = 55 (1 withdrawal)
• External validity high
  – 50 full-time engineers and 5 engineering students
  – Experience ranged from 6 mo. to 40+ yr.
• Outcome measured by subject debriefing at end
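The 95% intervals in the detection-rate table below are consistent with exact (Clopper-Pearson) binomial confidence intervals for 14/27 and 1/27 detections. A stdlib-only sketch using bisection on the binomial tail probabilities:

```python
from math import comb

def binom_cdf(x, n, p):
    """P(X <= x) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact two-sided CI for a binomial proportion, found by bisection."""
    def bisect(keep_moving_right):
        lo, hi = 0.0, 1.0
        for _ in range(60):          # the tail probabilities are monotone in p
            mid = (lo + hi) / 2
            if keep_moving_right(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # lower limit: p such that P(X >= x | p) = alpha/2
    lower = 0.0 if x == 0 else bisect(lambda p: 1 - binom_cdf(x - 1, n, p) < alpha / 2)
    # upper limit: p such that P(X <= x | p) = alpha/2
    upper = 1.0 if x == n else bisect(lambda p: binom_cdf(x, n, p) > alpha / 2)
    return lower, upper

ofat_ci = clopper_pearson(14, 27)   # OFAT: 14 of 27 subjects detected the flaw
pb_ci = clopper_pearson(1, 27)      # PB L8: 1 of 27 subjects detected the flaw
print(tuple(round(v, 4) for v in ofat_ci), tuple(round(v, 4) for v in pb_ci))
```

The non-overlapping intervals are what support the slide’s conclusion that OFAT users detected the flaw at a substantially higher rate.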

Method   Detected   Not detected   Detection Rate (95% CI)
OFAT     14         13             (0.3195, 0.7133)
PB L8    1          26             (0.0009, 0.1897)

Adaptive OFAT Experimentation

Do an experiment.
Change one factor.
If there is an improvement, retain the change.
If the response gets worse, go back to the previous state.
Stop after you’ve changed every factor.

[Cube plot: the aOFAT path visits adjacent corners of the A-B-C design space]
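The procedure above can be sketched in a few lines. The response function here is a made-up example containing one two-factor interaction; in a real study each call to it would be a physical experiment:

```python
def adaptive_ofat(y, x0):
    """Adaptive one-factor-at-a-time: toggle each factor once, in order,
    retaining a change only when the observed response improves."""
    x = list(x0)
    best = y(x)                # do an experiment
    for i in range(len(x)):
        x[i] = -x[i]           # change one factor
        trial = y(x)
        if trial > best:
            best = trial       # improvement: retain the change
        else:
            x[i] = -x[i]       # worse: go back to the previous state
    return x, best             # n+1 observations in total

# Hypothetical deterministic response with one two-factor interaction
def response(x):
    return 2 * x[0] + x[1] - 1.5 * x[0] * x[2] + 0.5 * x[3]

x_star, y_star = adaptive_ofat(response, [-1, -1, -1, -1])
print(x_star, y_star)   # finds the global optimum here: [1, 1, -1, 1], 5.0
```

Note how the x[2] toggle is rejected because the large x[0]·x[2] interaction makes the response worse once x[0] has been set high — this is the interaction-exploiting behavior the following slides analyze.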

Frey, D. D., F. Engelhardt, and E. Greitzer, 2003, “A Role for One Factor at a Time Experimentation in Parameter Design”, Research in Engineering Design 14(2): 65-74. Empirical Evaluation of Adaptive OFAT Experimentation • Meta-analysis of 66 responses from published, full factorial data sets • When experimental error is <25% of the combined factor effects OR interactions are >25% of the combined factor effects, adaptive OFAT provides more improvement on average than fractional factorial DOE.

Frey, D. D., F. Engelhardt, and E. Greitzer, 2003, “A Role for One Factor at a Time Experimentation in Parameter Design”, Research in Engineering Design 14(2): 65-74. Detailed Results

Performance, shown as OFAT/FF (cell shaded gray in the original if OFAT > FF), as a function of the strength of experimental error (columns) and the strength of interactions (rows):

Strength of     Strength of experimental error
interactions    0       0.1     0.2     0.3     0.4     0.5     0.6     0.7     0.8     0.9     1
Mild            100/99  99/98   98/98   96/96   94/94   89/92   86/88   81/86   77/82   73/79   69/75
Moderate        96/90   95/90   93/89   90/88   86/86   83/84   80/81   76/81   72/77   69/74   64/70
Strong          86/67   85/64   82/62   79/63   77/63   72/64   71/63   67/61   64/58   62/55   56/50
Dominant        80/39   79/36   77/34   75/37   72/37   70/35   69/35   64/34   63/31   61/35   59/35

A Mathematical Model of Adaptive OFAT

O₀ = y(x̃₁, x̃₂, …, x̃ₙ)      initial observation
O₁ = y(−x̃₁, x̃₂, …, x̃ₙ)     observation with first factor toggled
x₁* = x̃₁ · sign(O₀ − O₁)     first factor set

For i = 2 … n, repeat for all remaining factors:
Oᵢ = y(x₁*, …, x*ᵢ₋₁, −x̃ᵢ, x̃ᵢ₊₁, …, x̃ₙ)
xᵢ* = x̃ᵢ · sign( max(O₀, O₁, …, Oᵢ₋₁) − Oᵢ )

The process ends after n+1 observations with E[ y(x₁*, x₂*, …, xₙ*) ].

Frey, D. D., and H. Wang, 2006, “Adaptive One-Factor-at-a-Time Experimentation and Expected Value of Improvement”, Technometrics 48(3): 418-431.

A Mathematical Model of a Population of Engineering Systems

y(x₁, x₂, …, xₙ) = Σᵢ₌₁ⁿ βᵢ xᵢ + Σᵢ₌₁ⁿ⁻¹ Σⱼ₌ᵢ₊₁ⁿ βᵢⱼ xᵢ xⱼ + ε

where y is the system response, ε ~ N(0, σ_ε²) is experimental error, the main effects are βᵢ ~ N(0, σ_ME²), and the two-factor interactions are βᵢⱼ ~ N(0, σ_INT²).

y_max ≡ the largest response within the space of discrete, coded, two-level factors xᵢ ∈ {−1, +1}
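The model above lends itself to Monte Carlo evaluation: sample many systems from the stated distributions, run adaptive OFAT on each (with noisy observations), and compare the noise-free response of the chosen settings against y_max. A sketch — the σ values and trial count are illustrative, not the paper’s:

```python
import random

random.seed(0)

def sample_system(n, s_me, s_int, s_err):
    """Draw one system from the hierarchical model above."""
    beta = [random.gauss(0, s_me) for _ in range(n)]
    beta2 = {(i, j): random.gauss(0, s_int)
             for i in range(n) for j in range(i + 1, n)}
    def y(x, noisy=True):
        val = (sum(beta[i] * x[i] for i in range(n))
               + sum(b * x[i] * x[j] for (i, j), b in beta2.items()))
        return val + (random.gauss(0, s_err) if noisy else 0.0)
    return y

def aofat(y, n):
    """Adaptive OFAT on noisy observations, one toggle per factor."""
    x = [-1] * n
    best = y(x)
    for i in range(n):
        x[i] = -x[i]
        trial = y(x)
        if trial > best:
            best = trial
        else:
            x[i] = -x[i]
    return x

def y_max(y, n):
    """Largest noise-free response over all 2^n settings."""
    return max(y([1 if m >> i & 1 else -1 for i in range(n)], noisy=False)
               for m in range(2 ** n))

n, trials, frac = 7, 500, 0.0
for _ in range(trials):
    y = sample_system(n, s_me=1.0, s_int=0.25, s_err=0.1)
    frac += y(aofat(y, n), noisy=False) / y_max(y, n)
mean_frac = frac / trials
print(round(mean_frac, 2))   # mean fraction of y_max attained
```

This is exactly the kind of simulation the markers in the following figures represent, with the theorems giving the corresponding closed forms.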

Model adapted from Chipman, H., M. Hamada, and C. F. J. Wu, 2001, “A Bayesian Variable Selection Approach for Analyzing Designed Experiments with Complex Aliasing”, Technometrics 39(4): 372-381.

Probability of Exploiting an Effect

• The iᵗʰ main effect is said to be “exploited” if βᵢ xᵢ* > 0.
• The two-factor interaction between the iᵗʰ and jᵗʰ factors is said to be “exploited” if βᵢⱼ xᵢ* xⱼ* > 0.

• The probabilities and conditional probabilities of exploiting effects provide insight into the mechanisms by which a method provides improvements.

The Expected Value of the Response after the First Step

E( y(x₁*, x̃₂, …, x̃ₙ) ) = E[β₁ x₁*] + (n−1) E[β₁ⱼ x₁* x̃ⱼ]

E[β₁ x₁*] = √(2/π) · σ_ME² / √( σ_ME² + (n−1) σ_INT² + σ_ε²/2 )

E[β₁ⱼ x₁* x̃ⱼ] = √(2/π) · σ_INT² / √( σ_ME² + (n−1) σ_INT² + σ_ε²/2 )
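The first-step expectation E[β₁x₁*] can be checked by Monte Carlo. Only the first factor’s main effect, its interactions, and the two observation errors survive in O₀ − O₁, so the simulation needs just those terms (the σ values below are arbitrary test settings):

```python
import math, random

random.seed(1)

def mc_first_step(n, s_me, s_int, s_err, trials=100_000):
    """Monte Carlo estimate of E[beta_1 * x_1_star]."""
    total = 0.0
    for _ in range(trials):
        b1 = random.gauss(0, s_me)
        x1 = random.choice([-1, 1])
        diff = 2 * b1 * x1                      # main-effect part of O0 - O1
        for _ in range(n - 1):                  # first-factor interactions
            diff += 2 * random.gauss(0, s_int) * x1 * random.choice([-1, 1])
        diff += random.gauss(0, s_err) - random.gauss(0, s_err)   # eps0 - eps1
        x1_star = x1 if diff > 0 else -x1       # x1* = x1 * sign(O0 - O1)
        total += b1 * x1_star
    return total / trials

def theorem_1(n, s_me, s_int, s_err):
    denom = math.sqrt(s_me**2 + (n - 1) * s_int**2 + s_err**2 / 2)
    return math.sqrt(2 / math.pi) * s_me**2 / denom

sim, exact = mc_first_step(7, 1.0, 0.5, 1.0), theorem_1(7, 1.0, 0.5, 1.0)
print(round(sim, 3), round(exact, 3))
```

The two values agree to within Monte Carlo error, mirroring the theorem-vs-simulation comparison plotted on this slide.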

[Figure: the main-effect term E[β₁ x₁*] and the interaction term E[β₁ⱼ x₁* x̃ⱼ] plotted against σ_INT/σ_ME for n = 7; Theorem 1 (lines) vs simulation (markers) for σ_ε/σ_ME = 0.1, 1, 10]

Probability of Exploiting the First Main Effect

Pr(β₁ x₁* > 0) = 1/2 + (1/π) sin⁻¹( σ_ME / √( σ_ME² + (n−1) σ_INT² + σ_ε²/2 ) )
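Theorem 2 can be checked the same way, counting how often the retained first-factor setting agrees in sign with its true main effect (arbitrary test settings again):

```python
import math, random

random.seed(2)

def mc_prob_exploit(n, s_me, s_int, s_err, trials=100_000):
    """Monte Carlo estimate of Pr(beta_1 * x_1_star > 0)."""
    hits = 0
    for _ in range(trials):
        b1 = random.gauss(0, s_me)
        x1 = random.choice([-1, 1])
        diff = 2 * b1 * x1
        for _ in range(n - 1):
            diff += 2 * random.gauss(0, s_int) * x1 * random.choice([-1, 1])
        diff += random.gauss(0, s_err) - random.gauss(0, s_err)
        x1_star = x1 if diff > 0 else -x1
        hits += b1 * x1_star > 0
    return hits / trials

def theorem_2(n, s_me, s_int, s_err):
    sigma = math.sqrt(s_me**2 + (n - 1) * s_int**2 + s_err**2 / 2)
    return 0.5 + math.asin(s_me / sigma) / math.pi

p_sim, p_exact = mc_prob_exploit(7, 1.0, 0.5, 1.0), theorem_2(7, 1.0, 0.5, 1.0)
print(round(p_sim, 3), round(p_exact, 3))
```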

If interactions are small and error is not too large, OFAT will tend to exploit main effects.

[Figure: Pr(β₁ x₁* > 0) vs σ_INT/σ_ME, ranging from about 1 down to 0.5; Theorem 2 vs simulation for σ_ε/σ_ME = 0.1, 1, 10]

The Expected Value of the Response After the Second Step

E( y(x₁*, x₂*, x̃₃, …, x̃ₙ) ) = 2 E[β₁ x₁*] + 2(n−2) E[β₁ⱼ x₁* x̃ⱼ] + E[β₁₂ x₁* x₂*]

E[β₁₂ x₁* x₂*] = (2/π) · σ_INT² / √( σ_ME² + (n−1) σ_INT² + σ_ε²/2 )

By symmetry, E[β₁ⱼ x₁* x̃ⱼ] = E[β₂ⱼ x₂* x̃ⱼ] and E[β₁ x₁*] = E[β₂ x₂*].

[Figure: the three expectations E[β₁ x₁*], E[β₁ⱼ x₁* x̃ⱼ], and E[β₁₂ x₁* x₂*] vs σ_INT/σ_ME; Theorem 3 vs simulation for σ_ε/σ_ME = 0.1, 1, 10]

Probability of Exploiting the First Interaction

Pr( β₁₂ x₁* x₂* > 0 | β₁₂ > βᵢⱼ )   — i.e., conditioned on β₁₂ being the largest interaction

Pr(β₁₂ x₁* x₂* > 0) = 1/2 + (1/π) tan⁻¹( σ_INT / √( 2(σ_ME² + (n−2) σ_INT² + σ_ε²/2) ) )

Conditioning on β₁₂ being the largest interaction (Theorem 6 of Frey and Wang, 2006) replaces this closed form with a double integral over the interaction magnitudes involving erf, the density e^(−x²/2σ_INT²), and the exponent (n choose 2) − 1.

[Figures: probability of exploiting the first interaction vs σ_INT/σ_ME, ranging roughly 0.5 to 1; left: unconditional (Theorem 5), right: conditioned on β₁₂ largest (Theorem 6); each vs simulation for σ_ε/σ_ME = 0.1, 1, 10]

And it Continues

[Figure: expected fraction of the maximum, E( y(x₁*, …, x_k*, x̃_{k+1}, …, x̃ₙ) ) / y_max, after k = 0 … 7 factor changes (n = 7); Eqn. 20 vs simulation for σ_INT/σ_ME = 0.25, 0.5, 1]

Pr(βᵢⱼ xᵢ* xⱼ* > 0) ≥ Pr(β₁₂ x₁* x₂* > 0)

We can prove that the probability of exploiting interactions is sustained as the process continues. Further, we can prove that the exploitation probability is a function of j only and increases monotonically.

Final Outcome

[Figures: expected final outcome E( y(x₁*, x₂*, …, xₙ*) ) / y_max vs σ_INT/σ_ME for σ_ε/σ_ME = 0.1, 1, 10; Eqn. 21 vs simulation for adaptive OFAT (left) and Eqn. 20 vs simulation for the resolution III design (right)]

Final Outcome

[Figure: final outcome E( y(x₁*, …, xₙ*) ) / y_max vs σ_INT/σ_ME at σ_ε/σ_ME = 1, overlaying adaptive OFAT and the resolution III design, with an annotation near σ_INT/σ_ME ≈ 0.25]

Adaptive “One Factor at a Time” for Robust Design

Run a resolution III array on the noise factors.
Change one control factor.
Again, run a resolution III array on the noise factors.
If there is an improvement in transmitted variance, retain the change.
If the response gets worse, go back to the previous state.
Stop after you’ve changed every factor once.

[Cube plots showing the adaptive path through the control-factor space, with a small noise array run at each setting]

Frey, D. D., and N. Sudarsanam, 2007, “An Adaptive One-factor-at-a-time Method for Robust Parameter Design: Comparison with Crossed Arrays via Case Studies,” accepted to ASME Journal of Mechanical Design.

Sheet Metal Spinning

Results for Three Methods of Robust Design Applied to the Sheet Metal Spinning Model

[Figure: smaller-the-better signal-to-noise ratio (dB) vs standard deviation of pure experimental error (mm²), 0 to 3. Curves: aOFAT × 2^(3−1) Informed (near the maximum S/N), 2^(6−3) × 2^(3−1), and aOFAT × 2^(3−1) Random (near the average S/N). Annotations mark the largest noise effect and the largest control effect.]

Image by MIT OpenCourseWare.

Paper Airplane

Results for Three Methods of Robust Design Applied to the Paper Airplane Physical Experiment (MIT Exercise v2.0)

[Figure: average signal-to-noise ratio (38 to 41) vs standard deviation of pure experimental error (inches), 0 to 50. Curves: aOFAT × 2^(3−1) Informed (near the maximum S/N), aOFAT × 2^(3−1) Random, and L9 × 2^(3−1). Annotations mark the combined effects of noise and the largest control factor effect.]

Parameters: A — Weight Position (A1, A2, A3); B — Stabilizer Flaps (B1 up, B2 flat, B3 down); C — Nose Length (C1, C2, C3); D — Wing Angle (D1, D2, D3).

L9 inner array:

Expt. #   A    B    C    D
1        A1   B1   C1   D1
2        A1   B2   C2   D2
3        A1   B3   C3   D3
4        A2   B1   C2   D3
5        A2   B2   C3   D1
6        A2   B3   C1   D2
7        A3   B1   C3   D2
8        A3   B2   C1   D3
9        A3   B3   C2   D1

Image by MIT OpenCourseWare.

Results Across Four Case Studies

Method used (columns): fractional array × 2^(k−p)_III; aOFAT × 2^(k−p)_III, informed order; aOFAT × 2^(k−p)_III, random order.

Case                  Error    Fractional   aOFAT Informed   aOFAT Random
Sheet metal spinning  Low ε    51%          75%              56%
                      High ε   36%          57%              52%
Op amp                Low ε    99%          99%              98%
                      High ε   98%          88%              87%
Paper airplane        Low ε    43%          81%              68%
                      High ε   41%          68%              51%
Freight transport     Low ε    94%          100%             100%
                      High ε   88%          85%              85%
Mean of four cases    Low ε    74%          91%              84%
                      High ε   66%          70%              64%
Range of four cases   Low ε    43% to 99%   75% to 100%      56% to 100%
                      High ε   36% to 88%   57% to 88%       51% to 87%

Image by MIT OpenCourseWare.

Frey, D. D., and N. Sudarsanam, 2007, “An Adaptive One-factor-at-a-time Method for Robust Parameter Design: Comparison with Crossed Arrays via Case Studies,” accepted to ASME Journal of Mechanical Design.

Ensembles of aOFATs

[Figures: expected value of the best observed response (40 to 100) vs strength of experimental error (0 to 25), with the expected value of the largest control factor effect = 16, under the HPM. Left: an ensemble of 4 aOFATs vs a 2^(7−2) fractional factorial array; right: an ensemble of 8 aOFATs vs a 2^(7−1) fractional factorial array.]

Conclusions

• A new model and theorems show that – Adaptive OFAT plans exploit two-factor interactions especially when they are large – Adaptive OFAT plans provide around 80% of the benefits achievable via parameter design • Adaptive OFAT can be “crossed” with factorial designs which proves to be highly effective

Frey, D. D., and N. Sudarsanam, 2007, “An Adaptive One-factor-at-a-time Method for Robust Parameter Design: Comparison with Crossed Arrays via Case Studies,” accepted to ASME Journal of Mechanical Design. MIT OpenCourseWare http://ocw.mit.edu

ESD.77 / 16.888 Multidisciplinary System Design Optimization Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.