
DESIGN FOR UNCERTAINTIES OF SHEET METAL FORMING PROCESS

DISSERTATION

Presented in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in the Graduate School of The Ohio State University

By

Wenfeng Zhang, M.S.

* * * *

The Ohio State University

2007

Dissertation Committee:

Dr. Rajiv Shivpuri, Adviser
Dr. Allen Yi
Dr. Tunc Aldemir

Approved by

Adviser
Graduate Program in Industrial and Systems Engineering

ABSTRACT

The exclusion of inherent process variations in the current deterministic design approach for sheet metal forming can lead to unreliable results that may cause high scrap rates, frequent rework, machine shutdowns, and thus a huge loss of profit. Extensive research has been done in exploring the deterministic effect of each factor on the part, but the impact of their variations on the fluctuation of the output quality of the process is seldom addressed and quantified. The inclusion of uncertainty in the design and optimization cycle should lead to a better understanding of the impact of uncertainty associated with the system inputs on the system outputs. This understanding can then be applied to managing such uncertainties. To date, there are very few reports on incorporating uncertainties and variations in the design of stamping processes.

In this research we propose three different probabilistic design approaches. They use sheet metal forming finite element method (FEM) simulation as the fundamental tool. When the system meta-model is not complex, the design of experiments (DOE) technique and the response surface method (RSM) are integrated with FEM to build an explicit function connecting the process inputs to the process performance outputs. With the quantification of the uncertainties of the input variables, by Monte Carlo Simulation (MCS) or other reliability analysis methods, the probability that the product conforms to its specification can therefore be assessed. Through the right formulation of the probabilistic optimization, the robust configuration can be found. In the case where the number of process inputs is too large for the design of experiments to handle and the system meta-model cannot be approximated, an alternate approach that integrates FEM, a multi-objective genetic algorithm and reliability assessment is illustrated.

Through the proposed probabilistic design approaches, a deeper understanding of the relationship between process variable uncertainties and part quality is achieved. Ultimately, the process output variability could be reduced, defect rates could be minimized and product quality could be improved.


DEDICATION

Dedicated to my parents: Zhongping Zhang/Xiaohui Shu and my wife Ji Li.

ACKNOWLEDGMENTS

First and foremost I would like to thank Dr. Rajiv Shivpuri, my adviser, for his support, guidance and, above all, encouragement throughout the years of my PhD program. Under his instruction I have gained both knowledge and the ability to acquire knowledge, which has prepared me to make more contributions in the future.

I thank the other members of my dissertation committee, Dr. Allen Yi and Dr. Tunc Aldemir, for their scientific input and advice, as well as the members of my general exam committee, Dr. Steve MacEachern and Dr. Tedd Allen, for their valuable comments and suggestions.

I also owe thanks to our group members, Lin Yang, Yijun Zhu, Yongning Mao, Xiaomin Cheng, Yuanjie Wu, Meixing Ji and Chun Liu, for their academic help and friendship.

Thanks also go to Dr. Jiang Hua, Dr. Ziqiang Sheng, and Dr. Satish Kini for their kind support and helpful suggestions.

Finally, I want to express my sincere gratitude to my wife, Ji Li, for her longtime support and unconditional love! She always makes me smile, even when facing the greatest difficulties. I am so glad that she will get her PhD degree at the same time as me next month! We are going to be a proud and happy dual-PhD family!

VITA

May 16, 1978 Born - Wuhan, P.R. China

2000 B.E., Mechanical Engineering, Huazhong Univ. of Science and Technology, Wuhan, P.R. China

2001 – 2003 M.S., Mechanical Engineering, the Ohio State University, Columbus, Ohio, U.S.A.

2001 – 2007 Graduate Research Associate, Department of Industrial, Welding and Systems Engineering, The Ohio State University

PUBLICATIONS

Zhang, W., Sheng, Z., Shivpuri, R., Probabilistic Design of Aluminum Sheet Drawing for Reduced Risk of Wrinkling and Fracture, Numisheet (Detroit), 2005, pp. 247-252.

Zhang, W., Shivpuri, R., Investigating Reliability of Variable Blank Holder Force Control in Sheet Drawing under Process Uncertainties, ASME Journal of Manufacturing Science and Engineering, accepted, 2006.

Zhang, W., Shivpuri, R., A New Discrete Friction Concept in Sheet Metal Deep Drawing and Its Process Optimization by Multi-objective Optimization, ASME Journal of Manufacturing Science and Engineering, accepted, 2006.


FIELDS OF STUDY

Major Field: Industrial & Systems Engineering

Major Area: Manufacturing

Minor I:

Minor II: Statistical

TABLE OF CONTENTS

ABSTRACT
DEDICATION
ACKNOWLEDGMENTS
VITA
LIST OF TABLES
LIST OF FIGURES

1. INTRODUCTION
   1.1 Traditional Approach to Process Design
   1.2 New Approach for Robust Process Design
   1.3 Objective of Research
   1.4 Research Significance and Benefits
   1.5 What is My Contribution?
   1.6 Dissertation Outline

2. BACKGROUND AND LITERATURE REVIEW
   2.1 Finite Element Method and Its Application in Sheet Metal Forming
       2.1.1 Introduction to Sheet Metal Forming
       2.1.2 Basis of Finite Element Method
       2.1.3 Application of Finite Element Method in Sheet Metal Forming
   2.2 Design of Experiments Techniques
       2.2.1 Full-factorial Design
       2.2.2 Orthogonal Arrays
       2.2.3 Latin Hypercube Design
       2.2.4 Central Composite Design
       2.2.5 Box-Behnken Design
       2.2.6 Computer Aided Designs
   2.3 Approximating Methods
       2.3.1 Response Surface Method
       2.3.2 Kriging Meta Models
       2.3.3 Neural Networks
   2.4 Literature Review of Deterministic Sheet Metal Forming Process Optimization Employing FEM, DOE and Approximation Methods

3. RESEARCH METHODOLOGY
   3.1 Taguchi Robust Design
       3.1.1 Introduction to Taguchi Robust Design
       3.1.2 General Taguchi Robust Formulation
       3.1.3 Disadvantages of Taguchi Robust Design Method
   3.2 Reliability Based Optimization
       3.2.1 Reliability Analysis
       3.2.2 Reliability Based Optimization Formulation
   3.3 Proposed Probabilistic Design Approach

4. EXPERIMENT OBSERVATION ON THE PART QUALITY VARIATIONS
   4.1 Experiment Objective
   4.2 Experimental Design
       4.2.1 Criteria to Select Right Part to Draw
       4.2.2 Drawing Design
       4.2.3 The Measurement of Quality Characteristics
       4.2.4 The Experiment Procedure
   4.3 The Experiment Result
       4.3.1 Wrinkling Measurement Result
       4.3.2 Fracture Measurement Result
       4.3.3 Springback Measurement Result
   4.4 Experiment Conclusion

5. PROBABILISTIC DESIGN OF ALUMINUM SHEET DRAWING FOR REDUCED RISK OF WRINKLING AND FRACTURE
   5.1 Introduction
   5.2 General Approach for Probabilistic Design
   5.3 Quality Index to Measure Risk
   5.4 The Numerical Simulation Model
   5.5 Selection of Input Variables
   5.6 Prediction of Wrinkling and Fracture
   5.7 DOE and RSM
   5.8 Probabilistic Assessment for Wrinkling and Fracture
   5.9 Optimal Design Approach
   5.10 Effect of Variation on The Optimum Design
   5.11 Conclusion

6. INVESTIGATING RELIABILITY OF TEMPORAL VARIABLE BLANK HOLDER FORCE CONTROL IN SHEET DRAWING UNDER PROCESS UNCERTAINTIES
   6.1 Introduction
   6.2 General Approach for Probabilistic Design Optimization
   6.3 Parameterization of The Variable Blank Holder Force
   6.4 The Numerical Simulation Model
   6.5 Selection of Input Variables and Output Variables
   6.6 Deterministic Design vs. Probabilistic Design
   6.7 Conclusion

7. SPATIALLY VARYING CONSTRAINTS AND PROBABILISTIC DESIGN BY MULTI-OBJECTIVE GENETIC ALGORITHM
   7.1 Introduction
   7.2 Setup of Spatially Varying Constraints in Hishida Part Drawing
   7.3 Multi-Objective Genetic Algorithm: NSGA-II
       7.3.1 Pareto Optimal and Pareto Front
       7.3.2 Non-dominated Sorting Genetic Algorithm-II
   7.4 Design Optimization Model
       7.4.1 Design Objectives
       7.4.2 Design Variables
       7.4.3 Integration of NSGA-II Optimization and Pam-Quickstamp Full FEA
       7.4.4 Hishida Part Drawing Process Heuristics
   7.5 Optimization Result and Analysis
   7.6 The Probabilistic Design Search
   7.7 Conclusion

8. CONCLUSIONS AND FUTURE WORK

LIST OF REFERENCES
APPENDIX A
APPENDIX B

LIST OF TABLES

Table 4.1. The die/punch design and test setup
Table 5.1. Variation of material properties [28][29]
Table 5.2. The variation and DOE levels for selected parameters
Table 5.3. Deterministic design result
Table 5.4. Probabilistic design result
Table 6.1. Circular blank dimensions, material properties, and friction coefficients used in the simulation of drawing the conical cup from AKDQ
Table 6.2. The sensitivity of initially selected process parameters
Table 6.3. The deterministic and probabilistic design result with constraint reliability
Table 7.1. Material properties used in the simulation
Table 8.1. The summary of prob. & determ. design strategies

LIST OF FIGURES

Figure 2.1. Latin Hypercube design [11]
Figure 2.2. Comparison of CCD design and Box-Behnken design
Figure 3.1. Taguchi robust design matrix [11]
Figure 3.2. Taguchi [11]
Figure 3.3. Reliability based analysis [11]
Figure 3.4. FORM reliability analysis method [11]
Figure 4.1. Illustration of process output variations caused by input variations for the sheet drawing process
Figure 4.2. The cylindrical cup drawing die design
Figure 4.3. The hydraulic press used in the drawing test
Figure 4.4. Illustration of measurement of three quality characteristics of drawn cups
Figure 4.5. Parts drawn without and with lubricants at two levels of BHFs
Figure 4.6. The parts after drawing and the defective parts
Figure 4.7. The measurement result for number of wrinkling along the flange
Figure 4.8. The result for the maximum height of wrinkling mark along cup sidewall
Figure 4.9. The fracture measurement result
Figure 4.10. The springback angle measurement result
Figure 5.1. General probabilistic design approach
Figure 5.2. Simulation model of the Hishida forming process
Figure 5.3. The measurement of sidewall wrinkling
Figure 5.4. Sensitivity plot of wrinkling and thinning
Figure 5.5. Probabilistic design of BHF
Figure 5.6. Probabilistic design of Lub1
Figure 5.7. Effect of variation of random variables on the quality index
Figure 6.1. The blank holder control profile from [44] and the fitted Gaussian approximation for sheet drawing
Figure 6.2. The FEM simulation model of the sheet drawing process and the final drawn part
Figure 6.3. Maximum thinning and sidewall wrinkling (y-axis) at different levels of BHF (x-axis)
Figure 6.4. The design solution of PI control, deterministic design and probabilistic design for csw = 0.21
Figure 6.5. The design solution of PI control, deterministic design and probabilistic design for csw = 0.23
Figure 6.6. Histograms of sidewall wrinkling and maximum thinning for the deterministic designs
Figure 6.7. Histograms of sidewall wrinkling and maximum thinning for the probabilistic designs
Figure 6.8. Evaluation of probabilistic design vs. deterministic design
Figure 7.1. Example of one setup of drawbeads on the Hishida part drawing [51]
Figure 7.2. The segment-elastic binder used in the Hishida part drawing [51]
Figure 7.3. Micro- to alter the friction condition [53]
Figure 7.4. Topological effect at the roughness [55]
Figure 7.5. The Hishida Part
Figure 7.6. Spatial distribution of discrete friction zones
Figure 7.7. The Pareto optimal and Pareto front
Figure 7.8. The procedure of NSGA-II [61]
Figure 7.9. The simulation model of discrete friction drawing of Hishida part
Figure 7.10. The optimization flow chart
Figure 7.11. The Pareto front of discrete friction NSGA-II optimization
Figure 7.12. Uniform friction design and strain distribution after drawing
Figure 7.13. Discrete friction design and strain distribution after drawing
Figure 7.14. Strain distribution at different locations of the part after drawing
Figure 7.15. Reliability analysis of the Pareto front points
Figure 7.16. Reliability of feasible points in the last Pareto front when BHF COV=0.05, friction COV=0.1
Figure 7.17. Reliability of feasible points in the last Pareto front when BHF COV=0.05, friction COV=0.2
Figure 7.18. Reliability of feasible points in the last Pareto front when BHF COV=0.05, friction COV=0.4

CHAPTER 1

1. INTRODUCTION

At present, metal stampings are used in almost every mass-produced product. Consider the number of consumer and industrial products that include sheet metal parts: automobile and truck bodies, airplanes, railway cars, farm and construction equipment, appliances, office furniture, computers, and more. Although these examples are conspicuous because they have sheet-metal exteriors, many of their internal components are also made of sheet metal. According to a survey in the US, some 100,000 metal stampings could be found in the average American home in the 1980s [1]. The commercial importance of sheet metalworking is significant. The three major categories of sheet metal processes are cutting, bending, and drawing. Cutting is used to separate large sheets into smaller pieces, to cut out a part perimeter, or to make holes in a part. Bending and drawing are used to form sheet metal parts into their required shapes. In this research, we focus mainly on the drawing process.

Stamping product quality has always been one of the most important concerns of industry. Any quality issue can be very costly to the manufacturer, creating difficulties in assembly, causing rework or repair on the production floor or in the field, and resulting in customer dissatisfaction. Traditionally, product quality is assured by inspecting parts, in full or by sampling, after they have been manufactured, against specifications and standards such as dimensions, visual characteristics, and mechanical and electrical properties.

The inspection can only be done after the part is made. If the part is out of specification, action is taken based on experience about whether to stop the production line and investigate the root cause. Usually, before an out-of-specification signal is sent out, production has been kept running for a long time and has produced a large amount of scrap parts.

The practice of inspecting products after they are made is now being rapidly replaced by online quality control methods, which were pioneered primarily by Deming, Taguchi and Juran. Among them, statistical process control (SPC) and the related control charts are the most famous and widely used tools in industry. The key concept here is the use of control limits instead of the specification limits denoted in the design drawing or manufacturing instruction card. The control limits, which should be much narrower than the specification limits, are developed statistically based on the natural process variation in a stable, good state. The goal is to keep the process in this stable state so that all the parts will meet the specification.

Besides setting up control limits and charts for the products, some important process parameters can also be monitored, for example, the furnace temperature in a hot forming process. With these much tighter control limits, system abnormalities such as mean or variation shifts can be detected much faster than with the traditional inspection method, and sources of quality problems can be rapidly identified.
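To make the control-limit idea above concrete, here is a minimal Python sketch (an illustration only, not the SPC procedure used in this work): it computes X-bar chart limits from hypothetical subgroup measurements, using a plain 3-sigma width and ignoring the small-sample bias correction.

    import numpy as np

    def xbar_control_limits(subgroups):
        """X-bar chart center line and approximate 3-sigma control limits.

        subgroups: 2-D array, one row per sampling subgroup. The limits
        reflect natural within-subgroup variation, so they are typically
        much narrower than the design specification limits.
        """
        subgroups = np.asarray(subgroups, dtype=float)
        n = subgroups.shape[1]                               # subgroup size
        center = subgroups.mean()                            # grand mean
        sigma_within = subgroups.std(axis=1, ddof=1).mean()  # avg within-subgroup std (no c4 correction)
        half_width = 3.0 * sigma_within / np.sqrt(n)
        return center - half_width, center, center + half_width

    # Hypothetical thickness measurements (mm): 5 parts per hour over 4 hours.
    data = [[1.02, 0.99, 1.01, 1.00, 0.98],
            [1.01, 1.00, 1.03, 0.99, 1.00],
            [0.97, 1.00, 1.02, 1.01, 0.99],
            [1.00, 1.02, 0.98, 1.01, 1.00]]
    lcl, cl, ucl = xbar_control_limits(data)
    print(f"LCL={lcl:.4f}, CL={cl:.4f}, UCL={ucl:.4f}")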

However, these online quality control methods only concentrate on the manufacturing stage and cannot compensate for poor design quality. It is commonly known that, by the 20-80 rule, nearly 80 percent of part quality issues are due to improper product or process design; in other words, the product is sometimes simply not well designed for manufacturing. When a process is not well designed, it is highly likely that the product quality is very sensitive to the process variation. If the variation is uncontrollable, prohibitively costly process control schemes may be required to improve the process capability, and they cannot guarantee a product robust to deterioration and variability due to uncontrollable process factors.

More specifically, for the sheet metal stamping process, a well designed process is much more important than online SPC control. The reason is that, when traditional SPC charts are used to monitor the process and out-of-control signals indicating a process mean shift are encountered, a sheet metal stamping process does not have the necessary adjustability in its process variable input settings to allow adjusting the mean response in an out-of-control condition. Hence, the signals often go ignored. This means that additional expense might be incurred due to service costs under warranty and, more importantly, due to the loss of market share because of customer dissatisfaction.


Therefore, if the quality concept is moved further upstream to the design process, these costs can be avoided. The need for costly process control, mass inspection, and warranty service is minimized if one optimizes product and process design to ensure product robustness.

1.1 Traditional Approach to Process Design

Drawing is a sheet metal forming operation used to make cup-shaped, box-shaped, or other complex-curved, hollow-shaped parts. It is performed by placing a piece of sheet metal over a die cavity and then pushing the metal into the opening with a punch. The blank must usually be held down flat against the die by a blank holder.

A lot of effort has been put into the design phase in order to produce a defect-free part. Given the material and part shape, the optimum deep drawing design, in general, focuses on three main aspects: the blank design, the tooling design and the process design. The blank design includes optimizing the blank geometry and thickness. The tooling design is to determine the optimal punch and die radii, the punch and die clearance, and the drawbead shape and location. The process design tends to find the best setting of process parameters such as the friction (lubricant type and procedure), the punch speed and the blank holder force.


However, most of the design processes mentioned above are based largely on the designer's experience and the nowadays widely used deterministic finite element method (FEM) simulations. The designer can check the drawability through the numerical prediction of fracture and final sheet thickness, wrinkling, surface defects, springback and residual stresses. If the simulation reveals any potential failure or defect, the designer modifies the process according to the specific defect and his previous experience. A new simulation is then conducted to verify the new design. If the defect still exists, a second round of design modification is undertaken. This iteration is basically the traditional engineering trial-and-error process, using computer simulation instead of real drawing experiments.

It is noticed that during this process, no process uncertainty or variation is considered. All the simulation input parameters, such as the material properties, are treated as fixed, known numbers.

Therefore, we say the traditional design process is deterministic in nature.

After the design is put into production, however, the process variations are actually taken care of by the online quality control methods like statistical process control (SPC).

It is apparent that a gap exists between the process design and the process control. The deterministic design does not consider any process uncertainty or variation, while the corresponding process control statistically utilizes them at a fundamental level.

Majeske [2] has studied several leading automobile manufacturers to identify sources of variation in sheet metal stamping. His results show that within the same batch, which means the same die and process setup, the part-to-part variation is around 30 percent of the total part variation in the long run. This is to say that no matter how good the deterministic design is, the inevitable process variation may cause the design to produce a large output variation and thus bad quality.

1.2 New Approach for Robust Process Design

As illustrated in the previous section, a design process considering the uncertainties and variations should be adopted to improve product quality. Robust design is such a design philosophy. We call it a philosophy because its principle is straightforward: design a process that is insensitive to the noise factors. Its implementation, however, is not trivial. In fact, different robust design methods are still being developed today.

The Taguchi robust design is believed to be the most widely used and recognized robust design method. However, its shortcomings are obvious. First, the optimal design can only be at the experimental points. Basically, the method calculates the signal-to-noise ratio for each combination of control variables, and the configuration with the largest ratio is selected as the optimal design. No point between the experimental points can be evaluated. Second, the way the Taguchi method designs the experiment matrix arbitrarily assumes independence between control factors and noise factors. Thus the highly possible coupling between them is ignored, which is not justifiable. Third, the relationship between those variables remains unclear, and the effect of the variation of each noise factor on the part quality cannot be quantified.
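The signal-to-noise computation referred to above can be sketched in a few lines of Python. This is a hypothetical toy, not the formulation used later in this dissertation: each row is one control-factor combination from an inner array, each column one noise condition from an outer array, and the smaller-the-better S/N ratio is computed per row.

    import numpy as np

    def sn_smaller_the_better(y):
        """Taguchi smaller-the-better signal-to-noise ratio for one row of
        responses measured over the outer (noise) array."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(y ** 2))

    # Hypothetical responses (e.g., maximum thinning, %) for 4 control settings
    # (rows), each replicated over 3 noise conditions (columns).
    responses = np.array([[8.1, 9.0, 8.7],
                          [7.2, 7.9, 7.5],
                          [9.5, 10.2, 9.8],
                          [7.0, 8.8, 7.4]])

    sn = np.array([sn_smaller_the_better(row) for row in responses])
    best = int(np.argmax(sn))          # the largest S/N ratio is selected
    print("S/N ratios:", np.round(sn, 2), "-> pick control setting", best)

Note that the winner can only ever be one of the four tabulated settings, which is exactly the first shortcoming discussed above.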


Compared to Taguchi robust design, reliability-based optimization is more systematic and quantitative. It seeks to identify design solutions that not only optimize performance (minimize or maximize one or more objectives), but also satisfy constraints on the minimum reliability (or maximum probability of failure). Consequently, a deterministic optimization problem is modified into a reliability-based optimization problem by adding (and defining) random variables and by modifying the deterministic constraints to become probabilistic reliability constraints. However, the biggest issue with this approach is that the objective function is evaluated at the mean values of the design variables and hence is deterministic in nature instead of stochastic. Most optimization schemes developed for this method cannot handle the problem if we change the objective function to include uncertainties and variation.
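For reference, the generic reliability-based optimization statement described above can be written compactly as follows. This is a standard textbook form with symbols chosen here only for illustration (d: design variables, X: random variables, g_i: limit-state functions, R_i: target reliabilities), not a formulation taken from a later chapter:

    \min_{d} \; f(d, \mu_X)
    \text{subject to} \quad P\left[\, g_i(d, X) \le 0 \,\right] \ge R_i, \qquad i = 1, \dots, m

As noted above, the objective f is evaluated at the mean values \mu_X of the random inputs, which is exactly why the objective remains deterministic even though the constraints are probabilistic.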

In this research, we propose three different probabilistic design approaches. They use sheet metal forming finite element method (FEM) simulation as the fundamental tool. When the system meta-model is not complex, the design of experiments (DOE) technique and the response surface method (RSM) are integrated with FEM to build an explicit function connecting the process inputs to the process performance outputs. With the quantification of the uncertainties of the input variables, by Monte Carlo Simulation (MCS) or other reliability analysis methods, the probability that the product conforms to its specification can therefore be assessed. Through the right probabilistic optimization formulation, which we will discuss in later chapters, the robust optimal design can be found. In this dissertation, we will also explain the method to deal with the case where the number of process inputs is too large for the design of experiments to handle and the system meta-model cannot be approximated. The integration of FEM, a multi-objective genetic algorithm and a reliability assessment approach will be illustrated in chapter 7. Its main aim is to reduce process variability, reduce defect rates and improve process capability by robust process design.
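The chain described above can be illustrated by the following Python sketch, under purely hypothetical numbers: a made-up quadratic stands in for the FEM-based response surface, the assumed input distributions are sampled by Monte Carlo, and the probability of conforming to a thinning specification is estimated. The function name, distributions, coefficients and limits are assumptions for illustration, not the settings used in later chapters.

    import numpy as np

    rng = np.random.default_rng(0)

    def thinning_rsm(bhf, mu):
        """Hypothetical fitted response surface: maximum thinning (%) as a
        function of blank holder force (kN) and friction coefficient."""
        return 5.0 + 0.08 * bhf + 30.0 * mu + 0.0004 * bhf**2 - 0.05 * bhf * mu

    def conformance_probability(bhf_mean, mu_mean, n=100_000, limit=20.0):
        """Monte Carlo estimate of P(thinning <= limit) under assumed input
        scatter (normal BHF and friction, friction clipped to stay non-negative)."""
        bhf = rng.normal(bhf_mean, 0.05 * bhf_mean, n)   # 5% COV, assumed
        mu = rng.normal(mu_mean, 0.10 * mu_mean, n)       # 10% COV, assumed
        thinning = thinning_rsm(bhf, np.clip(mu, 0.0, None))
        return np.mean(thinning <= limit)

    for bhf_setting in (60.0, 80.0, 100.0):
        p = conformance_probability(bhf_setting, mu_mean=0.12)
        print(f"BHF = {bhf_setting:5.1f} kN -> estimated conformance = {p:.4f}")

Because the response surface is a cheap polynomial, the hundred thousand samples cost essentially nothing; the expensive FEM runs are only needed once, to build the surface.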

1.3 Objective of Research

The specific objectives of this research are as follows:

- To model an existing sheet metal forming process for:
  - Understanding the impact of variation of the process inputs on the variation of the process outputs.
  - Designing the best input parameter settings for robust process design.
  - Improvement of the process capability and reduction of the defect rate for a better product quality.

The proposed methodology can be applied to most stamping processes and to other typical manufacturing processes such as forging, rolling, injection molding and die casting. Moreover, the analysis from the probabilistic design of the sheet drawing process reveals very important facts from the stochastic perspective and gives us more understanding of the process.


1.4 Research Significance and Benefits

Although extensive research has been done in exploring the deterministic effect of each factor on the part, the impact of their variations on the fluctuation of the output quality of the stamping process is seldom addressed. The inclusion of uncertainty in the design and optimization cycle should lead to a better understanding of the impact of uncertainty associated with the system inputs on the system outputs. This understanding can then be applied to managing such uncertainties.

The proposed probabilistic design approach, which incorporates process uncertainty, is capable of reducing the process performance variation and defect rate and of improving the process capability and quality. Our approach fully integrates the finite element method (FEM), design of experiments (DOE), response surface methodology (RSM), and performance variability assessment methods like Monte Carlo Simulation (MCS) and sensitivity-based variability estimation to deal with relatively simple cases. For some really complex situations, an integration of FEM, a multi-objective genetic algorithm and reliability assessment is illustrated. Though numerical methods have been in practical use for process design and control, few studies in the past have addressed the integration of FEM, system modeling and approximation (such as RSM), and probabilistic design. Our research will have great utility in the design and evaluation of production processes.


The proposed systematic approach would significantly contribute to:

- Designing a production process without undergoing costly trial-and-error procedures.
- Finding the robust production process design in terms of reduced defects, improved quality and hence lower costs.
- Understanding quantitatively the impact of process input variation on the variation of the process performance output, so that the production system is no longer a black box.

1.5 What is My Contribution?

There are three aspects of this research that are my contributions.

1. Very little research has been done to systematically and quantitatively include the process uncertainties in the process design and analysis cycle in the sheet metal forming area. Some Taguchi robust design work has been done before but, as mentioned previously, the Taguchi method has many shortcomings.

2. The proposed integrated probabilistic design approach is very new in the sheet metal forming area and even in other metal forming areas like forging and rolling. It combines sheet forming numerical simulation, process modeling and approximation, uncertainty modeling and a probabilistic design optimization formulation, and it leads to a robust setting. Hence, the traditional trial-and-error process design efforts, which add to the cost of manufacturing, will be reduced.

3. This research also proposes a new method to deal with complex system robust design where the number of process inputs is so large that the system cannot be modeled by the DOE and RSM approach. The integration of FEM, a multi-objective genetic algorithm, and reliability assessment provides a good solution.

1.6 Dissertation Outline

Chapter 1 outlines the motivation behind this work along with the objectives and research approach. Chapter 2 presents a brief overview of FEM, DOE and system approximation techniques. Chapter 3 gives an introduction to the currently used Taguchi robust design and reliability based optimization, and to our proposed probabilistic design. Chapter 4 presents a simple cylindrical cup drawing experiment showing that at a fixed level of process settings the part quality characteristics can have a very large variation. This experiment serves as a justification for conducting the probabilistic design of the sheet metal forming process. The simple probabilistic design methodology using the quality index as the objective is illustrated in chapter 5. By considering the variation of the material properties and process conditions, an optimal deterministic blank holder force and the means of the stochastic friction coefficients are obtained to minimize the defect rate. A more formal probabilistic design formulation is explained in chapter 6, where the temporally varying blank holder force is itself treated as uncertain, as it is in the real world. By incorporating process variations, both reliability constraints and a Taguchi-type objective are considered in the design optimization formulation. The probabilistic design is also compared with the deterministic design and the PI design which is usually adopted to find the variable blank holder force profile. In chapter 7, spatially varying constraints on the sheet are studied with the introduction of the very new discrete friction concept. The finite element simulation is integrated with the multi-objective genetic algorithm and drawing process heuristics to find the optimal deterministic design configuration. Then the reliability analysis method is used to find the probabilistic design optimum. Chapter 8 concludes our research work and addresses future work.


CHAPTER 2

2. BACKGROUND AND LITERATURE REVIEW

To understand the effect of process uncertainties on the product quality, we need to have a system model which connects the inputs to the outputs. A basic tool for knowing the output corresponding to a certain input of a manufacturing process is numerical simulation based on the finite element method. Currently, for most manufacturing processes like forging, rolling, stamping, injection molding, die casting and machining, proven and effective commercial FEM software has been widely adopted in industry. For forging and rolling simulations, DEFORM and FORGE3 are two major FEM packages. For stamping, PAM-STAMP and LS-DYNA are dominant. For injection molding, we have MOLDFLOW. For die casting, we have PROCAST. For machining, DEFORM has developed a dedicated module to simulate chip formation and breakage.

The widely used FEM manufacturing process simulation software not only provides an effective tool for developing a process within a much shorter time cycle, but also offers us an economical way to investigate the impact of process uncertainty on the product quality. We can imagine the case where we need to understand how the variation of the die radius would affect the drawing quality. With real experiments, it would be very expensive and even prohibitive to fabricate different configurations of dies, because each set of dies can easily cost a couple of thousand dollars. However, by using PAM-STAMP, the industry-proven simulation software, we only need several hours of computation to identify the relation between the die radius and part wrinkling or fracture.

Although simulation is much cheaper than real experiments in monetary terms, the time needed for the numerical computation still poses a severe constraint in exploring the behavior of a manufacturing system. Normally, a three-dimensional simulation for a typical industrial case, like forging, can take one day to finish. In some extreme cases, for example, the simulation of chip formation in the machining process may consume up to one month. Therefore, a systematic way of running simulations according to certain principles, instead of randomly searching the process design space, is really important and necessary. Design of Experiments (DOE), which includes several design techniques, can help us figure out an economical number and arrangement of simulation runs so that a system approximation model of the desired quality can be built.

Closely related to the use of DOE are the system approximation methods. DOE tells you how to design a smart experiment to investigate a system, while the approximation methods are the analysis and processing tools used to build a compact mathematical model which captures and represents the system itself.


There are three major system approximation methods: the Response Surface Method (RSM), the Kriging method, and the Artificial Neural Network (ANN). In this chapter, we will first give background information on the finite element method and its application in sheet metal forming. Then we will discuss the common Design of Experiments methods and the designs chosen for our FEM simulations. Following DOE, the system approximation methods will be addressed. All three topics are fundamental to understanding, later on, how we incorporate process uncertainty to investigate and design the process.

2.1 Finite Element Method and Its Application in Sheet Metal Forming

2.1.1 Introduction to Sheet Metal Forming

Sheet metal is simply metal formed into thin and flat pieces, usually less than 6 mm thick. Many different metals can be made into sheet metal; aluminum, cold rolled steel, and mild steel are just a few examples. Sheet metal has wide applications in car bodies, airplane wings, medical tables, roofs for buildings and many other things.

Sheet metal forming refers to the various processes used to convert sheet metal into the different shapes of a large variety of finished parts. The typical processes include stretching, drawing, cutting, bending and flanging, spinning, press forming and roll forming. In this research we will primarily focus on the sheet metal drawing process.

A sheet metal forming system comprises all the input variables relating to the blank (geometry and material), the tooling (geometry and material), the conditions at the tool-material interface, the mechanics of plastic deformation, the equipment used, the characteristics of the final product, and finally the plant environment in which the process is being conducted. The design, control, and optimization of sheet forming processes require analytical knowledge regarding metal flow and stresses, as well as technological information related to lubrication, material handling, die design and manufacturing, and forming equipment. The mechanics of deformation provides the means for determining how the metal flows, how the desired geometry can be obtained by plastic deformation, and what the expected mechanical properties of the produced part are [3].

The state of deformation in a plastically deforming metal is fully described by the displacements, velocities, strains, and strain-rates. The basic mechanisms in sheet metal forming are stretching, drawing, and bending. Depending on the shape and the relative dimensions of the blank and the tool, one or more of these basic mechanisms is predominantly involved. The limits of sheet-metal forming are determined by the occurrence of defects, such as wrinkles and fracture in the blank [3]. An important development in checking the formability of sheet metals is the forming limit diagram. In this diagram the major and minor surface strains at a critical site are plotted at the onset of visible, localized necking in a deformed sheet, and the locus of strain combinations that will produce failures in an actual forming operation can be drawn. Experimental methods are used to construct the diagram.

2.1.2 Basis of Finite Element Method

The concept of the finite element procedure may be dated back to 1943, when Courant applied a linear approximation of the warping function over each element of an assemblage of triangular elements to the St. Venant torsion problem and proceeded to formulate the problem using the principle of minimum potential energy. Similar ideas were used later by several investigators to obtain approximate solutions to certain boundary-value problems. It was Clough who first introduced the term “finite elements” in the study of plane elasticity problems. Since then numerous studies have been reported on the theory and applications of the finite element method.

The basic concept of the finite element method is one of discretization. The finite element model is constructed in the following manner. A number of finite points are identified in the domain of the function, and the values of the function and its derivatives, when appropriate, are specified at these points. The points are called nodal points. The domain of the function is represented approximately by a finite collection of sub-domains called finite elements. The domain is then an assemblage of elements connected together appropriately on their boundaries. The function is approximated locally within each element by continuous functions that are uniquely described in terms of the nodal-point values associated with the particular element [4].

The path to the solution of a finite element problem consists of five specific steps: [4]

1. identification of the problem;

2. definition of the element;

3. establishment of the element equation;

4. assemblage of element equations; and

5. numerical solution of the global equations.
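As a toy illustration of steps 3-5 above (element equations, assembly, and global solution), the Python sketch below assembles linear two-node elements for a 1-D bar under a uniform distributed load. It is a generic textbook example with assumed properties, not the shell or membrane formulations used in the sheet forming simulations of this work.

    import numpy as np

    # 1-D bar: E*A*u'' + q = 0 on [0, L], fixed at x = 0, free at x = L.
    E, A, L, q = 210e9, 1e-4, 1.0, 1000.0          # assumed properties and load
    n_elem = 10
    n_node = n_elem + 1
    h = L / n_elem

    K = np.zeros((n_node, n_node))                  # global stiffness matrix
    F = np.zeros(n_node)                            # global load vector
    ke = (E * A / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness (step 3)
    fe = q * h / 2.0 * np.array([1.0, 1.0])                   # consistent element load

    for e in range(n_elem):                         # assemble element equations (step 4)
        dofs = [e, e + 1]
        K[np.ix_(dofs, dofs)] += ke
        F[dofs] += fe

    # Apply the essential boundary condition u(0) = 0 and solve (step 5).
    u = np.zeros(n_node)
    u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

    # Exact tip displacement for comparison: q*L^2 / (2*E*A).
    print(u[-1], q * L**2 / (2 * E * A))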

2.1.3 Application of Finite Element Method in Sheet Metal Forming

The use of finite element simulation technology for stamping applications is growing rapidly these days. When looking at the recent history of virtual stamping, one can distinguish several main time periods. The first period, before 1990, was in fact a pre-industrialization period. In this period, people started using computers to understand how to improve the forming process. At this stage the different attempts were mainly focused on approaches such as expert systems and knowledge-based methods. An important breakthrough occurred with the success of forming simulation, applying the finite element method to stamping-related problems.


Using the elastic-plastic approach, complete solutions of stretch-forming and deep-drawing problems, taking into account the contact problem at the blank holder, die, die profile, and punch head, were obtained by Wifi [5]. On the basis of the nonlinear theory of membrane shells, Wang and Budiansky [6] developed a procedure for calculating the deformations in the stamping of sheet metal by arbitrarily shaped punches and dies.

Onate and Zienkiewicz [7] presented a finite-element formulation based on an extension of the general viscoplastic flow theory for continuum problems to deal with thin shells.

Toh and Kobayashi [8,9] analyzed sheet-metal forming processes, axially symmetric and non-symmetric, by the finite-element method based on the membrane theory. The finite-element model takes into account the rigid-plastic material characteristics and includes the normal anisotropy of the sheet metal as well as the finite deformation that occurs during the sheet-forming process.

The main usage of stamping simulation software has concentrated on strain prediction and the introduction of stamping-related know-how. The user wants answers to the following questions: Is this part feasible? Where does it fail? Where will wrinkling happen? What does my forming limit curve look like? Over the years, stamping simulation has helped to reduce the costs and lead times of various components considerably. One can identify the main parameters influencing the stamping process: the part geometry and the die run-off design, the material selection, the manufacturing hardware and process, and the final quality control.


In our research, the FEM simulation package Pam-Stamp is used intensively. Pam-Stamp 2G is a calculation code that uses the finite element method (FEM). All the components of a calculation (metal sheet, tools, drawbeads) are discretized as meshes, i.e. a discrete representation of the geometry. For non-deformable tools, the mesh is only a representation of the geometry, and the finite elements are only used for the contact description. On the other hand, for the blank or a deformable tool, the finite elements that form the mesh represent small pieces of the material with a prescribed deformation behavior. The mechanical phenomena that occur in a blank are faithfully reproduced using a large number of these elements. The finer the mesh, the better the quality of the results, but the higher the number of elements, the longer the calculation time.

Depending on the calculation type (implicit or explicit), the calculation is sub-divided into increments or time-steps. Generally, implicit increments are large with respect to the explicit time-steps. Positions, velocities, accelerations and forces are continually calculated at the nodes, which are points linked to the material. Within the elements, strains are calculated from the positions. The corresponding stresses are then obtained, which result in forces on the nodes. This calculation is repeated over all the elements for the entire duration of the calculation. Boundary conditions are used to remove degrees of freedom, while velocities and forces further define the kinematic behavior of the finite element model. To describe the actual deformation process, material properties and thickness can be assigned to an element [10].
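To illustrate the explicit stepping loop just described (nodal force from the current deformation, then acceleration, velocity and position updated each small time-step), here is a minimal single-degree-of-freedom sketch in Python. It is only a conceptual analogy with assumed mass and stiffness values, not the Pam-Stamp algorithm itself.

    import numpy as np

    m, k = 2.0, 800.0                 # lumped mass (kg) and stiffness (N/m), assumed
    dt = np.sqrt(m / k)               # half of the explicit stability limit 2*sqrt(m/k)
    x, v = 0.01, 0.0                  # initial displacement (m) and velocity

    for step in range(200):
        f_int = -k * x                # internal force from the current deformation
        a = f_int / m                 # acceleration at the node
        v += a * dt                   # velocity update
        x += v * dt                   # position update (semi-implicit explicit scheme)
        if step % 50 == 0:
            print(f"t = {step * dt:6.3f} s, x = {x:+.5f} m")

The key point mirrored here is that each step is cheap but must stay below a stability limit, which is why explicit time-steps are so much smaller than implicit increments.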


2.2 Design of Experiments Techniques

Design of Experiments includes the design of all information-gathering exercises where variation is present, usually under the full control of the experimenter. Often the experimenter is interested in the effect of some process or intervention on some objects.

Design of experiments is a discipline that has very broad application. In the following sections, we will introduce the most frequently used DOE techniques.

2.2.1 Full-factorial Design

A full-factorial design is one in which all combinations of all factors at all levels are evaluated. It is an old engineering practice to systematically evaluate a grid of points, requiring n_1 × n_2 × ... × n_i design point evaluations (where i is the number of factors and n_i is the number of levels for factor i). This practice provides extensive information for accurate estimation of factor and interaction effects. However, it is often deemed cost-prohibitive due to the number of analyses required [11].
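A minimal sketch of the full-factorial grid is given below, with hypothetical factor levels; it simply enumerates every combination, which is why the number of runs grows as the product of the number of levels of each factor.

    from itertools import product

    # Hypothetical factor levels for a drawing process study.
    levels = {
        "blank_holder_force_kN": [60, 80, 100],
        "friction_coefficient": [0.08, 0.12, 0.16],
        "die_radius_mm": [6, 8],
    }

    runs = list(product(*levels.values()))      # every combination of every level
    print(f"{len(runs)} runs")                  # 3 * 3 * 2 = 18
    for run in runs[:3]:
        print(dict(zip(levels.keys(), run)))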

2.2.2 Orthogonal Arrays

The use of orthogonal arrays can avoid a costly full-factorial experiment in which all combinations of all factors at different levels are studied. A fractional factorial experiment is a certain fractional subset (1/2, 1/4, 1/8, etc.) of the full factorial set of experiments, carefully selected to maintain orthogonality (independence) among the various factors and certain interactions. While the use of orthogonal arrays for fractional factorial design suffers from reduced resolution in the analysis of results (i.e., factor effects are aliased with interaction effects as more factors are added to a given array), the significant reduction in the required number of experiments can often justify this loss in resolution as long as some of the interaction effects are assumed negligible.

In fractional factorial designs, the number of columns in the design matrix is less than the number necessary to represent every factor and all interactions of those factors. Instead, columns are “shared” by these quantities, an occurrence known as confounding. Confounding results in the dilemma of not being able to tell which quantity in a given column produced the effect on the outputs attributed to that column. In such a case, the designer must make an assumption as to which quantities can be considered insignificant (typically the highest-order interactions) so that a single contributing quantity can be identified [11].
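The half-fraction idea and its confounding can be seen in a few lines of Python. In this hypothetical 2^(3-1) design the third factor is generated as C = A*B, so the column for C is identical to the A-B interaction column and the two effects cannot be separated.

    import numpy as np
    from itertools import product

    # Full 2^2 design in coded units (-1/+1) for factors A and B.
    ab = np.array(list(product([-1, 1], repeat=2)))
    a, b = ab[:, 0], ab[:, 1]
    c = a * b                          # generator C = AB defines the half fraction

    design = np.column_stack([a, b, c])
    print(design)                      # 4 runs instead of the 8 of a full 2^3 design
    print(np.array_equal(c, a * b))    # True: the effect of C is confounded with the AB interaction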

2.2.3 Latin Hypercube Design

Another class of experimental designs which efficiently samples large design spaces is Latin Hypercube sampling. With this technique, the design space for each factor is uniformly divided (the same number of divisions, n, for all factors). These levels are then randomly combined to specify the n points defining the design matrix (each level of a factor is studied only once). For example, figure 2.1 illustrates a possible Latin Hypercube configuration for two factors (x1, x2) in which five points are studied.

Although not as visually obvious, this concept easily extends to multiple dimensions.

An advantage of using Latin Hypercubes over Orthogonal Arrays is that more points and more combinations can be studied for each factor. The Latin Hypercube technique allows the designer total freedom in selecting the number of designs to run (as long as it is greater than the number of factors), whereas the configurations are more restrictive when using Orthogonal Arrays.

A drawback to Latin Hypercubes is that, in general, they are not reproducible, since they are generated with random combinations. In addition, as the number of points decreases, the chance of missing some regions of the design space increases [11].


Figure 2.1. Latin Hypercube design [11].
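A basic Latin Hypercube generator is sketched below in Python: n points in k factors, each factor's range cut into n equal bins and the bin order shuffled independently per factor. The unit [0, 1] ranges are an assumption and would be rescaled to the real process limits; note that without a fixed seed the sample is not reproducible, which is exactly the drawback mentioned above.

    import numpy as np

    def latin_hypercube(n, k, seed=None):
        """Return an n-by-k Latin Hypercube sample in [0, 1]^k:
        each factor is split into n equal bins, and each bin is used exactly once."""
        rng = np.random.default_rng(seed)
        sample = np.empty((n, k))
        for j in range(k):
            perm = rng.permutation(n)                   # shuffle the bin order per factor
            sample[:, j] = (perm + rng.random(n)) / n   # one random point inside each bin
        return sample

    pts = latin_hypercube(5, 2, seed=1)   # the 5-point, 2-factor case of figure 2.1
    print(np.round(pts, 3))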


2.2.4 Central Composite Design

Central Composite Design (CCD) is a statistically based technique in which a 2-level full-factorial experiment is augmented with a center point and two additional points for each factor (star points). Thus, five levels are defined for each factor, and studying n factors using Central Composite Design requires 2^n + 2n + 1 design point evaluations. The corner points are for the assessment of linear and 2-way interaction terms. Center points are used to detect curvature and are sometimes replicated in experimental DOE to estimate pure error. Star points are for the assessment of quadratic terms; see figure 2.2(a). Although Central Composite Design requires a significant number of design point evaluations, it is a popular technique for compiling data for Response Surface Modeling due to the expanse of design space covered and the higher order information obtained [11].
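A sketch of the CCD point set for n factors is shown below in coded units (the star distance alpha is set here to the spherical choice sqrt(n) purely as an assumption); counting the pieces reproduces the 2^n + 2n + 1 total quoted above.

    import numpy as np
    from itertools import product

    def central_composite(n, alpha=None):
        """Corner, star (axial) and center points of a CCD in coded units."""
        alpha = alpha if alpha is not None else np.sqrt(n)          # spherical choice, assumed
        corners = np.array(list(product([-1.0, 1.0], repeat=n)))    # 2^n factorial points
        stars = np.zeros((2 * n, n))
        for i in range(n):
            stars[2 * i, i] = -alpha                                 # 2n axial (star) points
            stars[2 * i + 1, i] = alpha
        center = np.zeros((1, n))
        return np.vstack([corners, stars, center])

    for n in (2, 3, 7):
        pts = central_composite(n)
        print(n, "factors ->", len(pts), "runs")    # 9, 15, 143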

2.2.5 Box-Behnken Design

Box and Behnken developed a family of efficient three-level designs for fitting second-order response surfaces. The family exists only for 3-7 factors, and the number of runs is very close to that of the CCD for the same number of factors. The Box-Behnken design does not have any corner points, which makes it suitable for situations where the corners are not feasible (physical designs); see figure 2.2(b).


(a) Central Composite Design

(b) Box-Behnken Design

Figure 2.2. Comparison of CCD design and Box-Behnken design.


2.2.6 Computer Aided Designs

Other popular methods to select the design points are computer-aided designs. Computer-aided designs are generated based on a particular optimality criterion and are generally optimal only for a specified model. The common types of optimality criteria include D-optimality, A-optimality, G-optimality and V-optimality. Unlike standard classical designs such as factorials and fractional factorials, the computer-aided design matrices are usually not orthogonal. These methods are particularly useful when standard designs (e.g. factorial or CCD) cannot be simply implemented. Such situations might arise, for example, when the design space is irregular due to feasibility constraints, or when there are economic constraints on the size N of the experiment. In such cases, optimality criteria and associated numerical techniques provide objective methods for selecting design points.
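As a rough illustration of how a computer-aided (D-optimal) design is picked, the Python sketch below greedily selects, from a random candidate pool, the points that most increase det(X'X) for a full quadratic model in two factors. Real D-optimal software uses more sophisticated exchange algorithms, so this is only a conceptual toy with assumed candidate points and run count.

    import numpy as np

    rng = np.random.default_rng(2)

    def model_row(x):
        """Row of the design matrix for a full quadratic model in 2 factors."""
        x1, x2 = x
        return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

    candidates = rng.uniform(-1.0, 1.0, size=(200, 2))     # candidate design space
    rows = np.array([model_row(c) for c in candidates])

    n_runs, ridge = 10, 1e-8
    chosen = []
    info = ridge * np.eye(rows.shape[1])                    # regularized information matrix X'X
    for _ in range(n_runs):
        gains = [np.linalg.slogdet(info + np.outer(r, r))[1] for r in rows]
        best = int(np.argmax(gains))                        # point that most increases log det(X'X)
        chosen.append(best)
        info += np.outer(rows[best], rows[best])

    print(np.round(candidates[chosen], 2))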

2.3 Approximating Methods

Approximation concepts were introduced in structural design optimization in the late 1970s to do the following:

- Reduce the number of independent design variables through design variable linking and reduced basis vector concepts.
- Perform constraint deletion through truncation and regionalization schemes.
- Reduce the number of computer-intensive, detailed analyses (or simulation code evaluations) through the use of mathematical approximations of the design optimization objective and constraint functions.

These approximation models can be used in place of simulation codes or analyses that are computationally intensive. They can also help to eliminate the computational noise of simulation codes in cases where the outputs oscillate rapidly with gradual changes in the values of the input parameters. Computational noise has a strong adverse effect on optimization by creating numerous local optima. Approximation models (Response Surface Models in particular) naturally smooth out the response functions and, in many cases, help to converge to a global optimum faster. The usage of approximation is not restricted to optimization. It also provides an efficient means of post-optimization or sensitivity analysis. Their value is very high for computationally expensive engineering methods, such as the Monte Carlo Simulation, Reliability-Based Optimization, or Probabilistic Design Optimization which we will conduct in this research [11].

2.3.1 Response Surface Method

The response surface method is a collection of statistical and mathematical techniques useful for developing, improving, and optimizing processes. In some systems, based on the underlying engineering, chemical, or physical principles, the nature of the relationship between y and the x's might be known exactly. Then a model of the form y = g(x_1, x_2, ..., x_k) + e can be written. This type of relationship is often called a mechanistic model. However, the more common situation is that the underlying mechanism is not fully understood, and the experimenter must approximate the unknown function g with an appropriate empirical model y = f(x_1, x_2, ..., x_k) + e. Usually the function f is a first-order or second-order polynomial. This empirical model is called a response surface model.

The model then can be used in optimization studies with a very small computational expense, since evaluation only involves calculating the value of a polynomial for a given set of design variables. Accuracy of the model is highly dependent on the amount of information collected for its construction (number of exact analyses), shape of the exact response function being approximated (like the order of polynomial), and volume of the design space in which the model is constructed (the covered by the RSM). In a sufficiently small volume of the design space, any smooth function can be approximated by a quadratic polynomial with good accuracy. For highly non-linear functions, polynomials of 3rd or 4th order can be used. If the model is used outside of the design space where it was constructed, its accuracy is impaired, and refining of the model is required [11].
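The Python sketch below fits such a second-order polynomial by ordinary least squares to a handful of hypothetical DOE results (two coded inputs, one response). The numbers are illustrative only; the same calculation is what a statistics package or optimization framework would perform internally, and the fitted polynomial then serves as the cheap surrogate discussed above.

    import numpy as np

    # Hypothetical DOE results: coded inputs (x1, x2) and a measured response y.
    X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                  [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41], [0, 0]])
    y = np.array([12.1, 9.8, 10.5, 8.9, 12.6, 9.2, 11.4, 9.9, 10.4])

    def quad_terms(X):
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

    beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)   # least-squares coefficients

    def rsm(x1, x2):
        """Cheap surrogate: evaluate the fitted polynomial instead of a new simulation."""
        return quad_terms(np.array([[x1, x2]]))[0] @ beta

    print(np.round(beta, 3))
    print(round(rsm(0.5, -0.5), 3))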

The response surface model relies on the fact that the set of designs on which it is based is well chosen. Randomly chosen designs may cause an inaccurate surface to be constructed or even prevent the ability to construct a surface at all. Because simulations are often time-consuming or the experiments are expensive, the overall efficiency of the design process relies heavily on the appropriate selection of a design set on which to base the approximations. The CCD, Box-Behnken and D-optimal designs are the most widely used DOE methods for generating the design set for constructing a response surface model.

2.3.2 Kriging Meta Models

Kriging (named after the South African mining engineer Krige) is an interpolation method that predicts unknown values. More precisely, a Kriging prediction is a weighted linear combination of all the output values already observed. These weights depend on the distances between the new input and the observed inputs: the closer the inputs, the bigger the weights. Kriging models are extremely flexible due to the wide range of correlation functions which can be chosen for building the approximation model. Furthermore, depending on the choice of the correlation function, the model can provide either an exact interpolation of the data or an inexact interpolation [11].

The most popular DOE for Kriging is Latin Hypercube Design. LHS offers flexible design sizes n (number of scenarios simulated) for any value of k (number of simulation inputs). Geometrically, many classic designs consist of corners of k-dimensional cubes, so these designs imply simulation of extreme scenarios. LHS, however, has better space filling properties.
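A bare-bones Kriging predictor with a Gaussian correlation function is sketched below in Python (constant trend, a correlation parameter theta fixed by hand, 1-D input). Real implementations estimate theta by maximum likelihood, so this is only meant to show the weighted-combination idea with assumed data.

    import numpy as np

    def gauss_corr(a, b, theta):
        """Gaussian correlation between two sets of 1-D points."""
        d = a[:, None] - b[None, :]
        return np.exp(-theta * d**2)

    def kriging_predict(x_new, x_obs, y_obs, theta=5.0, nugget=1e-10):
        R = gauss_corr(x_obs, x_obs, theta) + nugget * np.eye(len(x_obs))
        Rinv = np.linalg.inv(R)
        one = np.ones(len(x_obs))
        mu = (one @ Rinv @ y_obs) / (one @ Rinv @ one)       # constant trend (ordinary Kriging)
        r = gauss_corr(x_new, x_obs, theta)
        return mu + r @ Rinv @ (y_obs - mu)                   # weighted combination of the residuals

    x_obs = np.array([0.0, 0.3, 0.6, 1.0])
    y_obs = np.sin(2 * np.pi * x_obs)                         # hypothetical observed responses
    x_new = np.array([0.15, 0.45, 0.8])
    print(np.round(kriging_predict(x_new, x_obs, y_obs), 3))

With the tiny nugget the predictor reproduces the observed values exactly at the observed inputs, i.e. it behaves as an exact interpolator.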


2.3.3 Neural Networks

Artificial Neural Networks (ANNs) have been studied for many years in the hope of mimicking the human brain’s ability to solve problems that are ambiguous and require a large amount of processing. Human brains accomplish this data processing by utilizing massive parallelism, with millions of neurons working together to solve complicated problems. Similarly, ANN models consist of many computational elements, called “neurons” to correspond to their biological counterparts, operating in parallel and connected by links with variable weights. These weights are adapted during the training process, most commonly through the back-propagation algorithm, by presenting the neural network with examples of input-output pairs exhibiting the relationship the network is attempting to learn. The most common applications of ANNs involve approximation and classification. Approximation models attempt to estimate input-output transformation functions, while classification involves using the known inputs to determine class membership. Until now, we have found little literature about optimal experimental design for neural networks, or even verification of the effectiveness of the traditional regression-model-based optimal design methods on neural networks. However, we have conducted a comparative study, and the results show that for building the approximation model by a neural network, the Bayesian D-optimal design (one kind of computer-aided DOE) gives better prediction accuracy than other experimental design methods such as the Latin Hypercube design or D-optimal designs. The details are shown in Appendix A.
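For completeness, here is a minimal one-hidden-layer network trained by back-propagation (plain gradient descent, tanh hidden units) to approximate a 1-D function. The layer size, learning rate and data are arbitrary choices for illustration and have no connection to the study reported in Appendix A.

    import numpy as np

    rng = np.random.default_rng(3)
    x = np.linspace(-1, 1, 40).reshape(-1, 1)
    y = x**3 - 0.5 * x                          # hypothetical target function

    n_hidden, lr = 8, 0.05
    W1 = rng.normal(0, 0.5, (1, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

    for epoch in range(5000):
        h = np.tanh(x @ W1 + b1)                # forward pass through the hidden layer
        y_hat = h @ W2 + b2
        err = y_hat - y                         # back-propagate the squared-error gradient
        dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
        dh = err @ W2.T * (1 - h**2)
        dW1 = x.T @ dh / len(x); db1 = dh.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

    print(float(np.mean((y_hat - y)**2)))       # final training mean squared error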


2.4 Literature Review of Deterministic Sheet Metal Forming Process Optimization Employing FEM, DOE and Approximation Methods

Ayed et al. [12] presented an approach combining FEM with RSM to optimize the blank holder force for the drawing of a front door panel. The numerical simulations were performed using ABAQUS Explicit. The parameters of the finite element model (mesh density, punch speed) were set to achieve a good prediction with a minimum simulation time. The objective function was defined to minimize the work of the punch. Three inequality constraint functions were defined to avoid necking and wrinkling. To avoid necking, the major stress of the blank was limited to a value determined by using the modified maximum force criterion. To avoid wrinkling under the blank holder, the angle between the blank holder surface and an element of the blank was limited to a value set by the user, as proposed by Gelin and Labergere [13]. In the useful part of the workpiece, the major stress was also limited. A central composite design experiment was applied to generate the response surface model. For n independent variables, the central composite design requires 2^n + 2n + 1 simulations: a 2^n factorial design augmented by 2n axial points and one center point. Thus, for seven blank holder forces, 143 numerical simulations were necessary. An SQP algorithm was then used to find the optimum blank holder force.

Kim and Huh [14] carried out optimization of the process parameters for process design in sheet metal forming. The scheme incorporated rigid-plastic FEM for the deformation analysis and RSM for searching for the optimum process parameters. The algorithm developed was applied to the design of the draw bead force and the die radius in deep drawing processes of rectangular cups. The algorithm showed the capability of designing process parameters which prevent the part from becoming weak or fracturing during stamping processes. Kim et al. [15] used rigid-plastic FEM with modified membrane elements as the analysis tool and RSM for constructing the approximation surface when searching for the optimum draw bead force in the sheet metal forming process. The algorithm developed was successfully applied to a design of the draw bead forces in the deep drawing process.

Tezuka et al. [16] used rigid plastic FEM and RSM for process parameter determination in the sheet metal forming process. Using this methodology, process parameters such as the optimum bead force in the deep drawing process were effectively calculated.

Lepadatu, et al. [17] presented a sheet metal bending process optimization method for springback minimization that combined finite element analysis, response surface method and gradient optimization algorithm. In his work, the optimization computation was carried out with a FORTRAN program using the gradient method. In the first phase, polynomial models were generated with the available DOE data obtained by finite element simulation. In the second phase, the optimizer used the objective function during the search for the optimum until the final converged solution was obtained. Springback of sheet parts during bending process was simulated using finite element model including


damage evolution effects within the sheet. This simulation was based on a constitutive law of large elasto-plastic strains coupled with Lemaitre-type damage. The die corner radius, punch-die clearance and blank holder force were the three main variables considered.

Central composite experimental design (CCD) was selected to generate data for fitting the response surface.

Wang, et al. [18] reported a strain path controlled forming process, adjusting the blank holder force located in various flange areas to achieve a fracture-free product. Due to the wide applicability and high efficiency of the finite element method, it was used in place of experiments to carry out the controlled forming process. Deep drawing processes with two materials, SPCC and Al 5754, were simulated with the yield criteria Hill 48 and Hill 90, respectively. The empirical equation, built through the response surface method (RSM), was developed based on the finite element method (FEM) simulation results. A central composite design (CCD) was used to guide the simulations. The equation, working as the system model, explored the relationship between the principal strain of the fracture-risk elements and the process parameters, especially the blank holder force located in the various flange areas. The empirical equations were developed based on a set of numerical simulation results for the two elements which had the highest fracture risk. Three-dimensional principal strain surfaces of one element were drawn, which displayed a clear trend with the spatially varying blank holder force. The model adequacy was checked and confirmed using the ANOVA method. Four extra sets of numerical simulations were carried out for comparison with the values predicted using the RSM.


Good consistency confirmed the effectiveness of this empirical equation within the plasticity field. With the assistance of the empirical model, it was feasible to obtain a fracture-free product.

Forsberg, et al. [19] investigated the accuracy of response surface and Kriging modeling.

For RSM, the true response is usually replaced with a low-order polynomial. In Kriging, the true response is replaced with a low-order polynomial plus an error correcting function. In both cases the D-optimality criterion was used to distribute the design points. Crashworthiness simulations were carried out at the design points. From the investigation, they found that Kriging resolved abrupt changes in the response, e.g. due to buckling, contact or plastic deformation, better than RSM. However, as seen from the derivation of the D-optimality criterion, it is not valid for the Kriging approximation. Therefore, the conclusion that Kriging is better than RSM was not entirely fair; a space-filling design such as the Latin hypercube should have been included in the investigation.

Huang, et al. [20] illustrated an efficient method to optimize the intermediate tool surfaces in the multi-step sheet metal stamping process to obtain improved product quality at the end of forming. The proposed method was based on a combination of the finite element method (FEM) and the response surface method (RSM). The objective of the optimization was to minimize the thickness variation within the part at the final stage. The optimal radius of the intermediate surface and the fillet radius were found.


Yamazaki, et al. [21] tried to apply the response surface approximate method to develop aluminum beverage can ends. Geometrical parameters of the end shell were selected as design variables. The analysis points in the design space were assigned using an orthogonal array in the design of experiment technique. Finite element analysis code was used to simulate the deforming behavior and to calculate buckling strength and central panel displacement of the end shell under internal pressure. On the basis of the numerical analysis results, the response surface of the buckling strength and panel growth were approximated in terms of the design variables. By using a numerical optimization program, the weight of the end shell was minimized subject to constraints of the buckling strength, panel growth suppression and other design requirements.

Lin, et al. [22] established an effective prediction model of the springback of material during the processing of an L-shaped bend by artificial neural networks (ANN). FEM simulations of an L-shaped bend were first carried out for various material thicknesses, punch radii and die radii. The springback results from the FEM simulations were then input to a neural network to establish a model for the L-shaped bend variables. They concluded that the neural network approximation fitted the experiments well and that the optimal design could be achieved with this model.

Ji, et al. [23] used finite element method and neural network to inversely design the rolling process parameters. The neural network could accurately predict the seam


forming and grain size in the rolling process. Then an inverse neural network was constructed to find the optimal process setting given the grain size and constraint of no seams.

Hambli, et al. [24] presented a similar approach that combined finite element simulation with neural network modeling of the leading blanking parameters in order to predict the burr height of the parts for a variety of blanking conditions. The numerical results obtained by finite element computation, including damage and fracture modeling and tool wear effects, were utilized to train the developed simulation environment based on back propagation neural network modeling. The comparison between the neural network results and the experimental ones showed good agreement.


CHAPTER 3

3. RESEARCH METHODOLOGY

Before presenting our approach to the probabilistic design of sheet metal forming, we need to give an introduction to the current design methods that incorporate uncertainties. The most widely used and best known method was created by the quality guru Taguchi: the Taguchi robust design.

3.1 Taguchi Robust Design

3.1.1 Introduction to Taguchi Robust Design

The motive of robust design is to improve the quality of a product or process by achieving performance targets and minimizing performance variation. The variables are usually classified into the following categories:

- Control variables X. These variables can be designed and controlled in the manufacturing process.

- Noise variables Z. They are either not controllable, or too difficult or expensive to control in the manufacturing process. Noise variables can cause variation of the responses Y and lead to quality loss.

- Response variables Y. They are dependent performance characteristics. Responses are the system outputs, and are functions of the control and noise variables.

Robust design seeks the settings of the control variables that reduce the variation of the system performance responses caused by the uncertainty of the noise variables while achieving the performance targets. Note that the control variables may also have variation in the manufacturing process and thus cause variation of the response variables; in this case, we seek the best mean values of the control variables.

Taguchi's robust design evaluates the mean performance and its variation by crossing two arrays: an inner array, designed in the control variables, and an outer array, designed in the noise variables. As shown in figure 3.1, a two-level factorial design is adopted for both the inner and outer array. For each row of the inner array, response values are generated for each combination of the noise variables. For example, inner array row 1 with outer column 1 leads to the response value y11, inner row 1 with outer column 2 leads to the response value y12, and so on. This design then leads to multiple response values for each combination of control variables, from which a response mean, μ, and standard deviation, σ, can be computed.


Given the mean and variance for each inner array row, the experiments can be compared to determine which set of control settings best achieves the "mean on target" and "minimized variation" performance goals. Taguchi uses the signal-to-noise (S/N) ratio, computed for each control experiment, and the quality loss (measured using a loss function) to combine the effects of mean performance and performance variation, which can then be used to compare the set of designs represented by the control variable combinations.

The S/N ratio calculation depends on the particular response being investigated [11] (a short computational sketch follows the list):

- The S/N ratio is 10 log10(μ²/σ²) if the response should be at a desired (nominal) value.

- The S/N ratio is −10 log10((1/n) Σ yi²) if a lower response value is desired.

- The S/N ratio is −10 log10((1/n) Σ 1/yi²) if a higher response value is desired.
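The following minimal Python sketch implements these three ratios; the sample responses are invented for illustration and simply stand in for the y values of one inner-array row.

import numpy as np

def sn_nominal_the_best(y):
    """S/N = 10*log10(mean^2 / variance): the response should hit a target value."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

def sn_smaller_the_better(y):
    """S/N = -10*log10(mean of y^2): lower response values are preferred."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_larger_the_better(y):
    """S/N = -10*log10(mean of 1/y^2): higher response values are preferred."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Invented responses for one inner-array row across the outer (noise) array:
row_responses = [25.1, 26.3, 24.8, 25.9]
print(sn_nominal_the_best(row_responses))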

The second performance characteristic used by Taguchi Robust Design Techniques, the loss function, is generally used to measure the loss of quality associated with deviating from a targeted performance value, as shown in figure 3.2.


Figure 3.1. Taguchi robust design matrix [11].


Figure 3.2. Taguchi loss function [11].


Conventionally, the acceptable quality range is defined by the lower and upper specification limits. All values within these two limits are assumed to have no quality loss, and all values outside the limits are defined as having 100% quality loss. The best example of this concept is GD&T. If the part dimension is larger or smaller than the limits, the part is rejected. However, in Taguchi robust design, quality loss is measured by the deviation from the target. This means loss of quality occurs gradually when the quality characteristic moves in either direction from the target value, rather than as a sharp cutoff with the conventional approach. The standard form of the loss function

L(y) is given as follows:

L(y) = k(y − T)²   (3.1)

In equation (3.1), y is the quality characteristic, such as a dimension or performance parameter, T is the target value for the quality characteristic, and k is the loss constant, which converts the deviation from the target into an appropriate measure of quality loss.

3.1.2 General Taguchi Robust Design Optimization Formulation

The S/N ratio, the loss function, and μ, σ can be combined for use in the multi-objective case. The general Taguchi robust design formulation is then to search through all the control design combinations and find the setting that minimizes the following objective:

Robust design objective = [−S/N_yi] + [L_yj] + [(±)μ_yk + σ²_yk]   (3.2)

In this general form, yi, yj, yk represent all the responses considered in the Taguchi robust design problem. The plus sign is used when a lower yk is desired, and the minus sign is used when a higher yk is desired. Weight and scale values can also be used with each objective.

3.1.3 Disadvantages of Taguchi Robust Design method

Nevertheless, the shortcomings of Taguchi's robust design are obvious. First, the optimal design can only be one of the experimental points. The configuration with the largest S/N ratio is selected as the optimal design; no point between the experimental points can be evaluated. Second, the way Taguchi designs the experiment matrix arbitrarily assumes independence between control factors and noise factors. The likely coupling between them is thus ignored, which is not justifiable. Third, the relationship between those variables remains unclear, and the effect of the variation of each noise factor on the part quality cannot be quantified. Although the objective function in Taguchi robust design is well formulated, the lack of a system model connecting x, z and y limits its application range.


3.2 Reliability Based Optimization

Reliability based optimization was developed based on reliability analysis. So before delving into the optimization formulation, we need to explain its foundations: the reliability analysis.

3.2.1 Reliability Analysis

The reliability analysis was first developed and applied in the structural safety field. It incorporates uncertainties associated with geometrical and material properties, loading and boundary conditions, and operational environment into structural analysis and design by defining these random variables, their associated probabilistic distribution functions, and statistical properties [11]. Due to the variation of input variables, the performance of the designed structural component or system will experience variation too, some of which may violate the constraints. The structural reliability is then defined as the probability of satisfying a constraint, and is equal to 1 minus the probability of failure under the input uncertainties. The concepts of reliability and probability of failure are illustrated in figure 3.3.

There are many methods developed in recent years to estimate the probability of failure or reliability (estimating the areas inside and outside the constraints). We will mainly introduce the following three commonly used methods:

Figure 3.3. Reliability based analysis [11].


Figure 3.4. FORM reliability analysis method [11].


- First Order Reliability Method (FORM)

- Mean Value First Order (MVFO) Method

- Monte Carlo Simulation (MCS) Method

These methods are able to evaluate the reliability of the current design point. Each method is summarized in the following sections.

3.2.1.1 First Order Reliability Method

The idea of FORM is based on the desirable properties of the standard normal space. Hasofer and Lind [25] defined the reliability index as the shortest distance from the origin of the standard normal space (U-space) to a point on the failure surface. The reliability index β can be determined by a minimization problem with one equality constraint (3.3):

β = min_U ‖U‖
s.t. g(X) = g(T⁻¹(U)) = g(U) = 0   (3.3)

In this formulation, the original random vector X is first mapped to the standard, uncorrelated normal vector U by a transformation T. Then a minimization searches for the closest point U* on the failure surface g(U) = 0 to the origin. U* is called the Most Probable Point (MPP). If the failure function g(U) is linear in terms of the normally distributed random variables Ui, the failure probability is calculated as Pf = Φ(−β), see figure 3.4. In this equation, Φ is the standard normal distribution function. If the failure function is nonlinear, the equation above is still a good approximation, provided that the curvature of the failure surface at the MPP is not too large in magnitude.
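As a rough illustration of this search, the following Python sketch finds the MPP for a hypothetical linear limit state in U-space (the function g and its coefficients are invented for illustration, not taken from this work) and recovers β and Pf = Φ(−β).

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical linear limit state in standard normal space: failure when g(U) <= 0.
def g(u):
    return 3.0 - u[0] - u[1]

# FORM: the MPP is the closest point on g(U) = 0 to the origin of U-space.
result = minimize(lambda u: np.linalg.norm(u), x0=np.array([1.0, 1.0]),
                  constraints=[{"type": "eq", "fun": g}], method="SLSQP")
beta = np.linalg.norm(result.x)     # reliability index, here 3/sqrt(2) ~ 2.12
pf = norm.cdf(-beta)                # failure probability, ~ 0.017
print(beta, pf)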

3.2.1.2 Mean Value First Order Method

The MVFO reliability method utilizes the first order Taylor's series expansion of the failure function g(X) at the mean values μ_X. The variance of g(X) is then the sum of the products of the variance of each random variable and the square of the corresponding derivative evaluated at μ_X. The mean-value reliability index, which has the same meaning as β in the FORM method, is calculated by dividing the mean of g(X) by its standard deviation [11]:

β = μ_g / σ_g = g(μ_X) / sqrt( Σ_i (∂g/∂X_i)² σ_Xi² )   (3.4)

Similarly, the probability of failure can again be calculated as Pf = Φ(−β). In terms of the number of function evaluations, or simulation program executions, needed to calculate the reliability, MVFO is the most efficient reliability analysis method, since it requires only one failure function evaluation to calculate the mean plus the sensitivity evaluations needed to calculate the derivatives. However, the mean-value reliability index is accurate only for linear failure functions with normally distributed random variables.
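The following Python sketch applies equation (3.4) to a hypothetical failure function of two random variables (the function, means and standard deviations are invented for illustration); derivatives are approximated by finite differences.

import numpy as np
from scipy.stats import norm

# Hypothetical failure function (failure when g <= 0); not from this dissertation.
def g(x):
    return 3.0 * x[0] - 2.0 * x[1]

mu = np.array([10.0, 12.0])     # assumed mean values of X1, X2
sigma = np.array([1.0, 1.5])    # assumed standard deviations of X1, X2

# Finite-difference derivatives of g at the mean point.
eps = 1e-6
grad = np.array([(g(mu + eps * np.eye(2)[i]) - g(mu)) / eps for i in range(2)])

sigma_g = np.sqrt(np.sum((grad * sigma) ** 2))   # sqrt(sum((dg/dXi)^2 * sigma_i^2))
beta = g(mu) / sigma_g                           # mean-value reliability index
print(beta, norm.cdf(-beta))                     # beta ~ 1.41, Pf ~ 0.079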

3.2.1.3 Monte Carlo Simulation Method

Monte Carlo Simulation is a very straightforward method to calculate the probability of failure. The reliability level R and probability of failure Pf can be calculated as in equation (3.5) [11]:

R = 1 − Pf = 1 − (# simulations in failure region / total # system simulations) = (# simulations in safe region / total # system simulations)   (3.5)

This calculation is done by the following steps (a short computational sketch follows the list):

1. Identify the random variables by assuming appropriate distributions, and define the properties of each (mean, standard deviation, or other distribution parameters).

2. Specify the number of simulations to be executed (often 1,000 - 10,000 simulations are necessary for accurate prediction of response statistical properties).

3. Generate uniformly distributed random numbers for each random variable.

4. Convert each uniform random number to a random variable value corresponding to the appropriate distribution.

5. Evaluate the failure function(s) using the random variable values, and determine whether the simulation point is a success (g(X) > 0) or a failure (g(X) < 0) for each failure function g(X).

6. Repeat step 3 through step 5 for the number of simulations specified in step 2.

7. Compute the reliability R for each failure function.
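A minimal Python sketch of these steps is given below; the failure function, distributions and their parameters are placeholders chosen for illustration only.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Placeholder failure function: the sampled point is "safe" when g(X) > 0.
def g(x1, x2):
    return 3.0 * x1 - 2.0 * x2

n_sim = 10_000                                # step 2: number of simulations
u = rng.random((n_sim, 2))                    # step 3: uniform random numbers
x1 = norm.ppf(u[:, 0], loc=10.0, scale=1.0)   # step 4: map to assumed N(10, 1^2)
x2 = norm.ppf(u[:, 1], loc=12.0, scale=1.5)   # step 4: map to assumed N(12, 1.5^2)
safe = g(x1, x2) > 0.0                        # step 5: evaluate the failure function
reliability = safe.mean()                     # step 7: R = # safe / total (eq. 3.5)
print(reliability, 1.0 - reliability)         # reliability (~0.92 here) and Pf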

3.2.2 Reliability Based Optimization Formulation

While the reliability-based design analysis methods seek to determine the reliability or probability of failure of the current design (act on a single design point), reliability-based optimization seeks to identify design solutions that not only optimize performance

(minimize or maximize one or more objectives), but also satisfy constraints on the minimum reliability (or maximum probability of failure) [11].

Therefore, we can formulate a reliability-based optimization problem by modifying a deterministic optimization problem by adding random variables, and reliability constraints. The reliability-based optimization problem is formulated as follows (3.6):


Find the set of design variables X that:

Minimize:    F(X, Y)
Subject to:  g_i^det(X, Y) ≤ 0
             g_j^rel(X, Y, β_j) ≤ 0
             X_l ≤ X ≤ X_u        (3.6)

In this formula, Y is the set of random variables, g_i^det is the ith deterministic constraint, g_j^rel is the jth reliability constraint, and β_j is the reliability index value of the jth reliability constraint. The reliability index value is calculated from the standard normal distribution function at the desired reliability level. By incorporating the reliability index into the constraints, the probability of failure is forced to be equal to or smaller than that allowed by the desired reliability. For example, a reliability index value of 2.0 corresponds to a reliability of 97.725% and a probability of failure of 2.275%.
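This correspondence between the reliability index and the reliability level is simply the standard normal distribution function; the short Python check below (purely illustrative, using SciPy) reproduces the numbers quoted above.

from scipy.stats import norm

beta = 2.0
print(norm.cdf(beta))     # reliability ~ 0.97725 (97.725%)
print(norm.cdf(-beta))    # probability of failure ~ 0.02275 (2.275%)
print(norm.ppf(0.97725))  # recovering the reliability index from the reliability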

FORM is usually embedded in reliability-based optimization to evaluate the satisfaction of the reliability constraints. Since it also requires the solution of an optimization problem to calculate the reliability, using this reliability analysis method within a reliability-based optimization approach is considered a “double-loop” method (FORM optimization loop within outer reliability-based optimization loop) [11].

A "single-loop" method, as documented in [26], is significantly more efficient than double-loop methods. In this approach, the critical (failure) point for each reliability constraint is calculated using derivatives of the constraints with respect to the random variables, and the desired reliability index. The reliability constraints are then evaluated at the critical point, while the objectives are evaluated at the mean value point (defined by the mean values of the random variables).

3.3 Proposed Probabilistic Design approach

The probabilistic design combines elements from each of the methods discussed in this chapter to create a complete formulation for assessing and improving reliability and robustness. Each of these methods has its specific focus with respect to incorporating uncertainty.

Monte Carlo simulation is an investigative tool; no constraints or objectives are formulated. Reliability analysis is focused on constraints; any objective included is evaluated only at the mean value point (the current design point, with any random variables set to their mean values). Reliability-based optimization is again focused on the constraint formulation of an optimization problem, in converting deterministic constraints to probabilistic constraints; again the objective retains its deterministic formulation and is evaluated at the mean value point. Finally, Taguchi Robust Design places attention on desired response values through the formulation of an objective function that includes the desired mean performance and the “minimize variation” element; constraints are not explicitly formulated.


Probabilistic design optimization includes in its formulation uncertainty information related to variables, constraints, and objectives. The focus then is not only to identify solutions that are reliable or robust with respect to constraint satisfaction, but also to reduce the variability associated with objective components. Further, by defining the variance or standard deviation of uncertain input design parameters not as fixed, but as design variables themselves, tolerance design/optimization can be implemented by seeking standard deviation settings for these parameters that produce acceptable performance variation (objective components, constraints).

In the following chapters, first, we will present a simple cylindrical cup drawing experiment, which shows that at a fixed level of process setting the part quality characteristics can have a very large variation. This experiment serves as a justification for conducting probabilistic design for the sheet metal forming process. Then the simple probabilistic design methodology using the quality index as the objective is illustrated in chapter 5. By considering the variation of material properties and process conditions, an optimal deterministic blank holder force and optimal means of the stochastic friction coefficients are obtained to minimize the defect rate.

A more formal probabilistic design formulation is explained in chapter 6, where the temporally varying blank holder force itself is treated as a random variable, as it is in the real world. By incorporating process variations, both the reliability constraints and a Taguchi-type objective are considered in the design optimization formulation. The probabilistic design is also compared with the deterministic design and with the PI design that is usually adopted to find the variable blank holder force profile.

In chapter 7, spatially varying constraints on the sheet are studied with the introduction of the new discrete friction concept. The die surface is segmented into 10 discrete areas and the punch is divided into 4 areas. Since the number of design variables under study is fifteen, the methodology used in chapters 5 and 6 is not applicable here. In this complex case, a two-phase design method is developed. The first step is to integrate the finite element simulation with the multi-objective genetic algorithm to find the optimal deterministic design configuration for those fifteen variables. Then the Mean Value First Order reliability analysis method is used to assess the defect rate for all the points that satisfy the deterministic constraints in the final Pareto front obtained by the multi-objective genetic algorithm.


CHAPTER 4

4. EXPERIMENT OBSERVATION ON THE PART QUALITY VARIATIONS

4.1 Experiment Objective

The basic assumption behind conducting probabilistic design for sheet metal drawing is that although the process seems fixed under the pre-designed setting, there can still exist great variations from various sources: the sheet material properties, sheet thickness, lubrication, blank holder force, etc. These input variations can then cause large variations of the drawn part quality characteristics, as schematically shown in figure 4.1.

The objective of this experiment is to observe and study the variations of drawn part quality at a fixed level of process setting so that our basic assumption behind the probabilistic design can be tested. The information about these variations will not only justify our probabilistic design approach, but also provide statistical data to conduct probabilistic design.

Figure 4.1. Illustration of process output variations caused by input variations for the sheet drawing process.

4.2 Experimental Design

4.2.1 Criteria to Select Right Part to Draw

In this experiment, we only want to see the impact of variation of process parameters on the parts quality variation. These process parameters variations include: sheet material properties variation such as strength hardening coefficient, sheet thickness variation, lubrication variation and blank holder force variation. The quality characteristics include three aspects: wrinkling, fracture and springback. The criteria of selecting the experimental part to draw should be based on two concerns:

1) Only the process parameters abovementioned should be considered in the drawing process. No complex die/punch/blank geometry should be involved because their variations may be coupled with the process variations we want to study.

2) The measurement of the three quality characteristics should be easy. The benefit of easy measurement is better measurement gage R&R, so the measurement result will mostly reflect the part quality variation instead of the measurement variation.

After careful consideration, the simple cylindrical cup drawing was selected for this experiment since it satisfies both criteria.


4.2.2 Drawing Die Design

The die/punch components designed for the cylindrical cup drawing are illustrated in the figure 4.2. The hydraulic press is shown in figure 4.3. The die/punch parameters are listed in table 4.1. In this test, we designed four components for the die part. The first is the bottom plate, which is used to connect the die block to the hydraulic press bottom stage. The die block is positioned above the bottom plate and functions as the die ring holder. After die ring is put onto the die block, a centering ring is placed on the top of the die ring and is pressed against the die block by the four bolts. There are four extra screw holes on the centering ring, and their function is to lift the centering ring up by screwing the bolts through the holes. By this design, we can easily take the die ring out, and put in different die ring for other experiments. Therefore, we could save a lot by using the same die block and centering ring.

The punch is connected to the hydraulic press through a screw cap on the top. To avoid negative pressure between the punch and the blank during the drawing, which may make them difficult to separate afterward, we drilled a small hole through the punch as an air vent. In this drawing test, we also designed the blank holder, which is connected to the up plate. The up plate is then T-slot connected to the upper platen of the hydraulic press. The sheet used in this experiment is HSLA350 with a thickness of 1 mm. The coupon is cut into a disk shape with a diameter of 5.8 inches.


Figure 4.2. The cylindrical cup drawing die design.


Figure 4.3. The hydraulic press used in the drawing test.

Punch diameter: 3.250 in
Punch corner radius: 0.600 in
Die ring radius: 1.675 in
Die corner radius: 0.157 in
Blank diameter: 5.8 in
Blank thickness: 1 mm
Blank material: HSLA350
Lubrication: drawing oil (mid-state lub M2C)

Table 4.1. The die/punch design and test setup.

4.2.3 The Measurement of Quality Characteristics

The three quality characteristics usually concerned in the sheet metal drawing are the wrinkling, fracture and springback. One of the reasons that we choose the cylindrical cup drawing is that the measurement of these three aspects is not difficult. The figure 4.4 shows schematically the front view of a cut open part after drawing.

1) The wrinkling can be measured by two metrics. The first is the number of wrinkles along the flange. The second is the maximum height of the wrinkling mark on the cup outside sidewall (the figure 4.4 shows the inside).

2) The fracture tendency is represented by the minimum sheet thickness along the cut open cup. In the figure 4.4, the thinning happens at the transition area from cup bottom corner to the straight side wall.

3) The springback can be measured by the expansion angle from the cup bottom to the cup top after drawing. In figure 4.4, Dtop is the diameter measured at the intersection of the flange and the sidewall, Dbottom is the diameter measured at the intersection of the cup bottom corner and the sidewall, and H is the vertical distance between them. The springback angle can be calculated as atan((Dtop − Dbottom) / (2H)).
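For reference, a small Python helper implementing this relation is given below; the cup dimensions in the example are made up for illustration, and the choice of angle unit (radians or degrees) is ours, not specified in the text.

import math

def springback_angle(d_top, d_bottom, height):
    """Expansion angle atan((Dtop - Dbottom) / (2*H)), returned in radians."""
    return math.atan((d_top - d_bottom) / (2.0 * height))

# Illustrative (made-up) cup measurements, not the experimental data:
angle = springback_angle(d_top=84.0, d_bottom=82.8, height=43.0)
print(angle, math.degrees(angle))   # radians and the equivalent in degrees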

Figure 4.4. Illustration of measurement of three quality characteristics of drawn cups.

The height of wrinkling mark and the maximum thinning are measured by the vernier caliper. The Dtop and Dbottom are measured by micrometer. The gage R&R for those two measurement methods in this case are around 10%. Besides, for each metric, 3 measurements will be conducted to minimize the measurement error and variation.

Therefore at this level of accuracy, we could assume that the measurement result will represent mostly the variation of the drawn part quality instead of measurement error.

4.2.4 The Experiment Procedure

After we select the right part, design the drawing die, and figure out the measurement methods for the quality characteristics, next we need to plan the experiment procedures.

Step One: First, we need to run some initial trials to find the feasible region of cup drawing. The only sheet material available is HSLA350, which is known for its good strength but relatively poor limiting drawing ratio (LDR). We want to determine the process setting, which includes the right blank holder force, the right amount of lubricant applied on the blank and the right blank size, so that a cup without any defect can be drawn.

Step Two: After we find this good setting, we will keep it at a fixed level and then repeat the drawing one cup at a time. This means we will apply the drawing oil in the same way to the same spots on the blank, place the blank at the same location on the die ring, use the same blank holder force, and apply the same hydraulic press operation sequence. In this way, we can reduce as much of the variation induced by human factors as possible.

Step Three: After drawing enough cups according to the sample size (in this case 30), we then measure the three quality characteristics of those cups. The measurement is taken in two steps. The first is to count the number of wrinkles, measure the maximum height of the wrinkling mark and measure the springback angle. After this, the cups are cut open in half and the minimum sheet thickness can be measured.

Step Four: At last, we need to analyze the data (a short analysis sketch follows). The histogram is a good tool to visualize the data distribution, and the mean and standard deviation should be calculated too. A normality test is also necessary to check whether the quality data follow a normal distribution. Finally, the coefficient of variation should be calculated, which represents the ratio of the variation of each quality characteristic to its mean.
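The following Python sketch carries out this kind of analysis on a hypothetical sample of 30 measurements (the numbers are invented, not the experimental data); note that the dissertation used Minitab's Anderson-Darling normality test, whereas the sketch uses the Shapiro-Wilk test available in SciPy.

import numpy as np
from scipy import stats

# Hypothetical measurements for one quality metric (e.g. wrinkle counts), 30 cups.
data = np.array([25, 27, 24, 26, 25, 23, 28, 26, 25, 24,
                 26, 27, 25, 24, 26, 25, 27, 23, 26, 25,
                 28, 24, 25, 26, 27, 25, 24, 26, 25, 27], dtype=float)

mean = data.mean()
std = data.std(ddof=1)                 # sample standard deviation
cov = std / mean                       # coefficient of variation (COV)
stat, p_value = stats.shapiro(data)    # normality test (Shapiro-Wilk here)
print(f"mean={mean:.2f}, std={std:.3f}, COV={cov:.1%}, normality p={p_value:.3f}")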

4.3 The Experiment Result

During the experiment, we first found that the part quality was more sensitive to lubrication than to the blank holder force. To show the effect of lubrication, two blank disks were drawn at 3000lbs and 4500lbs BHF without the lubrication. Another two disks were drawn at 3000lbs and 4500lbs with drawing oil applied. Their shapes are shown in the figure 4.5. From these pictures, we see that when no lubrication was applied, even


reducing the BHF from 4500lbs to 3000lbs could not make a good cup without fracture.

Once lubricant was used, at both BHFs, no fracture appeared. The only difference was that under smaller BHF, the wrinkling magnitude was higher.

To keep a compromise between wrinkling and fracture, we fixed the blank holder force at

4500lbs, applied lubricant (drawing oil), and then drew 30 pieces of blanks to the depth of 44mm. The part shape after drawing is shown in figure 4.6(a). Within those 30 cups, we found 3 of them were defective due to fracture, which are shown in figure

4.6(b)(c)(d).

4.3.1 Wrinkling Measurement Result

First we examine the wrinkling measurement result. The numbers of wrinkles along the flange for the thirty drawn cups are plotted against the test number in figure 4.7(a). The histogram of those thirty data points is shown in figure 4.7(b), beside which the calculated mean and standard deviation are given. The ratio between them (standard deviation divided by mean), called the coefficient of variation (COV), gives the level of wrinkling variation under the fixed process setting (BHF = 4500 lbs, drawing depth = 44 mm, the same drawing oil applied in the same pattern for every cup drawing). From the figure, the COV for the number of wrinkles is 1.737/25.53 = 6.8%. Figure 4.7(c) shows the normality test plot. The p-value is 0.087; therefore, at the 95% confidence level, we can say that the number of wrinkles roughly follows a normal distribution.

(a) BHF=3000lbs, without lubrication (b) BHF=4500lbs, with lubrication

(c) BHF=3000lbs, with lubrication (d) BHF=4500lbs, with lubrication

Figure 4.5. Parts drawn without and with lubricants at two levels of BHFs.


(a) the part shape after drawing (b) defective part 1 with fracture

(c) defective part 2 with fracture (d) defective part 3 with fracture

Figure 4.6. The parts after drawing and the defective parts.

(a) Plot of the number of wrinkles vs. test number.

(b) Histogram of the number of wrinkles with fitted normal distribution (mean 25.53, StDev 1.737, N = 30).

(c) Normality test of the number of wrinkles (AD = 0.638, p-value = 0.087).

Figure 4.7. The measurement result for the number of wrinkles along the flange.

Then we check the variation of the maximum height of the wrinkling mark along the cup sidewall. The maximum heights of the wrinkling mark are plotted against the test number in figure 4.8(a). The histogram with the normal distribution fit is shown in figure 4.8(b), with a mean of 0.5308 and a standard deviation of 0.06093. Therefore, the COV for the wrinkling height is 0.06093/0.5308 = 11.5%. The normality test plot is given in figure 4.8(c). At the 95% confidence level, we can say the wrinkling height follows a normal distribution since the p-value of 0.11 is larger than 0.05. From the results for the number of wrinkles and the wrinkling height, we can see that the part quality in terms of wrinkling has a large variation. Assuming the drawing process is stable (plus/minus 3 sigma), the number of wrinkles in production will range from 25.53 − 3×1.737 (around 20) to 25.53 + 3×1.737 (around 31). One can imagine how different the drawn parts may look.

4.3.2 Fracture Measurement Result

The fracture tendency is represented by measuring the minimum sheet thickness along the cut open cup sidewall. The plot of minimum thickness against the test number is shown in figure 4.9(a). The histogram and the normality test are shown separately in figure 4.9(b) and (c). From the figures, we see that the thickness does not follow normal distribution since at 95% confidence level the p-value from normality test 0.018 is much smaller than 0.05. However, we could still use COV to show its magnitude of variation, which is equal to 0.002383/0.03418 (around 7%).

(a) Plot of the maximum height of the wrinkling mark along the cup sidewall vs. test number.

(b) Histogram of the maximum height of the wrinkling mark with fitted normal distribution (mean 0.5308, StDev 0.06093, N = 30).

(c) Normality test of the maximum height of the wrinkling mark (AD = 0.598, p-value = 0.110).

Figure 4.8. The result for the maximum height of the wrinkling mark along the cup sidewall.

(a) Plot of the minimum thickness along the cup sidewall vs. test number.

(b) Histogram of the minimum thickness along the cup sidewall with fitted normal distribution (mean 0.03418, StDev 0.002383, N = 30).

(c) Normality test of the minimum thickness along the cup sidewall (AD = 0.910, p-value = 0.018).

Figure 4.9. The fracture measurement result.

Again, if we assume the process is stable (plus/minus 3 sigma), the minimum thickness will range from 0.03418 − 3×0.002383 = 0.027031 to 0.03418 + 3×0.002383 = 0.041329.

4.3.3 Springback Measurement Result

The springback is measured by the angle illustrated in figure 4.4. The plot of the angle vs. test number is shown in figure 4.10(a). The histogram of the angles and the fitted normal distribution are shown in figure 4.10(b), with a mean of 0.7961 and a standard deviation of 0.2188. From the normality test plot in figure 4.10(c), we can conclude that the springback angles follow a normal distribution, since at the 95% confidence level the p-value of 0.298 is much larger than 0.05. The COV for the angles is equal to 0.2188/0.7961 = 27%. A commonly known issue of HSLA sheet material is its high springback. From this experiment, we also see that the variation of the springback (27%) is very high compared to the wrinkling (6.8%, 11.5%) or fracture (7%) variations.

(a) Plot of the springback angle vs. test number.

(b) Histogram of the springback angle with fitted normal distribution (mean 0.7961, StDev 0.2188, N = 30).

(c) Normality test of the springback angle (AD = 0.424, p-value = 0.298).

Figure 4.10. The springback angle measurement result.

4.4 Experiment Conclusion

In this simple cylindrical cup drawing test, we first found the optimal process setting to deep draw the cup. Lubricant must be used to prevent fracture. Thirty cups were drawn at a blank holder force of 4500 lbs, which gave the best compromise between wrinkling and thinning. Then we measured different metrics to represent the three aspects of the drawn cup quality: wrinkling, fracture and springback. The number of wrinkles along the flange and the maximum height of the wrinkling mark along the cup sidewall represent the wrinkling tendency. The angle from the cup bottom to the top represents the springback tendency. We then cut those thirty cups open and measured the minimum sheet thickness along the cup sidewall, which represents the fracture tendency. After the data were analyzed, we found that all of these metrics have a large variation. Their coefficients of variation range from 6.8% to 27%. The fracture metric does not follow a normal distribution, while the other metrics appear to follow a normal distribution.

Through this simple drawing test, we not only justified the necessity of the probabilistic design approach by the large product quality variations observed, but also obtained some quantitative information about the magnitude of those variations.


CHAPTER 5

5. PROBABILISTIC DESIGN OF ALUMINUM SHEET DRAWING FOR REDUCED RISK OF WRINKLING AND FRACTURE

The exclusion of inherent process variations in the current deterministic design methods for sheet metal forming can lead to very unreliable results that may cause a high scrap rate, frequent rework, machine shut down and thus a huge loss of profit. In this chapter, a general approach is presented to quantify the uncertainties and to incorporate them into the RSM model so as to conduct probabilistic optimization. As an application, the deep drawing process of the Hishida part is analyzed. Given the blank shape and tooling, a probabilistic design is successfully carried out to find the optimal combination of blank holder force and friction coefficient in the presence of variation of the material properties.

The results show that with the probabilistic design, the quality index (based on the average defect rates of wrinkling and fracture) improved by 42% over the traditional deterministic design. They also show that by further reducing the variation of the friction coefficient to 2%, the quality index improves to 98.97%. In a mass production environment, this quality improvement is huge.

5.1 Introduction

Deep drawing is a process transforming flat sheets into cup or box shaped articles without fracture or excessive localized thinning. Enormous efforts have been put into the system design phase in order to produce a defect-free part. In the previous work, given material and part, the optimum deep drawing design, in general, focuses on three main aspects: blank design, tooling design and process design. The blank design includes optimizing blank geometries and thickness [27]. The tooling design is to determine the optimal punch and die radii, the punch and die clearance, the drawbead shape and location [28] [29]. The process design tends to find the best setting of process parameters such as the friction (lubricant type and lubrication procedure), the punch speed and the blank holder force.

However, even if the drawing process is optimally designed, the part can have significant scatter in dimensions and properties, as shown in the simple cup drawing test described in chapter 4. Majeske [2] analyzed data from several leading automobile manufacturers and highlighted the fact that a high scrap rate still remains a predominant issue, especially in making complex shaped parts. He reported that within the same batch (same die and process setup) the part-to-part geometric variation can be as high as 30%. This high variation inevitably causes a high scrap rate, frequent rework, machine shut down and a huge loss of profit.

Of course there are many factors, such as inadequate or inaccurate modeling, unknown failure mechanisms, unpredictable human effects, etc., causing part fluctuation, but the variations that occur in the forming of each part are a significant and yet not well recognized reason for the lack of process consistency.

Recently, a few researchers have investigated process variability. Gantar [30] identified the twelve most important influencing input parameters in the deep drawing process and measured their variations. Karthik [31] investigated the variability of sheet material properties and carried out measurements using more than 45 coils of the same material. It was shown that the strain hardening coefficient "n" can have coil-to-coil variation of up to 14%. To give an idea of how much the material properties can vary, the results from Gantar and Karthik are listed together in table 5.1. Cao [32] also pointed out that during the sheet bending process the variation of the material strength coefficient "K" can be as high as 20%, the strain hardening coefficient 16% and the friction coefficient 65%.


Extensive research has been done in exploring the deterministic effect of each factor on the part, but the impact of their variations on the fluctuation of output quality for the drawing process is seldom addressed and quantified. The inclusion of uncertainty in the design and optimization cycle should lead to better understanding of the impact of uncertainty associated with system input on the system output. This understanding can then be applied for managing such uncertainties. Such designs that incorporate uncertainty are more popularly referred to as probabilistic design [33]. To date, there is no report on the probabilistic design of aluminum sheet deep drawing process.

          ST-13 [2]                       409
          Values          Variation (%)   Values          Variation (%)
n         0.22 ± 0.036    16.3            0.2 ± 0.03      13.4
K         546 ± 50        9.1             751 ± 43        5.7
r         1.90 ± 0.20     10.5            1.5 ± 0.12      7.9

Table 5.1. Variation of material properties [28][29]

5.2 General Approach for Probabilistic Design

The concept of reliability based design was first introduced and developed in the reliability calculations of structures. A limit state function, basically a failure criterion with a deterministic model connecting the system input variables to the output variables, will segment the multi-dimensional space spanned by the random variables into the failure and safe domain. Given the joint probability density function of the random variables, the probability of failure can be calculated by the portion of points in the failure domain [34]. Each interested response will have its own limit state function. Then the optimization is to search for the right means of random variables that minimize the objective function while satisfying the reliability requirements.

Two problems need to be resolved when applying probabilistic design to the metal forming process: construction of the limit state function and of the joint probability. Since metal forming is a highly nonlinear process influenced by many factors, there is no way to analytically predict the system response. Nowadays, numerical simulations of sheet metal forming processes based on the finite element method have become a powerful tool to predict the forming process. However, simulating every point according to the joint probability to check whether it fails or is safe is impossible. Even with a sampling technique, this task is still too time-consuming. The only practical solution to overcome this problem is to build a meta-model by the response surface method (RSM) to approximate the limit state function. Through proper design of experiments (DOE), only a fraction of the experiments or simulations is required. Then, if the joint distribution of the random variables is known, a statistical method like Monte Carlo Simulation (MCS) can evaluate the probability of failure for the responses of interest [35]. The second difficulty is to determine the joint distribution. Since the experimental data available for statistical modeling of the random variables in metal forming are limited, it is not possible to make any clear identification of the correlation between the variables. Therefore, independence between them is usually assumed and accordingly the joint probability density function is directly established as the product of the marginal density functions.

5.3 Quality Index to Measure Risk

As mentioned before, drawing process can be affected by the selection of blank, tools and process parameters for given material and part. In traditional design, those factors are taken as certain variables, meaning they don’t have random features at all. However, many of them are indeed random variables. For example, the sheet metal material properties can vary coil by coil or stock by stock. The lubrication condition for one part can also be quite different from another.


To illustrate the necessity of incorporating uncertainties into the traditional optimum design, the deep drawing process (figure 4.1) is schematically described where the variations of input parameters are linked somehow to the fluctuation of output quality.

To date, there is no literature found on probabilistic design applications for aluminum sheet deep drawing processes. However, Sahai [33] adopted a very similar strategy in designing a sheet metal flanging process. In his paper, he treated the sheet thickness "t" and the gap "g" as random design variables, the die corner radius "r" as a deterministic design variable, and the Young's modulus "E" and yield stress "Y" as random parameters.

Then the objective was to find a combination of sheet metal and tooling configurations that would minimize the difference of springback to the target under the probabilistic constraint that 99.99 percent of maximum absolute strain of the flanged sheet metal should not exceed a specified value.

The issue is that being the quality feature, the springback should have a scatter because of the variation of the system. Therefore, the intuitive objective should be minimizing the probability of sheet springback angle exceeding the tolerance, namely the defect rate instead of the deterministic objective function. Nevertheless, changing the objective to be probabilistic will invalidate the optimization method he introduced. Additionally, the author simply assumed the variation of random variables and did not investigate the effect of enlarging or reducing the variable variations on the defect rate, which will be tackled in this chapter.


In this chapter, the wrinkling and fracture are selected as the quality features. A new measure, quality index, is established as the weighted sum of probability of no wrinkling and probability of no fracture. It is written as:

QI = weight × Pr[no wrinkling] + (1 − weight) × Pr[no fracture]   (5.1)

Then the objective is naturally to maximize the quality index. The outline of the strategy used is drawn in figure 5.1. A non-symmetrical Hishida geometry, which has four corners with different radii and four tapered walls with different slope angles on each side, is chosen to demonstrate the concept.

5.4 The Numerical Simulation Model

The industrial/simulation model for the Hishida part forming process is shown in figure 5.2. The aluminum car body sheet alloy Ecodal-608-T4 (AA6181A) with 1 mm thickness is used for this study. The Krupkowsky-law hardening equation σ_y = K(ε_0 + ε_p)^n is used for modeling the material behavior. The drawing process is simulated using PAM-STAMP.
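A short Python implementation of this hardening law is shown below; the K and n values roughly follow the means assumed later in this chapter, while the pre-strain offset ε_0 is a placeholder and not a value reported here.

import numpy as np

def krupkowsky_flow_stress(strain, K=470.0, eps0=0.005, n=0.245):
    """Krupkowsky hardening law: sigma_y = K * (eps0 + eps_p)**n."""
    return K * (eps0 + np.asarray(strain)) ** n

print(krupkowsky_flow_stress([0.0, 0.05, 0.10, 0.20]))   # flow stress at a few strains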


5.5 Selection of Input Variables

Siekirk [36] has identified more than 25 variables that influence sheet metal forming.

However, it is impossible to include all the variables in the model. Instead, it is assumed that the blank shape and tooling have been designed and produced; the interest is hence in the optimal process parameters. After checking the sensitivity analysis results from Gantar [30] and Jaisingh [37], the input variables are selected and classified into three categories, which are listed in table 5.2. The deterministic design variable is the blank holder force (BHF). It has no variation since it can be easily closed-loop controlled. The random design variables are the friction coefficient between blank/die and blank/binder (Lub1) and the friction coefficient between blank/punch (Lub2). A uniform lubrication over the whole blank surface is presumed. Although it is by and large the lubricant type and lubrication procedure that decide the friction coefficient, many other factors like the blank or tool surface roughness, sliding velocity and contact pressure [38] also have significant effects on it. Cao [32] estimated that in sheet metal forming the variation of the friction coefficient can be as high as 65%. Gantar [30] used 10% in his stability evaluation of the deep drawing process. In this probabilistic design problem 10% is used initially. The random parameters that cannot be designed (noise variables) are the strain hardening coefficient n, the strength coefficient K, and the blank dimension variations along the horizontal and vertical directions, pm1 and pm2.


5.6 Prediction of Wrinkling and Fracture

There are two forms of wrinkling, the sidewall wrinkle (SW) and flange wrinkle (FW).

Since the flange area is trimmed after drawing, only sidewall wrinkling is measured, by geometrical consideration of part cross-sections cut by planes perpendicular to the forming direction at 25% of the part height [39], as shown in figure 5.3.

The criterion for wrinkle defect is:

{wrinkling: if normalized magnitude of wrinkle > 1}   (5.2)

Fracture happens when the strain at the local region is concentrated and subsequent deformation changes from a smooth and continuous one to a markedly non-uniform one.

In this study, the maximum sheet thinning is measured to indicate the risk of fracture. The criterion for fracture defect is:

{fracture: if normalized maximum sidewall thinning > 1}   (5.3)


Figure 5.1. General probabilistic design approach.


Figure 5.2. Simulation model of the Hishida forming process.

              Designable signal variables     Un-designable noise variables
              BHF     Lub1    Lub2     n        K      pm1    pm2
Variation     −       10%     10%      13%      6%     5%     5%
Low (DOE)     495     0.05    0.05     0.2375   446    0      0
High (DOE)    605     0.12    0.12     0.2625   493    1      1

Table 5.2. The variation and DOE levels for the selected parameters.

5.7 DOE and RSM

For the seven variables, a Box-Behnken DOE is selected. This design allows efficient estimation of the first and second order coefficients. Also, because the Box-Behnken design has fewer design points, it is less expensive to run than other designs, such as central composite designs, for the same number of factors. After a series of finite element simulations, the data for the magnitude of SW wrinkling and for the maximum thinning are collected and fitted by second order polynomials. The R² values for them are 94% and 98%, respectively. To check the accuracy of the response surface models, ten additional simulations are run at different points. The responses obtained by the simulations with PAM-STAMP are compared with the interpolation based on the response surface models. The results correlate very well; the deviations in the response surface predictions are within a few percent. Hence, the response surface models are considered to be sufficiently accurate for the subsequent probabilistic design of the aluminum sheet deep drawing process.
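As an illustration of this fitting step, the Python sketch below fits a full second-order polynomial to a synthetic seven-factor data set by least squares and reports R²; the DOE matrix and the response values are randomly generated stand-ins, not the PAM-STAMP results.

import numpy as np

def quadratic_design_matrix(X):
    """Full second-order model terms: intercept, x_i, and x_i*x_j (i <= j)."""
    n_runs, n_vars = X.shape
    cols = [np.ones(n_runs)]
    cols += [X[:, i] for i in range(n_vars)]
    cols += [X[:, i] * X[:, j] for i in range(n_vars) for j in range(i, n_vars)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(62, 7))          # stand-in for a 62-run, 7-factor DOE
y = 0.03 - 0.003 * X[:, 0] + 0.004 * X[:, 1] + rng.normal(0.0, 1e-4, 62)  # fake response

A = quadratic_design_matrix(X)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)    # least-squares RSM coefficients
y_hat = A @ coeffs
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.3f}")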

In table 5.1, the variation data are for steel materials, while in this DOE and research aluminum is used. The reason is that it is more difficult to draw an aluminum part without defects. However, we conducted a test to see whether the RSM generated from the aluminum DOE could be used to predict the behavior of the steel material (ST-12), since the thinning and wrinkling are mostly determined by n and K if the other process parameters are kept the same. After testing, we found that the aluminum RSM could predict the steel material very well. This indicates that the thinning and wrinkling responses are monotonic; as long as the ranges of n and K are not far from the DOE range, the prediction is acceptable. Please see Appendix B for details.

Before proceeding to the probabilistic assessment, the main effect plots are drawn to show how sensitive the two responses are to each of the variables; see figure 5.4(a) and (b). It is observed that the friction coefficient between die/blank and binder/blank has the most significant impact on the wrinkling and fracture. When it becomes large, the magnitude of wrinkling decreases whereas the maximum thinning increases. The explanation is that, due to the increased friction coefficient, a larger friction force under the same blank holder force resists the material flowing into the wall area, and therefore moves the strain distribution in the FLD toward the region where wrinkling is less likely to occur. At the same time, more punch force is needed to draw the material into the die. With less material flowing into the wall, greater thinning along the sidewall is inevitable. The blank holder force, which is the second largest factor for the wrinkling and fracture defects, can be explained similarly. Other observations include: (1) the geometric variations of the initial blank are not sensitive, so they can be neglected in the following analysis; (2) the strain hardening coefficient and the strength coefficient have an effect on the quality features, but it is limited compared to the friction coefficient and BHF.


Figure 5.3. The measurement of sidewall wrinkling.


Figure 5.4. Sensitivity plots of wrinkling and thinning: (a) sensitivity of the parameters (BHF, Lub1, Lub2, n, K, pm1, pm2) to the magnitude of wrinkling; (b) sensitivity of the parameters to the maximum thinning.


5.8 Probabilistic Assessment for Wrinkling and Fracture

The response surface models built in section 5.7 are deterministic in nature: given a design vector, the prediction is a single value. This is unrealistic, since many variables contained in the model, such as material constants, friction coefficients, and geometric dimensions, are known to have a certain degree of scatter around their nominal values. The variation of the system input variables should therefore be incorporated into the deterministic model.

To incorporate uncertainties, it is assumed that the vector of random variables $\Psi = [X_1, X_2, \ldots, X_n, Z_1, Z_2, \ldots, Z_m]$ represents the random nature of the aluminum sheet forming process. $X$ refers to the designable random variables while $Z$ refers to the un-designable random parameters such as $n$ and $K$. A realization of $\Psi$ is $\psi = [x_1, x_2, \ldots, x_n, z_1, z_2, \ldots, z_m]$, representing a specific metal forming process, and the joint probability density function $f_\Psi(\psi)$ measures the probability of that event. Suppose $g_{wr}(\Psi)$ is the response model for the wrinkling and recall that the criterion for the wrinkling defect is {wrinkling, if normalized magnitude of wrinkling > 1}; then the probability of wrinkling can be calculated by

$$P_{fwr} = P\{\Psi \in \Omega_{fwr}\} = P\{1 - \mathrm{Norm}(g_{wr}(\Psi)) \le 0\} = \int_{\Omega_{fwr}} f_\Psi(\psi)\, d\psi \qquad (5.4)$$

where $\Omega_{fwr} = \{\psi : 1 - \mathrm{Norm}(g_{wr}(\psi)) \le 0\}$. The evaluation of the probability of fracture is conducted in the same way. For the probabilistic design, it is assumed that $n \sim N(0.245, 0.032^2)$ and $K \sim N(470, 30^2)$. The standard deviation of the friction coefficients is 0.01. All the random variables are normally distributed and there is no correlation between them. Hence, in every iteration of the probabilistic optimization, Monte Carlo Simulation (MCS) can be applied to evaluate the quality index for each design vector.
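To make the MCS-based probability assessment concrete, the Python sketch below samples the random inputs from the distributions assumed above and counts how often the normalized responses stay below one; the two polynomial functions are simple placeholders standing in for the actual fitted response surfaces, and the numbers they produce are illustrative only.

import numpy as np

rng = np.random.default_rng(1)
N = 100_000                                # Monte Carlo sample size

# Distributions assumed in the text: n ~ N(0.245, 0.032^2), K ~ N(470, 30^2),
# friction coefficients with standard deviation 0.01; BHF treated as deterministic here.
n    = rng.normal(0.245, 0.032, N)
K    = rng.normal(470.0, 30.0, N)
lub1 = rng.normal(0.09, 0.01, N)           # candidate mean of Lub1
lub2 = rng.normal(0.12, 0.01, N)           # candidate mean of Lub2
bhf  = 605.0

# Placeholder normalized response surfaces (the real coefficients come from the fitted RSMs).
def norm_wrinkling(bhf, lub1, lub2, n, K):
    return 1.0 + 5.0 * (0.09 - lub1) - 0.002 * (bhf - 605) - 0.5 * (n - 0.245)

def norm_thinning(bhf, lub1, lub2, n, K):
    return 0.97 + 3.0 * (lub1 - 0.09) + 0.001 * (bhf - 605) - 0.8 * (n - 0.245) + 0.0002 * (K - 470)

p_no_wrinkle  = np.mean(norm_wrinkling(bhf, lub1, lub2, n, K) <= 1.0)
p_no_fracture = np.mean(norm_thinning(bhf, lub1, lub2, n, K) <= 1.0)

weight = 0.5
quality_index = weight * p_no_wrinkle + (1 - weight) * p_no_fracture
print(p_no_wrinkle, p_no_fracture, quality_index)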

5.9 Optimal Design Approach

The previous sections have already given the criteria defining wrinkling and fracture, the two major forms of defects in the manufacturing of Hishida parts. The optimal design objective, therefore, is to find the right combination of BHF, Lub1 and Lub2 so as to minimize the sum of the two defect rates, or equivalently to maximize the quality index. There are two approaches to achieve this objective.

The traditional way formulates the problem as

minimize { weight × Norm(magnitude of wrinkling) + (1 − weight) × Norm(maximum thinning) }    (5.5)

where weight is a coefficient between 0 and 1 that is decided by the significance of each defect, and Norm(·) is the function that normalizes the magnitude of wrinkling and the maximum thinning so that they have the same range during the optimization, to avoid severe bias.

The other approach, which is adopted in this chapter, captures the variations and expresses the objective function in a probabilistic way, as shown below:

maximize { weight × Prob[no wrinkling] + (1 − weight) × Prob[no fracture] }    (5.6)

where Prob[·] is the probability assessment function described in section 5.8. The first optimal design approach is called "deterministic design" (DD) and the latter "probabilistic design" (PD).

For the PD, two optimization techniques are tried: a gradient-based method and a grid method. In the first method, several MCS runs are conducted at each iteration to calculate the gradient at the current design point and determine the next search direction; the search converges when the residual error is small enough. The second technique segments the design space into an equally spaced lattice, evaluates every lattice point, and selects the design point that gives the largest quality index. It is found that the first method is not very reliable, since it is easily trapped in local maxima. Although the grid method requires much more CPU time, it is used for the sake of finding the global maximum, and all the results reported here are obtained with it. To illustrate the grid method, when the weight is 0.5, first the mean of Lub1 is fixed at 0.09 and the mean of Lub2 at 0.12, and the quality index is plotted against the BHF, see figure 5.5. In this case, the optimal BHF is easily determined to be 605. In figure 5.6, the mean of Lub2 is fixed at 0.12 and the BHF at 605, and the quality index is plotted against the mean of Lub1. From the plot, the optimal mean of Lub1 is 0.09.
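A schematic version of the grid method is sketched below: every lattice point of the designable means is scored with a Monte Carlo estimate of the quality index and the best point is kept. The quality_index wrapper and the response surfaces inside it are hypothetical placeholders, not the dissertation's fitted models.

import numpy as np
from itertools import product

def quality_index(bhf, lub1_mean, lub2_mean, weight=0.5, N=20_000, seed=2):
    # Hypothetical wrapper around the MCS step: returns
    # weight*P[no wrinkling] + (1-weight)*P[no fracture] for the given design means
    # (Lub2 is held at its mean in this simplified placeholder).
    rng = np.random.default_rng(seed)
    lub1 = rng.normal(lub1_mean, 0.01, N)
    n = rng.normal(0.245, 0.032, N)
    wr = 1.0 + 5.0 * (0.09 - lub1) - 0.002 * (bhf - 605)                       # placeholder RSM
    th = 0.97 + 3.0 * (lub1 - 0.09) + 0.001 * (bhf - 605) - 0.8 * (n - 0.245)  # placeholder RSM
    return weight * np.mean(wr <= 1.0) + (1 - weight) * np.mean(th <= 1.0)

# Equally spaced lattice over the designable means; each intersection point is evaluated.
bhf_grid  = np.linspace(495, 605, 12)
lub1_grid = np.linspace(0.05, 0.12, 8)

best = max(product(bhf_grid, lub1_grid),
           key=lambda p: quality_index(p[0], p[1], lub2_mean=0.12))
print("best (BHF, mean of Lub1):", best)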

Based on different values of weight, a series of optimizations using both the DD and PD formulations is conducted. The results are tabulated in tables 5.3 and 5.4.

First of all, it is seen that the deterministic design result is very different from the probabilistic design result at the same weight. DD is prone to the extremes on the constraint boundaries: the BHF is either at its minimum or its maximum, and the friction coefficient shows a similar pattern. The outcome is that either the probability of wrinkling or the probability of fracture is very large. In contrast, the PD strikes a good compromise between wrinkling and fracture; the weighted sum of the probabilities of the two defects is kept at a much lower level than with DD. When the weight equals 0.5, the quality index for DD is 52.445% while the index for PD is 94.815%, so the PD is far superior to the DD. Secondly, from figures 5.5 and 5.6 it is found that the key to improving the quality is the right design of the lubrication between blank/die and blank/binder. Considering the variation of the material properties, the quality index is more sensitive to the friction coefficient than to the blank holder force.


Figure 5.5. Probabilistic design of the BHF: probability of no wrinkling, probability of no fracture, and 2 × quality index (weight = 0.5) plotted against BHF from 480 to 620 (mean of Lub1 = 0.09, mean of Lub2 = 0.12); the optimal BHF is 605.


Figure 5.6. Probabilistic design of the mean of Lub1: probability of no wrinkling, probability of no fracture, and 2 × quality index (weight = 0.5) plotted against the mean of Lub1 from 0.05 to 0.12 (BHF = 605, mean of Lub2 = 0.12); the optimal mean of Lub1 is 0.09.


Weight     Weight     Lub1:          Lub2:          BHF    Pr[no        Pr[no
wrinkle    fracture   binder/blank   punch/blank           wrinkle] %   fracture] %
0.1        0.9        0.05           0.12           495    0.22         100
0.2        0.8        0.05           0.12           605    4.89         100
0.3        0.7        0.05           0.12           605    4.89         100
0.4        0.6        0.05           0.12           605    4.89         100
0.5        0.5        0.05           0.12           605    4.89         100
0.6        0.4        0.09           0.12           605    93.78        95.85
0.7        0.3        0.12           0.12           605    99.98        42.63
0.8        0.2        0.12           0.12           605    99.98        42.63
0.9        0.1        0.12           0.12           605    99.98        42.67

Table 5.3. Deterministic design result.


Weight     Weight     Lub1:          Lub2:          BHF    Pr[no        Pr[no
wrinkle    fracture   binder/blank   punch/blank           wrinkle] %   fracture] %
0.1        0.9        0.08           0.12           605    77.06        99.03
0.2        0.8        0.09           0.12           590    91.16        96.80
0.3        0.7        0.09           0.12           605    93.78        95.85
0.4        0.6        0.09           0.12           605    93.78        95.85
0.5        0.5        0.09           0.12           605    93.78        95.85
0.6        0.4        0.10           0.12           565    96.73        91.48
0.7        0.3        0.10           0.12           585    98.20        89.17
0.8        0.2        0.10           0.12           595    98.52        88.23
0.9        0.1        0.10           0.12           605    98.93        86.50

Table 5.4. Probabilistic design result.


5.10 Effect of Variation on the Optimum Design

In the previous probabilistic design, it was assumed that the variations of the friction coefficient, strain hardening coefficient, and strength coefficient are 10%, 13%, and 6%. The real spread during manufacturing may be much larger. In this section, therefore, the effect of the variation of the random variables on the quality index at the optimum design is investigated.

The mean values and the type of probability density function of the random variables were kept constant, while the percent of variation (and accordingly the standard deviations) was increased in steps from 2% to 30%. Using MCS, the quality index at each level can be evaluated. The relative importance of the various random variables is plotted in figure 5.7. It is observed that the level of variation of the material properties has almost no effect on the quality index (and thus on the defect rates). However, the variation of the lubrication can cause the quality index to drop from 98.97% to 77.41% as its variation increases from 2% to 30%.
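A compact sketch of this variation study: the coefficient of variation of one group of inputs is swept while the rest stay fixed, and the quality index is re-estimated by MCS at every level. As before, the response surfaces inside quality_index are illustrative placeholders rather than the fitted models.

import numpy as np

def quality_index(cv_lub, cv_n, cv_K, N=50_000, seed=3):
    # Placeholder MCS quality index at the optimum design (BHF = 605, mean Lub1 = 0.09).
    rng = np.random.default_rng(seed)
    lub1 = rng.normal(0.09, cv_lub * 0.09, N)
    n    = rng.normal(0.245, cv_n * 0.245, N)
    K    = rng.normal(470.0, cv_K * 470.0, N)
    wr = 1.0 + 5.0 * (0.09 - lub1)                                              # placeholder RSM
    th = 0.97 + 3.0 * (lub1 - 0.09) - 0.8 * (n - 0.245) + 0.0002 * (K - 470.0)  # placeholder RSM
    return 0.5 * np.mean(wr <= 1.0) + 0.5 * np.mean(th <= 1.0)

for cv in [0.02, 0.05, 0.10, 0.20, 0.30]:
    qi_lub = quality_index(cv_lub=cv, cv_n=0.13, cv_K=0.06)   # sweep the lubrication variation
    qi_mat = quality_index(cv_lub=0.10, cv_n=cv, cv_K=cv)     # sweep the material-property variation
    print(f"variation {cv:.0%}: QI (lub swept) = {qi_lub:.3f}, QI (material swept) = {qi_mat:.3f}")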


Figure 5.7. Effect of the variation of the random variables (friction coefficient, strain hardening coefficient n, strength coefficient K) on the quality index, for percent variations from 0 to 30%.


5.11 Conclusion

In this chapter, the deep drawing process of the Hishida part is analyzed. Given the blank shape and tooling, a probabilistic design is successfully conducted to find the optimal combination of blank holder force and friction coefficients in the presence of variation of the material properties. The result shows that, by the probabilistic design, the quality index (the weighted average of the no-defect probabilities) improved by 42% over the traditional deterministic design. It is also noticed that by further reducing the variation of the friction coefficient to 2%, the quality index improves to 98.97%. In a mass production environment, the achieved quality improvement is substantial.


CHAPTER 6

6. INVESTIGATING RELIABILITY OF TEMPORALLY VARIABLE BLANK HOLDER FORCE CONTROL IN SHEET DRAWING UNDER PROCESS UNCERTAINTIES

The blank holder force, which regulates the amount of metal drawn into the die cavity, is a very effective measure for the success of deep drawing. It has been shown that a properly designed temporally varying BHF profile can produce a part with fewer defects. Extensive research has been carried out on the determination of this optimum profile, usually in a deterministic way that does not consider the inherent process variations. This exclusion of variations, however, leads to unreliable designs. In this chapter, a probabilistic design approach is presented to incorporate these variations. As a demonstration, the cylindrical cup drawing process is analyzed. In the presence of variation of the sheet thickness and of the friction between the sheet and the die, binder, and punch, the probabilistic design successfully finds the optimal variable BHF. The result shows that, by the probabilistic design, the yield (probability of good parts) improved to 99.98% from the 48.04% obtained by the traditional deterministic design. In a mass production environment, the achieved process robustness is substantial.

6.1 Introduction

Deep drawing is a process that transforms flat sheets into cup- or box-shaped articles without fracture or excessive localized thinning. It is one of the most common processing techniques used in mass production, and nowadays more and more complex parts are being deep drawn. A properly designed drawing process is crucial in order to produce a defect-free part. In previous work, given the material and part, the optimum deep drawing design generally focuses on three main aspects: blank design, tooling design, and process design. The blank design includes optimizing the blank geometry and thickness. The tooling design determines the optimal punch and die radii, the punch and die clearance, and the drawbead shape and location. The process design seeks the best setting of process parameters such as the friction (lubricant type and lubrication procedure), the punch speed, and the blank holder force.

Among all the aforementioned design variables, the blank holder force, which regulates the amount of metal drawn into the die cavity, is a very effective measure for the success of deep drawing. During drawing, the BHF can be kept constant or varied with time. The advantage of constant BHF control is its simplicity, but practice proves that a properly designed variable BHF profile, i.e., a variation of BHF with punch stroke, can produce a part with fewer defects and higher quality. This improvement is due to the fact that it allows the BHF to vary according to the state of stress in the sheet, so that the optimal process conditions are maintained at each time step.


To determine a good variable BHF profile, an experimental or computational approach is usually used. Hirose [40] examined the blank holder profile experimentally while forming full scale automobile panels. He found the profiles that did succeed were the ones starting at lower binder forces and then increasing during the latter stage of the process. Hardt [41], instead of figuring out the profiles directly, developed a PI controller to adjust the blank holder force in-process to ensure a previously determined optimal forming-punch force trajectory or a normalized average thickness trajectory was followed and replicated. However, to obtain that target punch force profile, a constant BHF was applied in his experiment. These approaches, although giving acceptable results, require extensive trial and error, and consume too much material and time. Therefore, the computational approach is preferred. Sim and Boyce [42] first predicted the optimal BHF profile for a round cup drawing through a closed-loop controlled simulation system using the same idea as in [41]. The issue is that since the sidewall wrinkling was not considered in the controller, the maximum cup height might have been overestimated. Cao and

Boyce [43], nevertheless, included both the wrinkling and fracture criteria in their work.

By applying a similar PI control strategy and using the major principal strains and amplitudes of wrinkles at the location of the die radius as state variables, a variable BHF profile for conical cup drawing was predicted and the failure-free drawing depth was increased over that achieved with a constant BHF. Sheng [44] further improved the optimal BHF profile prediction by taking the maximum thinning and flange wrinkling, as well as sidewall wrinkling amplitudes as control indices. His adaptive simulation method

can give an optimum variable BHF profile for drawing conical cups, with a claimed increase in the failure-free drawing depth and in the uniformity of the wall thinning distribution.

Although extensive research has been done in the exploration of the optimum variable

BHF in the deep drawing process, researchers have seldom included the existing sheet and process variations in their models and discussed the impact on the part quality.

In fact, it is observed that even if the drawing process is optimally designed, the produced parts can still have significant scatter in dimensions and properties. Majeske [2] analyzed data from several leading automobile manufacturers and highlighted the fact that a high scrap rate remains a predominant issue, especially in making complex shaped parts.

He reported that within the same batch (same die and process setup) the part-to-part geometric variation can be as high as 21%.

To date, most predictions of variable BHF profiles by the computational approach basically exclude these possible sheet and process variations and assume them at constant values. This kind of design without uncertainties or variations is characterized as deterministic design.

Many authors have verified their BHF profiles through experiments and concluded that the design improved the drawing quality; however, these experiments were conducted under well-controlled laboratory conditions, and there may be high risk in real production. The reason is that deterministic optimization tends to push the design towards one or more constraint boundaries until the constraints are active, thus leaving the designer with a design for which even slight uncertainties in the problem formulation or changes in the operating environment could produce failed products.

Therefore, the inclusion of unavoidable variations in the design of optimum variable

BHF should lead to better understanding of the impact of uncertainty associated with system input on the system output. This understanding can then be applied for managing such uncertainties. Such designs that incorporate uncertainty are more popularly referred to as probabilistic design [33].

6.2 General Approach for Probabilistic Design Optimization

The concept of including uncertainties in design was mainly developed in the structural reliability community, which focuses on assessing the probability of failure of a design with respect to the structural constraints and evaluates the variation of these constraints. A limit state function, basically a failure criterion with a deterministic model connecting the system input variables to the output variables, segments the multi-dimensional space spanned by the random variables into a failure domain and a safe domain. Given the joint probability density function of the random variables, the probability of failure can be calculated from the portion of points in the failure domain.


Thus a deterministic constraint can be converted to probabilistic constraint by this approach.

Reliability-based optimization is then searching for the right means of random design variables that minimize the objective function while satisfying the reliability requirements [34]. However, the objective function is evaluated at the mean value point.

The reliability refers to the constraints only and the objective itself is regarded as deterministic. That is why it is named reliability-based optimization. Its focus is on constraints, shifting constraint distributions away from the constraint boundaries, but not on the spread of the objective response distributions and the possible variation reduction.

On the other hand, the robust design, first developed by Taguchi, aims at driving the objective mean performance towards a target and minimizing the variance of objective performance. The Taguchi method uses the “crossing design matrix”, “Signal-to-noise ratio” and “Loss function” to evaluate potential designs and select the best alternative from among those evaluated. However, the constraints are not formulated as typically done with optimization formulations and the optimization is only performed at the discrete design points. Reducing the objective response variation, balancing “mean on target” and “minimize variation” are its interests. The term “robustness” in the engineering design context is defined as the sensitivity of performance parameters to fluctuations in design parameters, particularly uncertain design parameters.


The probabilistic design, however, combines the features of both the reliability-based optimization and the robust design by incorporating input constraints (bounds of random design variables), output constraints (reliability constraints) and objective robustness

(minimize mean and variation). Buranathiti [45] was the first to conduct a probabilistic design of a wheelhouse stamping process in sheet metal forming, by maximizing the total mean value of the margins to failure and minimizing the variance of the margins. It efficiently took the process uncertainties into account and created a system-level robust probabilistic design model; a weighted three-point-based method was used to estimate the means and standard deviations of the quality features. Li [46] proposed a similar approach for sheet metal forming design, but utilized dual response surface models for the means and standard deviations.

The general formulation for the probabilistic design is given as follows. Find the set of design variables $X$ that

$$\text{Minimize:}\quad F(\mu_y(X), \sigma_y(X)) = \sum_{i=1}^{k} \left[ \frac{w_{\mu_i}}{s_{\mu_i}} \left(\mu_{y_i} - T_i\right)^2 + \frac{w_{\sigma_i}}{s_{\sigma_i}}\, \sigma_{y_i}^2 \right] \qquad (6.1)$$

$$\text{Subject to:}\quad g_i(\mu_y(X), \sigma_y(X)) \le 0, \qquad X_L + n\sigma_X \le \mu_X \le X_U - n\sigma_X$$

Here $y_i,\ i = 1, \ldots, k$ are the output responses that the designer cares about; $w_{\mu_i}$ and $w_{\sigma_i}$ are the weights, and $s_{\mu_i}$ and $s_{\sigma_i}$ are the scale factors, for the mean and variation of the performance response $y_i$, which has the target $T_i$. Like the probability constraint on the input vector $X$, $g_i$ is the probability constraint on the performance response $y_i$, which is used to confine its distribution within the upper and lower specification limits. For the case in which the mean performance is to be minimized rather than directed towards a target, $T_i$ is set equal to zero.

The key to implementing the probabilistic optimization mentioned above is to estimate the mean and variance of all the performance responses $y_i$ at each design point evaluated during the optimization iterations. Monte Carlo Simulation (MCS) is the most straightforward method to acquire, with high accuracy, the probability distribution of the responses of an uncertain system given the distributions of its inputs. The procedure is to first sample values of the random variables from the given probability distributions and then run the system simulations at the sampled points. However, to obtain a good estimate of the mean and variance, researchers have shown that at least one hundred sample points must be evaluated even with variance-reduction sampling techniques. If, for instance, the optimization requires 10 iterations, the total number of metal forming simulations would therefore be at least 1000. Considering the computation time of each nonlinear metal forming simulation, this approach is clearly impractical. The practical solution is to build a meta-model $Y(x)$ by the response surface method (RSM) to approximate the FEM simulation; through a proper design of experiments (DOE), only a fraction of the simulations is required. Then, by propagation of error, the mean of a performance response can be calculated by setting the uncertain design parameters to their mean values,

$$\mu_y = Y(\mu_x) + \frac{1}{2} \sum_{i=1}^{m} \frac{d^2 Y}{d x_i^2}\, \sigma_{x_i}^2 \qquad (6.2)$$

And the standard deviation of $Y(x)$ is given by the second-order Taylor expansion:

$$\sigma_y^2 = \sum_{i=1}^{m} \left( \frac{\partial Y}{\partial x_i} \right)^2 \sigma_{x_i}^2 + \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m} \left( \frac{\partial^2 Y}{\partial x_i\, \partial x_j} \right)^2 \sigma_{x_i}^2 \sigma_{x_j}^2 \qquad (6.3)$$

where $\sigma_{x_i}$ is the standard deviation of the $i$-th parameter and $m$ is the number of uncertain parameters, which include both designable and un-designable random variables.
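A small numerical sketch of equations (6.2) and (6.3): the response mean and standard deviation are estimated from a meta-model by finite-difference derivatives. The quadratic meta-model Y below is an arbitrary stand-in for the fitted RSM, and the means and standard deviations are illustrative.

import numpy as np

def Y(x):
    # Stand-in meta-model Y(x) for one performance response.
    return 0.2 + 0.3 * x[0] + 0.1 * x[1] ** 2 + 0.05 * x[0] * x[1]

def propagate(Y, mu, sigma, h=1e-4):
    # Second-order propagation of error: mean per eq. (6.2), variance per eq. (6.3).
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    m = len(mu)
    y0 = Y(mu)
    grad, hess = np.zeros(m), np.zeros((m, m))
    for i in range(m):
        e = np.zeros(m); e[i] = h
        grad[i] = (Y(mu + e) - Y(mu - e)) / (2 * h)
        hess[i, i] = (Y(mu + e) - 2 * y0 + Y(mu - e)) / h ** 2
    for i in range(m):
        for j in range(i + 1, m):
            ei = np.zeros(m); ei[i] = h
            ej = np.zeros(m); ej[j] = h
            hess[i, j] = hess[j, i] = (Y(mu + ei + ej) - Y(mu + ei - ej)
                                       - Y(mu - ei + ej) + Y(mu - ei - ej)) / (4 * h ** 2)
    mean = y0 + 0.5 * np.sum(np.diag(hess) * sigma ** 2)
    var = np.sum(grad ** 2 * sigma ** 2) + 0.5 * np.sum(hess ** 2 * np.outer(sigma ** 2, sigma ** 2))
    return mean, np.sqrt(var)

print(propagate(Y, mu=[1.0, 0.15], sigma=[0.01, 0.0075]))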

6.3 Parameterization of the Variable Blank Holder Force

The probabilistic design of the variable blank holder force cannot start from scratch. In reference [44], the author predicted the optimum profile for the drawing of a conical cup in a deterministic manner. In this chapter, that case is taken as an example to investigate its performance in the presence of sheet and process uncertainties. To illustrate the probabilistic design approach, the same simulation model is adopted and it is assumed that, even when variations exist, the basic shape of the robust profile is the same as the one determined in reference [44]. Based on the features of the optimized variable BHF in reference [44], the profile is characterized by two connected Gaussian functions of the form

$$y = \mathrm{I}\, A_1\, e^{-\left(\frac{x-\mu_1}{\sigma_1}\right)^2} + (1-\mathrm{I})\, A_2\, e^{-\left(\frac{x-\mu_2}{\sigma_2}\right)^2} \qquad (6.4)$$

where I is an indicator variable following the rule

$$\mathrm{I} = \begin{cases} 1 & \text{if } x \le x^* \\ 0 & \text{otherwise} \end{cases} \qquad (6.5)$$

where x* is the intersection point of the two Gaussian functions. After fitting the data, the

BHF curve can be approximated as

$$y = \mathrm{I}(x \le 16.3)\, 348.9\, e^{-\left(\frac{x-11.79}{6.751}\right)^2} + \left(1-\mathrm{I}(x \le 16.3)\right) 473\, e^{-\left(\frac{x-27.7}{13.12}\right)^2} \qquad (6.6)$$

The fitted curve is very close to the original optimized BHF in reference [44], as shown in figure 6.1. The objective of parameterizing the predicted profile is to extract simple control parameters that can be used later in the design optimization.
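Since the fitted profile of equation (6.6) is reused (scaled by the design variable s) in the following sections, a direct transcription is given below; the only assumption is the reconstructed exponent form -((x - μ)/σ)² used above.

import numpy as np

def bhf_profile(x, scale=1.0):
    # Fitted variable BHF versus punch stroke x (mm), eq. (6.6); 'scale' is the
    # design variable s that enlarges or reduces the whole profile.
    x = np.asarray(x, float)
    first  = 348.9 * np.exp(-((x - 11.79) / 6.751) ** 2)
    second = 473.0 * np.exp(-((x - 27.70) / 13.12) ** 2)
    return scale * np.where(x <= 16.3, first, second)

stroke = np.linspace(0.0, 47.0, 6)   # the total stroke is 47 mm
print(np.round(bhf_profile(stroke), 1))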


Figure 6.1. The blank holder control profile from [44] and the fitted Gaussian approximation for sheet drawing.


6.4 The Numerical Simulation Model

The simulation model built in PAM-STAMP is shown in figure 6.2, and the model inputs are given in table 6.1. The figure also includes the part drawn at the last step. The stroke is 47 mm, the same as in reference [44]. The BHF profile in the simulation is input from the fitted Gaussian curve. Due to symmetry, only a quarter of the tooling and part is modeled. An elastic-plastic material model with isotropic hardening is considered. The other inputs are kept the same as in reference [44] and are listed in table 6.1.


Figure 6.2. The FEM simulation model of the sheet drawing process (punch, binder, and die) and the final drawn part.


Material                 AKDQ steel
Blank diameter           248 mm (9.7 in.)
Blank thickness          0.86 mm (0.034 in.)
Flow stress (MPa)        σ = 795(0.0052 + ε)^0.2
Friction coefficients    punch/blank 0.25; binder/blank 0.15; die/blank 0.15

Table 6.1. Circular blank dimensions, material properties, and friction coefficients used in the simulation of drawing the conical cup from AKDQ steel.


6.5 Selection of Input Variables and Output Variables

Siekirk [36] has identified more than 25 variables that influence sheet metal forming.

However, it is impossible to include all the variables in the model. Instead, it is assumed that the blank shape and tooling have already been designed and produced. The interest is to find the optimum process parameters, mainly the BHF, so that the drawing process is desensitized to the unavoidable condition variations. After checking the results of Gantar [30], Jaisingh [37] and Zhang [47], the process parameters initially selected for the sensitivity analysis are the punch speed $ps$, the friction between die/blank and binder/blank $f_d$, the friction between punch/blank $f_p$, and the sheet thickness $t$. To calculate the sensitivity of each parameter, it is varied at three levels (low, medium, and high) while keeping all other parameters at the medium level. There are three output responses of interest: the maximum thinning $y_{fr}$, which represents the tendency to fracture, the sidewall wrinkling $y_{sw}$, and the flange wrinkling $y_{fw}$. The sensitivity is calculated by

$$\text{sensitivity} = \frac{1}{6} \sum_{i=1}^{3} \left[ \frac{\left(y^i_{high} - y^i_{med}\right)/y^i_{med}}{\left(x_{high} - x_{med}\right)/x_{med}} + \frac{\left(y^i_{med} - y^i_{low}\right)/y^i_{med}}{\left(x_{med} - x_{low}\right)/x_{med}} \right] \qquad (6.7)$$

where $y^i,\ i = 1, 2, 3$ corresponds to $y_{fr}$, $y_{sw}$, $y_{fw}$. The results are tabulated in table 6.2.
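Equation (6.7) is simply the average of the normalized forward and backward difference quotients over the three responses; the helper below transcribes it, with illustrative numbers in place of the simulation outputs.

def sensitivity(x_levels, responses):
    # x_levels = (x_low, x_med, x_high); responses = list of (y_low, y_med, y_high)
    # tuples, one per output (maximum thinning, sidewall wrinkling, flange wrinkling).
    x_lo, x_md, x_hi = x_levels
    total = 0.0
    for y_lo, y_md, y_hi in responses:
        fwd = ((y_hi - y_md) / y_md) / ((x_hi - x_md) / x_md)
        bwd = ((y_md - y_lo) / y_md) / ((x_md - x_lo) / x_md)
        total += fwd + bwd
    return total / 6.0

# Illustrative numbers only, not the dissertation's simulation results.
print(sensitivity((0.76, 0.86, 0.96),
                  [(0.22, 0.20, 0.18), (0.15, 0.16, 0.18), (0.02, 0.02, 0.03)]))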


Variable name          Lower    Middle    Higher    Sensitivity
Sheet thickness t      0.76     0.86      0.96      0.83
Punch speed ps         9        10        11        0.21
Friction fp            0.2      0.25      0.3       0.61
Friction fd            0.1      0.15      0.2       0.89

Table 6.2. The sensitivity of the initially selected process parameters.


From the table, it is seen that the punch speed has little effect on the maximum thinning of the final cup compared to the other three variables. Therefore, the punch speed is ignored in the following analysis.

6.6 Deterministic Design vs. Probabilistic Design

Four variables are considered in the design of experiments for the subsequent construction of the RSM and the optimization: the sheet thickness $t$, the friction between punch/blank $f_p$, the friction between die/blank and binder/blank $f_d$, and the scale of the fitted variable BHF, $s$, which enlarges or reduces the blank holder force profile accordingly. In probabilistic design, the variables considered in the DOE and RSM are classified into control variables and noise variables. Control variables are process parameters that can easily be changed and controlled at a certain level, whereas it is very hard or even impossible to set or control noise variables under ordinary manufacturing conditions. To illustrate, the adjustment a technician makes to an office copier is a control factor, while the humidity that fluctuates in the office environment is a noise factor. Ideally, processes should be adjusted, via the control factors, to be insensitive to the noise factors, so that variation of the noise variables does not cause variation of product quality. Returning to the copier example, a well-designed copy machine should handle paper properly during humid summer months as well as in the winter, when conditions are dry and prone to static electricity.


In this study, the objective is to consistently make conical cups without defects. Therefore, the variables are classified as follows:

I. Control variable: the scale of BHF, s.

II. Noise variables: the sheet thickness t, the friction between punch/blank fp, and the friction between die/blank and binder/blank fd.

It should be emphasized that this classification is not fixed. If some noise variable is found to be the key to the success of the objective, a method may be devised to control it. For example, if the humidity is found to play an important role in the proper functioning of the copy machine, a humidifier may be added to control the humidity inside the machine; the former noise variable then becomes a control variable. Of course, sometimes it is also impossible or too expensive to do so.

For the design of experiments (DOE), a Box-Behnken plan is selected. This design allows efficient estimation of the first- and second-order coefficients, and because the Box-Behnken design has fewer design points it is less expensive to run than other designs such as central composite designs with the same number of factors [48]. With four variables, there are a total of 27 runs. For each run, the maximum thinning $y_{fr}$, the sidewall wrinkling $y_{sw}$, and the flange wrinkling $y_{fw}$ are recorded. The full quadratic response models are built after all twenty-seven simulations are done. The default values of $t$, $f_p$, $f_d$ from reference [44] are used and the performance responses are plotted versus $s$, as shown in figure 6.3.


From this figure it is seen that as the BHF is increased, the maximum thinning goes up while the sidewall wrinkling does the opposite. In reference [44], the criteria for fracture failure and wrinkling are a maximum thinning larger than 0.25, a sidewall wrinkling larger than 0.20, and a flange wrinkling larger than 0.05. Since the flange wrinkling magnitude is much smaller than 0.05 throughout the design range, its curve is not shown in figure 6.3. Constrained by the failure criteria, the feasible range for $s$ shrinks to a very narrow band around 1, as illustrated by the shaded area, which corresponds exactly to the optimum BHF generated by the PI control strategy in reference [44]. After examining the control strategy, it is found that it tends to maximize the drawing depth at the expense of letting the wrinkling, especially the sidewall wrinkling, approach its limit. By reducing the BHF just enough to keep the wrinkling magnitude on the limit bound, the maximum thinning can be kept at a lower level so that the drawing depth is increased. From the deterministic point of view, no part will fail even though its sidewall wrinkling magnitude reaches the maximum allowed value. However, as discussed previously, the sheet property and process variations are unavoidable: even a minor change of $s$, $t$, $f_p$ or $f_d$ can easily push the wrinkling or the fracture out of bounds. Suppose $s$ has a normal distribution with mean equal to unity while the other parameters are constant; the probability of wrinkling failure will then be about 50%. Adding the effect of the variations of the other variables, the defect rate may be even higher. Therefore, the deterministic PI control strategy is not robust, and at times it is even dangerous. Its benefit, though, comes from its ability to predict a rough profile that captures the evolution of the metal flow characteristics during forming.


Figure 6.3. Maximum thinning and sidewall wrinkling (y-axis: wrinkling/thinning magnitude) versus the scale of BHF, s (x-axis).


The probabilistic optimization should therefore be able to take the process variation into account and improve the robustness of the predicted deterministic BHF profile. Based on the current failure criteria, however, the feasible design space is so narrow that $s$ equal to unity is the only choice; the probabilistic constraints can never be satisfied because no change can be made to $s$, and thus no solution is available for the probabilistic optimization. After checking the literature and discussing with the author of reference [44], it is found that the sidewall wrinkling criterion is too strict. Therefore, it is relaxed somewhat to enable a solution of the probabilistic design optimization.

The deterministic optimization problem is formulated as:

$$\begin{aligned} \text{Minimize:}\quad & F = \frac{w_{sw}}{s_{sw}}\, y_{sw}(s) + \frac{w_{fr}}{s_{fr}}\, y_{fr}(s) \qquad (6.8)\\ \text{Subject to:}\quad & y_{sw} \le c_{sw},\quad y_{fr} \le c_{fr},\quad 0.8 \le s \le 1.2 \end{aligned}$$

The probabilistic optimization problem is formulated as

$$\begin{aligned} \text{Minimize:}\quad & F = \frac{w_{\mu_{sw}}}{s_{\mu_{sw}}}\, \mu_{sw}^2 + \frac{w_{\mu_{fr}}}{s_{\mu_{fr}}}\, \mu_{fr}^2 + \frac{w_{\sigma_{sw}}}{s_{\sigma_{sw}}}\, \sigma_{sw}^2 + \frac{w_{\sigma_{fr}}}{s_{\sigma_{fr}}}\, \sigma_{fr}^2 \qquad (6.9)\\ \text{Subject to:}\quad & \mu_{sw} + 3\sigma_{sw} \le c_{sw},\quad \mu_{fr} + 3\sigma_{fr} \le c_{fr},\quad s + 3\sigma_s \le 1.2,\quad 0.8 \le s - 3\sigma_s \end{aligned}$$


And the PI control optimization strategy can be described as

$$\begin{aligned} \text{Minimize:}\quad & F = y_{fr}(s) \qquad (6.10)\\ \text{Subject to:}\quad & y_{sw} \le c_{sw},\quad 0.8 \le s \le 1.2 \end{aligned}$$

In figures 6.4 and 6.5, the sidewall wrinkling limit $c_{sw}$ is relaxed gradually from 0.21 to 0.23. The weights of fracture and sidewall wrinkling are chosen to be equal in the deterministic optimization. The coefficient of variation of $s$ is 1%, and that of the other three variables is 5%. It is observed that the PI control strategy and the deterministic optimization always give the same result, namely the intersection point of the sidewall wrinkling criterion line with the sidewall wrinkling magnitude curve. As explained earlier, the PI control strategy tends to minimize the fracture by letting the sidewall wrinkling magnitude approach its limit; therefore, as the wrinkling criterion is relaxed a bit, the blank holder force must be reduced to allow more wrinkling. For the deterministic design with equal weights on the two defects, it is easy to see from the figure that the sum of the two curves has its lowest point at $s = 0.8$ and increases monotonically up to $s = 1.2$; the intersection point not only satisfies both the fracture and wrinkling constraints, but also has the smallest sum of their magnitudes. However, considering the process parameter variations, neither of these two methods provides a reliable solution.
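The contrast between the three formulations can be sketched with a one-dimensional search over s: the deterministic/PI-control solution sits where the wrinkling curve meets its limit, while the probabilistic one backs away from that boundary until the 3-sigma constraints of (6.9) are cleared. The response curves and variation levels below are illustrative placeholders, not the fitted models of this chapter.

import numpy as np

def y_sw(s): return 0.40 - 0.18 * s     # placeholder sidewall-wrinkling meta-model
def y_fr(s): return 0.10 + 0.08 * s     # placeholder maximum-thinning meta-model

c_sw, c_fr = 0.23, 0.25
s_grid = np.linspace(0.80, 1.20, 161)

# Deterministic / PI-control style solution: the smallest s that satisfies y_sw <= c_sw
# (the intersection point discussed in the text).
s_det = s_grid[np.argmax(y_sw(s_grid) <= c_sw)]

rng = np.random.default_rng(4)
def three_sigma_ok(s, N=20_000):
    # Lumped scatter of t, fp, fd and of s itself, then the mu + 3*sigma checks of eq. (6.9).
    noise = rng.normal(0.0, 0.01, N)
    sw = y_sw(s * rng.normal(1.0, 0.01, N)) + noise
    fr = y_fr(s * rng.normal(1.0, 0.01, N)) + noise
    return (sw.mean() + 3 * sw.std() <= c_sw) and (fr.mean() + 3 * fr.std() <= c_fr)

feasible = [s for s in s_grid if three_sigma_ok(s)]
s_prob = min(feasible, key=y_fr) if feasible else None
print("deterministic s:", round(float(s_det), 3),
      " probabilistic s:", None if s_prob is None else round(float(s_prob), 3))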


Figure 6.4. The design solutions of PI control, deterministic design, and probabilistic design for Csw = 0.21 (wrinkling/thinning magnitude versus the scale of BHF, s).


Figure 6.5. The design solutions of PI control, deterministic design, and probabilistic design for Csw = 0.23 (wrinkling/thinning magnitude versus the scale of BHF, s).


                    Csw = 0.21                             Csw = 0.23
                    PI control   Deter. Opt   Prob. Opt    PI control   Deter. Opt   Prob. Opt
s                   0.925        0.925        0.9625       0.845        0.845        0.9125
Pr{no wrinkling}    45.78%       45.78%       91.04%       48.04%       48.04%       99.98%
Pr{no fracture}     100%         100%         99.86%       100%         100%         100%

(The coefficient of variation of s is 1%; that of the other three variables is 5%.)

Table 6.3. The deterministic and probabilistic design results with constraint reliability.


Figure 6.6. Histograms of the sidewall wrinkling magnitude (limit Csw = 0.23) and the maximum thinning magnitude (limit Cfr = 0.25) for the deterministic design.


Figure 6.7. Histograms of the sidewall wrinkling magnitude (limit Csw = 0.23) and the maximum thinning magnitude (limit Cfr = 0.25) for the probabilistic design.


On the contrary, the probabilistic design always pulls the optimum point away from the deterministic constraint boundary. In this way, the probabilistic constraints can be fulfilled, with a larger fracture magnitude as the trade-off compared to the deterministic design. In figure 6.4, where the sidewall wrinkling critical value is 0.21, it is also noted that the probabilistic optimization gives no solution, since the wrinkling reliability constraint cannot be satisfied no matter what value $s$ takes within its feasible region; the best achievable probability of no wrinkling is 91.04%, at $s = 0.9625$. When the sidewall wrinkling critical value is relaxed to 0.23, the probabilistic optimization has a solution with a probability of no wrinkling of 99.98% and a probability of no fracture of 100%. The probabilistic design results are shown in table 6.3. The probabilities shown here are calculated by Monte Carlo Simulation with a sample size of 5000, and the histograms of the sidewall wrinkling and fracture magnitudes are compared side by side for the deterministic and probabilistic optimization solutions in figures 6.6 and 6.7. It is easy to see that the distribution of thinning is far from the upper limit in the deterministic design, so it is very safe in terms of avoiding fracture; on the other hand, half of the distribution of sidewall wrinkling lies outside the limit, meaning that half of the parts will be scrapped due to the wrinkling problem. The probabilistic design, in contrast, pushes the distribution of thinning a bit closer to its limit but is rewarded by pulling most of the wrinkling distribution within its limit. Therefore, the traditional deterministic design is not robust, because it forces the design solution onto the constraint boundary.


Deterministic simulations were also conducted to compare the probabilistic design with the deterministic design. The results are shown in figure 6.8. Figure 6.8(a)(b) shows the thinning and wrinkling at the probabilistic design point. When the process is stable, a spread of plus/minus 3 sigma can be assumed, and the riskiest cases occur at these extremes. Figure 6.8(e)(f) shows the thinning and wrinkling at one of these riskiest cases, and the deterministic design simulation result is shown in figure 6.8(c)(d). Even for the worst-case scenario of the probabilistic design, the wrinkling and thinning are still within the criteria limits.

6.7 Conclusion

In this chapter, a probabilistic design of the variable BHF in the cylindrical cup drawing process is conducted and compared with the traditional deterministic design. The results show that the BHF predicted by the deterministic method is not robust in the presence of process variations. However, given the same failure criteria, the probabilistic design improved the yield (probability of good parts) to 99.98% from the 48.04% obtained by the traditional deterministic design. In a mass production environment, the achieved improvement in process reliability is substantial.


Figure 6.8. Evaluation of probabilistic design vs. deterministic design:
(a) thinning for the probabilistic design;
(b) sidewall wrinkling for the probabilistic design (cross-section cut of the drawn cup at 25% drawing height; SW = 0.212);
(c) thinning for the deterministic design;
(d) sidewall wrinkling for the deterministic design (cross-section cut at 25% drawing height; SW = 0.229);
(e) thinning for the probabilistic design at one of the extreme cases, with all noise variables 3 sigma away from their means;
(f) sidewall wrinkling for the probabilistic design at the 3-sigma extreme case (cross-section cut at 25% drawing height; SW = 0.226).

CHAPTER 7

7. SPATIALLY VARYING CONSTRAINTS AND PROBABILISTIC DESIGN BY MULTI-OBJECTIVE GENETIC ALGORITHM

In chapter 6, the drawing quality was improved by the introduction of a temporally varying blank holder force. In addition, much research on the application of spatially varying constraints has been done to improve the drawing quality. Traditionally, these spatially varying constraints are achieved by placing multiple beads at various locations on the drawing die, which is especially useful in the drawing of complex shaped parts. Besides drawbeads, other forms of spatially varying constraints are also being studied: the segmented elastic binder and the very new discrete friction concept, which is the focus of this chapter. The idea of the segmented binder is that the blank holder can be segmented and different pressing forces applied at each sub-binder, so that distributed restraining forces are generated on the sheet to regulate the metal flow spatially. Discrete friction is a very new research field; its basic assumption is that by introducing micro-textures or tiny pockets on the die surface, the friction condition can be altered at different locations. From the earlier cylindrical cup drawing test and study, we know that the part quality is more sensitive to the lubrication than to the blank holder force. In this chapter, therefore, we select the discrete friction method as one promising form of spatially varying constraints and apply it to the Hishida part to study its effectiveness in improving deep drawing quality. Due to the part geometry, fifteen design variables need to be determined to investigate the advantage of the discrete friction concept.

Meanwhile, the design objective is to reduce both the wrinkling and the fracture defects simultaneously. The traditional combined DOE and RSM method cannot resolve this problem, because a CCD design for 15 factors needs 32799 simulation runs, which is obviously prohibitive. Instead, the optimization is realized through the integration of a multi-objective genetic algorithm (NSGA-II) with the numerical simulation code to find the optimal configuration of these local friction coefficients and the other drawing process parameters. The drawing qualities obtained with uniform friction and with discrete friction are compared. The results show that at the same draw depth and wrinkling level, a reduction of around 12% in the maximum thinning is obtained with discrete friction. If the objective is to reduce both wrinkling and fracture, an overall 33% improvement can be obtained at a certain Pareto optimal setting with discrete friction compared to uniform friction. To incorporate process uncertainties and variations, a probabilistic design search is implemented based on the deterministic design results from the multi-objective genetic algorithm: the reliability analysis is carried out at each feasible point of the final Pareto front, and the design configuration with the highest reliability is chosen as the optimal probabilistic design. For a complex case like this, the traditional probabilistic design approach described in chapters 5 and 6 is not applicable; however, the two-phase design method illustrated in this chapter is able to effectively find the optimal deterministic design pool and then search out the best probabilistic design.

7.1 Introduction

Increasing the forming window of the drawing process is an eternal topic in the sheet metal forming industry. A typical stamping process is composed of three steps:

1. Binder holding blank: the binder moves down to clamp the sheet;

2. Forming part: the punch moves down to drag the sheet into the die cavity so that it takes the shape of the die and punch;

3. Releasing part: the punch and binder move up and the formed part is removed from the die set.

Treating sheet metal forming process as a system, the forming window of the deep drawing process is determined by intrinsic or constitutive material properties and extrinsic factors in the manufacturing process, such as internal friction, forming speed, tooling design, blank shape.


For a given sheet material and blank shape, tooling, and process parameters such as forming speed and temperature, the metal flow into the die is determined by the restraint imposed by the blank holder or by the drawbead penetration. Previous studies have shown that a spatial distribution of these restraining forces can largely improve the drawability of a given part.

Drawbeads are currently widely used in the automobile industry to form complicated auto-body panels. During drawing, the blank holder force is usually uniformly distributed along the binder, and its magnitude can be adjusted to roughly control the flow of the sheet material into the die. The drawbeads, which are installed at various spots on the binder, help control the material flow more accurately by exerting strong restraining forces. One setup of such drawbeads for Hishida part drawing is shown in figure 7.1.

Numerous studies have been conducted to model the bead, design the bead, and study the effect of the bead on part quality. Triantafyllidis et al. [49] considered the effects of the drawbead using one-dimensional elasto-plastic shell elements and compared numerical results with experimental results. Cao and Boyce [50] analyzed the restraining force with respect to the depth of the drawbead. Naceur et al. [29] presented an optimization procedure for the spatial distribution of the drawbead restraining forces in order to improve the formability in the deep drawing process.


Figure 7.1. Example of one setup of drawbeads on the Hishida part drawing [51].


Siegert [51] first studied the segment-elastic binder and applied this concept to the Hishida part drawing, as shown in figure 7.2(a). In this configuration, the binder has a pyramidal support structure, and on each pyramid a local blank holder force is introduced. The blank holder forces are the pin forces of a multipoint cushion system; the pins, which are adjustable by CNC, transfer the blank holder forces from the cushion plate to the lower binder. Six pins support the binder, three on each long side of the die. By spatially applying different forces along the binder, as illustrated in figure 7.2(b), the limit drawing depth without any defects increased from 55 mm to 63 mm. Ayed [12] applied the same concept to the optimization of the blank holder force distribution in the stamping of a car front door panel. Although his work focused on the RSM method to find the optimal force for the seven sub-binders, the result showed that the segmented binder could reduce the inclination angle from 5 degrees to 1 degree without wrinkling and fracture defects. The segmented binder alters the restraining force distribution by changing the normal force applied on the blank, thereby affecting the flow pattern of the sheet metal. We know that the restraining force is determined by both the normal force and the friction coefficient. This suggests that if we could control the friction condition between die/punch and sheet, the drawability of the sheet could also be improved without using the complex segmented binder structure. Moreover, different friction conditions can be applied not only to the binder but also to the die and punch, which further expands the forming window and gives us more control options. Therefore, in the following part we study the discrete friction concept and its application to the Hishida part to form a spatially varying restraining force distribution.


(a) the segment-elastic binder

(b) the optimal pin forces for the segment-elastic binder

Figure 7.2. The segment-elastic binder used in the Hishida part drawing [51].


In sheet metal forming, the friction condition is affected by many parameters, such as the material properties of the workpiece, tool, and lubricant, the sliding velocity, and the contact pressure, and their effects are global; e.g., changing the lubricant will change the friction everywhere on the part interface.

However, the frictional conditions also depend significantly on the surface topography of the tool [52], which can be locally tailored by applying micro-texturing techniques [53]. Excimer laser material processing is such a technique; it offers the opportunity to produce micro-textures with high precision and flexibility on almost any material – metal, ceramics, and polymers [54]. The rectangular texture pocket formed by excimer lasers on ceramics and its cross-section are shown in figure 7.3(a).

As illustrated in figure 7.3(b), the friction zones in the textured stamping tooling can be divided into textured and non-textured areas. In the textured area, if the pressure is high enough to support the asperities of the sheet blank, a hydrodynamic lubrication situation is formed [53]. In the non-textured area, the mixed lubrication regime that is typical of metal forming occurs [52].

The micro-texture-tailored surface improves the friction condition by (a) forming a hydrodynamic pressure film that changes the real contact area, and (b) supplying lubricant by acting as small reservoirs. According to the experiments in [53], a 14% reduction of the friction coefficient can be achieved just by varying the pocket length and depth. An additional potential improvement is possible by changing the width and arrangement of the textures as well as the portion of the textured area. Besides adding the pocket texture to the tooling surface, altering the surface roughness by micromachining is another effective measure to locally change the friction condition. As reported in [55], there is a maximum friction reduction of 54% for die material M300 when the die surface roughness Rz goes from 0.8 to 5.3 μm; see figure 7.4.


(a) rectangular texture pocket and its cross-section profile; (b) textured and non-textured friction zones on the stamping tooling.

Figure 7.3. Micro-texture to alter the friction condition [53].


Figure 7.4. Topological effect at the roughness level [55].


7.2 Setup of Spatially Varying Constraints in Hishida Part Drawing

Based on the above discussion, the friction condition on the stamping tooling interfaces can be tailored by a micro-texturing or micro-machining process. A discrete friction concept, which improves the deep drawing process by applying locally different friction conditions, is proposed and demonstrated in this section. It is assumed that the friction coefficients are design variables and that the desired values can be realized in different zones on the tool surfaces.

In the drawing of a non-symmetrical geometry, the metal deformation changes with the geometry, and thus the friction forces required to control the metal flow differ from place to place. Generally, the material in the corner areas is hard to draw in and prefers a lower restraining force, while the straight edges have a smaller compression-induced thickening tendency and thus need larger restraining forces. The proposed discrete friction concept divides the deep drawing tooling into different friction zones and improves the forming window by selecting a different friction condition in each zone. The Hishida part, which has four corners with different radii and four tapered walls with different slope angles on each side, is chosen; see figure 7.5 [39][47].


Figure 7.5. The Hishida Part.


Since the ideal friction condition depends on the drawing geometry, the segmentation into discrete zones follows the rule that each individual zone tends to contain one unique geometric feature and that the zones do not overlap. Based on this rule, the Hishida die/binder and punch are divided into 10 zones and 4 zones, denoted $\mu_1, \ldots, \mu_{10}$ and $\mu_{11}, \ldots, \mu_{14}$ respectively; see figure 7.6. Since the metal flow is also largely controlled by the blank holder force, the BHF is added as another design variable. For this new discrete friction application, the design vector can therefore be described as $\vec{x} = \{\mu_1, \mu_2, \ldots, \mu_{14}, \mathrm{BHF}\}$. The question that follows is how to find the optimal configuration of these friction conditions. The next section introduces the multi-objective genetic algorithm and the reasons why it should be used in this case.


Figure 7.6. Spatial distribution of discrete friction zones.


7.3 Multi-Objective Genetic Algorithm: NSGA-II

There are several reasons why a multi-objective genetic algorithm should be used to find the optimal configuration of the friction values in this discrete friction case.

First, traditional optimization techniques like sequential quadratic programming (SQP) can only locate a local minimum. When the design space is high dimensional and the response is not smooth or linear, it is likely that the global optimal solution will be missed. In this discrete friction scenario, the die is segmented into 10 friction zones; including the punch/sheet and die/binder/sheet friction coefficients and one blank holder force, there are a total of fifteen design variables. Since the discrete friction zones on the die are connected to each other, the metal flow is affected jointly by the friction values, which makes the system behavior highly nonlinear, and a traditional optimization method may fail to find the global minimum. MOGA, one of the exploratory optimization methods, is very good at finding the global optimum.

Second, a good sheet metal forming process is judged by multiple criteria such as wrinkling, fracture, and insufficient stretching. Traditional optimization methods usually convert the multiple objectives into one single objective through a set of predetermined weight coefficients. Most of the time these coefficients are determined from expert knowledge or by trial-and-error and may not be right. The MOGA, however, can handle multiple objective functions simultaneously in one optimization run without converting them into a single objective by a weighted linear combination, so no arbitrary weight coefficients are needed.

Third, the multiple criteria in this sheet metal forming problem conflict with each other: it is apparent that reducing the tendency to fracture would increase the probability of wrinkling, so there exists a strong trade-off between these objectives. The presence of multiple objectives then gives rise to a set of optimal solutions (known as Pareto-optimal solutions) instead of a single optimal solution, and simply setting weights for them is likely to exclude potentially superior solutions to the engineering problem. The MOGA, however, is capable of finding multiple Pareto-optimal solutions in one single simulation run by identifying the Pareto front.

Besides, unlike the numerical optimization techniques, the exploratory MOGA does not require the calculation of the local gradient information of the objective to find the search direction. In sheet metal forming design, getting this local gradient is very difficult or even impossible.


7.3.1 Pareto optimal and Pareto front

For a minimization problem, a vector of decision variables $x^*$ is Pareto optimal if there does not exist another feasible $x$ such that

$$f_i(x) \le f_i(x^*) \;\text{ for all } i = 1, \ldots, n \quad\text{and}\quad f_j(x) < f_j(x^*) \;\text{ for at least one } j,\; 1 \le j \le n, \qquad (7.1)$$

where $n$ is the number of objectives.

In words, this definition says that $x^*$ is Pareto optimal if there exists no feasible vector of decision variables $x$ which would decrease some criterion without causing a simultaneous increase in at least one other criterion. Unfortunately, this concept almost always gives not a single solution, but rather a set of solutions called the Pareto optimal set. The vectors $x^*$ corresponding to the solutions included in the Pareto optimal set are called non-dominated. The plot of the objective functions whose non-dominated vectors are in the Pareto optimal set is called the Pareto front, as shown in figure 7.7, with each design objective corresponding to one coordinate. The multi-objective genetic algorithm can find the Pareto front through the following basic procedure. Initially, the algorithm randomly selects multiple design points according to the chosen population size and evaluates them against each of the objectives.

Figure 7.7. The Pareto optimal and Pareto front.


Then it finds the Pareto front of the current population by the so-called domination criterion, which mirrors the definition of Pareto solutions: a solution "a" is said to dominate another solution "b" in the population if it is at least as good as b in every dimension (objective) and better than b in at least one dimension. The first Pareto front has rank one and is denoted $F_1$. The next level of Pareto front, $F_2$, is found after the first set of Pareto optimal solutions is discounted from the population. This process continues until all the design points are ranked into $F_1, F_2, \ldots, F_k$. After the ranking, the good genes are selected, mutated, and crossed over to generate the next, better population, just like the natural selection process by which superior creatures evolve while inferior ones fade out of the population as generations go by.

7.3.2 Non-dominated Sorting Genetic Algorithm-II

Within the last two decades, a number of different EAs have been suggested to solve multi-objective optimization problems. Of them, Fonseca and Fleming's MOGA [56], Srinivas and Deb's NSGA [57], Horn et al.'s NPGA [58], Zitzler and Thiele's SPEA [59], and Knowles and Corne's PAES [60] have enjoyed the most attention. NSGA-II, the improved version of NSGA, outperforms PAES and SPEA in terms of finding a diverse set of solutions and converging near the true Pareto optimal set [61]. NSGA-II is also more computationally efficient, using elitism and a crowded comparison operator that maintains diversity without requiring any additional parameters.


The NSGA-II uses a fixed population size of N. In generation t, an offspring population Qt of size N is created from the parent population Pt, and the non-dominated fronts F1, F2, …, Fk are identified in the combined population Pt ∪ Qt. The next population Pt+1 is then filled starting from the solutions in F1, then F2, and so on, as follows. Let l be the index of the non-dominated front Fl such that |F1 ∪ F2 ∪ … ∪ Fl| ≤ N and |F1 ∪ F2 ∪ … ∪ Fl ∪ Fl+1| > N. First, all solutions in fronts F1, F2, …, Fl are copied into Pt+1; then the least crowded (N − |Pt+1|) solutions in Fl+1 are added to Pt+1. This approach ensures that all non-dominated solutions (F1) are included in the next population whenever |F1| ≤ N; otherwise, selection based on crowding distance promotes diversity. The procedure is illustrated in figure 7.8.

The non-dominated sorting works as follows. First, two entities are calculated for each solution p: 1) the domination count np, the number of solutions that dominate p, and 2) Sp, the set of solutions that p dominates. All solutions in the first front F1 have a domination count of zero. Then, for each solution p with np = 0, we visit each member q of its set Sp and reduce its domination count by one. If, in doing so, the domination count of any member q becomes zero, it is put into a separate list Q. These members belong to the second non-dominated front F2. The same procedure is then continued with each member of Q to identify the third front F3, and the process repeats until all fronts are identified.
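This fast non-dominated sorting step of Deb et al. [61] can be sketched as follows (illustrative Python, not the code used in this work; the dominance test is repeated here for self-containment):

    from typing import List, Sequence

    def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def fast_non_dominated_sort(objs: List[Sequence[float]]) -> List[List[int]]:
        """Return the fronts F1, F2, ... as lists of solution indices."""
        n_dom = [0] * len(objs)                 # n_p: how many solutions dominate p
        dominated = [[] for _ in objs]          # S_p: solutions that p dominates
        fronts: List[List[int]] = [[]]
        for p, fp in enumerate(objs):
            for q, fq in enumerate(objs):
                if dominates(fp, fq):
                    dominated[p].append(q)
                elif dominates(fq, fp):
                    n_dom[p] += 1
            if n_dom[p] == 0:                   # nobody dominates p -> first front
                fronts[0].append(p)
        i = 0
        while fronts[i]:
            nxt = []
            for p in fronts[i]:
                for q in dominated[p]:
                    n_dom[q] -= 1
                    if n_dom[q] == 0:           # q only dominated by earlier fronts
                        nxt.append(q)
            i += 1
            fronts.append(nxt)
        return fronts[:-1]                      # drop the trailing empty front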

Figure 7.8. The procedure of NSGA-II [61].


The crowding distance is used to distribute the solutions more uniformly over the true Pareto front and thereby maintain a diverse population. In the crowded tournament selection, two solutions "a" and "b" are picked at random; if they lie in the same non-dominated front, the solution with the larger crowding distance wins, otherwise the solution with the lower (better) rank is selected. Without such a preventive measure, the population of a multi-objective genetic algorithm tends to collapse into relatively few clusters. The crowding distance assignment used in NSGA-II is:

Step 1. Rank the population and identify the non-dominated fronts F1, F2, …, Fk. For each front j = 1, …, k repeat steps 2 and 3.

Step 2. For each objective function n, sort the solutions in Fj in ascending order. Let s = |Fj| and let x[i,n] denote the i-th solution in the sorted list with respect to objective n. Assign cd_n(x[1,n]) = ∞ and cd_n(x[s,n]) = ∞, and for i = 2, …, s−1 assign

cd_n(x[i,n]) = ( z_n(x[i+1,n]) − z_n(x[i−1,n]) ) / ( z_n^max − z_n^min ).

Step 3. To find the total crowding distance cd(x) of a solution x, sum its crowding distances with respect to each objective, i.e., cd(x) = Σ_n cd_n(x).
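The crowding-distance assignment of Steps 2 and 3 can be sketched for a single front as follows (illustrative only):

    from typing import List, Sequence

    def crowding_distance(objs: List[Sequence[float]]) -> List[float]:
        """Total crowding distance of each member of one front; objs[i] is the
        objective vector of the i-th member of that front."""
        s, n_obj = len(objs), len(objs[0])
        cd = [0.0] * s
        for n in range(n_obj):
            order = sorted(range(s), key=lambda i: objs[i][n])   # ascending in objective n
            z_min, z_max = objs[order[0]][n], objs[order[-1]][n]
            cd[order[0]] = cd[order[-1]] = float("inf")          # boundary solutions
            if z_max == z_min:
                continue                                         # degenerate objective
            for k in range(1, s - 1):                            # interior solutions
                i = order[k]
                cd[i] += (objs[order[k + 1]][n] - objs[order[k - 1]][n]) / (z_max - z_min)
        return cd

    # e.g. crowding_distance([(0.07, 0.30), (0.09, 0.17), (0.12, 0.08)])
    # returns [inf, 2.0, inf]: only the interior point carries a finite distance.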


7.4 Design Optimization Model

The simulation model for the Hishida part forming process has been built in the explicit code PAM-STAMP 2G, see figure 7.9. The Krupkowsky hardening law, σ_y = K(ε0 + εp)^n, where εp is the effective plastic strain, is used to model the material behavior. The blank material properties of the simulation model are listed in table 7.1.
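As a quick numerical check of this hardening law with the ECODAL parameters of table 7.1 (K = 470 MPa, ε0 = 0.0013, n = 0.25), a short sketch:

    def krupkowsky_flow_stress(eps_p: float, K: float = 470.0,
                               eps0: float = 0.0013, n: float = 0.25) -> float:
        """sigma_y = K * (eps0 + eps_p)**n, flow stress in MPa."""
        return K * (eps0 + eps_p) ** n

    for eps_p in (0.0, 0.05, 0.10, 0.20):
        print(f"eps_p = {eps_p:.2f}  ->  sigma_y = {krupkowsky_flow_stress(eps_p):6.1f} MPa")
    # eps_p = 0.00 gives ~89 MPa (initial yield); eps_p = 0.20 gives ~315 MPa.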

7.4.1 Design Objectives

The objective functions needed in NSGA-II to evaluate the quality of the drawn Hishida part are 1) fracture and 2) wrinkling. These functions are built from the strains on the Forming Limit Diagram (FLD), whose values can be obtained from the results file of the Pam-Stamp 2G Quickstamp Full finite element simulation.

An extensive literature review on fracture can be found in [62][63]. Generally, fracture criteria can be classified into three types: geometry-based criteria, e.g. the FLD [64][65] and thinning at the part wall [66]; stress-based criteria, e.g. the FLSD [67]; and damage-based criteria, e.g. the Cockcroft and Latham criterion [65]. In this study, thinning in the part, a geometrical criterion that has traditionally been used to estimate the proximity to failure [66], is chosen as the criterion for determining fracture. This is an approximate method, because the thinning limit is affected by the strain path.


Nevertheless, the thinning criterion is still useful and effective in most deep drawing operations.

For wrinkling, Havranek [68], in his conical cup forming experiments, found that the strain distribution at the onset of sidewall wrinkling formed a narrow band, which can be plotted as a wrinkle-limit curve (WLC) in the negative region of the Forming Limit Diagram (FLD). According to his study over a range of sheet thicknesses, 0.25-0.99 mm, this wrinkling-limit curve is independent of the sheet thickness. Hosford and Caddell [69] have also shown that if the absolute value of the principal strain ratio β = ε_min / ε_max (where ε_max and ε_min are the major and minor principal strains, respectively) exceeds a critical value, wrinkling is expected to occur, and the larger the absolute value of β, the greater the possibility of wrinkling. Chen [70] successfully used this critical strain-ratio value to predict sidewall wrinkling in his studies of tapered and stepped rectangular cups. Sheng [39] used this wrinkling limit to predict the variable blank holder force for a Hishida part. In this study we therefore adopt his result and use |β| = 1 as the critical wrinkling ratio, which corresponds to the line ε_min = −ε_max in the FLD. The wrinkling objective function is then defined as the proportion of strain points lying within the wrinkling zone of the forming limit diagram; points below this curve indicate a strong wrinkling tendency.
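A minimal sketch of how the two objective values could be computed from arrays of element thicknesses and principal strains extracted from the FE results (the array names and the post-processing route are assumptions, not the actual Pam-Stamp interface):

    import numpy as np

    def thinning_objective(t_final: np.ndarray, t0: float = 1.0) -> float:
        """Maximum thinning over all elements, (t0 - t) / t0."""
        return float(np.max((t0 - t_final) / t0))

    def wrinkling_objective(eps_major: np.ndarray, eps_minor: np.ndarray) -> float:
        """Fraction of strain points inside the wrinkling zone (eps_minor <= -eps_major)."""
        in_zone = eps_minor <= -eps_major
        return float(np.count_nonzero(in_zone) / eps_major.size)

    # usage: obj_f = thinning_objective(thickness); obj_w = wrinkling_objective(e1, e2)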


(a) die with 10 different friction zones

(b) binder with 10 different friction zones

(c) punch with 4 different friction zones

Figure 7.9. The simulation model of discrete friction drawing of Hishida part.


Sheet material: ECODAL
Initial sheet thickness: 1 mm
Flow stress (MPa): σ_y = 470(0.0013 + εp)^0.25

Table 7.1. Material properties used in the simulation.


7.4.2 Design Variables

The design variables in this discrete friction case are: 1) ten discrete friction coefficients μ1, …, μ10 for the die/binder, 2) four discrete friction coefficients μ11, …, μ14 for the punch, and 3) one blank holder force. There are therefore fifteen design variables.

The multi-objective optimization for the discrete friction problem is formulated as:

Minimize:   F(x) = {Obj_f(x), Obj_w(x)}
Subject to: 400 kN ≤ BHF ≤ 700 kN,
            0.02 ≤ μi ≤ 0.32, i = 1, …, 14                                      (7.2)
where:      x = {BHF, μ1, …, μ14}.
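For illustration, the design vector and box constraints of formulation (7.2) can be encoded as follows (the ordering of BHF followed by the fourteen friction coefficients is my own convention, not prescribed by the text):

    import numpy as np

    N_FRICTION = 14
    lower = np.array([400.0] + [0.02] * N_FRICTION)   # BHF in kN, frictions dimensionless
    upper = np.array([700.0] + [0.32] * N_FRICTION)

    def random_design(rng: np.random.Generator) -> np.ndarray:
        """One random design x = {BHF, mu_1, ..., mu_14} inside the box constraints."""
        return lower + rng.random(lower.size) * (upper - lower)

    x0 = random_design(np.random.default_rng(0))
    bhf, mu = x0[0], x0[1:]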

7.4.3 Integration of NSGA-II Optimization and Pam-Quickstamp Full FEA

In NSGA-II, every design point in each generation is evaluated against the two objectives, wrinkling and fracture. This evaluation is based on the output of the finite element simulation of the sheet metal drawing process. Suppose the population size is thirty and the algorithm evolves for fifty generations; NSGA-II then requires approximately 30 × 51 = 1530 FEA simulations. If the full Pam-Autostamp model is used and each simulation takes about 10 minutes, the whole optimization process requires at least ten days.

To reduce the computational cost and find a solution quickly, the Pam-Quickstamp Full Process code is used instead because of its speed. Pam-Quickstamp is a simplified approach in which some components of the press tool are deduced from the initial part geometry and the real tool kinematics is not completely defined. Since this is a multi-objective trade-off optimization problem, Pam-Quickstamp, although of somewhat lower fidelity than Pam-Autostamp, can still yield a good global result. The optimization flow chart is plotted in figure 7.10. In this NSGA-II run, the population size is set to fifty and the number of generations to eighty; the crossover probability is 0.9, the crossover distribution index is 20, and the mutation distribution index is 100.
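The computational budget behind this choice can be reproduced with a few lines (one FE run per design point, and (generations + 1) populations in total):

    def n_fea_runs(pop_size: int, generations: int) -> int:
        return pop_size * (generations + 1)

    print(n_fea_runs(30, 50))                    # 1530 runs, the hypothetical budget above
    print(n_fea_runs(30, 50) * 10 / 60 / 24)     # ~10.6 days at 10 min per Pam-Autostamp run
    print(n_fea_runs(50, 80))                    # 4050 runs actually used with Pam-Quickstamp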

7.4.4 Hishida Part Drawing Process Heuristics

From the previous study, especially the segmented binder research for the Hishida part, we found that the spatial distribution of the restraining forces is not random. In fact, it follows certain rules, or heuristics, that are observed by process engineers. One example of such a heuristic: in a corner area, in order to pull the sheet more easily into the die, the friction between the punch and the blank should be larger than the friction between the die and the blank. These heuristics can be transformed into the following explicit constraints.


μ1 < μ14,   μ4 < μ11,   μ6 < μ12,   μ9 < μ13                                    (7.3)

Therefore, after the multi-objective genetic algorithm has searched over the possible combinations of these process parameters, we can use these heuristic rules to screen out the designs that make sense to engineers. In addition, new rules can be added and existing heuristics can be modified. In this way, sheet metal forming knowledge is brought back into the design rather than relying on the genetic algorithm alone to find the mathematical optimum.
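A small sketch of this screening step; the corner pairings follow my reading of equation (7.3) and would need to be checked against the actual friction-zone numbering:

    CORNER_PAIRS = [(1, 14), (4, 11), (6, 12), (9, 13)]   # (die/binder zone, punch zone)

    def satisfies_heuristics(mu: dict) -> bool:
        """mu maps the zone number (1..14) to its friction coefficient."""
        return all(mu[die] < mu[punch] for die, punch in CORNER_PAIRS)

    # pareto_ok = [d for d in pareto_designs if satisfies_heuristics(d["mu"])]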

7.5 Optimization Result and Analysis

After 4050 simulations, the Pareto front and all the other offspring objective evaluations are shown in figure 7.11. The blue solid dots represent the Pareto front and the light blue dots are all the points evaluated by the genetic algorithm. The brown solid dots are the design points that comply with the heuristic rule mentioned above. In the following analysis, however, this heuristic rule is ignored: we simply look for any opportunity to improve the part quality, even if the friction settings seem unusual. The lowest levels of thinning and wrinkling approach 0.07 and 0.01, respectively. Since the two objectives conflict with each other, it is impossible to achieve low values for both thinning and wrinkling simultaneously.


(Flow chart: Start → FEA modeling and initial population → Pam-Stamp FEA → objective evaluation → NSGA-II selection, crossover and mutation → model update, looping until the termination criterion is met → drawing process heuristics → End.)

Figure 7.10. The optimization flow chart.


(Plot of Thinning versus Wrinkling. Legend: evaluation points in NSGA, Pareto front, best uniform design, design comparable to the uniform design, best discrete friction design, and heuristics-compliant points.)

Figure 7.11. The Pareto front of discrete friction NSGA-II optimization.


To assess the improvement offered by discrete friction in drawing a complex-shaped part, the optimization of uniform friction drawing was conducted according to the method in the previous work [47] and the results are compared. When the weights of the wrinkling and thinning objectives are equal, the optimum configuration for uniform friction is a blank holder force of 605 kN, a die/binder friction of 0.09 and a punch friction of 0.12; the resulting maximum thinning and wrinkling are 0.098112 and 0.173657. For the discrete friction case, however, the Pareto front of figure 7.11 contains one comparable point with a wrinkling of 0.172064 but a maximum thinning of only 0.086309. In other words, with wrinkling controlled at the same level, applying discrete friction reduces the maximum thinning by about 12%. Furthermore, if the thinning is allowed to increase slightly (the upper limit is 0.12), there is great potential for reducing wrinkling: at the best design point marked in figure 7.11, the thinning is 0.10818 and the wrinkling drops to 0.075453. The total objective (assuming equal weights) therefore changes from 0.271769 for uniform friction to 0.183633 for discrete friction, an improvement of around 33%, which is substantial. The friction coefficient values and strain distributions of the uniform friction and best discrete friction designs are plotted in figures 7.12 and 7.13 for comparison. Another benefit of applying discrete friction conditions is the more uniform strain distribution within the part. As figure 7.14 illustrates, at corner walls A and B and at the bottom, the difference between the maximum and minimum strains drops from 0.178 to 0.156, from 0.176 to 0.136, and from 0.0214 to 0.013, respectively; the deformation on the part wall becomes more uniform.


(a) uniform friction design

(b) the strain distribution

Figure 7.12. Uniform friction design and strain distribution after drawing.


(a) discrete friction design

(b) the strain distribution Figure 7.13. Discrete friction design and strain distribution after drawing.


(a) Strain distribution at corner wall A
(b) Strain distribution at corner wall B
(c) Strain distribution at the bottom
(Each panel plots the first principal strain at the outer surface against the curvilinear distance (mm) for the uniform friction and discrete friction designs.)

Figure 7.14. Strain distribution at different locations of the part after drawing.


7.6 The Probabilistic Design Search

To incorporate process uncertainties and variations, a probabilistic design search is implemented on top of the deterministic design result from the multi-objective genetic algorithm. A reliability analysis is carried out at each feasible point on the Pareto front. The wrinkling criterion for the Hishida part is 0.20 and the thinning criterion is 0.12. Of the 50 points on the Pareto front, 19 have both wrinkling and thinning below these limits; they are denoted as feasible points.

For each of those 19 feasible points, a reliability analysis is conducted using the aforementioned mean value first-order (MVFO) method. The results are shown in figures 7.16, 7.17 and 7.18, where the coefficient of variation (COV) of the blank holder force is kept at 0.05 and the COV of the die/punch friction is varied from 0.1 to 0.2 and 0.4. The purpose of the three levels of variation is to identify which point remains the most robust design as the friction uncertainty grows.
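For reference, a generic sketch of the MVFO estimate (first-order Taylor expansion of a quality metric about the mean input, with normality assumed for the output); the finite-difference step and the normality assumption are simplifications of my own, and in this work each evaluation of g would be an FE run or a surrogate of it:

    import math
    from typing import Callable, Sequence

    def mvfo_reliability(g: Callable[[Sequence[float]], float],
                         mean: Sequence[float], std: Sequence[float],
                         limit: float, h: float = 1e-3) -> float:
        """P(g(x) <= limit) from a first-order (mean value) approximation of g."""
        mu_g = g(list(mean))
        var_g = 0.0
        for i, (m, s) in enumerate(zip(mean, std)):
            x = list(mean)
            x[i] = m + h * max(abs(m), 1.0)          # small forward perturbation
            dg = (g(x) - mu_g) / (x[i] - m)          # finite-difference sensitivity
            var_g += (dg * s) ** 2                   # first-order variance propagation
        if var_g == 0.0:
            return 1.0 if mu_g <= limit else 0.0
        z = (limit - mu_g) / math.sqrt(var_g)        # reliability index
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF

    # Example use at one Pareto design: g = thinning as a function of
    # (BHF, mu_1, ..., mu_14), std = COV * mean, limit = 0.12.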

All the results are summarized in figure 7.15. The solid diamonds represent the non-feasible deterministic points on the Pareto front and the solid squares the feasible deterministic points. The hollow triangles correspond to the most reliable (stochastic) points when the BHF COV = 0.05 and all friction COVs = 0.1; referring to figure 7.16, those 11 points all have zero percent wrinkling and thinning defects. When the friction uncertainty is increased to COV = 0.2, the most reliable points are marked by the hollow squares; in this case only 3 points have zero percent wrinkling and thinning defects. If the friction COV is further increased to 0.4, no point achieves zero percent defects; the best point has a reliability of 0.89 for wrinkling and 0.92 for thinning.

The interesting observations from figure 7.15 lie in four aspects: (1) all the reliable points lie away from the deterministic constraints; (2) the higher the uncertainty, the farther the robust point must lie from the deterministic constraints; (3) the most robust design sits in the middle of the robust points obtained under lower uncertainty; and (4) the deterministic optimum is very different from the probabilistic optimum, which again underscores the value of probabilistic design.


(Plot of thinning versus wrinkling for the Pareto front points. Legend: non-feasible points, feasible points, good points at friction COV = 0.1, good points at COV = 0.2, best point at COV = 0.4; the best deterministic and best probabilistic designs are marked.)

Figure 7.15. Reliability analysis of the Pareto front points.


Figure 7.16. Reliability, Prob(no wrinkling) and Prob(no thinning), of the 19 feasible points in the Pareto front when BHF COV = 0.05 and friction COV = 0.1.


Figure 7.17. Reliability, Prob(no wrinkling) and Prob(no thinning), of the 19 feasible points in the Pareto front when BHF COV = 0.05 and friction COV = 0.2.


Figure 7.18. Reliability, Prob(no wrinkling) and Prob(no thinning), of the 19 feasible points in the Pareto front when BHF COV = 0.05 and friction COV = 0.4.


7.7 Conclusion

Spatially varying restraining constraints can successfully improve drawing quality. The traditional approach is to use drawbeads or a segmented binder to generate strong restraining forces on the sheet and regulate its flow. In this chapter we investigated the discrete friction concept and its application to the Hishida part drawing, which allows more flexible spatial constraints to be applied. Because of the Hishida part geometry, fifteen design variables must be determined, while the design objective is to reduce both wrinkling and fracture defects simultaneously. The traditional combined DOE and RSM method cannot resolve this problem. Instead, the optimization is realized by integrating the multi-objective genetic algorithm (NSGA-II) with the numerical simulation code to find the optimal configuration of the local friction coefficients and the other drawing process parameters. The results show that, at the same draw depth and wrinkling level, discrete friction reduces the maximum thinning by around 12%. If the objective is to reduce both wrinkling and fracture, an overall improvement of about 33% can be obtained with discrete friction compared to uniform friction. To incorporate process uncertainties and variations, a probabilistic design search is implemented on top of the deterministic design result from the multi-objective genetic algorithm: a reliability analysis is carried out at each feasible point on the Pareto front, and the design configuration with the highest reliability is chosen as the optimal probabilistic design.

For a complex case like this, the traditional probabilistic design approach presented in chapters 5 and 6 is not applicable. The two-phase design method illustrated in this chapter, however, is able to effectively find the pool of optimal deterministic designs and then search out the best probabilistic design.


CHAPTER 8

8. CONCLUSIONS AND FUTURE WORK

The exclusion of inherent process variations from current deterministic design methods for sheet metal forming can lead to very unreliable results, causing high scrap rates, frequent rework, machine shutdowns and thus a large loss of profit. Extensive research has been done on the deterministic effect of each factor on the part, but the impact of their variations on the fluctuation of the output quality of the stamping process is seldom addressed and quantified. Including uncertainty in the design and optimization cycle leads to a better understanding of how the uncertainty associated with the system input affects the system output, and this understanding can then be applied to manage such uncertainties. To date, there are very few reports on incorporating uncertainties and variations in the design of stamping processes.

In this research we propose three different probabilistic design approaches, all of which use sheet metal forming finite element method (FEM) simulation as the fundamental tool. When the system meta-model is not complex, the design of experiments (DOE) technique and the response surface method (RSM) are integrated with FEM to build an explicit function connecting the process inputs to the process performance outputs. With the uncertainties of the input variables quantified, the probability that the product conforms to its specification can be assessed by Monte Carlo Simulation (MCS) or other reliability analysis methods, and with the right formulation of the probabilistic optimization the robust optimal design configuration can be found. When the number of process inputs is too large for design of experiments to handle and the system meta-model cannot be approximated, an alternative approach integrating FEM, a multi-objective genetic algorithm and reliability assessment is illustrated. Through the proposed probabilistic design approaches, a deeper understanding of the relationship between the uncertainties of the process variables and the part quality is achieved. Ultimately, the process output variability can be reduced, defect rates minimized and product quality improved. The strategies, with their advantages and disadvantages, are listed in table 8.1.


Strategy 1: DOE + FEM + RSM + MCS, with a quality index to lump the multiple objectives

Advantages:
- The quality index is a simple metric representing the design quality and is easily understood.
- Once the RSM is built, MCS can accept input variables with any form of distribution.
- Reliability is analyzed at uniformly distributed design points and no optimization routine is required, so there is no issue of local minima or maxima.

Disadvantages:
- Cannot handle problems where the number of input variables exceeds about 10; the DOE runs become too many.
- MCS is evaluated at every design point, so considerable computational effort is needed.
- Since no optimization routine is used, the search for the optimum is not efficient.
- The design formulation only concerns the defect rate; it does not ensure that the quality metrics are highly consistent within the specification.

Strategy 2: DOE + FEM + RSM + POE, with a weighted sum of μ² + σ² as the design formulation

Advantages:
- The weighted sum of μ² + σ² not only requires the quality metrics to be within the specification but also minimizes their variation, reflecting the Taguchi loss function idea.
- The propagation of error (POE) derives an explicit expression for the standard deviation of the quality metrics and incorporates it into the weighted sum of μ² + σ², so the difficult probabilistic design problem is transformed into a traditional optimization problem.
- No MCS is needed, which saves a great deal of computation.
- The method is very efficient because an optimization routine is used to search for the optimal design.

Disadvantages:
- Cannot handle problems with too many input variables; the DOE runs become too many.
- Must assume that the quality metrics (responses) follow normal distributions, which may be incorrect in some cases.
- The multiple objectives are lumped by the weighted sum of mean square and variance, which is less straightforward than the formulation of the first strategy.
- The mean square and the variance may be on different scales, which makes it difficult to assign scale factors and weights.
- The optimization routine may find local rather than global minima or maxima.

Strategy 3: FEM + multi-objective genetic algorithm + reliability analysis at the Pareto front

Advantages:
- Can handle very complex problems with a very large number of design variables.
- The genetic algorithm is an explorative method; the design space is searched more thoroughly than with the second strategy.
- The multi-objective genetic algorithm does not need an arbitrary setting of weights for the different objectives. Instead, it provides the Pareto front, from which the designer can select the optimal design based on the ranked importance of the objectives.
- No DOE and no RSM are needed.
- No gradient information is needed.

Disadvantages:
- The computational cost is still very high; each function evaluation is a finite element simulation, since no meta-model or surrogate model is used.
- The reliability analysis is conducted only after the deterministic design by the multi-objective genetic algorithm.
- The selection of the optimal design still requires experience, which is harder than in the first strategy, where a simple quality index is adopted.

Table 8.1. The summary of the probabilistic and deterministic design strategies.


The future work includes:

1. Gathering more quantitative information on the process control and noise parameters, especially their distribution statistics.

2. Investigating more efficient probabilistic design algorithms for cases where the process parameters do not follow normal distributions.

3. Integrating the reliability analysis directly into the multi-objective genetic algorithm loop.

4. Investigating other system approximation methods besides the response surface model, such as the suitability of the Kriging model for approximating sheet metal forming behavior.


LIST OF REFERENCES

[1]. Nee, A.Y.C., PC-based Computer Aids in Sheet-metal Working, J.Mech. Work. Technol. 19 (1989) 11-21

[2]. Majeske, K.D., and Hammett, P.C., Identifying Sources of Variation in Sheet Metal Stamping, International Journal of Flexible Manufacturing Systems, 2002

[3]. Kobayashi, S., Oh, S., and Altan, T., Metal Forming and the Finite-element Method, Oxford University Press, New York, 1989.

[4]. Zienkiewicz, O.C., The Finite Element Method, 3rd Edition, McGraw-Hill, New York, 1977.

[5]. Wifi, S.A., An Incremental Complete Solution of the Stretch-forming and Deep-drawing of a Circular Blank Using a Hemispherical Punch, Int. J. Mech. Sci., 1976, Vol.18, p. 23

[6]. Wang, N.M., and Budiansky, B., Analysis of Sheet Metal Stamping by a Finite-element Method, General Motors Research Publication GMR-2423, 1978

[7]. Onate, E,. and Zienkiewicz, O.C., A Viscous Shell Formulation for the Analysis of Thin Sheet Metal Forming, Int. J. Mech. Sci., 1983, Vol. 25, p.305

[8]. Toh, C.H., and Kobayashi, S., Finite-element Process Modeling of Sheet Metal Forming of General Shapes, Grundlagen der Umformtechnik I Symposium, Stuttgart, 1983, p.39

[9]. Toh, C.H., and Kobayashi, S., Deformation Analysis and Blank Design in Square Cup Drawing, Int. J. Machine Tools Des. Res, 1985, Vol. 25, No. 1, p.15

[10]. Pam-stamp 2G version, user manual, 2005

[11]. Isight 8.0 version, reference manual, 2004


[12]. Ayed, L.B., et., Optimization of the blankholder force distribution with application to the stamping of a car front door panel, Proceedings of Numisheet, 2005, pp. 849-854

[13]. Gelin, J.C., and Labergere, C., Numerical Design and Optimal Control for Sheet Metal Forming and Tube Processes, 7th Int. Conf. on Numerical Methods in Forming Processes, NUMIFORM 2001, Toyohashi, Japan, June 18-20, 2001

[14]. Kim, S.H., and Huh, H., Design of the Bead Forces and Die Shape in Sheet Metal Forming Processes using a Rigid-plastic Finite Element Method and Response Surface Methodology, Trans. KSTP (in Korean), Vol.9, No.3, pp.284-292, 2000

[15]. Kim, S.H., Huh, H. and Tezuka, A., Optimum Design of Draw-bead Force in Sheet Metal Stamping using Rigid-plastic FEM and Response Surface Methodology, Proceedings of KSTP Spring Conference (in Korean), pp.143-146, 1999

[16]. Tezuka, A., Kim, S.H., and Huh, H., Process Parameter Design in Sheet Stamping Processes with Rigid-plastic Finite Element Analysis, Trans of JSCES, Paper No.20000011, 2000

[17]. Lepadatu, D., Hambli, R., Kobi, A., and Barreau, A., Optimization of Springback in Bending Processes using FEM Simulation and Response Surface Method, Int. J. Adv. Manuf. Technol., 2005, 27:40-47

[18]. Wang, L., and Lee, T.C., Controlled Strain Path Forming Process with Space Variant Blank Holder Force using RSM Method, Journal of Materials Processing technology, 167, 2005, 447-455

[19]. Forsberg, J. and Nilsson, L., On Polynomial Response Surfaces and Kriging for Use in Structural Optimization of Crashworthiness, Struct. Multidisc. Optim., 2005, 29: 232-243

[20]. Huang, Y., Lo, Z.Y., and Du, R., Minimization of the Thickness Variation in Multi-step Sheet Metal Stamping, Journal of Materials Processing Technology, 177, 2006, 84-86

[21]. Yamazaki, K., Itoh, R., Han, J., Watanabe, M., and Nishiyama, S., Optimum Design of Aluminum Beverage Can Ends Using Structural Optimization Techniques, Proceedings of Numisheet, 2005, pp.719-724

[22]. Lin, J.C., and Tai, C.C., The Application of Neural Networks in the Prediction of Spring-back in an L-shaped Bend, Int. J. Adv. Manuf. Technol., 1999, 15: 163-170

[23]. Ji, M.X., and Shivpuri, R., Reduction of Random Seams in Hot Rolling Through FEA Based Sensitivity Analysis, Materials Science and Engineering: A, Vol. 425, Issue 1-2, pp. 156-166, June 15, 2006.

[24]. Hambli, R., Prediction of Burr Height Formation in Blanking Processes using Neural Network, Int. J. Mech. Sci. 44, 2002, 2089-2102

[25]. Hasofer, A.M., and Lind, N.C., Exact and Invariant Second-Moment Code Format, Journal of the Engineering Mechanics Division, ASCE, Vol. 100, No. EM1, 1974, pp. 111-121

[26]. Chen, X., Hasselman, T.K. and Neil, D.J., Reliability Based Structural Design Optimization for Practical Applications, 38th AIAA/ASME/ASCE/AHS/ASC, Structures, Structural Dynamics and Materials Conference, Kissimmee, FL, pp.2724-2732, Paper No. AIAA-97-1403.

[27]. Pegada, V., Chun, Y., Santhanam, S., “An Algorithm for Determining the Optimal Blank Shape for the Deep Drawing of Aluminum Cups,” Journal of Materials Processing Technology, 125-126 (2002) 743-750

[28]. Moshksar, M.M., and Zamanian, A., “Optimization of the Tool Geometry in the Deep Drawing of Aluminum,” Journal of Materials Processing Technology, 72 (1997) 363-370

[29]. Naceur, H., Guo, Y.Q., “Optimization of drawbead restraining forces and drawbead design in sheet metal forming process,” International Journal of Mechanical Sciences, 43 (2001) 2407-2434

[30]. Gantar, G., Kuzman, K., “Sensitivity and Stability Evaluation of the Deep Drawing Process,” Journal of Materials Processing Technology, 125-126 (2002) 302-308

[31]. Karthik, V., Comstock, R.J., Wagoner, R.H., “Variability of Sheet Formability and Formability Testing,” Journal of Materials Processing Technology, 121 (2002) 350-362

[32]. Cao, J., Kinsey, B.L., “Next Generation Stamping Dies –Controllability and Flexibility,” Robotics and Computer Integrated Manufacturing, 17 (2001) 49-56

[33]. Sahai, A., Cao, J., Xia, C.Z., “Sequential Optimization and Reliability Assessment Method for Metal Forming Processes,” NUMIFORM 2004

[34]. Haldar, A., Mahadevan, S., Reliability Assessment Using Stochastic Finite Element Analysis, John wiley & Sons, INC., 2000

[35]. Irfan, K., McMahonand, Xianyi, C. M., “Reliability-based Structural Optimization Using the Response Surface method and Monte Carlo Simulation,” 8th International Machine Design and Production Conference, Sep.9-11, 1998, Ankara Turkey

[36]. Siekirk, J.F., “Process Variable Effects on Sheet Metal Quality,” Journal of Applied Metalworking, American Society for Metals, (1986) pp. 262-269

[37]. Jaisingh, A., Narasimhan, K., “Sensitivity Analysis of a Deep Drawing Process for Miniaturized Products,” Journal of Materials Processing Technology, 147 (2004) 321-327

[38]. Lee, B.H., Kenum, Y.T., Wagoner, R.H., “Modeling of the Friction Caused by Lubrication and Surface Roughness in Sheet Metal Forming,” Journal of Materials Processing Technology, 130-131 (2002) 60-63

[39]. Sheng, Z.Q., Shivpuri, R., “Adaptive PI Control Strategy for Prediction of Variable Blank Holder Force,” ESAFORM 2004, Trondheim, Norway, April 28-30, 2004

[40]. Hirose, Y., Hishida, Y., Furubayashi, T., Oshima, M., and Ujihara, S., 1990, “Part II: Applications of BHF-Controlled Forming Techniques,” Proc. 4th Symposium of the Japanese Society for the Technology of Plasticity.

[41]. Hardt, D.E., and Fenn, R.C., 1993, “Real-Time Control of Sheet Stability During Forming,” ASME Journal of Engineering for Industry, 115(3), pp. 299-308.

[42]. Sim, H.B., and Boyce, M.C., 1992, “Finite Element Analysis of Real-time Stability Control in Sheet Forming Processes,” ASME Journal of Engineering Materials and Technology, 114(2), pp. 180-188.

[43]. Cao, J., and Boyce, M.C., 1994, “Design and Control of Forming Parameters using Finite Element Analysis,” Computational Material Modeling, PVP-Vol. 294, pp. 265-285.

[44]. Sheng, Z.Q., Jirathearanat, S., and Altan, T., 2004, “Adaptive FEM Simulation for Prediction of Variable Blank Holder Force in Conical Cup Drawing,” International Journal of Machine Tools and Manufacture, 44, pp. 487-494.

[45]. Buranathiti, T., Cao, J., Xia, Z.C., and Chen, W., 2005, “Probabilistic Design in A Sheet Metal Stamping Process under Failure Analysis,” NUMISHEET, L.M. Smith et al., eds., pp. 867-872.

[46]. Li, Y.Q., Cui, Z.S., Ruan, X.Y., and Zhang, D.J., 2005, “Application of Six Sigma Robust Optimization in Sheet Metal Forming,” NUMISHEET, L.M. Smith et al., eds., pp. 819-824.

[47]. Zhang, W.F., Sheng, Z.Q., and Shivpuri, R., 2005, “Probabilistic Design of Aluminum Sheet Drawing for Reduced Risk of Wrinkling and Fracture,” NUMISHEET, L.M. Smith et al., eds., pp. 247-252.

[48]. Montgomery, D.C., and Myers, R.H., 1995, Response Surface Methodology: Process and Product Optimization Using Designed Experiments, John Wiley & Sons.


[49]. Triantafyllidis, N., Maker, B., Samanta, S.K., An Analysis of Drawbeads In Sheet Metal Forming: Part I-Problem Formulation, J. Engr. Mater. Technol. 108, 321-327 (1986)

[50]. Cao, J. and Boyce, M.C., Drawbeads Penetration as A Control Element of Material Flow, Sheet Metal and Stamping Symposium, SAE 930517, Detroit, 1993, pp.145-153

[51]. Siegert, K., Ziegler, M., Wagner, S., Closed loop control of the friction force. Deep drawing process, Journal of Materials Processing Technology 71 (1997) pp.126-133

[52]. Schey, J.A., Friction in Sheet Metal Working, SAE 970712

[53]. Neudecker, T., Popp, U., Schraml, T., Engel, U., Geiger, M., Towards Optimized Lubrication by Micro Texturing of Tool Surfaces, Advanced Technology of Plasticity, Vol. I, Proceedings of the 6th ICTP, Sept. 19-24, 1999, Nuremberg, Germany

[54]. Tonshoff, H.K., Hesse, D., Mommsen, J., Micromachining Using Excimer Lasers, Annals of the CIRP 42 (1993) 1, p.247-251.

[55]. Wagner, S., Tribology in Drawing Car Body Parts, SAE, 1999-01-3228

[56]. Fonseca, C.M., and Fleming, P.J., Genetic Algorithms for Multiobjective Optimization: Formulation, Discussion and Generalization, In Proceedings of the 5th International Conference on Genetic Algorithms, pp. 416-423, 1993.

[57]. Srinivas, N., and Deb, K., Multiobjective Optimization Using Nondominated Sorting in Genetic Algorithms, Evolutionary Computation, Vol. 2, No. 3, pp. 221-248, Fall 1994

[58]. Horn, J., Nafpliotis, N., and Goldberg, D.E., A Niched Pareto Genetic Algorithm for Multiobjective Optimization, In Proceedings of the First IEEE Conference on Evolutionary Computation, Vol.1, pp.82-87, 1994

[59]. Zitzler, E., and Thiele, L., Multiobjective Evolutionary Algorithms: A Comparative Case Study and the Strength Pareto Approach, IEEE Transactions on Evolutionary Computation, Vol. 3, No. 4, pp. 257-271, 1999

[60]. Knowles, J., and Corne, D., The Pareto Archived Evolution Strategy: A New Baseline Algorithm For Pareto Multiobjective Optimization, In Proceedings of 1999 Congress on Evolutionary Computation, Vol.1, pp.105, 1999

[61]. Deb, K., Pratap, A., Agarwal, S., and Meyarivan, T., A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II, IEEE Transactions on Evolutionary Computation, Vol. 6, No. 2, 2002, pp. 182-197.

[62]. Sheng, Z.Q., Yang, J.B., Jirathearanat, S., Altan, T., Drawing of Conical Cups-prevention of Wrinkling and Fracture by Controlling Blank Holder Force, ERC Report/ERC/NSM-01-R-47-A, 2001, Engineering Research Center for Net Shape Manufacturing, the Ohio State University.

[63]. Keeler, S.P., and Backofen, W.A., 1964, Plastic Instability and Fracture in Sheet Stretched Over Rigid Punches, ASM Transactions Quarterly, Vol.56, pp.25-48.

[64]. Goodwin, G.M., 1968, Application of Strain Analysis to Sheet Metal Forming Problems in the Press Shop, SAE paper, No. 680093.

[65]. Cockcroft, M.G., and Latham, D.J., 1968, Ductility and the Workability of Metals, J. Inst. Metals, Vol. 96, pp. 33-39.

[66]. Shulkin, L.B., Mendelsohn, D.A., Kinzel, G.L., Altan, T., 1997, Blank Holder Pressure (BHP) Control with Flexible Blank Holder in Sheet Metal Forming, ERC/NSM-S-97-14, Engineering Research Center for Net Shape Manufacturing, the Ohio State University.

[67]. Arrieux, R., Determination and Use of the Forming Limit Stress Diagrams in Sheet Metal Forming, Journal of Materials Processing and Technology 53 (1995) 47-56.

[68]. Havranek, J., Wrinkling Limit of Tapered Pressings, J. Aust. Inst. Met. 20(2) (1975) 114-119.

[69]. Hosford, W.F., Caddell, R.M., Metal Forming: Mechanics and Metallurgy, 2nd edn, 1993.

[70]. Chen, F.K., Liao, Y.C., An Analysis of Draw-Wall Wrinkling in A Stamping Die Design, Int. J. Adv. Manuf. Technology (2002) 19:253-259.


APPENDIX A

A.1 Objective:

1. Can the neural network achieve the same level of prediction accuracy as regression when given the same set of data?

2. What is the optimal structure of the neural network for a fixed data size?

3. When the data are collected by different experimental design methods, which method gives the least prediction error for the neural network model?

A.2 Approach:

First, the prediction accuracy of regression or of a neural network cannot simply be assumed to be good or bad: in some cases one method is better than the other, while in other cases the opposite holds. In order to investigate their performance comprehensively, we design an experiment with three test functions, four settings of data size and three experimental design methods.

The three test functions are chosen so that their surface complexity increases progressively. They are: (1) the cubic function, (2) the bivariate normal function, and (3) the cowboy hat function, as shown in figure A.1.
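The exact formulas of the three test functions are not reproduced in the text; the forms below are common textbook choices used purely as illustrative stand-ins for the three levels of surface complexity:

    import numpy as np

    def cubic(x, y):                      # simple polynomial surface
        return x**3 + y**3 - 3 * x * y

    def bivariate_normal(x, y):           # smooth single-peak surface
        return np.exp(-(x**2 + y**2) / 2.0)

    def cowboy_hat(x, y):                 # "sombrero" surface, highly nonlinear
        return np.sinc(np.sqrt(x**2 + y**2))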

(a) Cubic function

(b) Bivariate normal function

(c) Cowboy hat function

Figure A.1. Shape of three test functions


The four settings of data size are: (1) 9 data points, (2) 16 data points, (3) 25 data points, and (4) 36 data points. The three experimental design methods are: (1) D-optimal design, (2) Bayesian D-optimal design, and (3) Latin hypercube design.

For each setting of the data size, data are collected according to the three experimental design methods. Note that, for the same number of runs, the D-optimal criterion can give several design matrices depending on the order of the assumed model. For example, when the number of runs is 25, the possible models are: (1) full first order, (2) full second order, (3) full third order, (4) full fourth order, and (5) full fifth order. Since we do not know in advance whether the prediction error is higher or lower as the assumed model order changes, table A.1 lists all the model orders used with the different run numbers.


9 runs:  full 2nd-order D-optimal, full 3rd-order D-optimal, full 4th-order Bayesian D-optimal, Latin hypercube
16 runs: full 3rd-order D-optimal, full 4th-order D-optimal, full 5th-order Bayesian D-optimal, Latin hypercube
25 runs: full 3rd-, 4th- and 5th-order D-optimal, full 6th-order Bayesian D-optimal, Latin hypercube
36 runs: full 3rd- through 7th-order D-optimal, full 8th-order Bayesian D-optimal, Latin hypercube

Table A.1. The model orders with different run numbers.


(Scatter plots of the 9-point designs: run = 9, 2nd full model D-optimal; run = 9, 3rd full model D-optimal; run = 9, 4th full model Bayesian D-optimal; Latin hypercube design.)

Figure A.2. The 2-D scatter plots of the data for 9 points.


(Scatter plots of the 16-point designs: run = 16, 3rd full model D-optimal; run = 16, 4th full model D-optimal; run = 16, 5th full model Bayesian D-optimal; Latin hypercube design.)

Figure A.3. The 2-D scatter plots of the data for 16 points.


(Scatter plots of the 25-point designs: run = 25, 3rd, 4th and 5th full model D-optimal; run = 25, 6th full model Bayesian D-optimal; Latin hypercube design.)

Figure A.4. The 2-D scatter plots of the data for 25 points.


(Scatter plots of the 36-point designs: run = 36, 3rd through 7th full model D-optimal; run = 36, 8th full model Bayesian D-optimal; Latin hypercube design.)

Figure A.5. The 2-D scatter plots of the data for 36 points.


For each of these data sets, we fit the corresponding regression model. We then randomly select 1000 points from the design space and calculate the squared difference between the predicted value and the true value given by the known test function. The overall prediction error is measured as:

RMSE = [ (1/1000) Σ_{i=1}^{1000} (ŷ_i − y_i)² ]^(1/2)
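A short sketch of this error measure, assuming a two-variable design space scaled to [-1, 1]² and a surrogate that exposes a predict() callable (both are my assumptions for illustration):

    import numpy as np

    def rmse_on_random_points(predict, truth, low=-1.0, high=1.0,
                              n: int = 1000, seed: int = 0) -> float:
        """Root mean squared prediction error over n random points."""
        rng = np.random.default_rng(seed)
        pts = rng.uniform(low, high, size=(n, 2))      # 2 input variables
        err = predict(pts) - truth(pts)
        return float(np.sqrt(np.mean(err ** 2)))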

Calculating the RMSE for the neural network, however, is more complicated. First, the network can have any number of hidden nodes; the structure is not fixed. Second, there are many training methods, and different methods may have different effects. Third, the training convergence error can be set manually. To see the effect of these factors, for each data set we use 16 different combinations of the number of hidden nodes, the training method and the training error to build the neural network prediction model. Table A.2 presents the RMSE resulting from each treatment for test function #3 with 25 runs and the 3rd-order full model D-optimal design; there are 16 results for each design matrix.


Hidden nodes | Training method | Training error | RMSE1   | RMSE2   | RMSE3   | RMSE4   | RMSE5   | RMSEave | RMSEvar
3            | 1               | 0.01           | 0.3608  | 0.35363 | 0.44372 | 0.55582 | 0.49739 | 0.43397 | 0.007607
3            | 1               | 0.001          | 0.39378 | 0.40901 | 0.39584 | 0.3677  | 0.40945 | 0.39954 | 0.000288
3            | 2               | 0.01           | 0.54614 | 0.6134  | 0.62496 | 0.60945 | 0.60945 | 0.61077 | 0.00097
3            | 2               | 0.001          | 0.51948 | 0.43873 | 0.43873 | 0.43873 | 0.43874 | 0.43873 | 0.001304
6            | 1               | 0.01           | 0.37859 | 0.32452 | 0.32269 | 0.48226 | 0.49886 | 0.39512 | 0.007166
6            | 1               | 0.001          | 0.68158 | 0.4759  | 0.38801 | 0.47085 | 0.45272 | 0.46649 | 0.012247
6            | 2               | 0.01           | 0.30436 | 0.23813 | 0.25355 | 0.33544 | 0.31371 | 0.29054 | 0.001712
6            | 2               | 0.001          | 0.27586 | 0.34779 | 0.35557 | 0.45858 | 0.39668 | 0.36668 | 0.004519
10           | 1               | 0.01           | 0.37595 | 0.6785  | 0.29018 | 0.31661 | 0.35506 | 0.34921 | 0.02478
10           | 1               | 0.001          | 0.50662 | 0.99966 | 0.34177 | 0.50057 | 0.26386 | 0.44965 | 0.081993
10           | 2               | 0.01           | 0.37809 | 0.311   | 0.37789 | 0.4026  | 0.26432 | 0.35566 | 0.003286
10           | 2               | 0.001          | 0.35574 | 0.40952 | 0.40331 | 0.53573 | 0.47556 | 0.42946 | 0.004931
15           | 1               | 0.01           | 0.41408 | 0.37616 | 0.50912 | 0.58991 | 0.61954 | 0.50437 | 0.01129
15           | 1               | 0.001          | 0.44542 | 0.77606 | 0.57462 | 0.33361 | 0.36945 | 0.46316 | 0.032371
15           | 2               | 0.01           | 0.31992 | 0.37212 | 0.22116 | 0.5473  | 0.59756 | 0.41311 | 0.024807
15           | 2               | 0.001          | 0.44199 | 0.41923 | 0.37942 | 0.32656 | 0.49367 | 0.41355 | 0.003996

Table A.2. RMSE of the neural network for different training treatments, for test function #3 with 25 runs and the 3rd-order full model D-optimal design.


Table A.2 lists five RMSE values per treatment, each obtained from a network with the same structure. A peculiarity of neural networks is that every time the same network is trained on the same data set, the node weights come out differently because of the random initialization, so the prediction errors vary; sometimes the same network does not even converge. To overcome this, I build thirty identically structured networks and train them on the same data. After training, I choose the five networks with the smallest training errors, calculate the RMSE for each of those five networks, and average them to give the RMSEave for that network treatment.
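The "train thirty, keep the five best, average their RMSE" protocol can be sketched with scikit-learn's MLPRegressor; Levenberg-Marquardt is not available there, so the quasi-Newton 'lbfgs' solver is used as a stand-in for the LM training referred to in the text:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def averaged_rmse(X_train, y_train, X_test, y_test,
                      n_hidden: int = 6, n_restarts: int = 30, n_keep: int = 5) -> float:
        nets = []
        for seed in range(n_restarts):                      # 30 identically structured nets
            net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="tanh",
                               solver="lbfgs", max_iter=2000, random_state=seed)
            net.fit(X_train, y_train)
            nets.append((net.loss_, net))                   # training loss after fitting
        best = sorted(nets, key=lambda t: t[0])[:n_keep]    # keep the 5 best-trained nets
        rmses = [np.sqrt(np.mean((net.predict(X_test) - y_test) ** 2))
                 for _, net in best]
        return float(np.mean(rmses))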

For nine runs and test function one, a three-way ANOVA shows that the number of hidden nodes and the training method have a significant effect on the prediction error, while the training error and the interaction terms do not. Similar results hold for the other combinations of run number and test function.


(a). Effect of network methods for test function 1

(b). Effect of network methods for test function 2

(c). Effect of network methods for test function 3

Figure A.6. Effect of neural network methods for three test functions at run=9 and 2nd full model D-optimal design.


(Main effects plots of the number of hidden nodes, training method and training error on the average RMSE.)
(a). Effect of network methods for test function 1
(b). Effect of network methods for test function 2
(c). Effect of network methods for test function 3

Figure A.7. Effect of neural network methods for the three test functions at run = 9 and the 3rd-order full model D-optimal design.


The plots for the other model orders of the D-optimal and Latin hypercube designs are not shown here, but we found a guiding principle for choosing the network structure and some of the training parameters. Basically, when the data size is small, a small number of hidden nodes should be selected; when the data size is larger, a larger number of hidden nodes can be used. It is recommended that Ninput × Nhidden + Nhidden, where Ninput is the number of input nodes and Nhidden the number of hidden nodes, be smaller than the data size Ndata. Also, when the number of hidden nodes is large, the training error should not be set too small. We can roughly regard the number of weight coefficients as the number of unknown parameters and the data size as the number of degrees of freedom, as in regression analysis; if there are too many hidden nodes for a relatively small data size, the network weights cannot be well estimated. For each data size setting we therefore recommend a corresponding neural network structure and set of training parameters, tabulated in table A.3. So for each design matrix we have 16 network results, from which we select one according to the criteria above; in this way we can compare the accuracy of regression and neural networks given the same data set.

Figure A.8 compares the regression and network RMSE for the different run numbers (9, 16, 25, 36) and the three test functions. We observe that when the data size is small, say 9 in this case, regression outperforms the network by a wide margin. However, as more data are collected, the network tends to approach the performance of the regression.


                        9 runs   16 runs   25 runs   36 runs
Number of hidden nodes      3        3         6        10
Training method            LM       LM        LM        LM
Training error          0.001    0.001     0.001      0.01

Table A.3. Recommended neural network structure and training parameters.


(Bar plot of the RMSE of the neural network and the regression for sample sizes 9, 16, 25 and 36.)

Figure A.8. Comparison of regression and neural network prediction error.


(a). Comparison of regression and ANN RMSE for the cubic test function.
(b). Comparison of regression and ANN RMSE for the bivariate normal test function.
(c). Comparison of regression and ANN RMSE for the cowboy hat test function.
(Each panel plots the regression RMSE and the network RMSE against the sample number: 9, 16, 25, 36.)

Figure A.9. Individual plot of regression and ANN prediction error for three test functions.


To get a clearer view of the comparison, we split the RMSE results by test function. Figure A.9(a) shows the RMSE for test function one. Since the full 3rd-order polynomial model contains the cubic function, the regression method gives essentially no prediction error. This tells us that when the system response is simple, regression is the better choice.

The RMSE for test function two is plotted in figure A.9(b). When the response starts to deviate from a polynomial form, the performance of the neural network comes close to that of the regression. There are several peaks in the regression RMSE curve; these points correspond to the Latin hypercube designs. Evidently this design method is not well suited to regression modeling, and regression places higher demands on the data than the neural network does. Test function three is the cowboy hat function, which is highly nonlinear. Since the neural network is a weighted combination of basis (activation) functions, most of which are themselves nonlinear, regression would be expected to perform worse than the network in this case.

However, figure A.9(c) shows that regression still generally gives a lower RMSE than the network, and this is even more apparent when the data size is small. Our explanation is that with the D-optimal or Bayesian designs, most points gather at the corners of the design space and little information is available at its center. Since the network does not assume an underlying polynomial form as the regression method does, it is effectively trained to remember the response at the design space boundary, which is why it gives a high prediction error over the whole space. As the data size increases, the network does better and better.

When the data are collected by different experimental design methods, which one gives the least prediction error for the neural network? We discuss the answer at three levels of response complexity. When the response is highly nonlinear, like the cowboy hat function, the main effects plots show that with a small data size the prediction is poor no matter which experimental design method is used, although the D-optimal design gives a relatively small prediction error (on the x-axis of figure A.12, 9 denotes the Bayesian D-optimal design and 10 the Latin hypercube design). This also confirms the conclusion above: when the data set is not large enough, fewer hidden nodes should be used, otherwise the unnecessary hidden nodes capture spurious responses. When the number of runs is increased from nine to thirty-six, the prediction error decreases from 0.55 to 0.15. In all these cases, the Bayesian D-optimal design gives almost the lowest prediction error of all the design methods for function approximation by neural networks. My explanation is that when the response becomes highly nonlinear, the traditional D-optimal design still focuses on the boundary and on locations that mainly carry information about low-order terms, whereas the Bayesian D-optimal design contains many high-order terms and thus brings more information about the system's behavior. It therefore gives the neural network a lower prediction error. The Latin hypercube method only gives good predictions when the number of experiments is large.


(Main effects plots of design model, number of hidden nodes, training method and training error on the average RMSE, for run = 9, 16, 25 and 36, test function 1.)

Figure A.10. Effect of DOE and neural network methods on prediction error for test function 1.


(Main effects plots of design model, number of hidden nodes, training method and training error on the average RMSE, for run = 9, 16, 25 and 36, test function 2.)

Figure A.11. Effect of DOE and neural network methods on prediction error for test function 2.


(Main effects plots of design model, number of hidden nodes, training method and training error on the average RMSE, for run = 9, 16, 25 and 36, test function 3.)

Figure A.12. Effect of DOE and neural network methods on prediction error for test

function 3.


A.3 Conclusion:

In this study, I tried to answer three questions:

1) Can the neural network achieve the same level of prediction accuracy as regression when given the same set of data?

Through the comparative study, I found that the network usually gives worse prediction accuracy than the regression method, and this disadvantage is most apparent when the true system response is simple. As the system behavior becomes more and more nonlinear, the network tends to approach the accuracy of the regression. Since only three test functions are used in this project, we can only conjecture that for even more complex responses the network may outperform the regression.

2) What is the optimal structure for the neural network when the data size is increasing?

In this project, I found that the number of hidden nodes is the most important parameter to take care of when building the neural network. Basically, a small number of hidden nodes should be selected when the data size is small, and a larger number when the data size is larger. It is recommended that Ninput × Nhidden + Nhidden, where Ninput is the number of input nodes and Nhidden the number of hidden nodes, be smaller than the data size Ndata. Also, when the number of hidden nodes is large, the training error should not be set too small. We can roughly regard the number of weight coefficients as the number of unknown parameters and the data size as the number of degrees of freedom, as in regression analysis; if there are too many hidden nodes for a relatively small data size, the network weights cannot be well estimated. For function approximation, the Levenberg-Marquardt (LM) training method gives the fastest training and the best results.

3) When the data are collected by different experimental design methods, which design gives the least prediction error for the neural network?

Through the study, we found that for building the system response model with the neural network technique, the Bayesian D-optimal design gives better prediction accuracy than the other experimental design methods considered, such as the Latin Hypercube design or the standard D-optimal design. In future work, the EIMSE criterion will be compared with the Bayesian D-optimal design.
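For reference, one of the competing designs mentioned above, the Latin Hypercube design, can be generated with standard tools. The sketch below assumes SciPy's stats.qmc module is available; the four factors, 16 runs, and factor bounds are arbitrary placeholders, and it does not reproduce the Bayesian D-optimal construction recommended in this study.

```python
# Minimal sketch: generate a 16-run Latin Hypercube design in four factors
# and scale it to hypothetical factor ranges. The bounds below are
# placeholders, not the ranges used in this study.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=4, seed=0)
unit_design = sampler.random(n=16)             # points in the unit hypercube

lower = np.array([0.18, 450.0, 10.0, 0.05])    # hypothetical low levels
upper = np.array([0.26, 600.0, 30.0, 0.15])    # hypothetical high levels
design = qmc.scale(unit_design, lower, upper)  # 16 x 4 design matrix
print(design)
```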



APPENDIX B

In this appendix we briefly show the validation results of using the aluminum RSM from Chapter 5 to predict the thinning of the steel material referenced in Table 5.1.

The high and low levels of the aluminum material properties used in the DOE and the variation range of the referenced steel material properties are compared in Table B.1.

From the table, we can see that the K ranges do not overlap at all, while the n ranges partially overlap.

We first keep the blank holder force and lub1 at their nominal levels and vary n and K between their low and high levels in the Pam-stamp simulation, giving four combinations of n and K. After each simulation the thinning value is extracted, and the known aluminum RSM is used to predict the thinning of the steel material at the same n and K combination. The results are shown in the first four data rows of Table B.2. The predictions are quite close to the simulations; the error percentages are all below 2%.

Then we keep both n and K at their low levels, vary the blank holder force and lub1, and check whether the RSM still gives a good prediction. From the last four rows of Table B.2, we can see that these predictions are also very close to the simulation values. Figure B.1 plots all of these data together so that the differences can be seen easily.
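As a small worked check of Table B.2, the sketch below recomputes the error-percentage column from the simulated and predicted thinning values. The relative-error definition (simulation - prediction) / simulation, and the assignment of the first thinning column to the simulation, are inferred from the tabulated numbers rather than stated in the text.

```python
# Sketch: recompute the error-percentage column of Table B.2 from the
# simulated and RSM-predicted thinning values of the eight test cases.
thinning_simulation = [0.075025, 0.080730, 0.062263, 0.067563,
                       0.095322, 0.056419, 0.083661, 0.052839]
thinning_prediction = [0.074228614, 0.080705725, 0.062307029, 0.068784140,
                       0.096296131, 0.054879634, 0.086482383, 0.052050062]

for simu, pred in zip(thinning_simulation, thinning_prediction):
    rel_error = (simu - pred) / simu   # matches the "error perc" column
    print(f"simulation {simu:.6f}  prediction {pred:.6f}  error {rel_error:+.5f}")
```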

From this test, we see that the RSM built from the aluminum data can be used to extrapolate the thinning behavior of the steel material as long as its n and K values are not too far from the range used in the aluminum DOE. I believe this is due to the monotonic behavior of thinning and wrinkling with respect to the material properties n and K in this sheet metal drawing process: even though the steel has different n and K values, the trends are the same. This conclusion is very helpful because it saves us the effort of re-running another 57 time-consuming FEM simulations to build an RSM for the steel material.


        Steel     Aluminum
n+      0.256     0.2625
n-      0.184     0.2375
K+      596       493
K-      496       446

Table B.1. The range of n and K for aluminum and steel materials.
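The n and K entries of Table B.2 below are given in coded units. A minimal sketch of the usual coded-to-physical conversion is shown here, under the assumption (not stated explicitly in the text) that the coded -1/+1 levels refer to the steel low/high values of Table B.1.

```python
# Sketch of the usual DOE coding: a coded level of -1 / +1 maps to the low /
# high physical value of the factor. The steel levels below are taken from
# Table B.1; treating them as the reference for Table B.2 is an assumption.
def coded_to_physical(coded: float, low: float, high: float) -> float:
    center = (high + low) / 2.0
    half_range = (high - low) / 2.0
    return center + coded * half_range

steel_n = {"low": 0.184, "high": 0.256}
steel_K = {"low": 496.0, "high": 596.0}

print(coded_to_physical(+1, steel_n["low"], steel_n["high"]))  # 0.256
print(coded_to_physical(-1, steel_K["low"], steel_K["high"]))  # 496.0
```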


 n    K   bhf  lub1   thinning simu   thinning pred   error perc
 1    1    0     0      0.075025       0.074228614      0.010615
 1   -1    0     0      0.080730       0.080705725      0.000301
-1    1    0     0      0.062263       0.062307029     -0.000710
-1   -1    0     0      0.067563       0.068784140     -0.018070
-1   -1    1     1      0.095322       0.096296131     -0.010220
-1   -1    1    -1      0.056419       0.054879634      0.027285
-1   -1   -1     1      0.083661       0.086482383     -0.033720
-1   -1   -1    -1      0.052839       0.052050062      0.014931

Table B.2. The comparison of predicted values vs. the simulation values (n, K, bhf and lub1 are in coded levels; error perc = (thinning simu - thinning pred) / thinning simu).


[Figure B.1: plot of the simulated and RSM-predicted thinning values for the eight cases of Table B.2.]

Figure B.1. The comparison of the predicted and simulated thinning values.
