COST TOLERANCE OPTIMIZATION FOR PIECEWISE CONTINUOUS COST

TOLERANCE FUNCTIONS

A Thesis Presented to

The Faculty of the

Fritz J. and Dolores H. Russ College of Engineering and Technology

Ohio University

In Partial Fulfillment

of the Requirements for the Degree

Master of Science

by

Murtaza Kaium Shehabi

June, 2002

ACKNOWLEDGEMENTS

The definition of a 'Thesis' was once preached to me as a work in one's field of academic interest. It is a contribution that addresses a problem which has not been resolved to date. In my endeavor to accomplish what no person has, I am truly indebted

to my advisors, Dr. Gerth and Dr. Masel. Their constant encouragement and belief in my capabilities were invaluable. A great portion of my research involved programming in

Visual Basic. I was extremely fortunate to have received guidance from Dr. Dhamija through some of the most critical phases of my research. Dr. Masel was very supportive,

especially towards the end stages of my work.

During the course of my efforts I found rejuvenating strength in the saying that,

"Hope is a good thing to have, perhaps the best of things - and good things never die".

My parents and brother have been very instrumental in shaping my life and thoughts.

To every friend I have known who has inspired me to believe in myself: this is my opportunity to thank each one of you.

Even as I write this, perceiving the temporary end of my academic career, the famous words of Robert Frost ring through my mind -

"Miles to go before I sleep, Miles to go before I sleep."

Table of Contents

LIST OF TABLES ...... vi
LIST OF FIGURES ...... vii

1. INTRODUCTION...... 1

1.1 Tolerance Analysis ...... 2

1.2 Min-Cost Tolerancing ...... 5

2. LITERATURE SEARCH ...... 8

2.1 Cost Tolerance functions ...... 8

2.2 Optimization Techniques ...... 10

2.3 Heuristics ...... 11

2.3.1 Simulated Annealing ...... 11

2.3.2 Tabu Search ...... 13

2.4 Modified NLP ...... 14

3. PROBLEM STATEMENT & RESEARCH OBJECTIVE ...... 16

3.1 Problem Statement ...... 16

3.2 Research Objective ...... 16

4. METHODOLOGY ...... 17

4.1 Case Study ...... 17

4.1.1 C-T functions ...... 19

4.1.2 DOE setup ...... 20

4.1.3 Discussion of Case Study ...... 21

4.2 Total Enumeration Method (TEM) ...... 24

4.2.1 Global Optimization ...... 26

4.3 Research Methodology - Hypothesis ...... 26

4.4 TEM compared to Case Study Method ...... 27

5. TEM - SOFTWARE ARCHITECTURE ...... 29

5.1 Architecture ...... 29

5.2 User Interface ...... 30

6. RESULTS ...... 36

6.1 Case A - The Gerth and Pfeiffer method ...... 36

6.2 Case B - Case A augmented with Visual Inspection ...... 37

6.3 Case C - Automated TEM ...... 39

7. SUMMARY ...... 42

7.1 Limitations and Disadvantages of TEM ...... 43

7.2 Significant Observations ...... 44

7.3 Future Scope ...... 45

8. REFERENCES ...... 47

APPENDICES ...... 50

8.1 Appendix A ...... 51

8.2 Appendix B ...... 54

8.3 Appendix C ...... 57

List of Tables

Table 1. Table of component features, tolerances, and partial derivatives for

Transmission Case Study ...... 19

Table 2. Fractional Factorial Experimental Design Matrix ...... 21

Table 3. Number of CT curve segments for Experiment 1 ...... 28

Table 4. Case A - Gerth and Pfeiffer method ...... 37

Table 5. Case B - Gerth and Pfeiffer method with visual inspection ...... 38

Table 6. Case C - Automated TEM ...... 39

List of Figures

Figure 1. Influence of dimensional tolerance on cost of manufacture ...... 2

Figure 2. Example of a continuous cost-tolerance function ...... 9

Figure 3. Example of a discrete cost-tolerance function ...... 9

Figure 4. Example of a piecewise continuous cost-tolerance function, discontinuous at

two points ...... 9

Figure 5. Excel spreadsheet of TEM for Experiment 1...... 22

Figure 6. Flow chart of the TEM System ...... 30

Figure 7. Sheet 1: Input Sheet ...... 31

Figure 8. Sheet 2: CT Data Sheet ...... 32

Figure 9. Sheet 3: Output sheet ...... 34

Figure 10. Graphical representation of the results from the three cases ...... 41

Figure 11. A convex function ...... 51

Figure 12. Example of a convex set ...... 52

1. INTRODUCTION

As an integral part of mechanical design, tolerances have a profound influence on the functional performance and costs of the designed product. Tolerance specification is a complex and demanding task and is traditionally carried out on a trial-and-error basis [5]. The problem of specifying component tolerances to produce the least expensive system satisfying the performance requirements is of great importance to design engineers. Tolerance synthesis jointly models, evaluates, and aims to optimize the functional performance and manufacturing costs of a mechanical product [1]. One such tolerance synthesis technique is called 'minimum manufacturing cost design' or

'minimum cost tolerancing' [5].

Minimum cost tolerancing (MCT) attempts to reduce the overall cost of a product by widening the tolerances on the more expensive component features and reducing the tolerances on the less expensive component features [14]. In order for MCT to function, one must know the cost-tolerance (C-T) relationship of each component feature. The

C-T functions can generally be discrete, continuous, or piecewise continuous. Figure 1 shows a typical continuous function, where cost decreases non-linearly, along a convex curve, as the tolerance widens.

[Figure 1: relative cost of manufacture (y-axis) plotted against tolerance in inches (x-axis, 0.004 to 0.020), with cost falling through successive process stages from rough machining to grinding.]

Figure 1. Influence of dimensional tolerance on cost of manufacture.

1.1 Tolerance Analysis

Tolerance analysis addresses the issue of determining the distributional properties of an assembly's functional performance measure (FPM) as a function of the various component features in the assembly. The FPM is expressed by the total assembly stack-up equation and serves as a scale by which the functionality of the assembly is judged. The mathematical relationship that quantifies the influence of each component feature on the FPM is called a stackup function (see (Equation 1)).

Y = f(x_1, x_2, x_3, ..., x_n)    (Equation 1)

where
Y      functional performance measure (FPM),
x_i    ith component feature,
n      number of component features in the stackup.

The FPM, or assembly tolerance, may be estimable as a function of secondary units, or sub-level components. For example, in a rocket engine, which consists of turbopumps, combustion chambers, and nozzles, the thrust may be known as a function of the module efficiencies, resistances, and system configuration [4]. The stackup function can be linear or non-linear. Typically, the analysis involves determining the mean and variance of the FPM. The mean is given by (Equation 2):

μ_Y = f(μ_1, μ_2, μ_3, ..., μ_n)    (Equation 2)

where
μ_Y    mean of the FPM, and
μ_i    mean of the ith component feature.

The variance equation is a little more complex. If f is non-linear, a first-order Taylor series expansion about the nominal design point is used to linearize the function [6]. The partial derivative of the function with respect to each individual component feature determines that feature's contribution to the total assembly variance. The standard deviation of the FPM (see (Equation 3)) is thus given by:

σ_Y = sqrt( Σ_{i=1}^{n} (∂Y/∂x_i)² · σ_i² )    (Equation 3)

where
σ_Y        the standard deviation of the FPM,
∂Y/∂x_i    the partial derivative of the FPM with respect to the ith component feature,
σ_i²       the variance associated with the ith component feature.

The component and FPM variances can be related to their tolerances via process capability indices, given by (Equation 4):

C_p = (USL − LSL) / (6σ)    (Equation 4)

where
C_p    the process capability index,
USL    upper specification limit,
LSL    lower specification limit.

Assuming equal bilateral tolerances for all component features, substituting (Equation 3) into (Equation 4), and solving for Tol_Y, we get the well-known statistical tolerancing formula:

Tol_Y = Cp_Y · sqrt( Σ_{i=1}^{n} (∂Y/∂x_i)² · (tol_i / Cp_i)² )    (Equation 5)

where
Tol_Y    the FPM tolerance,
Cp_Y     the process capability of the FPM,
tol_i    the ith component feature tolerance, and
Cp_i     the process capability of the ith component feature.
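To make (Equation 5) concrete, the short Python sketch below rolls a set of feature tolerances up to the FPM tolerance. It is a minimal illustration with hypothetical inputs, not the case-study data.

    import math

    def fpm_tolerance(partials, tolerances, cps, cp_y=1.0):
        # (Equation 5): Tol_Y = Cp_Y * sqrt( sum (dY/dx_i)^2 * (tol_i/Cp_i)^2 )
        return cp_y * math.sqrt(sum((d * t / c) ** 2
                                    for d, t, c in zip(partials, tolerances, cps)))

    # Two hypothetical features with unit sensitivities and Cp = 1:
    print(fpm_tolerance([1.0, 1.0], [0.01, 0.01], [1.0, 1.0]))  # about 0.0141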

1.2 Min-Cost Tolerancing

Minimum cost tolerancing (MCT) is a method of determining the optimal feature tolerances that will result in a minimum cost assembly [6]. The method considers the upper and lower tolerance for each component feature and the FPM tolerance, which is the assembly tolerance, as design constraints. Besides the component feature and FPM tolerances, the stackup function (to compute the mean) and the partial derivatives of the stackup function (to compute the variance) must also be known. In

MCT, the influence of tolerance values on the manufacturing costs of each of the component features is modeled and used as the objective function of the optimization.

The objective function to be minimized is simply the sum of the individual feature costs.

The cost of each feature is presumed to be a function of the individual feature dimension tolerances. A detailed discussion of stackup as well as objective functions is found in [5].

Objective Function:   MIN[C_Total] = Σ_{i=1}^{k} c_i(tol_i)    (Equation 6)

Subject to:

Cp_Y · sqrt( Σ_{i=1}^{k} (∂Y/∂x_i)² · (tol_i / Cp_i)² ) ≤ Tol_Y    (Equation 7)

LL_tol_i ≤ tol_i ≤ UL_tol_i,   i = 1, ..., k    (Equation 8)

where
C_Total      total cost,
c_i(tol_i)   cost function associated with tol_i,
LL_tol_i     lower limit on tol_i,
UL_tol_i     upper limit on tol_i.

There are two constraints on the objective function:

1. The tolerance stack-up should not exceed the specified FPM tolerance (see (Equation

7)). This ensures that the assembly will function as desired. Since it is a measure of

assembly yield, it is also called the yield constraint [1].

2. The tolerances of the individual components are bound between an upper and a lower

limit (see (Equation 8)). This is done so that the optimization routine does not drive

tolerances to zero or infinity while arriving at the solution. The upper and lower

bounds represent the limits of the various processing options under consideration.
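The formulation in (Equation 6) through (Equation 8) maps directly onto a standard constrained NLP solver, as the sketch below shows. It is a minimal Python/SciPy illustration for a hypothetical three-feature stack; the cost constants, partials, and limits are invented for the example and are not the case-study values.

    import numpy as np
    from scipy.optimize import minimize

    partials = np.array([0.321, 0.321, 0.440])   # dY/dx_i (illustrative)
    lower    = np.array([0.005, 0.005, 0.006])   # LL_tol_i
    upper    = np.array([0.050, 0.050, 0.025])   # UL_tol_i
    tol_y    = 0.02                              # FPM tolerance; Cp = 1 throughout

    def total_cost(tol):
        # (Equation 6): sum of feature costs; one decreasing C-T curve per feature
        alpha, beta, gamma = 25.0, 1.0e4, 1.3
        return np.sum(alpha - beta * tol ** gamma)

    def yield_constraint(tol):
        # (Equation 7), rearranged so that feasibility means a non-negative value
        return tol_y - np.sqrt(np.sum((partials * tol) ** 2))

    result = minimize(total_cost, x0=(lower + upper) / 2,
                      bounds=list(zip(lower, upper)),
                      constraints=[{"type": "ineq", "fun": yield_constraint}])
    print(result.x, result.fun)    # optimal tolerances and minimum cost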

Most of the work that has been done [4], [5], [6], [7], [17] has assumed the cost-tolerance relationship to be either continuous or discrete. Little or no work has been done on solving the MCT problem with piecewise continuous cost-tolerance (PCCT) functions. The piecewise continuity of C-T functions may result from the necessity to use more than one process to attain the required tolerance. This thesis presents a new method of solving the MCT problem with PCCT functions. Chapter 2 reviews the literature on cost-tolerance functions and optimization techniques, including heuristics such as simulated annealing and tabu search. Chapter 3 states the problem and the research objective. Chapter 4 introduces the Total Enumeration Method (TEM), which was developed to solve the MCT problem with PCCT functions, and describes the case study used to evaluate and validate it. Chapter 5 explains the program code written to automate TEM. Chapter 6 presents the results, comparing the values from the original case study with the ones generated by TEM. The final chapter concludes the analysis, discusses the advantages and disadvantages of TEM, and outlines future work in the area of piecewise continuous min-cost tolerancing.

2. LITERATURE SEARCH

MCT aims at determining the component tolerance values, which minimize the overall assembly cost function, while keeping the FPM variation within the specified

FPM tolerance. Solving the MCT problem involves two primary factors, namely,

1. the type of cost-tolerance functions used, and

2. the type of optimization technique used.

The two are not necessarily independent, since the type of technique used depends on the type of functions used.

2.1 Cost Tolerance functions

Cost-tolerance relationships are either continuous (Figure 2), discrete (Figure 3), or piecewise continuous (Figure 4). The continuous case has been thoroughly addressed in the literature. For example, in [16], five continuous C-T functions were examined: the Sutherland function, the reciprocal square function, the reciprocal function, the exponential function, and the Michael-Siddall function. The exponential function, another example of a continuous function, has been used in [2], [11], [16], and [18] to define the cost-tolerance relationship.

Figure 2. Example of a continuous cost-tolerance function.

[Figure 3 shows discrete cost levels c1 through c4 at tolerances t1 through t4.]

Figure 3. Example of a discrete cost-tolerance function.


Figure 4. Example of a piecewise continuous cost-tolerance hnction, discontinuous at two points. 2.2 Optimization Techniques

Various optimization techniques have been employed to solve the min-cost problem [13]. The method of Lagrange multipliers is employed in [14] and [7]. Pseudo-boolean programming coupled with zero-one integer programming is an improved approach for problems where the number of variables and constraints needs to be reduced [7], [10]. For C-T relationships that are continuous, non-linear programming (NLP) can be used [4], [6], [7], whereas for C-T relationships that are discrete, dynamic programming is used [17].

However, a suitable optimization technique has not been developed for piecewise continuous C-T functions. For example, [6] utilizes NLP to solve the problem. Unfortunately, the initial solution was a local optimum, and manual intervention was required to drive the solution to a better one. Even with this intervention it is not known whether the solution is the global optimum. Therefore, it was concluded that the piecewise continuous problem could be addressed more directly if it could somehow be represented as a continuous problem. This is possible if each segment of the piecewise C-T function is considered independently, and optimized in combination with single segments of the various features that make up the assembly. Thus, a series of optimizations is required to test all possible combinations of the segments of the PCCT functions that represent the features of the assembly. This problem is of a "combinatorial process selection" type.

2.3 Heuristics

Two heuristic methods were considered to address the MCT with PCCT functions problem - simulated annealing (SA) and tabu search (TS). They were considered because of their increasing popularity in areas of combinatorial selection optimization problems, where they have been implemented successfully. They find widespread application in job scheduling, where the aim is to minimize process time

[15], and also in the chemical industry for separating mixtures into pure products at minimal cost [3]. Both of these cases had constraints within the system. The aim was to minimize time, in the case of scheduling, and cost, for separating mixtures.

Simulated annealing and tabu search are considered special cases of genetic algorithms and fall under the family of "local search techniques" [12]. Local search techniques usually attempt to find a solution better than the current one through a search in the neighborhood of the current solution. Two solutions are neighbors if one can be obtained through a well-defined modification of the other. Since these methods have been frequently employed to solve various scheduling problems, they will be explained within the scheduling context.
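As an illustration of a "well-defined modification", the following Python sketch generates a neighbor of a single-machine schedule by interchanging two jobs. The function name and the pair-interchange move are illustrative choices, not taken from the cited sources.

    import random

    def neighbor(schedule):
        # Interchange two randomly chosen jobs; the result is a neighbor
        # of the input schedule under the pair-interchange move.
        i, j = random.sample(range(len(schedule)), 2)
        s = list(schedule)
        s[i], s[j] = s[j], s[i]
        return s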

2.3.1 Simulated Annealing

Simulated annealing (SA) is a search procedure, which has its origin in a field other than industrial engineering. It was first developed as a simulation model for describing the physical annealing process for condensed matter. The SA process performs a number of iterations. At each iteration k, there is a current solution as well as a best solution obtained so far.

For a single machine problem, schedules are given as sequences (permutations) of jobs, say S_k and S_0. Let G(S_k) and G(S_0) denote the corresponding values of the objective function. The SA process, in its search for an optimal schedule, moves from one schedule to another. From the schedule at iteration k, S_k, a search is conducted within its neighborhood for a new schedule. First, a candidate schedule, S, is selected from the neighborhood. This selection can be done at random or in an organized, possibly sequential way. If S is a better schedule than S_k, a move is made, setting S_{k+1} = S. If S is better than the best schedule obtained so far, S_0 is set equal to S. However, if S is a worse schedule than S_k, a move is still made to S with probability:

P(S_k, S) = exp( (G(S_k) − G(S)) / β_k )    (Equation 9)

With probability 1 − P(S_k, S), schedule S is rejected in favor of the current schedule, setting S_{k+1} = S_k. The β_1 ≥ β_2 ≥ β_3 ≥ ... ≥ 0 are control parameters referred to as cooling constants or temperatures.
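A minimal sketch of the SA loop described above, assuming user-supplied neighbor() and cost() functions and a decreasing list of cooling constants, is given below. It illustrates the move rule of (Equation 9); it is not the implementation used in any of the cited works.

    import math
    import random

    def simulated_annealing(initial, neighbor, cost, betas):
        current = best = initial
        for beta in betas:                       # beta_k: cooling constant at iteration k
            candidate = neighbor(current)        # S drawn from the neighborhood of S_k
            if cost(candidate) <= cost(current):
                current = candidate              # improving moves are always accepted
                if cost(current) < cost(best):
                    best = current               # track the best schedule found so far
            elif random.random() < math.exp((cost(current) - cost(candidate)) / beta):
                current = candidate              # worsening move accepted per (Equation 9)
        return best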

From the above description of SA, it is clear that moves to worse solutions are allowed with a finite probability (see (Equation 9)). This is a major difference from regular neighborhood searches, such as descent methods and interchange heuristics. However, SA requires the analyst to have a good understanding of the system to be optimized. The selection of the cooling constants is critical for an effective solution; otherwise the initial solutions for the iterative searches will be randomized, and there will be no organized direction for the search to progress. Also, the decision of selecting the solution that replaces the present solution is probabilistic, not deterministic as in tabu search. This demands sound judgement from the user in order to realize a solution that is close to the global optimum. These factors do not satisfy the goal of establishing a simple method which can be applied without extensive knowledge of the system.

2.3.2 Tabu Search

Tabu Search (TS) is in many ways similar to SA. The procedure also moves from one schedule to another, with the next schedule being possibly worse than the preceding schedule. As in SA, a neighborhood is defined for each solution or schedule. The basic difference between TS and SA lies in the mechanism used for approving candidate moves. The mechanism is not probabilistic, but rather deterministic. At any stage in the process, a tabu-list of mutations that the procedure is not allowed to perform is kept.

Examples of mutations on the tabu-list are pairs of jobs that may not be interchanged.

The list has a fixed number of entries (usually between 5 and 9). Every time a move is made through a mutation in the current schedule, the reverse mutation is entered at the top of the tabu-list; all other entries are pushed down one position and the bottom entry is deleted. The reverse mutation is put on the list to avoid returning to a local optimum that has been visited before. At times, though, a reverse mutation that is tabu could have led to a new schedule not visited before, with an objective value lower than any obtained before. This may happen when the mutation is close to the bottom of the tabu-list and a number of moves have already been made since the mutation was entered in the list.

Thus, if the number of entries in the tabu-list is too small, cycling may occur. If the number of entries is too large, the search may be unduly constrained.
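The fixed-length tabu-list mechanics described above can be sketched in Python as follows; the deque-based list and the pair-interchange move are illustrative assumptions, not the form used in the cited literature.

    from collections import deque

    tabu_list = deque(maxlen=7)      # fixed number of entries, here seven

    def try_interchange(schedule, i, j, tabu):
        # Refuse moves whose mutation is on the tabu-list; otherwise make
        # the move and push the reverse mutation onto the top of the list
        # (the deque drops the bottom entry automatically once full).
        if (i, j) in tabu or (j, i) in tabu:
            return schedule                      # mutation is tabu
        s = list(schedule)
        s[i], s[j] = s[j], s[i]
        tabu.appendleft((j, i))                  # reverse mutation goes on top
        return s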

There are several disadvantages in employing TS for a min-cost problem. First,

TS has proved to be effective when the function to be minimized is discrete [12], which is not the case here. The cost tolerance function to be minimized in this case is continuous. Second, the amount of computation time needed to obtain such a solution tends to be relatively long in comparison with the more problem-specific approaches

[15]. Third, TS does not provide any information about the quality of search that has resulted, and requires the user to decide if the local optimum is indeed close to the global optimum. Lastly, the length of the tabu-list is a critical factor, which again is dependent on the user's judgement.

2.4 Modified NLP

Based on the knowledge of existing research, it was concluded that heuristic methods are not the most effective way to solve the minimum cost-tolerance problem with piecewise continuous C-T functions. Instead, it is proposed to employ a modified non-linear programming (NLP) approach, wherein the PCCT functions are represented as continuous. This is made possible by breaking the PCCT functions of all the features into single segments. Each segment of an individual feature is optimized in combination with one segment from each of the other features that constitute the assembly at a time. All the possible combinations of the single segments representing the feature functions of the assembly are optimized. The method therefore is straightforward and does not involve

PCCT function with other single segments of other features in the stack-up presented a hurdle. This was because all these combinations would have to be optimized, in order to obtain a true global minimum, making it extremely tedious. This problem is of a

"combinatorial selection" type. Establishing a suitable technique to handle this problem efficiently was the goal. The following chapter outlines the objective of this research to establish an efficient method. 3. PROBLEM STATEMENT & RESEARCH OBJECTIVE

3.1 Problem Statement

To develop and demonstrate a suitable optimization method for the minimum cost tolerancing problem with piecewise continuous cost-tolerance (PCCT) functions.

3.2 Research Objective

The objective of this research is to develop a simple method for determining the global optimum solution to the minimum cost tolerance problem with piecewise continuous cost-tolerance functions. "Simple" in this context means that the method should only require the user to input a set of values to define the system. This includes data to define the CT function associated with each critical feature dimension and the constraints that characterize the performance requirements of the system. There should be no complicated calculations to be performed, nor should the user be required to make complex decisions, as long as the data input is correct.

4. METHODOLOGY

The modified NLP approach is henceforth referred to as the Total Enumeration

Method (TEM). In order to discuss TEM it is necessary to introduce the case study [6], which was used for obtaining the relevant data to work with. The results obtained from

TEM were also compared with those from [6], wherein the min-cost results are strongly suspected to be local optima.

4.1 Case Study

Gerth and Pfeiffer in [6] describe a minimum-cost tolerance case study of a single-stage planetary gear transmission. The transmission consists of a drive and output housing, a drive and output shaft, a universal gear, three planets, a planet holder, and a sun. Each of these components has features with associated tolerances. The case study aimed at reducing the excessive noise that would result from the amount of movement of the sun relative to the planets. Y is the amount the sun's end point can move, and is expressed as a stack-up function. The function depends on the dimensional tolerances of the individual features that make up the transmission assembly.

The details of the geometry can be found in [6]. This research uses the stackup function and C-T functions from [6]. The stackup function is given by:

(Equation 10)

where
Y         FPM,
m         slope of the sun gear profile = −2.097,
A – L     individual component feature dimensions,
20        pressure angle in degrees,
35:109    ratio of two measures relevant to the geometry of the stack-up.

The description, initial tolerances, and partial derivatives for each feature are tabulated in

Table 1. Table 1. Table of component features, tolerances, and partial derivatives for

Transmission Case Study.

COMPONENT              FEATURE                                    ABBREVIATION   TOLERANCE VALUE (mm)   DERIVATIVE
Drive Shaft Bearing    Housing Runout                             A              0.015                  0.321
Drive Housing          Universal Gear Runout                      B              0.05                   0.321
Universal Gear         Drive Housing Mating Pilot Clearance       C              +0.000 / −0.044        0.321
Universal Gear         Output Housing Mating Pilot Clearance      D              +0.070 / −0.000        0.321
Drive Housing          Mating Pilot Clearance                     E              +0.070 / −0.000        0.321
Universal Gear         Pilot Runout                               F              0.05                   0.321
Output Housing         Universal Gear Runout                      G              0.05                   0.321
Universal Gear         Output Housing Mating Pilot Clearance      H              +0.000 / −0.044        0.321
Output Shaft Bearing   Housing Runout                             I              0.015                  0.321
Sun Gear               Profile Deviation                          J              ±0.012                 0.440
Planetary Gears        Profile Deviation                          K              ±0.012                 0.440
Planet Holder          Sun-to-Planet Center Distance Variation    L              ±0.02                  0.321

4.1.1 C-T functions

The general form of the C-T function used by Gerth and Pfeiffer was the inverse exponential function model given by:

c(tol) = α − β · tol^γ    (Equation 11)

where
α, β, γ    constants of the inverse power function.

The resulting C-T functions are provided in Appendix B. Some features are related or similar and thus have the same C-T function. In addition to the form of the various C-T functions, their upper and lower tolerance limits are also provided. These are the constraints used in the MCT problem (see (Equation 8)).
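Appendix C fits the three constants from the three (tolerance, cost) points of each matrix in Appendix B. A Python sketch of that fit is given below, assuming the α − β·tol^γ form reconstructed above: eliminating α and β leaves a one-dimensional root-finding problem in γ.

    from scipy.optimize import brentq

    def fit_ct(points):
        # Fit c(tol) = alpha - beta * tol**gamma through three (tol, cost) points.
        (t1, c1), (t2, c2), (t3, c3) = points

        def g(gamma):
            # ratio condition that eliminates alpha and beta
            return ((c1 - c2) * (t3 ** gamma - t2 ** gamma)
                    - (c2 - c3) * (t2 ** gamma - t1 ** gamma))

        gamma = brentq(g, 0.01, 10.0)
        beta = (c1 - c2) / (t2 ** gamma - t1 ** gamma)
        alpha = c1 + beta * t1 ** gamma
        return alpha, beta, gamma

    # Upper curve for features A and I (Appendix B matrix):
    print(fit_ct([(0.0025, 20.0), (0.0075, 0.0), (0.0125, -25.0)]))
    # approximately (26.2, 1.59e4, 1.31), matching Appendix C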

4.1.2 DOE setup

The Gerth and Pfeiffer paper was focused on developing a new method of determining which features were sensitive to cost, rather than on solving the MCT problem.

Hence, they created high and low C-T functions that represented the range within which they felt the actual C-T function would lie. Then, they tested the sensitivity of the various features by running a series of computer-simulated experiments for various combinations of the high and low C-T functions. They utilized a 2^(12−8), Resolution III, fractional factorial design, resulting in a total of sixteen experiments. The high and low cost estimates are presented in Appendix C, under the columns labeled "upper" and "lower".

The DOE table is shown below, where the feature labels are the column headers and the experiment numbers are the rows. The upper or lower C-T function is selected for each individual feature, depending on the particular combination of high (upper) and low (lower) states indicated by the experiment number given in Table 2.

Table 2. Fractional Factorial Experimental Design Matrix.

4.1.3 Discussion of Case Study

Since the thesis utilizes MS Excel's solver routine to solve the MCT problem, it is important to understand the structure of the EXCEL sheets in order to understand what was done.

The EXCEL sheet for experiment 16 will be used to explain the original method applied to the case study (see Figure 5). The sheet is separated into two areas. The upper half is primarily concerned with the tolerancing constraints (see (Equation 7) and (Equation 8)) of the MCT problem. The lower half is concerned with the cost or objective function of the MCT problem.

[Figure 5 reproduces the spreadsheet: for each feature A–L it lists the columns Tol, LC, UC, Ref, partial, Cp, tol/Cp, and contrib; the contrib column sums to 0.0004, the "Tol Stk" cell shows 0.02 against a constraint of 0.02, Cp_Y is 1, and the minimum cost shown is −47.15.]

Figure 5. Excel spreadsheet of TEM for Experiment 1.

The tolerances under the column named "Ref" were the initial starting tolerances and represent the tolerances which the designers had initially selected. It was felt that this was a good initial starting point, for the true optimum would probably not be far from that point. The column "Tol" holds the values that are changed by solver to solve the MCT problem (tol_i). When solver has found a solution, the optimal tolerances

appear in that column. The Ref values were copied to the "Tol" column to represent the

initial tolerances. The lower and upper constraints on the tolerances are shown in the LC (Lower Constraint) and UC (Upper Constraint) columns, respectively. The assembly tolerance is calculated according to (Equation 7). The process capability for all processes was assumed to be one.

The column labeled "partial" contains the partial derivatives of the stackup function (Y) with respect to each individual component feature (x_i). The column "contrib" is the squared product of the columns "partial" and "tol/Cp". The summation value, 0.0004, is the last value under the "contrib" column, and is linked to the cell containing its square root, 0.02, under the "Tol Stk" column at the very bottom of the spreadsheet. The term "Cp_Y" at the bottom is the desired capability index for the assembly, which is also assumed to be one. The tolerance constraint for the assembly stackup was set to 0.02. This value appears next to "constraint" on the EXCEL sheet.

Solver adjusts the "Tol" column values so that the "Tol Stk" cell value is always less than or equal to the constraint cell value.

The c_i(tol_i) values for every feature at the tolerance given in the "Tol" column are displayed in the second and fourth rows of the table "Cost functions". Each row represents either the Lower, the Reference, or the Upper C-T function value. The next row shows the particular combination of upper and lower cost values that corresponds to the particular experimental run. This example shows the results of experiment 16, in which all the C-T functions are at their high level. The correct combination of costs used for the particular experiment is then shown in the bottom row and summed across. This is the value that solver attempts to minimize while ensuring that the "Tol" values are never below LC or above UC, and the "Tol Stk" value is less than the constraint value.
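The arithmetic of the sheet's upper half can be checked independently. The short Python sketch below uses the tol and partial values as they read in Figure 5 (note that Figure 5 shows a partial of 0.14144 for J and K, unlike the 0.440 listed in Table 1), with Cp = 1 everywhere; the "contrib" column sums to 0.0004 and its square root reproduces the Tol Stk value of 0.02.

    import math

    # tol and partial values as read from Figure 5 (features A..L, Cp = 1)
    partials   = [0.321] * 9 + [0.14144, 0.14144, 0.321]
    tolerances = [0.0097, 0.02, 0.022, 0.022, 0.022, 0.02,
                  0.02, 0.022, 0.0097, 0.02, 0.02, 0.02]

    contrib = [(d * t) ** 2 for d, t in zip(partials, tolerances)]
    print(round(sum(contrib), 4))              # 0.0004, bottom of "contrib"
    print(round(math.sqrt(sum(contrib)), 3))   # 0.02, the "Tol Stk" cell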

The method applied in [6] is a standard NLP procedure, in which the PCCT functions are treated the same as continuous functions. This means that the separate segments of a piecewise continuous function are not individually treated. For instance, the cost function cell for feature A uses a continuous C-T function, whereas the cost function cell for feature B uses a PCCT function consisting of three segments (see Figure 5). The solutions thus obtained were suspected to be local optima, as opposed to true global solutions. This was attributed to the inadequacy of NLP in handling piecewise continuous functions. The following section discusses the Total Enumeration Method, which is a modified approach to solving for min-cost by NLP.

4.2 Total Enumeration Method (TEM)

The piecewise continuous nature of some C-T functions presents the greatest hurdle to applying NLP to the MCT problem. The solution is to treat each continuous portion of the function separately, i.e., each continuous curve segment is treated as an individual C-T function for the particular feature. By separating the piecewise continuous function into a series of continuous functions, the piecewise optimization problem is converted into a series of continuous optimization problems which can be readily solved by NLP.

For example, if a particular feature C-T function is discontinuous at two points, three separate C-T curve segments are generated. Each of these three segments becomes a separate case (Figure 4), and a separate optimization solution is generated for each.

Similarly, if there are two features having CT functions with three segments each, a total of nine possible solutions result. This is because each segment of the first feature will be combined with a segment from the other feature for the optimization. This way, all possible combinations between segments of the two features will be analyzed. The global minimum cost solution is the lowest cost solution of the nine segment solutions.

The total number of all possible PCCT curve segment combinations is given by:

N = ∏_{i=1}^{n} S_i    (Equation 12)

where
S_i    number of curve segments associated with the ith feature's PCCT function,
N      number of all possible PCCT curve segment combinations.
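The enumeration that TEM performs is easy to express directly, as the Python sketch below shows for the segment counts of experiment 1 (see Table 3 in section 4.4). Here solve_nlp is a hypothetical stand-in for one continuous NLP run of this section; its placeholder objective is for illustration only.

    from itertools import product
    from math import prod

    segments = [1, 3, 1, 1, 1, 2, 3, 1, 1, 1, 1, 2]   # features A..L, experiment 1
    print(prod(segments))                 # (Equation 12): N = 36 combinations

    def solve_nlp(combo):
        # placeholder objective; a real run would optimize the continuous
        # sub-problem built from the chosen segment of each feature
        return sum(combo)

    best = min(product(*(range(s) for s in segments)), key=solve_nlp)
    print(best)                           # segment indices of the cheapest run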

The global optimum is obtained for every combination of continuous C-T curve segments. Thus, of all the N continuous solutions, the minimum is the global solution to the piecewise continuous min-cost optimization problem. The task of producing these results is cumbersome and time consuming, as it may require hundreds of optimization iterations to produce one global solution. Therefore, a computer solution was developed to simplify the application of TEM. This automates the solution-seeking method, making it easier and less time consuming for the user.

4.2.1 Global Optimization

NLP can find the global optimum if the objective function is continuous and convex. If the individual segments of the PCCT functions are monotonically decreasing, then they are individually convex. The property of a convex function states that the sum of a set of convex functions is also a convex function [9]. Therefore, the objective function is also convex (see Appendix A). Hence, assuming the individual curve segments are monotonically decreasing, the resulting solution from TEM will represent a global optimum for the particular combination of continuous curve segments.

4.3 Research Methodology - Hypothesis

It is hypothesized that TEM will identify a better optimum than standard NLP, or NLP with manual inspection, for the MCT problem with PCCT functions. To test this hypothesis, three investigations using the same case study as Gerth and Pfeiffer [6] were conducted.

1. Case A - Results obtained by simply applying standard NLP to the case study.

2. Case B - Results obtained by manually inspecting results from Case A to determine

whether a better optimum could be determined. Since PCCT functions are involved,

the solution is very sensitive to the initial starting point [6]. Hence, manual

inspection involved determining a different starting point and running NLP again to

find an improved solution.

3. Case C - Because conducting the NLP solutions for a large number of C-T curve segment combinations is very time consuming, a computer program was written to automate the computations. The results of the computerized TEM are called Case C.

Since the results are computer generated, the hypothesis will be considered proven if the Case C results show a lower total cost than the Case A or Case B results. The results of Case C were validated by manually creating an EXCEL spreadsheet and solving for all possible combinations for the first experiment. The first experiment yielded thirty-six combinations. Since this is a tedious process, only the all-possible combinations from the first experiment, i.e., thirty-six runs, will be compared to the results generated in the first experiment for Case C. Results from the three investigations are presented in the next chapter.

4.4 TEM compared to Case Study Method

From Table 3, the total number of curves that define each of the features varies with the two feature levels, high (1) and low (−1). Thus, according to TEM, each segment curve that defines a particular feature is treated as a separate NLP case. This means that every feature curve segment will be used in combination with every other individual curve segment of the other features. For example, for experiment 1 in Table 3, the number of all possible combinations is thirty-six, as Equation 12 illustrates. The first row of Table 3 lists the feature letters, the second shows the particular feature C-T function level for experiment 1, and the third row contains the corresponding number of curve segments associated with the particular CT function.

Table 3. Number of CT curve segments for Experiment 1.

Feature:    A    B    C    D    E    F    G    H    I    J    K    L
Level:      low  low  low  low  high high high high high high low  low
Segments:   1    3    1    1    1    2    3    1    1    1    1    2

Using Equation 12, one obtains:

N = 1 × 3 × 1 × 1 × 1 × 2 × 3 × 1 × 1 × 1 × 1 × 2 = 36

Thus, for each of the sixteen experiments more than one optimization solution is computed. The number of curve segments for each feature determines the exact number of required solutions, which depends on the level conditions for that experiment.

5. TEM - SOFTWARE ARCHITECTURE

The Total Enumeration method (TEM) requires running many optimization runs

to solve even a single MCT problem with PCCT functions. This involves populating the

EXCEL sheet for every optimization run. This is a cumbersome and time consuming process to perform manually. Hence, TEM was automated with the help of a software program. The program utilizes the EXCEL spreadsheet and all its features relevant to the problem and includes custom Visual Basic routines to implement the TEM.

5.1 Architecture

The program code consists of the following sheets:

1. Initial Sheet - called "sheet 1" for the user interface

2. The C-T data sheet - called "sheet 2" for the user interface

3. All Possible Combinations sheet - "matrix generate", is protected, and not visible to

the user

4. The optimization sheet - called "ProcessorSheet", which is also protected, and not

visible to the user.

5. Results sheet - called "Output" for the user interface

Figure 6 shows a flow chart of the steps involved in executing the software program.

[Figure 6 charts the flow: the interface sheet (Sheet 1) collects the basic parameters that define the system; the all-possible-combinations matrix is generated; cost-tolerance data are entered on Sheet 2; the constants of the inverse exponential function are calculated; the processor sheet and the output sheet are created; the processor sheet is populated and the solver executed; and the results are extracted from the processor sheet to the output sheet.]

Figure 6. Flow chart of the TEM System.

5.2 User Interface

On the initial screen, "sheet 1" (see Figure 7), the user is required to input the number of features that constitute the assembly stack. With that, the user has to input the constraint on the assembly tolerance, the number of disjoints or "jumps" for each of the feature functions, and their corresponding partial derivatives with respect to the objective function. The process capability indices for the individual features as well as for the assembly are also entered on the same sheet. This data is necessary for the generation of the next screen. Once the data is entered and the button on the screen clicked, sheet 2 is generated (see Figure 8).


Figure 7. Sheet 1: Input Sheet.

[Figure 8 shows the CT data sheet, with an entry block labeled "Ct Curve For Feature ..." for each feature.]

Figure 8. Sheet 2: CT Data Sheet The next screen, "sheet 2", requires the user to input the parameters that define the various CT functions. This includes the tolerance constraints on each feature tolerance, (i.e. maximum and minimum), and the reference value (initial starting point).

The corresponding cost for each of these 3 points is also entered. The program automatically generates the required number of CT fields for each feature from the number of jump points entered in sheet 1. These values determine the C-T function for each feature in the model, whch is assumed to be of the inverse exponential form. There are two reasons that support this assumption. First, the results obtained by TEM are to be compared with those from case [6], which also, uses inverse exponential functions to define the C-T relationship. Second, the inverse exponential has been used as a standard in various research papers [2], [6], [14], [16], and [18]. From the above infomation, the program computes the coefficients of the inverse exponential function for each curve segment. the data requirements are now complete. The user then clicks the button for the program to generate the constants for the C-T function. The software program generates its own table, creating a "processor sheet" and sets up a matrix for running the object

"solver" within EXCEL. This step is executed in the background and is not displayed to the user. A series of solutions are generated and copied to the "Output" sheet (Figure 9

Sheet 3: Output Sheet). Figure 9. Sheet 3: Output sheet

The user can view results from all possible combinations, which are sorted in ascending order under "cost" in the figure. The series of solutions, which are the run numbers, are displayed for the user's benefit. Column "Tol Y" represents the resulting assembly tolerance. Thus each row of the output sheet represents an experiment for which a global minimum cost is obtained, for an assembly tolerance "Tol Y" which is distributed across the feature dimension tolerances. The minimum cost solution may not be suitable from the point of view of manufacturing processing times. In that case the second-best cost estimate and the respective optimum tolerances may be picked, depending on the user's discretion. The figure is truncated in terms of the number of runs displayed and the number of features represented (A-L considered for this research), as the output sheet was large and could not be condensed.

The above procedure applies to a single experiment in the sense of the study conducted by Gerth and Pfeiffer. The same procedure would be followed for each experiment, where the cost-tolerance values for the different features would assume different values depending on the levels required by the experimental design.

6. RESULTS

The results for the case in [6] were obtained using Microsoft Excel's optimization procedure (Solver) with an accuracy of 0.00001 and a 5% tolerance. There were no changes made to the analysis from [6], so that the results could be compared with those obtained from the TEM.

6.1 Case A - The Gerth and Pfeiffer method

In the original work by Gerth and Pfeiffer all CT functions were treated as continuous, and NLP was performed using the solver within Microsoft Excel. The results are shown in Table 4. The first two columns represent the experiment number and the minimum cost attained after performing NLP for each experiment. The rest of the columns indicate the optimum tolerance values for each of the features. Since these results serve as a baseline, further discussion of them is not required.

Table 4. Case A - Gerth and Pfeiffer method.

6.2 Case B - Case A augmented with Visual Inspection

Visual inspection of the results from Case A clearly indicated that the solution could be improved. For example, the first experiment in Table 5 yields an optimum tolerance of 0.0236 for F and 0.0174 for L. The cost-tolerance functions (Appendix C) indicate that the cost contribution of feature F would be 8 units at a tolerance of 0.02, whereas L contributes 175 cost units if it is manufactured at less than 0.02 tolerance. One can conclude that a saving of 167 cost units can be achieved by increasing the tolerance on L and decreasing it by the same amount on F. This would not have been true if the percentage contribution of L far exceeded that of F; it was observed that feature F contributed more than 1.5 times the amount feature L contributed. In this manner, all 16 solutions were subjected to visual inspection.

Table 5. Case B - Gerth and Pfeiffer method with visual inspection.

The new solutions were obtained by suitably changing the values of the optimum tolerances, while keeping them within constraints. All of the costs in Table 5, contrary to Table 4, are negative values. This can be interpreted as savings. Changing the tolerances manually enabled the solution to be driven out of the local optimum, which was the reason the resultant cost was so high with the original method. The main reason NLP dropped into a local optimum was the piecewise continuous nature of the CT function. The differences in the minimum cost solutions also justify the development of TEM, which was aimed at improving the results from the original case study.

6.3 Case C - Automated TEM

Gerth and Pfeiffer implemented a 2^(12−8), Resolution III experimental design, so there were 16 different experiments to be run. Since TEM requires a great number of minimum cost solutions for each experiment, it was only considered practical to run the automated TEM and produce the sixteen minimum costs for the respective experiments. In addition, the results of the first experiment were compared with manual calculations performed by means of an EXCEL spreadsheet, to verify the ability of the code to correctly compute the true global minimum. The results are provided in Table 6.

Table 6. Case C - Automated TEM

(In Table 6, features C, D, E, and H have an optimum tolerance of 0.022 for all the runs; for experiment 16 the minimum cost is −54.6046.)

The minimum cost corresponding to the first experiment, i.e., −77.5549, should have been identical to the global minimum from the manual calculations, which is −77.4392 (run number 29). The variation was observed because the solver crashed on that particular run. This was because the feature tolerances did not conform to the constraints. The developers of solver were contacted in order to fix the problem, whereafter a suitable change was made. The recommendation involved running the NLP as before and then manually checking the individual feature tolerance values. Values outside the individual feature tolerance constraints were manually changed to the maximum or minimum value of the constraint, depending on which one is closer. This difficulty was encountered for run number thirty-three also.
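The recommended manual correction amounts to clamping each out-of-range tolerance to the bound it violated, which can be sketched in Python as follows; the values here are illustrative, not from the runs described above.

    import numpy as np

    # illustrative values: one solver result with its feature limits
    tol   = np.array([0.0031, 0.0210, 0.1050])
    lower = np.array([0.0025, 0.0050, 0.0100])
    upper = np.array([0.0125, 0.0500, 0.1000])
    tol_fixed = np.clip(tol, lower, upper)   # each out-of-range value moved to the nearer bound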

The rest of the thirty-six all-possible-combinations solutions examined were consistent to the third decimal place. The corresponding optimum tolerance values for each of the costs are the same and verify the equality of the two solutions. The small discrepancy can be attributed to the variation in precision: the automated program was set to double precision, as all Excel back-end calculations were observed to be double precision, whereas the manual method was executed at double precision only for generating the constants of the inverse exponential function. Subsequent computations involving solver are performed at single precision, as solver reads the values from the Microsoft Excel user display sheet.

The other fifteen min-cost solutions in Case C are also global minimum solutions for their respective experiment runs. Experiment nine returns the overall global minimum cost solution of −93.5961. A graphical representation of the cost estimates attained from the three cases is shown below in Figure 10.

[Figure 10 plots the minimum cost (y-axis, roughly −100 to 275 cost units) for each experiment under the three cases; one legend entry reads "Min-Cost by Inspection".]

Figure 10. Graphical representation of the results from the three cases.

It is observed that, despite the fact that inspection lowered the cost significantly, it was unable to find a true global optimum for the problem. Even the significant improvement in cost must be attributed to intelligent engineering guesses and judgements.

TEM, on the other hand, aims to be simple and precise in finding the true solution to the minimum cost tolerancing problem for PCCT functions.

7. SUMMARY

The minimum cost tolerance optimization problem for C-T functions which are piecewise continuous, has not been dealt with in adequate detail. Most literature has dealt with discrete and continuous C-T relationships. There are several optimization techniques that have been researched and applied to various continuous C-T functions.

However, for piecewise continuous curves, very little literature outlines suitable techniques. The case study [6] is a precise example of how the unavailability of a correct search technique led to the selection of an incorrect method, which in turn led to incorrect solutions. Often a visual inspection of the solution may suggest that the method applied might have a flaw. The original results for the case study [6] were visually inspected and improved by making intelligent guesses. The resulting solution was still not an overall global minimum. The development of the Total Enumeration Method (TEM) was inspired by the unavailability of a technique to obtain a true global optimum.

A number of heuristics have been applied to a spectrum of problems similar to

[6], as enumerated in the relevant sections. However, the Total Enumeration Method

(TEM) has two distinct advantages when compared to the other heuristic methods.

1. It guarantees a global optimum, as all possible combinations are evaluated, and the

non-linear programming returns the global optimum for each given function.

2. Since this method returns the correct solution and leaves no room for judgement, it

is functionally simple. There are neither any complex constants to be assumed, nor

probabilistic decisions to be made.

TEM's simplicity provides another advantage: it was possible to automate the method. The software developed to solve the problem, in addition, requires only its

run time to present the solution. This proves to be very efficient and makes the use of

TEM more convenient, and less complicated.

The results from applying the automated TEM show:

1. TEM provides min-cost (MC) solutions that are significantly lower than those

achieved by standard NLP, and

2. TEM provides MC solutions that are significantly lower than those achieved by

standard NLP augmented by expert visual inspection.

7.1 Limitations and Disadvantages of TEM

TEM would be extremely tedious to implement if not for the automation. The

automation makes it possible to quickly generate results for all possible combinations of

feature curve segments that are considered. The disadvantage is that this is a

combinatorial process selection method, which requires the solution of many NLP problems. For a tolerancing problem with a large number of features and C-T relationships with a large number of discontinuities, the number of all possible

combinations could become extremely large. However, most features can only be

manufactured by a few alternative methods, so a given CT function will usually not have

more than 3 or 4 discontinuities. The greater problem is that many complex stackup

functions consist of a large number of component features. However, in such complex

stackup situations, the stackup function is not typically known as a single function.

Tolerance analysis is then usually performed by computer aided tolerancing software,

which uses Monte Carlo simulation to compute the assembly variation.

7.2 Significant Observations

A closer observation of the results from Case A (the Gerth and Pfeiffer method)

and Case C (TEM) suggested that the main difference between the two solutions was the

way in which feature L of the planet holder was optimized. Feature L is the distance

between the sun and the planet in the original case study [6], and its tolerance is the

distance variation. This feature made the maximum cost contribution for all of the

sixteen experiments in the original case study. TEM, because of the all-possible

combination principle, ran cases for levels, where feature L did not contribute to cost, i.e.

zero cost. The various segments of a PCCT function represent alternative processes by

which the tolerances corresponding to a particular segment can be achieved. Similarly,

the method in [6] resulted in solutions, which indicated that a particular process

(segment) was optimal for L. However, this was not true, as the solutions were local optima, unlike TEM, which yielded another segment as the optimum.

The tolerance range for which L does not make a contribution can be attained with an alternative process, which is inexpensive. Tightening the tolerance on some of the other features compensates for the widening of the tolerance on feature L. The increase in cost for tightening tolerances was only marginal compared to the large savings that resulted from making L's contribution to cost zero. In most cases, the processes employed for the respective component features, besides L, remained the same. Even where the optimization made a "jump" to the next segment, i.e., an alternative process, the increase in cost was insignificant. For example, according to the method in

[6], the optimal tolerance for L was 0.0174 (cost 175 units) for one of the experiments.

The optimum tolerance on G was 0.0248 (cost 0 units). The NLP in [6], while handling the PCCT function of feature L, had made a jump to another segment, an alternative process, as the starting point was 0.02 (cost 0 units), which was the reference. Thus the solution was driven to a local optimum. It was observed that tightening the tolerance on G to compensate for the widening of the tolerance on L would pull the solution out of the local optimum. The new cost estimates would be zero units for both of the features, G and L. Thus TEM overcomes this tendency of the search technique to make jumps to other segments and be driven to local optima.

7.3 Future Scope

TEM guarantees a global optimum by checking all possible combinations of the individual curve segments that represent all the feature tolerances in an assembly stack.

This would imply a huge number of runs if the assembly were to become complex.

Further research may develop techniques by which certain redundant runs would be identified and eliminated. For instance, features that have process capabilities and partial derivatives identical to those of other features, and which do not contribute to cost, can be eliminated. The automated TEM could in the future incorporate additional features, such as the capability to execute an entire DOE in a single run, or the option for the user to select the function that characterizes the C-T relationship.

However, the greatest gains would be achieved by developing a method by which

TEM could be integrated with MC simulation engines so that TEM could be applied to much more complex tolerancing problems.

8. REFERENCES

1. Dong, Z. (1997), Advanced Tolerancing Techniques, John Wiley & Sons, Inc.

2. Dong, Z., and Soom, A. (1990), "Automatic Optimal Tolerance Design for Related

Dimension Chains", Manufacturing Review - volume 3, no. 4, 262-267.

3. Floquet, P., Pibouleau L., Domenech, S. (1992), "Separation Sequencing Synthesis:

How to Use Simulated Annealing Procedures?", European Symposium on Computer

Aided Process Engineering - 2, S81-S86.

4. Gerth, R.J. (1994), "A spreadsheet approach to minimum cost tolerancing for rocket

engines", Computers and Industrial Engineering, volume 27, nos. 1-4, 549-552.

5. Gerth, R. J. (1996), "Engineering Tolerance: A review of Tolerance Analysis and

Allocation Methods", Engineering Design and Automation, volume 2, no. 1, 3-21.

6. Gerth, R.J., and Pfeiffer, T. (1999), "Minimum Cost Tolerancing Under Uncertain Cost

Estimates", IIE Transactions.

7. Greenwood, W.H., Chase, K.W., Loosli, B.G., and Hauglund, L.F. (1990), "Least Cost

Tolerance Allocation for Mechanical Assemblies with Automated Process

Selection", Manufacturing Review, volume 3, no. 1, 49-59.

8. Hillier, M.J. (1967), "The Cost Optimization of a System with Random Inputs and

Subject to Specified Constraints", Technical note - no.6, University of Waterloo,

Ontario: Department of Mechanical Engineering.

9. Hillier, F. S., and Lieberman, G. J. (1990), Introduction to Operations Research,

5th ed., McGraw-Hill.

10. Kim, S. H., and Knott, K. (1988), "A pseudo-boolean approach to determining least

cost tolerances", International Journal of Production Research, volume 26, no. 1,

157-167.

11. Ostwald, P. F., and Blake, M. O. (1989), "Estimating Cost Associated with

Dimensional tolerance - A Study", Manufacturing Review - volume 2, no. 4, 277-

282.

12. Pinedo, M. (1995), Scheduling: Theory, Algorithms, and Systems, Prentice Hall International

Series in Industrial and Systems Engineering.

13. Sayed, S. E. Y., and Kheir, N. A. (1985), "An Efficient Technique for Minimum-Cost

Tolerance Assignment", Simulation, volume 44, no. 4, 189-195.

14. Spotts, M. F. (August 1973), "Allocation of Tolerances to Minimize Cost of

Assembly", Journal of Engineering for Industry (Transactions of the ASME).

15. Taillard, E. (1990), "Some efficient heuristic methods for the flow shop sequencing

problem", European Journal of Operational Research, volume 47, 65-74.

16. Wu, Z., ElMaraghy, W. H., and ElMaraghy, H. A. (1988), "Evaluation of Cost-

Tolerance Algorithms for Design Tolerance Analysis and Synthesis", American

Society of Mechanical Engineers (ASME), Manufacturing Review - volume 1, no. 3,

168-179.

17. Zhang, H. C., and Huq, M. E. (1992), "Tolerancing techniques: the state-of-the-art",

International Journal of Production Research - volume 29, no. 2, 877-884.

18. Zhang, C., and Wang, H. P. (1993), "Tolerance analysis and synthesis for cam

mechanisms", International Journal of Production Research - volume 31, no. 5,

1229-1245.

APPENDICES

8.1 Appendix A

Convexity of Solution Space:

The concept of convexity is frequently used in operations research work. The validity of the proposed method also depends on whether or not the functions which define the feature C-T relationships are convex.

A function of a single variable, f(x), is a convex function if, for each pair of values of x, say x′ and x″,

f(λx′ + (1 − λ)x″) ≤ λf(x′) + (1 − λ)f(x″)

for all values of λ such that 0 ≤ λ ≤ 1. It is strictly convex if ≤ can be replaced by <.

Therefore, graphically, f(x) is convex if, for each pair of points on the graph of f(x), the line segment joining these two points lies entirely above or on the graph of f(x). To be more precise, if f(x) possesses a second derivative everywhere, then f(x) is convex if and only if d²f(x)/dx² ≥ 0 for all values of x (for which f(x) is defined). The concept of a convex function also generalizes to functions of more than one variable. The graphical representation is shown in Figure 11.

Figure 11. A convex function.

f(x_1, x_2, ..., x_n) is a convex function if, for each pair of points on the graph of f(x_1, x_2, ..., x_n), the line segment joining these two points lies entirely above or on the graph of f(x_1, x_2, ..., x_n). It is a strictly convex function if this line segment lies entirely above the graph except at the endpoints of the segment. Concave functions are defined in exactly the same way, except that "above" is replaced by "below". An important property of convex functions is that the sum of convex functions is a convex function [9].

A convex set is a collection of points such that, for each pair of points in the collection, the entire line segment joining these two points is also in the collection [9] (see Figure 12).

Figure 12. Example of a convex set.

Since the individual pieces of the piecewise continuous functions are monotonically decreasing, we conclude that they are individually convex. The property of convex functions which states that the sum of convex functions is a convex function helps us conclude that the piecewise continuous functions for our particular problem are also convex. Hence the resulting solution from non-linear programming will truly represent a global optimum.

8.2 Appendix B

Matrices for the individual component features that contribute to the assembly stack.

Cost Tolerance matrix for the Drive and Output Shaft Runout.

Tolerance    Value     Reference   Upper   Lower
Tight        0.0025        15        20      10
Reference    0.0075         0         0       0
Loose        0.0125       -30       -25     -35

Cost Tolerance matrix for Drive and Output Housing Runout.

Tolerance          Value    Reference   Upper   Lower
Tight              0.005        100       110      90
Tight Reference    0.02           5         7       0
Reference          0.025          0         0       0
Loose              0.05          -5         0      -7


Cost Tolerance matrix for Universal Gear pilot Runout.

Tolerance          Value    Reference   Upper   Lower
Tight              0.005        150       175     125
Tight Reference    0.02           5         8       0
Reference          0.025          0         0       0
Loose              0.05         -10        -5     -15

Cost Tolerance matrix for Sun Gear profile deviation.

Tolerance         Value    Reference   Upper   Lower
Q5 (Tight)        0.006         15        20      10
Q7 (Reference)    0.012          0         2      -1
Q9 (Loose)        0.025        -40       -35     -45

Cost Tolerance matrix for Planet Gear profile deviation.

Tolerance         Value    Reference   Upper   Lower
Q5 (Tight)        0.006         25        30      20
Q7 (Reference)    0.012          0         3      -2
Q9 (Loose)        0.025        -50       -45     -55

Cost Tolerance matrix for Planet Holder. Planet Journal center to Holder center distance.

Tolerance    Value   Reference   Upper   Lower
Tight        0.01        200       250     175
Reference    0.02          0         0       0
Loose        0.1           0         0       0

8.3 Appendix C

A continuous curve was created for each set, by solving for the three unknowns

(α, β, and γ) using the three data points in the matrix.

Features A and I

C_upper = 26.22 − 1.59×10⁴ · tol^1.31

Features B and G

Feature F

Feature J

C_reference = 25.43 − 7.49×10³ · tol^1.29

Feature K

Feature L

C_Reference = { 200   for 0.010 ≤ tol < 0.020
                0     for 0.020 ≤ tol ≤ 0.100 }
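The reference curve for feature L is exactly the kind of piecewise continuous function that TEM splits into segments; a direct Python sketch of it, for illustration:

    def cost_L_reference(tol):
        # Reference C-T curve for feature L (one jump at tol = 0.020):
        # TEM would treat each branch as a separate continuous segment.
        if 0.010 <= tol < 0.020:
            return 200.0
        if 0.020 <= tol <= 0.100:
            return 0.0
        raise ValueError("tolerance outside the feasible range")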