ESTIMATION OF THE SCALE PARAMETER

IN THE WEIBULL DISTRIBUTION

USING SAMPLES CENSORED

BY TIME AND BY NUMBER OF FAILURES

by

EUGENE H. LEHMAN, JR. and

R. L. ANDERSON

INSTITUTE OF STATISTICS

MIMEOGRAPH SERIES NO. 276

MARCH, 1961

TABLE OF CONTENTS

1.0  LIST OF TABLES
2.0  LIST OF FIGURES
3.0  INTRODUCTION
4.0  REVIEW OF LITERATURE
5.0  PROPERTIES OF THE WEIBULL DISTRIBUTION
6.0  NOTATION
7.0  MAXIMUM LIKELIHOOD ESTIMATOR OF α
     Test Procedure
     Derivation of the Maximum Likelihood Estimator
     Mean and Variance of the Maximum Likelihood Estimator of α
     Nonmonotonic Behavior of V, A, D as P, N Increase
     Asymptotic Properties of the Estimator
8.0  COST AND PRICE OF OBTAINING THE MAXIMUM LIKELIHOOD ESTIMATE OF α
     Determination of E(d)
     Discussion of Results
9.0  COMPUTATIONS
     Computer Programs
     Demonstration of the Program
10.0 SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS FOR FUTURE RESEARCH
LIST OF REFERENCES

1.0 LIST OF TABLES

1. Mean (μ) and standard deviation (σ) of life span (t) for various α and M

2. Bias (B), variance (V), and mean square error (D) of estimator (a) for selected N, R, and P

3. Cost (C) and price (U) of estimator (a) for selected M, J, N, R, and P

4. Minimum standardized test duration (S) for various P and M

5. Information (I = 1/D) obtained from estimator (a)

6. Ordinates of the standardized Weibull density function, w(s; M) = M s^(M-1) e^(-s^M)

2.0 LIST OF FIGURES

1. The standardized Weibull density function, w(s; M) = M s^(M-1) e^(-s^M), for M = 1/2, 1, 3/2, 2, 5/2 and 0 ≤ s ≤ 2

2. Variance, V, of the standardized estimator, a, as a function of N for various combinations of small R and small P

3. Mean square error, D, of the standardized estimator, a, as a function of P for various (R, N) combinations

4. Asymptotic mean square error, A, of the standardized estimator, a, as a function of P for various (R, N) combinations

5. Minimum standardized test duration, S, as a function of P for various M

6. Expected life span, μ, as a function of the scale parameter, α, of the Weibull density function for various values of the shape parameter, M

3.0 INTRODUCTION

Researchers in many areas are interested in estimating the life span of individuals, be they human beings, animals, automobiles, or picture tubes. Each of these individuals is characterized by having a specific moment of birth, a specific moment of death, and a finite measurable life span. For instance, an actuary wishes to estimate the longevity of a person, an animal trainer desires to know the expected life of a dog, an electronic technician must have some idea of the usability period of a transistor, or a retailer needs to know how long his goods may remain on his shelves. Further, it is desirable to learn the distribution of the life span of these individuals. These distributions vary from type to type, but past researchers

have noticed empirically that the probability of death by time t

(t at least as great as β) often can be adequately approximated by the Weibull [1951] distribution:

    F(t) = 1 - exp[-(t - β)^M / α],  t ≥ β                                (3.0.1)
         = 0,                        t < β.

The "shape" parameter (M), always positive and observed by previous authors to lie usually in the interval (0.5, 2.5), controls the general appearance of the corresponding density function, f(t) = F'(t). The "scale" parameter (α), also always positive, indicates the spread of the individual life spans on an absolute scale; α is also the expected value of (T - β)^M.

The "location" parameter (β), or minimum guaranteed life duration, is the starting time of the test, assumed in this dissertation to be zero.

It is proposed to investigate the bias, variance, and mean square

error of the maximum likelihood estimator (MLE) α̂ of α, based on various testing procedures for selected values of M. Starting with a fixed number (N) of items on test, two methods of conducting the test have been studied extensively in the past:

1. Terminate the test after a fixed number (R) of units has failed.

2. Terminate the test after a fixed time (T) has elapsed.

This paper will study plans in which both of these conditions are fulfilled, i.e., stop at time T if R units have failed, otherwise continue the test until R have failed.

Finally we will study the "cost per unit information" for each combination of N, R, and T, where "cost" is defined as a linear function of N and the expected duration of the test, and "information" is defined as the reciprocal of the mean square error.

4.0 REVIEW OF LITERATURE

Of all the diverse literature on life testing, the papers germane to this study are those pertaining to estimation with a censored sample. A censored sample is one in which all values outside of a given range are ignored but their number is known. It differs from a truncated sample, wherein the number of ignored observations is not known. There are two methods by which censoring is performed:

1. Stop the test after a predetermined number of units (R) have failed (0 < R < N).

2. Stop the test after a predetermined time (T) has elapsed (T > 0).

In the following, the variate of interest is time to failure or death.

Hald [1949] considered the maximum likelihood estimation of the mean and variance of a normal distribution for both truncated and censored samples. However, because of the noticeable asymmetry of the distribution of time to failure, his estimators are of less practical value.

Cohen [1950] obtained maximum likelihood estimators for singly and doubly truncated normal populations under censoring at a fixed failure time, the maximum likelihood equations being arranged for use in connection with ordinary normal tables.

Cohen [1951] derived estimators of the parameters in truncated Pearson type distributions by substituting sample moments for the population moments in the Pearson equations. He considered these estimators as first approximations to maximum likelihood estimators.

Gupta [1952] studied sampling from a normal population with censoring occurring after a fixed number of units out of a total sample of given size N had failed, rather than after a fixed time as by Cohen [1950]. He considered maximum likelihood estimators and their asymptotic variances, but did not attempt, as in this dissertation, to derive their small sample variances and biases.

Sarhan and Greenberg [1956] continued Gupta's work by finding a best linear unbiased estimator of the mean and a best estimator of the standard deviation of a normal distribution under double censoring, that is, the smallest k_1 and the largest k_2 values were counted but not measured.

Epstein and Sobel [1953] derived tests based on time of failure of the first R out of N items from an exponential distribution, the remaining N-R failure times again being unknown.

Epstein [1954] continued his work by giving one test procedure in which censoring occurs after R failures with d, the duration of the test, a random variable, and a second test procedure in which the experiment terminates after T hours, so that r, the number of recorded failures, is a random variable. Epstein considered only the Weibull distribution with parameter M = 1, that is, the ordinary exponential. The estimator in the first case is:

    α̂_R = (1/R)[(N-R)t_R + Σ_{r=1}^{R} t_r],

where t_r ≥ 0 is the life span of the rth successive failure. The quantity 2Rα̂_R/α is a χ² variate with 2R degrees of freedom. Thus the estimator α̂_R has expectation α and variance α²/R; it has minimum variance and is unbiased. In the second case the estimator is:

    α̂_r = (1/r)[(N-r)T + Σ_{i=1}^{r} t_i],  r ≠ 0.

There is no MLE if r = 0. The estimator α̂_r is neither unbiased nor minimum variance, but 2rα̂_r/α is asymptotically distributed as χ² with 2r degrees of freedom. Since it is a maximum likelihood estimator, it is asymptotically minimum variance and is consistent.

Epstein and Sobel [1954] investigated the properties of maximum likelihood estimators of α in the exponential distribution with fixed R, but added nothing to Epstein's earlier [1954] article.

Jaech [1955], in an unpublished manuscript at Hanford, Washington, set forth a test to determine if two different kinds of tubes, each distributed as Weibull with the same M, have a common α. He proposed stopping the test when R_j units of population j (j = 1, 2) have failed. He used the estimator α̂_{R_0} of this thesis as his α̂_{R_j}, showed that 2R_j α̂_{R_j}/α follows the χ² density function with 2R_j degrees of freedom, and derived the power of his test.

Deemer and Votaw [1955] obtained a maximum likelihood estimator for the parameter c in the exponential distribution ce^(-cx), x nonnegative, for time censored sampling.

Herd [1956] considered multicensored sampling, that is, k_i (≥ 0) units removed from the test at the time of the ith ordered failure; the test terminates after R failures.

Tilden [1957], in a Master's thesis at Rutgers, considered two very simple statistics, the median and the half-range, where x_i is the life span of the ith successive failure and the median is unconventionally defined as x_{N/2} for even N.

Kao [1956] considered censored sampling and stated that in his experience life spans follow the Weibull distribution very frequently indeed, with the exponent M having a value in the neighborhood of 1.7.

Mendenhall [1957] considered the case of a population made up of two exponential distributions (that is, M = 1) with different scale parameters α_i (i = 1, 2). He derived MLE's and showed they had large biases and variances for small sample size or short test duration. He discussed an "adjusted estimation procedure" suitable in such cases when at least the order of magnitude of the ratio of the α_i is known. Mendenhall and Hader [1958] covered substantially this same material under the same title.

Mendenhall [1958] compiled a thorough bibliography on life testing research.

Zelen [1959] analyzed factorial experiments in life testing. Let there be at least two factors affecting life span (temperature and voltage, say), and let the life test be conducted using several levels of each factor. Procedures are given to estimate main effects and interactions as in ordinary factorial analysis, confidence limits are presented, and the robustness of the tests is discussed when the shape parameter in the Weibull distribution (M in this paper) is poorly guessed. (The tests are not robust.) In each case the parameter α = E(t^M) is being estimated, and the effect of the factors and interactions upon α studied.

Zelen [1960] gives likelihood ratio tests for analyzing the results of a two-way classification with respect to the scale factor of the exponential distribution. When effect A occurs at level i (i = 1, ..., a) and effect B occurs at level j (j = 1, ..., b), then the scale parameter (α in this dissertation) is called α_ij. Zelen defines

    ln α_ij = ln m + ln a_i + ln b_j + ln c_ij,

where ln m is the analog of the mean effect, ln a_i and ln b_j are analogs of the main effects, and ln c_ij is the analog of the interaction in the ordinary fixed effects model of the analysis of variance.

Zelen and Dannemiller [1961] emphasize the non-robustness of tests on the scale parameter when the exponential distribution is erroneously assumed to be correct under four testing plans: fixed sample size, fixed sample size with censoring, truncated nonreplacement, and sequential.

Kao [1956] presented a graphical method of estimating the parameters when the population consists of a mixture of two different Weibull-distributed variates.

Epstein [1960] warns against assuming all life spans are distributed exponentially. He gives a set of tests to determine if the exponential distribution is appropriate to the data in hand, and suggests use of the Weibull distribution with shape parameter M other than 1.

Mendenhall and Lehman [1960] consider the usual MLE for α in the Weibull distribution with β = 0 and a fixed time of censoring. The first two negative moments of the binomial distribution, E(r^(-k)) (k = 1, 2), were needed to compute the mean and variance of α̂. A Beta-function approximation to the binomial was used for these computations.

5.0 PROPERTIES OF THE WEIBULL DISTRIBUTION

This dissertation is concerned with the estimation of the scale

parameter α in the Weibull cumulative distribution function with no "guarantee" period, i.e., β of equation (3.0.1) is zero:

    F(t) = 1 - exp(-t^M/α),  t non-negative                               (5.0.1)
         = 0,                t negative.

The properties of this function vary considerably as M proceeds from 0 to ∞. It is easier to study these effects if we examine the characteristics of the accompanying density function f(t) = F'(t) for various positive values of the shape parameter M:

    f(t) = (M t^(M-1)/α) exp(-t^M/α),  t non-negative                     (5.0.2)
         = 0,                          t negative,

whence by integration we obtain

    E(t) = ∫_0^∞ t f(t) dt = α^(1/M) (1/M)!                               (5.0.3)

and

    V(t) = ∫_0^∞ [t - E(t)]² f(t) dt = α^(2/M) [(2/M)! - ((1/M)!)²],      (5.0.4)

where we employ the notation (K/M)! = ∫_0^∞ x^(K/M) e^(-x) dx because of its brevity over the more conventional symbol Γ(K/M + 1).

Let s = t/α^(1/M) be a standardized time variate. The density function for s is

    w(s) = M s^(M-1) exp(-s^M),  s non-negative                           (5.0.5)
         = 0,                    s negative;

Table 6 lists ordinates and Figure 1 presents graphs of w(s) for M = 0.5(0.5)2.5 and 0 ≤ s ≤ 2.

For M < 1, w(0) is undefined, although the corresponding cumulative distribution function W(0) exists and has the value 0. For positive s, w'(s) is always negative and w''(s) positive, indicating an everywhere convex, monotonically decreasing curve with no mode. If we compare W(s) for two values of M both less than 1, we note the curve of lesser M exceeds that of greater M for very small values of s; then they cross; thereafter the curve of greater M is above that of lesser M.

If M = 1, w(s) is the ordinary exponential density function, in general appearance similar to the above except that it is defined at s = 0 and has a mode there of value 1.

For 1 < M < 2, w(0) = 0, and w(s) increases as one moves to the right to a genuine mode at the point s_max = ((M-1)/M)^(1/M) < 1, where

    w(s_max) = w_max.
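Since (K/M)! is just Γ(K/M + 1), equations (5.0.3) and (5.0.4) can be evaluated directly; a minimal sketch (the function name is ours, not part of the text):

```python
import math

def weibull_mean_var(alpha, M):
    """E(t) and V(t) for F(t) = 1 - exp(-t**M / alpha), per (5.0.3)-(5.0.4)."""
    g1 = math.gamma(1 / M + 1)          # the "(1/M)!" of the text
    g2 = math.gamma(2 / M + 1)          # the "(2/M)!" of the text
    mean = alpha ** (1 / M) * g1
    var = alpha ** (2 / M) * (g2 - g1 ** 2)
    return mean, var
```

For M = 1 this reduces to the exponential case, E(t) = α and V(t) = α²; for M = 1/5 and α = 1 it reproduces the enormous variance 10! - (5!)² computed later in this section.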

The second derivative, w''(s), is negative from s = 0 through and a little beyond s_max, indicating concavity at the left portion of the curve. There exists, however, the point

    s_ip = {[3(M-1) + √((M-1)(5M-1))]/(2M)}^(1/M),

where w''(s) = 0, an inflection point, to the right of which the curve becomes convex as it asymptotically approaches the horizontal axis. This inflection point is to the left of, at, or to the right of s = 1, according to whether M < √2, M = √2, or M > √2.
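These turning points are easy to check numerically; a small sketch (names ours), using central differences so no derivative formulas need be trusted:

```python
import math

def w(s, M):
    """Standardized Weibull density w(s; M) = M s**(M-1) exp(-s**M)."""
    return M * s ** (M - 1) * math.exp(-s ** M)

def s_max(M):
    """Mode of w(s; M), valid for M > 1."""
    return ((M - 1) / M) ** (1 / M)

def s_ip(M):
    """Inflection point (positive root of w'' = 0) quoted in the text."""
    u = (3 * (M - 1) + math.sqrt((M - 1) * (5 * M - 1))) / (2 * M)
    return u ** (1 / M)
```

At M = √2 the inflection point falls exactly at s = 1, as asserted above.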

As M approaches 2, s_max moves slowly to the right, although always well to the left of 1, while w_max increases.

If M = 2, there is an inflection point at the origin; otherwise the curve resembles the case 1 < M < 2. Note that y = 2s² follows the χ² distribution with 2 degrees of freedom.

As M continues to increase from 2 to ∞, s_max approaches 1, and w_max approaches M/e; that is, it increases without limit in proportion to M. The two roots of w'',

    s_ip = {[3(M-1) ± √((M-1)(5M-1))]/(2M)}^(1/M),                        (5.0.8)

are positive; the smaller is less than 1, the larger greater than 1. Both roots approach 1 as M → ∞. Thus the peak becomes very tall and narrow about 1, until in the limit the entire distribution is concentrated there.

A particularly intriguing feature of the Weibull density function is the surprising fact that the probability P[s > 1] of a failure after s = 1 (or after t = α^(1/M)) is:

    1 - W(1) = ∫_1^∞ M s^(M-1) exp(-s^M) ds = exp(-1) = 1/e = .368

for all M. If M is small, the 36.8% of the distribution that exceeds 1 is greatly skewed to the right. For instance, if M = 1/5, the probability of a very late failure, say beyond s = 32, is:

    1 - W(32) = exp(-32^(1/5)) = e^(-2) ≈ .135,

indicating 13.5% of all failures occur even later than that extreme value. This is, however, consistent with the variance as given in (5.0.4); for if M = 1/5 (an unusually low value),

    V(s) = 10! - (5!)² = 3,628,800 - 14,400 = 3,614,400.

If M is large, there is still 36.8% of the area to the right of s = 1, but the distribution is much less skewed. For instance, if M = 10 (a very large value),

    1 - W(1) = .368                                                       (5.0.12)

as always, but

    1 - W(1.1) = e^(-1.1^10) = e^(-2.59) = .075,                          (5.0.13)

indicating that 29.3% of cases fail in the narrow range (1, 1.1). However, this is consistent with the very small variance given by (5.0.4), for if M = 10:

    V(s) = (2/10)! - ((1/10)!)² ≈ .013.

Thus if a manufacturer has his choice of producing one of several similar items but with different M, he may wish to select that item for which M is greatest, for it will have the most uniform life span.
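The tail computations above are easy to confirm; a short check (assuming nothing beyond W(s) = 1 - exp(-s^M)):

```python
import math

def W(s, M):
    """Standardized Weibull CDF, W(s; M) = 1 - exp(-s**M)."""
    return 1 - math.exp(-s ** M)

# P[s > 1] = 1/e = .368 regardless of M
for M in (0.2, 0.5, 1, 2, 10):
    assert abs((1 - W(1, M)) - math.exp(-1)) < 1e-12

late = 1 - W(32, 0.2)                        # M = 1/5: failures beyond s = 32
narrow = (1 - W(1, 10)) - (1 - W(1.1, 10))   # M = 10: share failing in (1, 1.1)
```

Here late ≈ .135 and narrow ≈ .293, matching the 13.5% and 29.3% figures above.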

It is now instructive to study the notion of "hazard." The hazard is the instantaneous tendency to fail; that is, it is the probability of failing in a given time interval after having survived up to the beginning of that interval. Let the hazard [z(t)] be defined thus:

    z(t) = f(t)/[1 - F(t)] = M t^(M-1)/α,  t non-negative.                (5.0.15)

Then the conditional probability of failure in the interval (t, t + dt), given the individual has survived until time t, is proportional to z(t). If M is less than 1, t has a negative exponent, indicating a decreasing hazard. For instance, in the case of newborn humans, M is less than 1, because the probability of death in the first few moments is high and decreases rapidly with each hour of life. If M = 1, the hazard is constant. If M exceeds 1, the hazard increases, as in the case of aged humans, wherein the probability of death during the following year increases with age. The hazard increases at a decreasing rate, a constant rate, or an increasing rate, according to whether M < 2, M = 2, or M > 2. If M approaches either zero or infinity, so does the hazard.

6.0 NOTATION

a = α̂/α, the standardized estimator [see Section 7.3]

A = Cramér-Rao lower bound on mean square error [see equations (7.5.1) and (7.5.27)]

α = scale parameter in the Weibull distribution function; also E(t^M) [see equation (5.0.1)]

α̂ = maximum likelihood estimator of α [see Section 7.2]

B(a) = fractional bias of α̂ = E(a) - 1 [see equation (7.3.15)]

β = starting time of test; β = 0 in this dissertation

C = cost of a test = N + JE(d)/α^(1/M) [see equation (8.0.1)]

d = duration of test [see Section 7.1]
    = T, if the Rth failure occurs on or before T
    = t_R, if the Rth failure occurs after T

D(a) = mean square error = (Bias)² + Variance = [B(a)]² + V(a) [see equation (7.3.28)]

E_0 = incomplete expectation operator for the Case R_0 [see equation (7.3.2)]

E_r = incomplete expectation operator for the Cases r [see equation (7.3.2)]

E_H[x(r)] = Σ_{r=R}^{N} x(r) h(r), the incomplete expectation of x(r), where x(r) is any function of r

f(t) = Weibull density function [see equation (5.0.2)]

F(t) = Weibull cumulative distribution function [see equation (5.0.1)]

g*(t_R) = marginal density function of t_R, the Rth failure time
        = R C(N,R) [F(t_R)]^(R-1) [1 - F(t_R)]^(N-R) f(t_R) [used in deriving equation (8.0.5)]

g(q_R) = R C(N,R) p_R^(R-1) q_R^(N-R)

h(r) = binomial probability of r failures (out of N on test) up to time T

H(R) = Σ_{r=R}^{N} h(r), the upper-tail binomial cumulative probability, sometimes abbreviated to simply H

I = 1/D, the definition of information used here [see equation (8.0.2)]

J = constant in the cost function, a ratio: the cost per unit time of testing relative to the cost of placing one item on test

ℓ_i = ln q_i = -t_i^M/α [see Section 7.2]

L = ln Q = -T^M/α [see Section 7.2]

M = shape parameter in the Weibull distribution function

P = F(T) = 1 - exp(-T^M/α) = probability that a given life span does not exceed T

q = 1 - p = exp(-d^M/α)

q_i = 1 - p_i = exp(-t_i^M/α)

Q = 1 - P = exp(-T^M/α)

r = actual number of failures in the test (up to time d) [see Section 7.1]
    = R, R + 1, ..., N

R = minimum number of failures required before test terminates [see Section 7.1]

R_0 = case in which r = R, d > T

s = t/α^(1/M), a standardized time variate [defined after equation (5.0.4)]

S = T/α^(1/M), minimum duration of test expressed in units of the standardized time variate [see equation (9.1.2)]

S_mn = C(N,R) ∫_0^Q p^(R-n) q^(N-R+n-1) ℓ^m dq [see equation (7.3.9)]

t = continuous time variate

t_r = time to failure of the rth individual

T = minimum duration of the test [see Section 7.1]

U = C/I = CD = cost per unit information, or price, of the test [see equation (8.0.2)]

v(d) = density function of test duration [used in equation (8.0.5)]
     = g*(d), if d = t_R > T
     = H (a discrete mass), if d = T

v(q) = density function of survival probability q
     = g(q), if q = q_R < Q
     = H (a discrete mass), if q = Q

V = variance

w(s) = density function for the standardized time variate s [see equation (5.0.5)]

W(s) = cumulative distribution function for s [see equation (5.0.9)]

y(q) = p^(R-1) q^(N-R) (-ℓ)^(1/M), the integrand used in computing E(d) [see equation (8.0.6)]

z(t) = f(t)/[1 - F(t)] = M t^(M-1)/α, the hazard function [see equation (5.0.15)]

μ = E(t) = α^(1/M) (1/M)! [see equation (5.0.3)]

σ² = V(t) = α^(2/M) [(2/M)! - ((1/M)!)²] [see equation (5.0.4)]

7.0 MAXIMUM LIKELIHOOD ESTIMATOR OF α

7.1 Test Procedure

This dissertation will consider experimental and analytical procedures to estimate the scale parameter (α) in the Weibull distribution (5.0.1), where the shape parameter (M) is assumed known in advance of testing. Only maximum likelihood estimators (α̂) will be considered; α̂^(1/M) is the MLE of α^(1/M) = E(t)/(1/M)!.

Given N items put on test at t = 0, the maximum information on α would result from an experiment in which all items were allowed to fail; hence, the time to failure (t_i) would be available for all items i (i = 1, 2, ..., N). In our terminology, this would mean that R = N and T = 0. The mean of the t_i^M would be the minimum variance unbiased estimator of α, since E(t^M) = α and Var[(1/N) Σ_{i=1}^{N} t_i^M] = α²/N. The Cramér-Rao lower bound for an unbiased estimator of α is α²/N.
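This special case is easy to check by simulation; a sketch using only the standard library (function names are ours):

```python
import random

def mle_full_sample(times, M):
    """MLE of alpha when all N items fail (R = N, T = 0): mean of t_i**M."""
    return sum(t ** M for t in times) / len(times)

random.seed(1)
alpha, M = 2.0, 1.5
# random.weibullvariate(scale, shape): scale alpha**(1/M) gives E(t**M) = alpha
sample = [random.weibullvariate(alpha ** (1 / M), M) for _ in range(20000)]
est = mle_full_sample(sample, M)
```

Here est lands near α = 2, with standard error α/√N ≈ .014.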

However, since the cost of conducting the experiment is a function of the length of time (d) to termination of testing as well as the number of items (N) on test, it is necessary to examine a variety of test procedures and choose one for which the cost per unit information is minimized. In this dissertation, it is assumed that cost is linear in E(d) and N.

Previous test procedures have considered termination either at time T or after R failures. In this dissertation, the test will be continued until both of the following events have occurred:

(1) At least R units have failed.

(2) At least T time-units of testing have elapsed.

(In the event all N units fail before T, we consider d = T as the duration of the test even though there are no units remaining to fail between t_N and T. In a practical situation, T would be set small enough to make the probability of this event negligible.)
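The combined stopping rule above can be sketched as follows (function and variable names are ours):

```python
def run_test(failure_times, R, T):
    """Return (d, r): duration and recorded failures under the combined plan.

    The test ends only after BOTH (1) at least R failures and (2) at least
    T time-units have occurred; if all units fail before T, d is still T.
    """
    times = sorted(failure_times)          # ordered failure times
    r_by_T = sum(1 for t in times if t <= T)
    if r_by_T >= R:                        # Cases r: d = T, r = R, ..., N
        return T, r_by_T
    return times[R - 1], R                 # Case R_0: wait for the Rth failure
```

With failures at 0.2, 0.5, 1.1, 3.0 and T = 1.0, the plan stops at T when R = 2 (a Case r), but waits for the third failure at 1.1 when R = 3 (Case R_0).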

7.2 Derivation of the Maximum Likelihood Estimator

Two different cases cover all situations; these will be called Case R_0 and Case r. The former case is for the situation in which fewer than R failures have occurred before T; hence, there will be exactly R failures at the termination of testing. The test duration will be greater than T; i.e., d > T. The latter case occurs if R or more failures occur up to time T; the test duration will be d = T and the number of failures (r) will be at least R, i.e., r = R, R + 1, ..., N. These N - R + 1 situations will be designated as Case r.

In Case R_0 (r = R, d > T), the likelihood for the case of the Rth

ordered failure occurring at time t_R > T is

    [N!/(N-R)!] (M/α)^R Π_{i=1}^{R} t_i^(M-1) exp[-(1/α)(Σ_{i=1}^{R} t_i^M + (N-R)t_R^M)].    (7.2.1)

[In this and all other cases where the symbols Π and Σ are used, we understand that if the upper limit is less than the lower limit, then

    Π_{i=a}^{a-1} x(t_i) = 1  and  Σ_{i=a}^{a-1} x(t_i) = 0,

while if the limits are equal,

    Π_{i=a}^{a} x(t_i) = Σ_{i=a}^{a} x(t_i) = x(t_a),

where x is any function of t_i.] Hence the logarithm of the likelihood differs by a constant independent of α from

    -R ln α - (1/α)[Σ_{i=1}^{R} t_i^M + (N-R)t_R^M].                      (7.2.2)

The MLE of α for this case is

    α̂_{R_0} = (1/R)[Σ_{i=1}^{R} t_i^M + (N-R)t_R^M].                      (7.2.3)

It is convenient to set ℓ_i = ln q_i = -t_i^M/α; i.e., dq_i = -f(t_i)dt_i and q_i = 1 - F(t_i). In terms of this ℓ-notation

    a_{R_0} = α̂_{R_0}/α = -(1/R)[Σ_{i=1}^{R} ℓ_i + (N-R)ℓ_R].

The probability of Case R_0 occurring is simply

    Σ_{r=0}^{R-1} C(N,r) [F(T)]^r [1 - F(T)]^(N-r) = 1 - H.

In Case r (r = R, R + 1, ..., or N; d = T), the likelihood for the case of r failures within the time interval (0, T) is

    C(N,r) Π_{i=1}^{r} f(t_i) [1 - F(T)]^(N-r)
        = C(N,r) (M/α)^r Π_{i=1}^{r} t_i^(M-1) exp[-(N-r)T^M/α - (1/α)Σ_{i=1}^{r} t_i^M].

The logarithm of the likelihood differs by a constant independent of α from

    -r ln α - (1/α)[Σ_{i=1}^{r} t_i^M + (N-r)T^M].

Hence the MLE of α for this case is

    α̂_r = (1/r)[Σ_{i=1}^{r} t_i^M + (N-r)T^M];  r = R, R + 1, ..., N.

In terms of the ℓ-notation

    a_r = α̂_r/α = -(1/r)[Σ_{i=1}^{r} ℓ_i + (N-r)L],

where L = ln Q = -T^M/α. The probability of any single Case r occurring is h(r) = C(N,r) P^r Q^(N-r), and the probability of some Case r is

    Σ_{r=R}^{N} h(r) = H.
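The two case estimators can be combined into one routine; a sketch (names ours; the shape parameter M is assumed known):

```python
def mle_alpha(failure_times, N, R, T, M):
    """MLE of alpha under the combined plan.

    Case r   (r >= R failures by T):  alpha_hat = [sum t_i**M + (N-r) T**M] / r
    Case R_0 (fewer than R by T):     alpha_hat = [sum_{i<=R} t_i**M + (N-R) t_R**M] / R
    """
    times = sorted(failure_times)
    r = sum(1 for t in times if t <= T)
    if r >= R:                                     # test stopped at d = T
        return (sum(t ** M for t in times[:r]) + (N - r) * T ** M) / r
    t_R = times[R - 1]                             # test stopped at d = t_R > T
    return (sum(t ** M for t in times[:R]) + (N - R) * t_R ** M) / R
```

For M = 1 this reproduces Epstein's exponential-case estimators reviewed in Section 4.0.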

7.3 Mean and Variance of the Maximum Likelihood Estimator of α

We note that E(α̂) = α E(a) and V(α̂) = α² V(a), where a = α̂/α. It is more convenient to study the random variable a. We note

    E(a^k) = E_0(a^k) + Σ_{r=R}^{N} E_r(a^k),                             (7.3.1)

where E_0(a^k), the first term in (7.3.2), is the incomplete expectation of a^k for Case R_0, and E_r(a^k), the second term in (7.3.2), is the incomplete expectation of a^k for each Case r. That is:

    E(a^k) = R C(N,R) ∫_0^Q q_R^(N-R) { ∫_{q_R}^1 ··· ∫_{q_R}^1 [-(1/R)(Σ_{i=1}^{R-1} ℓ_i + (N-R+1)ℓ_R)]^k Π_{i=1}^{R-1} dq_i } dq_R
             + Σ_{r=R}^{N} C(N,r) Q^(N-r) ∫_Q^1 ··· ∫_Q^1 [-(1/r)(Σ_{i=1}^{r} ℓ_i + (N-r)L)]^k Π_{i=1}^{r} dq_i,      (7.3.2)

where

    Q = e^(-T^M/α).

In (7.3.2) we consider t_1, ..., t_{R-1} unordered but all less than t_R; that is, (-ℓ_1), ..., (-ℓ_{R-1}) unordered but all less than (-ℓ_R).


Let us first evaluate E(a). We make use of the following definite integrals, if R > 1:

    ∫_q^1 ··· ∫_q^1 Π_{i=1}^{R-1} dq_i = (1 - q)^(R-1) = p^(R-1),

    ∫_q^1 ··· ∫_q^1 ℓ_j Π_{i=1}^{R-1} dq_i = -p^(R-2)(p + qℓ),  j = 1, ..., R-1,      (7.3.4)

and

    ∫_Q^1 ··· ∫_Q^1 ℓ_j Π_{i=1}^{r} dq_i = -P^(r-1)(P + QL),  j = 1, ..., r.          (7.3.5)

Hence for k = 1, the part inside the curly brackets of the first term of (7.3.2) is simply (1/R) times

    (R-1)p^(R-2)(p + qℓ) - (N-R+1)ℓ p^(R-1),

where the subscripts R on p, q, and ℓ have been deleted for convenience. If R = 1, the part inside the curly brackets is merely (-Nℓ). Similarly, the part inside the curly brackets of the second term of (7.3.2) is (1/r) times

    r P^(r-1)(P + QL) - (N-r)L P^r = r P^(r-1)(P + L) - NLP^r.

Hence

    E(a) = C(N,R) ∫_0^Q q^(N-R) [(R-1)p^(R-2)(p + qℓ) - (N-R+1)ℓ p^(R-1)] dq
           + Σ_{r=R}^{N} C(N,r) Q^(N-r) [P^(r-1)(P + L) - NLP^r/r].                   (7.3.8)

In order to perform the integration in the first term of (7.3.8), a procedure will be introduced which again will be needed in evaluating E(a²). Let

    S_mn = C(N,R) ∫_0^Q p^(R-n) q^(N-R+n-1) ℓ^m dq;  m = 0, 1, 2;  n = 1, 2, 3.      (7.3.9)

Hence

    E(a) = (R-1)(S01 + S12) - (N-R+1)S11 + H(1 + L/P) - NL E_H(1/r).                  (7.3.10)

A well-known relationship between the beta and binomial cumulative distribution functions is

    R C(N,R) ∫_0^P x^(R-1) (1 - x)^(N-R) dx = Σ_{r=R}^{N} C(N,r) P^r Q^(N-r) = H,     (7.3.12)

from which R S01 = 1 - H. By use of integration by parts on S11, it can be shown that

    (N-R+1)S11 = (R-1)S12 - S01 + (QL/P) h(R).                                        (7.3.13)

Applying (7.3.13) and then (7.3.12) to (7.3.10), we have

    E(a) = R S01 - (QL/P) h(R) + H(1 + L/P) - NL E_H(1/r)
         = 1 + (L/P)[H - Q h(R)] - NL E_H(1/r).                                       (7.3.14)

Hence the fractional bias of α̂ as an estimator of α is

    B(a) = E(a) - 1 = (L/P)[H - Q h(R)] - NL E_H(1/r).                                (7.3.15)
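Expression (7.3.15) is straightforward to evaluate; a sketch (names ours) that also checks two limiting facts: the bias vanishes when R = N, and for N = 2, R = 1, P = 1/2 it equals (ln 2)/4, the value appearing in the numerical example of Section 7.4:

```python
import math

def bias(N, R, P):
    """Fractional bias B(a) of (7.3.15); L = ln Q, Q = 1 - P."""
    Q = 1 - P
    L = math.log(Q)
    h = [math.comb(N, r) * P ** r * Q ** (N - r) for r in range(N + 1)]
    H = sum(h[R:])
    EH_inv_r = sum(h[r] / r for r in range(R, N + 1))
    return (L / P) * (H - Q * h[R]) - N * L * EH_inv_r
```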

In order to obtain the variance of a, we first determine E(a²) by setting k = 2 in (7.3.2). After deleting the subscript R on ℓ and q, and bearing in mind the bracketed statement after (7.2.1), the squares inside the curly brackets expand into the sums

    Z1 = Σ_{j} ℓ_j²;  Z2 = Σ_{i≠j} ℓ_i ℓ_j;  Z3 = 2(N-r)L Σ_{j} ℓ_j;  Z4 = (N-r)²L²,

so that (Σ_j ℓ_j + (N-r)L)² = Z1 + Z2 + Z3 + Z4. In this case the following definite integrals are appended to (7.3.4) and (7.3.5):

    ∫_q^1 ··· ∫_q^1 ℓ_j² Π_{i=1}^{R-1} dq_i = p^(R-2)(2p + 2qℓ - qℓ²);                (7.3.18)

    ∫_q^1 ··· ∫_q^1 ℓ_j ℓ_k Π_{i=1}^{R-1} dq_i = p^(R-3)(p + qℓ)²,  j ≠ k;            (7.3.19)

    ∫_Q^1 ··· ∫_Q^1 ℓ_j² Π_{i=1}^{r} dq_i = P^(r-1)(2P + 2QL - QL²);                  (7.3.20)

and

    ∫_Q^1 ··· ∫_Q^1 ℓ_j ℓ_k Π_{i=1}^{r} dq_i = P^(r-2)(P + QL)²,  j ≠ k;  j, k = 1, ..., r.   (7.3.21)

If R = 1, the first curly bracket in (7.3.2) becomes (-Nℓ)². Applying (7.3.4), (7.3.5) and (7.3.18) through (7.3.21), we have

    E(a²) = (C(N,R)/R) ∫_0^Q q^(N-R) [(R-1)p^(R-2)(2p + 2qℓ - qℓ²) + (R-1)(R-2)p^(R-3)(p + qℓ)²
                - 2(N-R+1)(R-1)ℓ p^(R-2)(p + qℓ) + (N-R+1)² ℓ² p^(R-1)] dq
            + Σ_{r=R}^{N} (C(N,r) Q^(N-r)/r²) [r P^(r-1)(2P + 2QL - QL²) + r(r-1)P^(r-2)(P + QL)²
                - 2r(N-r)L P^(r-1)(P + QL) + (N-r)² L² P^r].                          (7.3.22)

This result is true for both R = 1 and R > 1. After expanding in powers of p, q, and ℓ, and using the S-notation of (7.3.9), we have

    E(a²) = (1/R)[R(R-1)S01 - 2(N-R+1)(R-1)S11 + 2(R-1)²S12 + (N-R+1)²S21
                - (R-1)(2N-2R+3)S22 + (R-1)(R-2)S23]
            + H(P + L)²/P²
            + [2 + 2QL/P - QL²/P - (P + QL)²/P² - 2NL - 2NQL²/P - 2NL²] E_H(1/r)
            + N²L² E_H(1/r²).                                                         (7.3.23)

Again by use of integration by parts, it can be shown that

    (N-R+1)S21 = (R-1)S22 - 2S11 + (QL²/P) h(R)                                       (7.3.24)

and

    (N-R+2)S22 = (R-2)S23 - 2S12 + (Q²L²/P²) h(R).                                    (7.3.25)

Applying (7.3.24), (7.3.25) and (7.3.13) to the first bracketed part of (7.3.23), and finally using (7.3.12) and collecting terms, we have

    E(a²) = 1 + 1/R + H[-1/R + 2L/(RP) + L²/P²]
            + (QL h(R)/(RP))[L(NP - R + 1)/P - 2]
            + [2 + 2QL/P - QL²/P - (P + QL)²/P² - 2NL - 2NQL²/P - 2NL²] E_H(1/r)
            + N²L² E_H(1/r²).                                                         (7.3.26)

The mean square error of a, that is, the sum of the variance and the squared bias of a, is

    D(a) = [B(a)]² + V(a) = E(a²) - 1 - 2B(a),                                        (7.3.28)

where the variance of a is simply

    V(a) = E(a²) - [E(a)]².                                                           (7.3.27)

It is noted in passing that, when R = N (all units are required to fail),

    H = P^N,  h(N) = P^N,  E_H(1/r) = P^N/N,  E_H(1/r²) = P^N/N².

Substituting these in the above results, we have

    B[a(N)] = (L/P)(P^N - QP^N) - NL(P^N/N) = LP^N - LP^N = 0;  E[a(N)] = 1;

    D[a(N)] = V[a(N)] = 1/N,

all of the L² terms cancelling, where a = a(N) in this case. Hence this checks that the variance of α̂(N) is the minimum variance, α²/N.

If R = 0, we have in all cases the estimator α̂_r, which is undefined for r = 0. If T = 0, we have the first case described by Epstein [1954]; that is, the estimator is always α̂_{R_0}; the estimator is unbiased and its variance is α²/R. If T is allowed to increase without limit, we have the same situation as for R = N.

Mendenhall and Lehman [1960] tacitly assumed R = 1 with a fixed T. Hence in their case H = 1 - Q^N and h(R) = NPQ^(N-1); E_H(r^(-k)) is given by tables in their paper. Hence

    B[a(1)] = (L/P)(1 - Q^N - NPQ^N) - NL E_H(1/r),

and D[a(1)] follows from (7.3.26) and (7.3.28) with R = 1.

7.4 Nonmonotonic Behavior of V, A. and D as P and N Increase In section 9.1, the computational methods by which values of V, A, and D were obtained from electronic calculators will be presented. In this section the results of this computation are discussed. Empirically, we observe several strange events for which explanations have been attempted. Conjectures are stated which are drawn solely from an observation of the results, but for which a rigorous proof has not yet been obtained. The bias of ~ as given in (7.3.15) is a function of L, P, and other

II constantsll which are themselves dependent upon a. Hence a function of "a cannot be used as an unbiased estimator., The bias is large for small values of P and Rand appears to decrease, becoming 0 when either P = 1 or R = r. 2S

The variance is observed to increase with increasing N if P and

R are small until it reaches a peak; with further increases in N it decreases again, approaching the asymptotic value. The location and size of the peak varies with P and R.

Another perplexing feature of V is its nonmonotonic performance . 1 with increasing P for constant R and N. If P is 0, V J..s R:0 R As P increases, V drops to a minimum at or near P • N; V then increases despite the increasing test duration. At a point depending upon ~, generally in the neighborhood of P = 1 - :N' V attains a maxi­ mum. Thereafter it decreases, monotonically it is conjectured, with in- creasing P to a value of N1 at P = 10 These oddities are depicted graphically in Figure 2, and numerically in Table ~. The mean square error D and the asymptotic variance A also be- have, peculiarly as functions of P, as shown in Figures 3 and 4 and

Table 2. Note that, like V, both have the value 1/R if P = 0 and fall off to a minimum at or near P = R/N; after this, they increase to a maximum which is more or less in the vicinity of P = 1 - R/2N, and finally decrease again (monotonically, it appears) to the value of 1/N at P = 1. The results are illustrated by the following examples. If P = .05 and R = 1, then for N = 5, 10, 20, and 40 (respectively) we have

V(a) = .963, .886, .776 and .859.  (7.4.1)

Observe the down-then-up movement. Even if P = .3 and R = 3, then for

N = 5, 10, 20 and 40 we have

V(a) = .322, .269, .280 and .138.  (7.4.2)

Note that the movement is still wavy. But if P = .7, then for R = 1, the corresponding variances are

V(a) = .904, .260, .0905 and .0398,  (7.4.3)

which is monotonically decreasing, at least for these N and (it is conjectured) for all N. The following numerical example points up how, for even moderately large S (in this case, P = 1/2, so that S^M = -L = .69315), the mean square error is nonmonotonic in N for small R. If we set R = 1 and P = 1/2, then

L = -.69315,  L^2 = .48046,

h(r) = (N choose r) 2^-N,  h(1) = N 2^-N = h(R),  1 - H = h(0) = 2^-N.

From the definitions and discussions in Sections 7.2 and 7.3 it is evident that

E(a_r) = 1 + (N/r - 2)(.69315) = 1 + B(a_r),

V(a_r) = (1/r)(1 - QL^2/P^2) = .039094/r  (from 7.3.4),

whence the mean square errors (D) are

D(a_0) = 1 + N^2 L^2,  D(a_r) = [B(a_r)]^2 + V(a_r).

Now

E(a) = (1-H) E(a_0) + Σ_{r=1}^{N} h(r) E(a_r) = 1 + B(a),

V(a) = (1-H)[V(a_0) + (E(a_0) - E(a))^2] + Σ_{r=1}^{N} h(r)[V(a_r) + (E(a_r) - E(a))^2],

and

D(a) = (1-H) D(a_0) + Σ_{r=1}^{N} h(r) D(a_r).

Using these last two sets of formulas we perform the following calculations to show the contribution each partial estimator, a_0 or a_r, makes toward the total B, V, and D of a. The latter is shown in the last row for each N.

  N    r    B(a_r)/.69315   V(a_r)      D(a_r)      h(r)
  1    0         1          1           1.48046     .5
       1        -1           .039094     .51955     .5
      all        0          1           1          1

  2    0         2          1           2.9218      .25
       1         0           .039094     .039094    .5
       2        -1           .019547     .50000     .25
      all        .25         .84497      .87500    1

  3    0         3          1           5.3241      .125
       1         1           .039094     .51955     .375
       2        -1/2         .019547     .13967     .375
       3        -1           .013031     .49349     .125
      all        .4375       .88245      .97441    1

  4    0         4          1           8.6874      .0625
       1         2           .039094    1.9609      .25
       2         0           .019547     .019547    .375
       3        -2/3         .013031     .22657     .25
       4        -1           .0097735    .49023     .0625
      all        .52083      .99747     1.12780    1

  5    0         5          1          13.012       .03125
       1         3           .039094    4.3632      .15625
       2         1/2         .019547     .13966     .3125
       3        -1/3         .013031     .066415    .3125
       4        -3/4         .0097735    .28003     .15625
       5        -1           .0078188    .48828     .03125
      all        .52865     1.07752     1.21179    1

  6    0         6          1          18.297       .015625
       1         4           .039094    7.7264      .09375
       2         1           .019547     .50000     .23438
       3         0           .013031     .013031    .3125
       4        -1/2         .0097735    .12989     .23438
       5        -4/5         .0078188    .31538     .09375
       6        -1           .0065158    .48698     .015625
      all        .49531     1.07638     1.19425    1

  7    0         7          1          24.542       .0078125
       1         5           .039094   12.050       .054688
       2         3/2         .019547    1.1006      .16406
       3         1/3         .013031     .066415    .27344
       4        -1/4         .0097735    .039802    .27344
       5        -3/5         .0078188    .18079     .16406
       6        -5/6         .0065158    .34017     .054688
       7        -1           .0055850    .48605     .0078125
      all        .44520     1.01612     1.11135    1

 10    0        10          1          49.046       .00097656
       1         8           .039094   30.788       .0097656
       2         3           .019547    4.3437      .043945
       3         4/3         .013031     .88581     .11719
       4         1/2         .0097735    .12989     .20508
       5         0           .0078188    .0078188   .24609
       6        -1/3         .0065158    .059000    .20508
       7        -4/7         .0055850    .16246     .11719
       8        -3/4         .0048868    .27515     .043945
       9        -8/9         .0043437    .38396     .0097656
      10        -1           .0039094    .48437     .00097656
      all        .30058      .67580      .71927    1

These results show that both B and V (and hence D) increase as N increases from 2 to 5, then decrease, undoubtedly monotonically. This can be explained as follows.

B(a_0) and B(a_r) are linear with positive slope in N; hence B(a_0) increases with N. In the cases of a_0 and of a_r where r < N/2, B(a_0) and B(a_r) > 0; hence |B(a_0)| and |B(a_r)| increase with N. V(a_0) and V(a_r) are independent of N. Hence D(a_0) and D(a_r) increase monotonically with N if r < N/2. Now the probability of using these early a's decreases with N, but not entirely monotonically for small positive r. So at first the decrease, if any, is insufficient to eliminate the influence of these early a's with their large B, V, and D. But as N continues to increase, the probability of using these early a's diminishes to insignificance, and thus a point is reached where the natural effect of increased sample size predominates. For R = 1, P = 1/2, this point is at N = 5.
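These aggregation formulas are easy to check by machine. The sketch below (our reconstruction, not the original IBM 650 program) takes the per-case constants read off the table above for P = 1/2, R = 1 - B(a_0) = N(.69315), V(a_0) = 1, B(a_r) = (N/r - 2)(.69315), V(a_r) = .039094/r, h(r) = (N choose r)2^-N - and reproduces the hump in D(a) at N = 5.

```python
from math import comb, log

LN2 = log(2.0)  # .69315... = -L when P = 1/2


def mixture_D(N):
    """Mean square error D(a) of the mixed estimator for P = 1/2, R = 1."""
    # Case r = 0 (no failures by time T; wait for the first): B = N*ln2, V = 1.
    D = 2.0 ** -N * ((N * LN2) ** 2 + 1.0)
    # Cases r = 1..N (test ends at T with r failures observed).
    for r in range(1, N + 1):
        h = comb(N, r) * 2.0 ** -N
        B = (N / r - 2.0) * LN2          # B(a_r)
        V = 0.039094 / r                 # (1/r)(1 - Q L^2/P^2) at P = 1/2
        D += h * (B * B + V)
    return D


D_by_N = {N: mixture_D(N) for N in range(2, 11)}
peak = max(D_by_N, key=D_by_N.get)       # the N at which D(a) peaks
```

Running this reproduces the "all" rows of the table (for example, D = .87500 at N = 2 and D = 1.21179 at N = 5) and confirms that the peak falls at N = 5.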

7.5 Asymptotic Properties of the Estimator

Asymptotic properties of MLE have usually been studied mainly in the case of independent observations. Wald [1948] showed that, under certain restrictions on the joint probability distributions of the observations, the ML equation has at least one root which is a consistent estimator of the parameter to be estimated. Furthermore, any root of the ML equation

which is a consistent estimator of the parameter is shown to be asymptotically efficient. Wald shows that, if four conditions hold, the maximum likelihood estimator is both consistent and asymptotically efficient "in the wide sense." By "in the wide sense" is meant that the estimator need not be asymptotically normal. Wald's article is concerned with unbiased estimators, for which the Cramér-Rao lower bound on the variance, the asymptotic variance, is the reciprocal of

E(-∂^2 ℓ/∂α^2),

where ℓ is the logarithm of the likelihood. Since our estimator is biased, the Cramér-Rao lower bound is on the mean square error. This lower bound is

Hence we will show that Wald's conditions hold for â.

Condition 1: ∂^i ℓ/∂α^i (i = 1, 2, 3) exist, and further E(l.u.b._{α in K} |∂^3 ℓ/∂α^3|) is finite, where K is some finite proper interval on the positive real axis.

We note by successive differentiation that the likelihood functions for both d > T and d = T have finite derivatives well beyond the third. For K choose (1, 2), say, for which E(∂^3 ℓ/∂α^3 | α) is always finite and continuous; hence the expected l.u.b. also exists finitely. Therefore, Condition 1 is fulfilled.

Condition 2: As N increases without limit (for fixed R and P), A goes to zero.

(i) ∂E(â)/∂α = 1 + B + αB', where B' = ∂B/∂α. We wish to ascertain first the limit of the bias in Condition 2(i), which from (7.3.15) is

lim_{N→∞} |B| = lim_{N→∞} | (L/P)[H - Q h(R)] - NL E_H(1/r) |.  (7.5.2)

Three factors in (7.5.2) require study, namely H, h(R), and E_H(1/r). Note that we must find the limit of (N choose r) Q^N.

lim_{N→∞} (N choose r) Q^N = lim_{N→∞} [N(N-1)...(N-r+1)/r!] Q^N ≤ lim_{N→∞} (N^r/r!) Q^N = (1/r!) lim_{N→∞} N^r/Q^{-N} = 0,

by r applications of l'Hopital's rule, the limit being 0 since 0 < Q < 1. Hence h(r) may be referred to as o(1) when N is very large, where the expression o(1) indicates any function tending to zero with increase in N. Then 1 - H = Σ_{r=0}^{R-1} h(r) becomes, with increasing N, a sum of R terms each o(1), or

H = 1 - o(1).

Finally,

N E_H(1/r) = N Σ_{r=R}^{N} h(r)/r ≤ N Σ_{r=1}^{N} h(r)/r = N E(1/r),

where E(1/r) is the expectation of 1/r for the nonzero binomial in the Grab and Savage [1954] sense. In that paper they show that

1/(NP) + o(1/N) ≤ E(1/r) ≤ 1/(NP) + 2/(N^2 P^2) + O(N^-3),

where the expression O(N^-3) means a function whose value is at most of order N^-3. Then

Now applying (7.5.8) and (7.5.5) to (7.3.15) we have, for N sufficiently large,

|B| ≤ | (L/P)[(1 - o(1)) - Q o(1)] - L/P - 2L/(NP^2) - L O(1/N^2) |
    = | -(L/P)(1 + Q) o(1) - 2L/(NP^2) - L O(1/N^2) |;

all these terms go to zero with increasing N. Hence we may say B → 0 and â is consistent. Now turning our attention to αB' of Condition 2(i),

αB' = α[-LP'/P^2 + L'/P](H - Qh) + (αL/P)(H' - Qh' - hQ')
    = -[L^2/P^2 + L/P](H - Qh) + (L/P)[(L/P) E_H(r) - LNH + QLNh + QLh],

where h = h(R). Using the same methods as above, we see that

E_H(r) = E(r) + o(1) = NP + o(1). Then

1 + B + αB' = 1 + (L^2/P^2)[-H(Q + 2NP) + E_H(r) + Qh(R)(1 - R + NP)] + 2NL^2 E_H(1/r)
            ≤ 1 + (L^2/P^2)[-(1 - o(1))(Q + 2NP) + NP + o(1) + Q o(1)(1 - R + NP)].

For N large enough to ignore o(1) we have

|1 + B + αB'| ≤ 1 + (L^2/P^2)(-Q - NP) + (L^2/P^2)(NP + 3 + O(1/N))
             = 1 - (L^2/P^2)[Q - 3 + O(1/N)].  (7.5.12)

Thus the limiting value of |1 + B + αB'| is at most 1 - (L^2/P^2)(Q - 3).

(ii) In evaluating the limit of the denominator of A, (7.2.2) and (7.2.7) are differentiated twice, and then the expectation is determined. The two second derivatives are, respectively,

The first part of (7.5.14) occurs with probability 1 - H, and each of the second parts with probability h(r), where Σ_{r=R}^{N} h(r) = H.

The expectations for the a's can be found from (7.3.14):

where E_0 is the incomplete expectation for the case a_0. Hence

-E[α^2 ∂^2 ℓ/∂α^2] = R(1 - H) - (2RQL/P) h(R) + (1 + 2L/P) E_H(r) - 2NLH,

which for large N

= (1 + 2L/P) NP - 2NL + o(1) = NP + o(1).

(iii) Since (7.5.1) can now be rewritten (if N is large) as

A = α^2 [1 - (L^2/P^2)(Q - 3) + O(1/N)] / [NP + o(1)],

lim_{N→∞} A = 0. Therefore, Condition 2 is fulfilled.

Condition 3: For any α in K, the standard deviation of ∂^2 ℓ/∂α^2 divided by its expectation converges to zero with increasing N.

(i) In order to compute the standard deviation, we first require the limiting value of the sum of

less the square of the limiting expectation, N^2 P^2.

(ii) E_0[(R - 2Ra_0)^2] = R^2[(1 - H) - 4E_0(a_0) + 4E_0(a_0^2)]. We have shown above that the limiting value of E_0(a_0) is zero. Using (7.3.26) and our previous limit operations, the same holds for E_0(a_0^2). Hence,

lim_{N→∞} E_0[(R - 2Ra_0)^2] = 0.  (7.5.18)

(iii) E_H[(r - 2ra_r)^2] = E_H(r^2) - 4E_H(r^2 a_r) + 4E_H(r^2 a_r^2). From the second part of (7.3.14) and of (7.3.23),

Hence

E_H[(r - 2ra_r)^2] = [1 + 2L/P]^2 E_H(r^2) + 4[1 - QL^2/P^2 - NL - 2NL^2/P] E_H(r) + 4N^2 L^2 H,

which for large N

= (NPQ + N^2 P^2 - o(1))[1 + 2L/P]^2 - 4N^2 LP(1 + 2L/P) + 4N^2 L^2 [1 - o(1)] + 4NP[1 - QL^2/P^2]

= N(PQ + 4LQ + 4P) + N^2 P^2 + o(1).

Hence the variance is

N(PQ + 4LQ + 4P) + o(1).

Therefore the ratio in Condition 3 has the limiting value

lim_{N→∞} sqrt[(PQ + 4LQ + 4P)/(NP^2)] = 0,

as required, and Condition 3 therefore holds.

Condition 4: For some small positive δ, and for all α' closer to α than distance δ, the expression

is a bounded function of N. The third derivatives of (7.2.2) and (7.2.7) are, respectively,

and 2r(-1 + 3a_r)/α'^3.

The expectations of these are, respectively,

[4R(1 - H) - 6RQL h(R)/P]/α^3  (7.5.25)

and

[(4 + 6L/P) E_H(r) - 6NLH]/α^3.  (7.5.26)

Then by the continuity of |∂^3 ℓ/∂α^3|, for every small positive ε there exists a positive δ such that if |α' - α| < δ, then the l.u.b. of the numerator of (7.5.23) has for its expectation a value that differs by less than ε from the sum of (7.5.25) and (7.5.26). Therefore the numerator of (7.5.23) is, by proper choice of δ, closer than any distance ε to the sum of the two expressions (7.5.25) and (7.5.26), which is finite for each N but becomes infinite with increasing N. Recalling the limits of H, h(R), and E_H(r), the numerator (if N is large) is 4NP/α^3 + o(1). We have already shown in (7.5.16) that the denominator under this condition is (7.5.17). Hence the fraction, finite for all N, approaches a finite value, and Condition 4 is thus fulfilled. Our estimator therefore is consistent and asymptotically efficient in the wide sense. In recapitulation, the lower bound on the mean square error for â is

A = α^2 [1 + B(α) + αB'(α)]^2 / E[-α^2 ∂^2 ℓ/∂α^2],

where 1 + B(α) + αB'(α) is given by the first equality of (7.5.11) and the denominator by the first equality of (7.5.16). Using the approximation of Grab and Savage [1954], namely E(1/r) ≈ 1/(NP), we have

A(â) = α^2/(NP) + O(1/N^2).

For the special cases, we have:
(i) R = N. A(â) = α^2/N.
(ii) R = 0. The denominator of A(â) is NP, but the numerator is infinite, since it involves E(1/r) for r = 0 as well as for r = 1, ..., N. The asymptotic result (7.5.11) does not hold, since E(1/r) thus defined is infinite.
(iii) T = 0 (R > 0). Using the first equalities of (7.5.11) and (7.5.16), with L = P = H = E_H(r) = E_H(1/r) = 0 and Q = 1, A(â) = α^2/R; in this case, L^2/P^2 = 1. Note that the asymptotic result (7.5.11) does not hold, since H is zero and not unity when T = 0.
(iv) T = ∞. Q = 0, P = 1, h(N) = 1, h(r < N) = 0, H = 1, E_H(r) = N, E_H(1/r) = 1/N, L = -∞ but QL = QL^2 = 0. A(â) = α^2/N.

8.0 COST AND PRICE OF OBTAINING THE MAXIMUM LIKELIHOOD ESTIMATE OF α

In Section 7.0 the mean square error, D(a), of the estimator was derived. We now define the information obtained, I(a), as the reciprocal of D(a). The experimenter desires to increase I as much as possible. However, the cost of obtaining the estimator (â) must be considered. If N units are used in this testing program, one of the costs of obtaining information is the cost of the N units. During the test, r items will be destroyed and all others will be at least partially damaged. In either case they are unmarketable; hence the number of failures (r) appears to be unimportant in determining the cost of estimation. However, the duration of the test certainly is a factor in the cost, for there will be an expense, involved in labor or testing equipment, more or less linear with time. Since the test duration (d) is unknown when the test is planned, it will be replaced in our considerations by its average value, E(d). In light of the above, the cost of obtaining the estimate is defined as

C = N + J E(d)/α^{1/M},  (8.0.1)

where J is a factor selected by the experimenter to represent the ratio of cost per time unit of testing to cost per item subject to test. Having obtained C, the next step is to determine the price, U(a), defined as the cost per unit information. Thus

U = C/I = CD.  (8.0.2)

If information is available on the value of the shape parameter (M) and the ratio (J), the experimenter might wish to select a combination of N, R, and P (or S) to minimize U, either unconditionally or perhaps subject to some limitation, such as C or N not to exceed a given value, or I to be at least some minimal value.

In this study, the following values of M and J were chosen:

M = .5, 1, 1.5, 2, 2.5;  J = 1, 3.162, 10, 31.62, 100, 316.2, 1000.

In (8.0.1),

E(d) = HT + ∫_T^∞ t_R g*(t_R) dt_R,

so that

E(d)/α^{1/M} = HS + R (N choose R) ∫_0^Q y(q) dq,  (8.0.5)

where

y(q) = p^{R-1} q^{N-R} (-ln q)^{1/M}  (here p = 1 - q).  (8.0.6)

8.1 Determination of E(d)

Now if 1/M is an integer - which holds only for the first two values of M studied in this dissertation - we can integrate directly as follows, using the tables in Dwight [1957] based on:

where we have expanded (1-q)^{R-1} by the binomial theorem. Then

∫_0^Q y(q) dq = Σ_{r=0}^{R-1} (R-1 choose r) (-1)^{R-r-1} ∫_0^Q q^{N-1-r} (-ln q)^{1/M} dq.  (8.1.2)

If M = 1/2,

∫_0^Q y(q) dq = Σ_{r=0}^{R-1} (R-1 choose r) (-1)^{R-r-1} [q^{N-r} l^2/(N-r) - 2 q^{N-r} l/(N-r)^2 + 2 q^{N-r}/(N-r)^3]_0^Q,  (8.1.3)

and if M = 1,

∫_0^Q y(q) dq = Σ_{r=0}^{R-1} (R-1 choose r) (-1)^{R-r-1} [-q^{N-r} l/(N-r) + q^{N-r}/(N-r)^2]_0^Q,  (8.1.4)

where l = ln q.

Now (8.1.3) and (8.1.4) can be simplified as follows. Let

Note that

Thus, by (7.3.12),

R (N choose R) ∫_0^Q p^{R-1} q^{N-R} dq = 1 - H(R).  (8.1.7)

Similarly, using (8.1.6) and (8.1.7), we have

= 1 - H_2(R).  (8.1.8)

Employing (8.1.7) again, but replacing R - 1 by r, we have

1 - H_2(R) = Σ_{r=0}^{R-1} [1 - H(r+1)]/(N - r).  (8.1.9)

Again using (8.1.9), (8.1.6), and (8.1.7), we obtain (8.1.10).

Applying (8.1.7), (8.1.9), and (8.1.10) to (8.1.3) and (8.1.4), we have, if M = 1/2:

R (N choose R) ∫_0^Q y(q) dq = (1-H)L^2 - 2L Σ_{r=0}^{R-1} [1-H(r+1)]/(N-r) + 2 Σ_{r=0}^{R-1} Σ_{j=0}^{r} [1-H_2(r+1)]/[(N-r)(N-j)];  (8.1.11)

and if M = 1:

R (N choose R) ∫_0^Q y(q) dq = -(1-H)L + Σ_{r=0}^{R-1} [1-H(r+1)]/(N-r).  (8.1.12)

Finally, inserting (8.1.11) and (8.1.12) into (8.0.5), we see that, if M = 1/2:

and if M = 1:

E(d)/α = -L + Σ_{r=0}^{R-1} [1-H(r+1)]/(N-r).  (8.1.14)

These values are not difficult to program on an electronic calculator. But if 1/M is not an integer, the integration cannot be expressed in closed form. It is necessary, therefore, to use some form of quadrature such as Simpson's Rule:

∫_0^Q y(q) dq ≈ (h/3)[y(q_0) + 4y(q_1) + 2y(q_2) + ... + 4y(q_{n-1}) + y(q_n)],  (8.1.15)

where h = q_1 - q_0 = q_2 - q_1 = ... = Q - q_{n-1}; n must be even and selected large enough to give the desired accuracy of three significant digits. This computation was performed on the Datatron at Purdue and is discussed in Section 9.0.
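When 1/M is not an integer, the quadrature can be sketched as follows. The integrand y(q) = p^{R-1} q^{N-R} (-ln q)^{1/M} (with p = 1 - q) is taken from (8.0.6), and the stopping rule - double the number of Simpson intervals until two successive results agree to within a relative 10^-4 - is the one described for the Datatron in Section 9.1; the function names are our own.

```python
from math import log


def y(q, N, R, M):
    """Integrand of (8.0.5)-(8.0.6), without the constant R*C(N,R) factor."""
    if q <= 0.0:
        return 0.0
    return (1.0 - q) ** (R - 1) * q ** (N - R) * (-log(q)) ** (1.0 / M)


def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) intervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0


def integrate(f, a, b, tol=1e-4):
    """Double n until two successive Simpson results agree to relative tol."""
    n, prev = 2, simpson(f, a, b, 2)
    while True:
        n *= 2
        cur = simpson(f, a, b, n)
        if abs(cur - prev) <= tol * abs(prev):
            return cur
        prev = cur
```

For the small demonstration case of Section 9.2 (N = 2, R = 1, M = 2, Q = .9), this gives ∫_0^Q y(q) dq ≈ .293, so that E(d) = HT + 2(.293) ≈ .648, close to the rough four-interval value .6447 obtained there.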

Having obtained E(d), it was easy to compute C and U. The machine program was found satisfactory for all five values of M, including the two cases M = 1/2 and M = 1, where direct integration could have been used. However, programming time was saved by using the same program throughout.

8.2 Discussion of Results

Table 3 presents values of C and U for the combinations M = 0.5, 1.5, 2.5 and J = 1, 10√10, and 1000. The columns are headed P = .05, .1(.1).5, .7, and under each value of P is given the corresponding value of S = T/α^{1/M}. There are four subtables in each part for the four values N = 5, 10, 20, 40. The rows of the four subtables represent values of R, namely:

In subtable N = 5,  R = 1, 2, 3, 4;
In subtable N = 10, R = 1, 2, 3, 4, 7, 9;
In subtable N = 20, R = 1, 2, 3, 4, 10, 14, 19;
In subtable N = 40, R = 1, 2, 3, 4, 10, 20, 30, 39.

In each case R = 0 is omitted because if r = 0 there is no MLE. The case R = N is also omitted, because the Simpson Rule program on the Datatron implies the existence of y(q) on the closed interval [0, Q]; if R = N, y(0) is not defined. Hence, even though ∫_0^Q y(q) dq exists when R = N, an entirely new program would be required for that last case.

Table 5 records I for those values of N, R, and P for which C and U have been computed. Thus cost, information, and price may be observed for all the designs studied. It is of interest to note that when Q < e^{-1} (that is, P > .632), an increase in M means a decrease in the expected duration of the test and hence in cost; the reverse holds for Q > e^{-1}. On the other hand, if Q = e^{-1}, E(d) is independent of M. Since V and B are independent of M, the same holds for I and D; thus U depends on M only through C.

We note that if J is large, it is advantageous to put a large number of items on test and terminate the test after a short time; likewise, if J is small, N should be smaller and the tests continued until more items fail.

For instance, if M = .5 (that is, the hazard decreases with time) and J = 1000 (meaning items are cheap but a long test is expensive), we note that the price (U) for the combination N = 40, P = .1, and R = 4 is the smallest of those computed; U = 11.5, with a total cost of 56.9.

Likewise, if again M = .5 but J = 1 (implying items are costly, but once they are sacrificed we may continue the test a long time without great expense), the combination (N = 40, R = 39) gives the best price (U = 1.32) for all P, with a cost C = 51.4. For the case of intermediate J, the combination (N = 40, P = .5, R = 20) gives the smallest price; namely, U = 2.59, C = 58.0.

Turning to the last three parts, for M = 2.5, indicating an increasing hazard, the prices are much higher. In the table for J = 1000, the combination (N = 40, R = 30, and any value of P) gives the smallest price, U = 37.6. Naturally the cost is high (C = 1180). If J = 1, it is probably best to let all items in a sample of size 40 fail. The minimum price (for N = 40, R = 39, and any P) is U = 1.07; C = 41.6. Note that we have not computed the results for R = N. If the experiment is to be limited by a maximum N or C, it is still possible to find optimal combinations within our budget. For instance, if M = 1.5, J = 10^{3/2}, and we may spend no more than 40 times the cost of a single item, the combination (N = 10, P = .3, R = 4) would be used, giving U = 6.68, the lowest price for any C under 40; C in this case is 30.1.

9.0 COMPUTATIONS

9.1 Computer Programs

All arithmetic work beyond the range of a desk calculator was performed on electronic calculators. The IBM 650 at North Carolina State College was employed for programs (1) through (6), while program (7) was accomplished by the Burroughs Datatron 204 at Purdue. As representative values of the five parameters, we chose:

M = .5, 1, 1.5, 2, 2.5
N = 5, 10, 20, 40
P = .05, .1(.1).5, .7
R = 1, ..., N-1
J = 1, 3.162, 10, 31.62, 100, 316.2, 1000.

Following is a description of the programs:

(1) The individual terms r^k h(r), k = -2(1)2, of the binomial expectation were computed for all 525 combinations of N, P, R.

(2) Using the output of program (1), E(r), V(r), √V(r), E_H(1/r), and E_H(1/r^2) were computed, where r is distributed as h(r) = (N choose r) P^r Q^{N-r}.

(3) Since the standardized time variable,

S = T/α^{1/M} = [-ln(1-P)]^{1/M} = (-L)^{1/M},

is uniquely determined from P for each M, a conversion table was computed. These results are presented in Table 4 and also as a graph in Figure 5, by means of which S may be determined from P for each M. (In Figure 5 the curve for M = .1 is shown and that for M = 2 is omitted.) It is of interest to note that if S = 1, then Q = e^{-1} independently of M, where Q = 1 - P.

(4) Using the results of (1) and (2) and equations (7.3.15), (7.3.28), and (7.3.29), the bias, variance, and standard deviation of a (= â/α) were computed.

(5) A conversion table was desirable giving us

μ/α^{1/M} = Γ(1 + 1/M)  (9.1.6)

and

V(t)/α^{2/M} = Γ(1 + 2/M) - Γ^2(1 + 1/M).  (9.1.7)

For this purpose the following computations were read into the IBM 650 from a regular gamma function table: if

M = 1/2, 1, 1 1/2, 2, 2 1/2, then

Γ(1 + 1/M) = 2, 1, .90275, .88623, .88726, and

Γ(1 + 2/M) - Γ^2(1 + 1/M) = 20, 1, .37568, .21460, .14415.
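These factors follow from the Weibull moments E(t) = α^{1/M} Γ(1 + 1/M) and E(t^2) = α^{2/M} Γ(1 + 2/M) (Section 5.0). A quick machine check of the tabled values - our sketch, not the original IBM 650 routine:

```python
from math import gamma

# (M, mean factor, variance factor) as read into the machine
FACTORS = [(0.5, 2.0, 20.0),
           (1.0, 1.0, 1.0),
           (1.5, 0.90275, 0.37568),
           (2.0, 0.88623, 0.21460),
           (2.5, 0.88726, 0.14415)]

for M, mean_f, var_f in FACTORS:
    g1 = gamma(1.0 + 1.0 / M)            # mu / alpha**(1/M)
    g2 = gamma(1.0 + 2.0 / M) - g1 * g1  # V(t) / alpha**(2/M)
    assert abs(g1 - mean_f) < 5e-5
    assert abs(g2 - var_f) < 5e-5
```

All five pairs agree with the gamma-function-table values to the five places quoted.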

The mean and variance of the life span of an item may thus be estimated by merely multiplying â^{1/M} or â^{2/M} by the appropriate factor in (9.1.6) or (9.1.7). The results for μ = E(t) are presented graphically in Figure 6. Both μ and σ = √V(t) are tabulated in Table 1.

(6) The mean square error, D(a), was computed by (7.3.28), and A(a), the lower bound on D(a), was computed by (7.5.27).

(7) The calculation of C proved to be the most difficult task, because of the integral involved in (8.0.5),

∫_0^Q y(q) dq,

where y(q) is given by (8.0.6).

The general appearance of y(q) is as follows: [sketch of y(q), a single-peaked curve over (0, 1), with shaded area ∫_0^Q y(q) dq].

For small values of R/N, the peak usually occurs to the right of Q, and there is no difficulty. For moderate values of R/N the peak moves to the left, narrows, and rises. As R/N approaches 1, the location of the peak approaches 0 and its height increases without limit. Thus an extremely large proportion of the area is concentrated in a narrow band between two small values of q.

Ordinary Simpson Rule quadrature as described in (8.1.15), therefore, does not give uniform accuracy throughout the domain of y. To maintain good results to three significant figures, one must vary the interval q_{i+1} - q_i inversely with the change in slope of the curve; that is,

The Datatron successfully accomplished this by integrating separately:

∫_{Q-.05}^{Q} y(q) dq,  ∫_{Q-.1}^{Q-.05} y(q) dq,  ...,  ∫_0^{.05} y(q) dq.

In each integration n, the number of Simpson intervals, was first chosen as 2 - that is, q_2 - q_1 = .025. Then n was doubled, the intervals halved, and the two results, I_2 and I_1, compared. If |I_2 - I_1|/I_1 was less than 10^-4, the second result was accepted. If this ratio equaled or exceeded 10^-4, n was doubled again and the last two results, I_3 and I_2, compared. This process was continued until two successive integrations yielded a ratio |I_j - I_{j-1}|/I_{j-1} < 10^-4, where j = log_2 n; the last result was then accepted. The greatest value attained by n was 2^7, in the case where N = 40, R = 39. In subsequent computations, where it is planned for N to be 80, 160, and 320, the peak for large R will become so sharp that it will be necessary to integrate separately over domains of length .005, because of the danger of the width of the peak being even less than .05 if R is close to N. Integrating over .05 intervals may thus miss the peak entirely. Having obtained the values of the separate integrals, ∫_0^Q y(q) dq was derived by addition; then E(d)/α^{1/M}, C, and U followed immediately, while I evolved as the reciprocal of D obtained from program (6).

9.2 Demonstration of the Program

To demonstrate the performance of the programs it is instructive to examine â and its properties for some small, easily computed, if impractical, samples. For instance, let

N = 2,  R = 1,  α = 1,  M = 2,

S = T = .32459475  (corresponding to P = 1 - e^{-T^2/α} = .1);

(T^2 = -L = .10536169;  T^4 = L^2 = .011101099;  Q = .9);

J = 10.
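The standardized duration in this list can be checked from the conversion of program (3), S = [-ln(1 - P)]^{1/M}; a small sketch (function name ours):

```python
from math import log


def S_from_P(P, M):
    """Standardized minimum test duration: S = T/alpha**(1/M) = (-ln(1-P))**(1/M)."""
    return (-log(1.0 - P)) ** (1.0 / M)


S = S_from_P(0.1, 2.0)   # the case above: P = .1, M = 2, so S = T = sqrt(-L)
L = log(1.0 - 0.1)       # L = ln Q
# S agrees with the text's .32459475 to five decimal places, and S*S = -L.
```

As noted in Section 9.1, setting P = 1 - e^{-1} gives S = 1 for every M.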

That is, we have two randomly selected units from a population whose life span follows the density function

f(t) = 2t e^{-t^2},  t ≥ 0,

and we burn them until at least time T = .32459475 has elapsed, and until at least R = 1 unit has failed. The latter event will occur before T with probability 1 - .9^2 = .19; in this case d = T. If no failures occur before T, we wait for one and then use

â = â_0 = 2t_1^2;

if but one failure occurs at or before T, we use

â = â_1 = t_1^2 + T^2;

and if both fail before T, we use

â = â_2 = (t_1^2 + t_2^2)/2.

The life span estimates are

Ê(t) = â^{1/2} Γ(1 1/2) = .88623 √â,

and

Ê(t^2) = â Γ(2) = â,  or  V̂(t) = â(1 - .88623^2) = .21460 â.

Actually, of course, E(t) = .88623 and V(t) = .21460, as indicated in (9.1.3) and (9.1.4). Now to compute the mean and variance of â we go back to the original definition of expectation and integrate:

In terms of q_i = e^{-t_i^2} (uniformly distributed on (0, 1)) and l_i = ln q_i, the three cases give

E(â^k) = 2 ∫_0^{.9} (-2 l_1)^k q_1 dq_1 + 2(.9) ∫_{.9}^{1} (-l_1 - L)^k dq_1 + ∫_{.9}^{1} ∫_{.9}^{1} [-(l_1 + l_2)/2]^k dq_1 dq_2.  (9.2.9)

Letting k = 2 we obtain:

E(â^2) = 8 ∫_0^{.9} l_1^2 q_1 dq_1 + 2(.9) ∫_{.9}^{1} (l_1^2 + 2 l_1 L + L^2) dq_1 + (1/4) ∫_{.9}^{1} ∫_{.9}^{1} (l_1 + l_2)^2 dq_1 dq_2,

which, upon integrating by means of the Dwight [1957] formulas

∫ l dq = q(l - 1),
∫ l q dq = (1/4) q^2 (2l - 1),
∫ l^2 dq = q(l^2 - 2l + 2),
∫ l^2 q dq = (1/4) q^2 (2l^2 - 2l + 1),

gives:

E(â^2) = 8(.24966660) + 1.8(.00255906) + .25(.00011264) = 2.0019740.  (9.2.12)

Now using in (7.3.27) the values:

h(0) = Q^2 = .81
h(1) = h(R) = 2PQ = .18
h(2) = P^2 = .01
H = h(1) + h(2) = .19
1 - H = .81
E_H(1/r) = h(1)/1 + h(2)/2 = .18 + .005 = .185
E_H(1/r^2) = h(1) + h(2)/4 = .18 + .0025 = .1825
E_H(r) = h(1) + 2h(2) = .18 + .02 = .2
L = -T^2 = -.10536169
L^2 = T^4 = .011101099
√(-L) = T = .32459475,

we obtain

E(â^2) = 2.0019759,

in agreement with the direct integration (9.2.12). Similarly, letting k of (9.2.9) be 1:

E(â) = -4 ∫_0^{.9} l_1 q_1 dq_1 - 1.8 ∫_{.9}^{1} (l_1 + L) dq_1 - .5 ∫_{.9}^{1} ∫_{.9}^{1} (l_1 + l_2) dq_1 dq_2 = 1.0094826.  (9.2.15)

And substituting in (7.3.15) we find

B(â) = 10(-.10536169)(.19 - .162) - 2(-.10536169)(.185) = .009482552,

consistent with (9.2.15). Thus

V(â) = 2.0019759 - (1.0094826)^2 = .9829199,

D(â) = .9829199 + (.0094826)^2 = .9830098,

and √V(â) = .9914232.

These last three values, as would be expected from such a small sample, are somewhat large. The mean, variance, and standard deviation of the number of failures can be computed from h(r):

E(r^2) = .81 + (.18 + .04) = 1.03,
E(r) = .81 + (.18 + .02) = 1.01,
V(r) = 1.03 - 1.0201 = .0099,
√V(r) = .0995.

The expected test duration can be computed from (8.0.5):

E(d) = .19(.32459475) + 2 ∫_0^{.9} q √(-ln q) dq.

Let us obtain a rough approximation of E(d) using only 4 intervals for Simpson's rule. If y(q) = q √(-ln q), then

 y(0)    =                      0
4y(.225) = 4(.225) √1.49166 = 1.0992
2y(.45)  = 2(.45)  √.79851  =  .8042
4y(.675) = 4(.675) √.39304  = 1.6926
 y(.9)   =   .9    √.10536  =  .2921
                     Total  = 3.8871

Hence

E(d) = .0617 + 2(.225/3)(3.8871) = .6447.
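The four-interval computation is easy to reproduce (a sketch; Simpson weights 1, 4, 2, 4, 1 with h = .225):

```python
from math import log, sqrt


def y(q):
    """Integrand q * sqrt(-ln q) of the E(d) integral, with y(0) = 0."""
    return q * sqrt(-log(q)) if q > 0.0 else 0.0


nodes = [0.0, 0.225, 0.45, 0.675, 0.9]
weights = [1, 4, 2, 4, 1]
total = sum(w * y(q) for w, q in zip(weights, nodes))  # about 3.888
Ed = 0.19 * 0.32459475 + 2.0 * (0.225 / 3.0) * total   # E(d), about .645
```

This gives E(d) ≈ .645, matching the hand computation above to three figures; a finer quadrature gives about .648, the four-interval rule being only a rough approximation here.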

The cost of the test is

C = 2 + .6447 J = 8.447.

The mean square error and the information are, respectively,

D = .9830098,  I = 1/.9830098 = 1.01728.

The Cramér-Rao lower bound of the mean square error may be obtained from

A = {1 + (L^2/P^2)[-H(Q + 2NP) + E_H(r) + Qh(R)(1 - R + NP)] + 2NL^2 E_H(1/r)} / {R(1 - H) - 2RQL h(R)/P + (1 + 2L/P) E_H(r) - 2NLH}

  = [1 + 1.1101099(-.247 + .2 + .0324) + .00821481] / [.81 + .34137188 - .22144676 + .08007488]

  = .99200721/1.0100000 = .98218536,

not much lower than D = .9830098 as in (9.2.25). The price

U = CD = 8.447 × .9830098 = 8.303.

10.0 SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS FOR FURTHER RESEARCH

This dissertation presents the results of an investigation to determine the optimum sample size (N), minimum required failures (R), and minimum test duration (T) in order to estimate most effectively the scale parameter, α, in the Weibull distribution:

In this paper we use S, a standardized time variate, not dependent upon α, which can be set at will, but from which T may only be guessed until α is known. The location parameter is considered to be 0; the shape parameter (M) is assumed known to the experimenter. In determining the optimum combination of these test constants, the maximum likelihood estimate of α has been used.

The properties of the Weibull distribution and the shape of the accompanying density function for various values of M were presented verbally in Section 5.0, graphically in Figure 1, and numerically in Table 6. The mean and standard deviation of the life span, t, of an item were shown in Section 5.0 to be proportional to α^{1/M}. Figure 6 displays μ as a function of α, and Table 1 presents

both μ and σ as functions of α. The "hazard" function was defined in Section 5.0 as the instantaneous tendency to failure; it is proportional to the probability of failure in the next instant of time, given that the item has survived till time t. This probability increases, remains constant, or decreases with time as M exceeds, equals, or is less than 1. A new method of censoring, by both R and T, was given in Section 7.1; that is: place N items on test and stop the test only after both R items have failed and T time units have elapsed. The maximum likelihood estimator (â) of α was derived in Section 7.2:

â = â_0 = (1/R)[Σ_{i=1}^{R} t_i^M + (N-R) t_R^M],  if d > T;

â = â_r = (1/r)[Σ_{i=1}^{r} t_i^M + (N-r) T^M],  if d = T,

where:

t_i is the life span of the i-th successive item to fail,
r is the actual number of failures, at least as great as R, and
d is the actual test duration, at least as great as T.
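A minimal sketch of this estimator (function and variable names ours; the stopping rule and the two cases are those of Sections 7.1-7.2):

```python
def alpha_hat(lifetimes, R, T, M):
    """MLE of the Weibull scale alpha from a test of N = len(lifetimes) items
    that stops only after BOTH R failures have occurred AND time T has elapsed."""
    N = len(lifetimes)
    t = sorted(lifetimes)                 # ordered failure times t_1 <= ... <= t_N
    if t[R - 1] > T:                      # d = t_R > T: stop at the R-th failure
        return (sum(x ** M for x in t[:R]) + (N - R) * t[R - 1] ** M) / R
    r = sum(1 for x in t if x <= T)       # d = T: r >= R failures observed by T
    return (sum(x ** M for x in t[:r]) + (N - r) * T ** M) / r
```

For example, with M = 1, five items with life spans 1, 2, 3, 6, 8, and R = 2: if T = 3.5, three failures occur by T and â = (1 + 2 + 3 + 2(3.5))/3 = 13/3; if instead T = .5, the test runs to the second failure and â = (1 + 2 + 3(2))/2 = 4.5.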

The bias (B) and mean square error (D) of the standardized estimator (a = â/α) were shown in Section 7.3 to be

B(a) = (L/P)[H - Q h(R)] - NL E_H(1/r)

and

D(a) = E(a^2) - 2B(a) - 1,

where the expectation [E(a)] is the bias plus unity, and the other symbols are defined in Section 6.0. The variance V is then

V(a) = D(a) - [B(a)]^2.

It was proved in Section 7.5 that the estimator satisfies the Wald [1948] conditions for consistency and asymptotic efficiency; hence, the lower bound [A(a)] on D(a) is

A(a) = {1 + (L^2/P^2)[-H(Q + 2NP) + E_H(r) + Qh(R)(1 - R + NP)] + 2NL^2 E_H(1/r)} / {R(1 - H) - 2RQL h(R)/P + (1 + 2L/P) E_H(r) - 2NLH},

the denominator being -E(α^2 ∂^2 ℓ/∂α^2), where ℓ is the log of the likelihood of the sample. The behavior of B, V, D, and A as functions of R, P, and N is discussed in Section 7.4. They all decrease monotonically with increasing R, as might be expected. But as functions of P or N they actually demonstrate, if R is small, a period of increase with increasing P or N. Even if we consider R/N as an argument, still V, D, and A display

P '" 1 - R/2N, and then falloff to the value 1/N at P-l. A suggestion as to why these oddities occur is given by some calculations in Section 7.4. It is shown there, that at least for the case P '" ~ and R=l, the bias, variance, and mean square error all increase with increasing N to a maximum at N=5, and thereafter decrease. It is evident that for the early values of a for each N, that is, bias and variance are both large, as is to be expected. For small N, the probability of employing these early a's as estimators is high, and does not in all cases decrease with increasing N at first. Thus the early a's with their large biases and variances occur often with small

N and therefore contribute heavily to the total B, D, and V. When N finally grows to the point that the early a's are used rarely, then the later a's with decreasing B, D, and V, predominate. Figures 2, 3, and 4 present these oddities graphically in the cases of V, D, and A, while Table 1 shows the numerical values of B, V, and D for various N, R, and T.

A brief table, Table 4, is given showing the relationship between S and the probability (P) of an item failing before S, for various M:

S = (-L)^{1/M} = [-ln(1-P)]^{1/M}.

Figure 5 graphs S as a function of P for several M. The information [I(a)] derived from the estimator is defined in Section 8.0 as the reciprocal of D, and is tabulated in Table 5. The cost [C(a)] of the estimator was defined in Section 8.0 as 63

C = N + J E(d)/α^(1/M)                                    (8.0.1)

where J is the ratio of the "cost per unit time of continuing the test" to the "cost per item placed on test". The formula for obtaining E(d) was derived in Section 8.1:

E(d) = HT + ∫ from T to ∞ of t_R g*(t_R) dt_R

where g*(t_R) is the density function of the R-th failure time. The price U(α̂) of the estimator is the cost per unit information,

U = C/I = CD.                                             (8.0.2)
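The cost formula (8.0.1) rests on E(d), the expected test duration, with d = max(t_R, T) under this censoring rule. Below is a numerical sketch of our own (not the thesis program): it takes H to be P(t_R ≤ T) and g* to be the usual density of the R-th order statistic, and cross-checks the result by direct simulation.

```python
import math, random

def e_duration(N, R, T, M=1.0, alpha=1.0, steps=20_000, upper=12.0):
    """E(d) = H*T + integral from T to infinity of t*g(t) dt, where g is
    the density of the R-th failure time t_R in a sample of N drawn from
    F(t) = 1 - exp(-t**M / alpha), and H = P(t_R <= T)."""
    F = lambda t: 1.0 - math.exp(-t**M / alpha)
    f = lambda t: (M * t**(M - 1) / alpha) * math.exp(-t**M / alpha)
    k = math.factorial(N) // (math.factorial(R - 1) * math.factorial(N - R))
    g = lambda t: k * F(t)**(R - 1) * (1.0 - F(t))**(N - R) * f(t)
    p = F(T)
    H = sum(math.comb(N, j) * p**j * (1 - p)**(N - j) for j in range(R, N + 1))
    h = (upper - T) / steps                      # trapezoidal tail integral
    ts = [T + i * h for i in range(steps + 1)]
    vals = [t * g(t) for t in ts]
    tail = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return H * T + tail

def mc_duration(N, R, T, M=1.0, alpha=1.0, reps=50_000, seed=1):
    """Monte Carlo check: simulate d = max(t_R, T) directly."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        lives = sorted((-alpha * math.log(1.0 - rng.random()))**(1.0 / M)
                       for _ in range(N))
        total += max(lives[R - 1], T)            # test ends at the later of t_R, T
    return total / reps

print(round(e_duration(10, 3, 0.2), 3), round(mc_duration(10, 3, 0.2), 3))
# the two estimates of E(d) should agree to a couple of decimals
```

With E(d) in hand, (8.0.1) gives the cost as C = N + J·E(d)/α^(1/M).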

Values of C and U for various combinations of M, N, P, R, and J are presented in Table 3. For instance, if M = .5 and J = 1000, we would select N = 40, P = .1 (or S = .0111), and R = 4, giving a price of 11.5 and a cost of 56.9. If M = 1.5, J = 10^(3/2), and C is limited to 40, we choose N = 10, P = .3 (or S = .502), and R = 4, giving a price of 6.68 and a cost of 30.1.
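Selections like those above are a small optimization over the table. A sketch of our own (the tuples are a hand-copied sample of Table 3 entries for M = .5, J = 1000; the exact digits should be checked against the table):

```python
# (N, R, P, cost C, price U) -- sample rows for M = .5, J = 1000
designs = [
    (40, 4, 0.1, 56.9, 11.5),
    (40, 4, 0.05, 53.7, 12.7),
    (40, 10, 0.1, 129.0, 12.8),
    (20, 4, 0.05, 78.9, 19.7),
]

def best_price(rows, max_cost=None):
    """Return the design of lowest price U, optionally subject to C <= max_cost."""
    feasible = [r for r in rows if max_cost is None or r[3] <= max_cost]
    return min(feasible, key=lambda r: r[4]) if feasible else None

print(best_price(designs)[:3])               # (40, 4, 0.1), as in the text
print(best_price(designs, max_cost=55)[:3])  # cheapest feasible alternative
```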

By means of the IBM 650 at N. C. State College, the numerical values of B, V, A, D, and I used in the above cited tables and graphs were computed. With the aid of the Burroughs Datatron 204 at Purdue, the values of E(d), C, and U were obtained.

With the aid of the tables, an experimenter who has knowledge of the M and J which apply to his product may design a life test experiment, selecting the N, R, and S which will be optimum for him: giving a minimum price, a maximum information, a cost not exceeding some value, or a best test for a given size sample. In determining T from S, he must have at least an idea of the order of magnitude of α, since T is a function of S and α.

Future topics of research, some of which the author already has planned or in progress, include:

1. A deeper investigation of the peculiar nonmonotonic behavior of the variance, mean square error, and asymptotic mean square error of the estimator as functions of sample size and testing time, with a view toward obtaining a rigorous explanation of their strange performance.

2. Expansion of the results of this thesis to include samples of size 80, 160, 320, and 640.

3. Simulation of a real life test situation by means of several hundred random samples of each of the various sizes, drawn from several thousand Weibull distributed random numbers for each value of M. Ten such samples for each (N, M) combination have been drawn so far and many more are planned.

4. A study of an estimator in the event the test stops when either R failures occur or time T elapses, instead of both.

5. A study similar to this but wherein ~ rather than R is considered one of the constants at the disposal of the experimenter.

6. A study of the large bias which is characteristic of α̂ for small R and P, and an investigation of a method of reducing this bias.

7. Some entirely different procedure, not based on maximum likelihood, and free from some of the mysteries and disadvantages of this estimator.
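Research topic 3 turns on generating Weibull-distributed random numbers. With the parameterization used here, F(t) = 1 - exp(-t^M/α), inversion gives t = [-α ln(1-U)]^(1/M); a minimal sketch of our own (not the thesis program):

```python
import math, random

def weibull_sample(n, M, alpha, seed=0):
    """Draw n life spans from F(t) = 1 - exp(-t**M / alpha) by inversion."""
    rng = random.Random(seed)
    return [(-alpha * math.log(1.0 - rng.random())) ** (1.0 / M) for _ in range(n)]

# The sample mean should be near mu = alpha**(1/M) * Gamma(1 + 1/M) (Table 1):
sample = weibull_sample(100_000, M=2.0, alpha=1.0)
mu = math.gamma(1.0 + 1.0 / 2.0)   # about .886 for M = 2, alpha = 1
print(round(sum(sample) / len(sample), 3))  # close to 0.886
```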

Table 1. Mean (μ) and standard deviation (σ) of life span (t) for various α and M

                                     M
  α           1/2          1        1 1/2       2       2 1/2

 .01   μ   2·10^-4        .01      .0419     .0886     .141
       σ   4.47·10^-4     .01      .0284     .0463     .0602
 .02   μ   8·10^-4        .02      .0665     .125      .186
       σ   17.9·10^-4     .02      .0452     .0655     .0794
 .04   μ   32·10^-4       .04      .106      .177      .245
       σ   71.6·10^-4     .04      .0717     .0927     .105
 .1    μ   2·10^-2        .1       .194      .280      .353
       σ   4.47·10^-2     .1       .132      .146      .151
 .2    μ   8·10^-2        .2       .309      .396      .466
       σ   17.9·10^-2     .2       .210      .207      .199
 .4    μ   32·10^-2       .4       .490      .561      .615
       σ   71.6·10^-2     .4       .333      .293      .263
 1     μ   2              1        .903      .886      .887
       σ   4.47           1        .613      .463      .380
 2     μ   8              2        1.43      1.25      1.17
       σ   17.9           2        .973      .655      .501
 4     μ   32             4        2.27      1.77      1.54
       σ   71.6           4        1.54      .926      .661

μ = α^(1/M) Γ(1 + 1/M);   σ = α^(1/M) [Γ(1 + 2/M) - Γ²(1 + 1/M)]^(1/2)

μ and σ expressed in time units

[See Section 9.1, Program (5)]

Table 2. Bias (B), variance (V), and mean square error (D) of estimator (α̂) for selected N, R or R/N, and P

P          .0    .05    .1     .2     .3     .4     .5     .7     1

R = 1
N=5   B    0    .021   .073   .208   .320   .375   .366   .232    0
      V    1    .963   .893   .791   .825   .958  1.077   .904   .200
      D    1    .964   .898   .835   .927  1.098  1.212   .958   .200
N=10  B    0    .078   .215   .393   .390   .301   .208   .093    0
      V    1    .886   .780   .895  1.066   .959   .674   .260   .100
      D    1    .892   .826  1.049  1.218  1.050   .717   .269   .100
N=20  B    0    .218   .399   .349   .206   .125   .083   .041    0
      V    1    .776   .870  1.012   .595   .292   .169   .090   .050
      D    1    .823  1.029  1.134   .637   .308   .176   .092   .050
N=40  B    0    .401   .369   .152   .082   .054   .038   .019    0
      V    1    .859  1.026   .361   .141   .086   .062   .040   .025
      D    1   1.019  1.162   .384   .148   .089   .064   .040   .025

R = 2
N=5   B    0    .001   .004   .025   .063   .110   .150   .163    0
      V  .500   .499   .493   .465   .430   .410   .420   .475   .200
      D  .500   .499   .493   .466   .434   .422   .442   .502   .200
N=10  B    0    .005   .031   .123   .196   .209   .178   .092    0
      V  .500   .491   .457   .395   .412   .450   .427   .249   .100
      D  .500   .491   .458   .410   .450   .494   .459   .257   .100
N=20  B    0    .034   .128   .227   .183   .122   .083   .041    0
      V  .500   .454   .389   .435   .394   .260   .166   .090   .050
      D  .500   .455   .405   .486   .427   .275   .173   .092   .050
N=40  B    0    .130   .234   .146   .082   .054   .038   .019    0
      V  .500   .386   .427   .294   .140   .086   .062   .040   .025
      D  .500   .403   .482   .316   .147   .089   .064   .040   .025

R = 3
N=5   B    0     0      0     .002   .008   .022   .042   .084    0
      V  .333   .333   .333   .330   .322   .308   .295   .297   .200
      D  .333   .333   .333   .330   .322   .309   .296   .304   .200
N=10  B    0     0     .004   .033   .084   .126   .137   .090    0
      V  .333   .333   .327   .296   .269   .275   .290   .232   .100
      D  .333   .333   .327   .297   .276   .291   .309   .240   .100
N=20  B    0    .005   .038   .135   .153   .118   .083   .041    0
      V  .333   .326   .291   .263   .280   .231   .163   .090   .050
      D  .333   .326   .293   .281   .303   .245   .170   .092   .050
N=40  B    0    .040   .139   .137   .082   .054   .038   .019    0
      V  .333   .289   .259   .244   .138   .086   .062   .040   .025
      D  .333   .291   .278   .263   .145   .089   .064   .040   .025

R = 4
N=5   B    0     0      0      0     .001   .002   .005   .022    0
      V  .250   .250   .250   .250   .249   .247   .243   .233   .200
      D  .250   .250   .250   .250   .249   .247   .243   .233   .200
N=10  B    0     0      0     .007   .029   .062   .090   .084    0
      V  .250   .250   .249   .241   .221   .207   .210   .207   .100
      D  .250   .250   .249   .241   .222   .211   .218   .214   .100
N=20  B    0    .001   .010   .070   .117   .109   .081   .041    0
      V  .250   .249   .238   .200   .205   .198   .157   .090   .050
      D  .250   .249   .238   .205   .219   .210   .164   .092   .050
N=40  B    0    .011   .074   .123   .081   .054   .038   .019    0
      V  .250   .236   .197   .200   .135   .086   .062   .040   .025
      D  .250   .236   .203   .215   .141   .089   .064   .040   .025

B expressed in nondimensional time units; V and D = B² + V expressed in (nondimensional time units)²

[See Section 9.1, Programs (4) and (6)]

Table 2 (continued)

P          .0    .05    .1     .2     .3     .4     .5     .7     1

R = 5
N=5   B    0     0      0      0      0      0      0      0      0
      V  .200   .200   .200   .200   .200   .200   .200   .200   .200
      D  .200   .200   .200   .200   .200   .200   .200   .200   .200
N=10  B    0     0      0     .001   .008   .024   .047   .070    0
      V  .200   .200   .200   .198   .191   .179   .170   .176   .100
      D  .200   .200   .200   .198   .191   .179   .172   .181   .100
N=20  B    0     0     .002   .031   .079   .094   .079   .041    0
      V  .200   .200   .197   .172   .161   .166   .148   .090   .050
      D  .200   .200   .197   .173   .167   .175   .154   .092   .050
N=40  B    0    .003   .035   .104   .080   .054   .038   .019    0
      V  .200   .196   .170   .162   .129   .086   .062   .040   .025
      D  .200   .196   .171   .173   .136   .089   .064   .040   .025

R = N
N=5   B    0     0      0      0      0      0      0      0      0
      V  .200   .200   .200   .200   .200   .200   .200   .200   .200
      D  .200   .200   .200   .200   .200   .200   .200   .200   .200
N=10  B    0     0      0      0      0      0      0      0      0
      V  .100   .100   .100   .100   .100   .100   .100   .100   .100
      D  .100   .100   .100   .100   .100   .100   .100   .100   .100
N=20  B    0     0      0      0      0      0      0      0      0
      V  .050   .050   .050   .050   .050   .050   .050   .050   .050
      D  .050   .050   .050   .050   .050   .050   .050   .050   .050
N=40  B    0     0      0      0      0      0      0      0      0
      V  .025   .025   .025   .025   .025   .025   .025   .025   .025
      D  .025   .025   .025   .025   .025   .025   .025   .025   .025

Table 2 (continued)

P          .0    .05    .1     .2     .3     .4     .5     .7     1

R/N = .2
N=5   B    0    .021   .073   .208   .320   .375   .366   .232    0
      V    1    .963   .893   .791   .825   .958  1.077   .904   .200
      D    1    .964   .898   .835   .927  1.098  1.212   .958   .200
N=10  B    0    .005   .031   .123   .196   .209   .178   .092    0
      V  .500   .491   .457   .395   .412   .450   .427   .249   .100
      D  .500   .491   .458   .410   .450   .494   .459   .257   .100
N=20  B    0    .001   .010   .070   .117   .109   .081   .041    0
      V  .250   .249   .238   .200   .205   .198   .157   .090   .050
      D  .250   .249   .238   .205   .219   .210   .164   .092   .050
N=40  B    0     0     .002   .039   .067   .053   .038   .019    0
      V  .125   .125   .123   .102   .101   .083   .062   .040   .025
      D  .125   .125   .123   .104   .106   .086   .063   .040   .025

R/N = .4
N=5   B    0    .001   .004   .025   .063   .110   .150   .163    0
      V  .500   .499   .493   .465   .430   .410   .420   .475   .200
      D  .500   .499   .493   .466   .434   .422   .442   .502   .200
N=10  B    0     0      0     .007   .029   .062   .090   .084    0
      V  .250   .250   .249   .241   .221   .207   .210   .207   .100
      D  .250   .250   .249   .241   .222   .211   .218   .214   .100
N=20  B    0     0      0     .001   .011   .034   .053   .040    0
      V  .125   .125   .125   .124   .115   .106   .106   .089   .050
      D  .125   .125   .125   .124   .115   .107   .108   .091   .050
N=40  B    0     0      0      0     .003   .019   .031   .019    0
      V  .062   .062   .062   .062   .060   .054   .053   .040   .025
      D  .062   .062   .062   .062   .060   .055   .054   .040   .025

R/N = .6
N=5   B    0     0      0     .002   .008   .022   .042   .084    0
      V  .333   .333   .333   .330   .322   .308   .295   .297   .200
      D  .333   .333   .333   .330   .322   .309   .296   .304   .200
N=10  B    0     0      0      0     .001   .007   .019   .050    0
      V  .167   .167   .167   .166   .165   .159   .151   .147   .100
      D  .167   .167   .167   .166   .165   .159   .151   .150   .100
N=20  B    0     0      0      0      0     .001   .007   .029    0
      V  .083   .083   .083   .083   .083   .082   .078   .074   .050
      D  .083   .083   .083   .083   .083   .082   .078   .075   .050
N=40  B    0     0      0      0      0      0     .002   .017    0
      V  .042   .042   .042   .042   .042   .042   .040   .037   .025
      D  .042   .042   .042   .042   .042   .042   .040   .037   .025

R/N = .8
N=5   B    0     0      0      0     .001   .002   .005   .022    0
      V  .250   .250   .250   .250   .249   .247   .243   .233   .200
      D  .250   .250   .250   .250   .249   .247   .243   .233   .200
N=10  B    0     0      0      0      0      0     .001   .009    0
      V  .125   .125   .125   .125   .125   .125   .124   .118   .100
      D  .125   .125   .125   .125   .125   .125   .124   .118   .100
N=20  B    0     0      0      0      0      0      0     .003    0
      V  .062   .062   .062   .062   .062   .062   .062   .060   .050
      D  .062   .062   .062   .062   .062   .062   .062   .060   .050
N=40  B    0     0      0      0      0      0      0     .001    0
      V  .031   .031   .031   .031   .031   .031   .031   .031   .025
      D  .031   .031   .031   .031   .031   .031   .031   .031   .025

Table 3. Cost (C) and price (U) of estimator (α̂) for selected M, J, N, R, and P

M = .5, J = 1

P       .05     .1      .2      .3      .4      .5      .7
S      .00263  .0111   .0498   .127    .261    .480    1.45

N=5
R=1  C  5.08   5.08   5.11   5.16   5.28   5.49   6.45
     U  4.90   4.56   4.26   4.79   5.80   6.65   6.18
R=2  C  5.31   5.31   5.31   5.34   5.42   5.58   6.47
     U  2.65   2.62   2.48   2.32   2.29   2.47   3.25
R=3  C  5.83   5.83   5.83   5.84   5.87   5.96   6.65
     U  1.94   1.94   1.92   1.88   1.81   1.77   2.02
R=4  C  7.11   7.11   7.11   7.11   7.12   7.15   7.51
     U  1.78   1.78   1.78   1.77   1.76   1.74   1.75

N=10
R=1  C  10.0   10.0   10.1   10.1   10.3   10.5   11.4
     U  8.94   8.29   10.5   12.3   10.8   7.52   3.08
R=2  C  10.1   10.1   10.1   10.1   10.3   10.5   11.4
     U  4.94   4.62   4.14   4.57   5.07   4.81   2.95
R=3  C  10.2   10.2   10.2   10.2   10.3   10.5   11.5
     U  3.38   3.32   3.02   2.82   2.99   3.24   2.75
R=4  C  10.3   10.3   10.3   10.3   10.4   10.5   11.5
     U  2.57   2.57   2.48   2.29   2.19   2.29   2.45
R=7  C  11.4   11.4   11.4   11.4   11.4   11.4   11.8
     U  1.63   1.63   1.63   1.62   1.61   1.57   1.53
R=9  C  14.3   14.3   14.3   14.3   14.3   14.3   14.3
     U  1.59   1.59   1.59   1.59   1.59   1.58   1.57

N=20
R=1  C  20.0   20.0   20.1   20.1   20.3   20.5   21.4
     U  16.5   20.6   22.7   12.8   6.23   3.60   1.98
R=2  C  20.0   20.0   20.1   20.1   20.3   20.5   21.4
     U  9.11   8.12   9.75   8.60   5.58   3.55   1.98
R=3  C  20.0   20.0   20.1   20.1   20.3   20.5   21.4
     U  6.53   5.86   5.64   6.10   4.96   3.48   1.98
R=4  C  20.1   20.1   20.1   20.1   20.3   20.5   21.4
     U  4.99   4.77   4.11   4.41   4.26   3.35   1.98
R=10 C  20.5   20.5   20.5   20.5   20.5   20.6   21.5
     U  2.05   2.05   2.05   2.02   1.90   1.80   1.83
R=14 C  21.4   21.4   21.4   21.4   21.4   21.4   21.7
     U  1.53   1.53   1.53   1.53   1.53   1.51   …
R=19 C  27.3   27.3   27.3   27.3   27.3   27.3   27.3
     U  1.44   1.44   1.44   1.44   1.44   1.44   1.41

N=40
R=1  C  40.0   40.0   40.0   40.1   40.3   40.5   41.4
     U  40.8   46.5   15.4   5.94   3.58   2.57   1.67
R=2  C  40.0   40.0   40.0   40.1   40.2   40.5   41.4
     U  16.1   19.3   12.6   5.88   3.58   2.57   1.67
R=3  C  40.0   40.0   40.0   40.1   40.3   40.5   41.4
     U  11.6   11.1   10.5   5.80   3.58   2.57   1.67
R=4  C  40.0   40.0   40.1   40.1   40.3   40.5   41.4
     U  9.46   8.11   8.61   5.66   3.58   2.57   1.67
R=10 C  40.1   40.1   40.1   40.1   40.3   40.5   41.4
     U  4.01   4.00   3.61   3.39   3.24   2.56   1.67
R=20 C  40.5   40.5   40.5   40.5   40.5   40.6   41.4
     U  2.02   2.02   2.02   2.02   1.95   1.81   1.65
R=30 C  41.9   41.9   41.9   41.9   41.9   41.9   42.0
     U  1.40   1.40   1.40   1.40   1.40   1.40   1.34
R=39 C  51.4   51.4   51.4   51.4   51.4   51.4   51.4
     U  1.32   1.32   1.32   1.32   1.32   1.32   1.32

C and U expressed in units of "cost per item placed on test"

S = T/α^(1/M) = [-ln(1-P)]^(1/M), expressed in nondimensional time units [see Section 8.0]

Table 3 (continued)

M= .5 J = 10v'10 p .05 .1 .2 .3 .4 .5 ,.7 p .05 .1 .2 .3 .4 .5 .7 s .00263 .0111 .0498 .127 .261 .480 1.45 S .00263 .Olil .0498 .127 .261 .480 1.45 N=5 N=20 R R 1 C 7.54 7.63 8.33 10.2 14.0 20.5 50.9 1 c 20.2 20.4 21.6 24.0 28.3 35.2 65.8 U 7.27 6.85 6.95 9.46 15.3 24.9 48.7 u 16.6 21.0 24.5 15.3 8.69 6.18 6.W 2 C 14.6 14.7 14.9 15.8 18.3 23.4 51.6 2 C 20.5 20.6 21.6 24.0 28.3 35.2 65.8 U 7.31 7.23 6.93 6.86 7.71 10.4 25.9 U 9.34 8.37 10.5 10.3 7.78 6.10 6.07 3 c 31.2 31.2 31.2 31.5 32.5 35.4 57.1 3 C 21.1 21.1 21.9 24.1 28.3 35.2 65.8 U 10.4 10.4 10.3 10.1 10.0 10.5 17.3 u 6.86 6.18 6.15 7.30 6.91 5.98 6.07 4 c 71.7 71.7 71.7 71.8 72.0 73.0 84.5 4 C 21.9 21.9 22.4 24.3 28.3 35.2 65.8 U 17.9u_.17.2.~_17. 9 17.9 17.8 17.8 19.7 U 5.44 5.20 4.58 5.31 5.94 5.76 6.07 10 C 35.6 35.6 35.6 35.7 36.2 39.2 66.0 N=10 U 3.56 3.56 3.56 3.51 3.36 3.42 5.63 14C 65.0 65.0 65.0 65.0 65.0 65.2 75.3 1 C 10.7 10.8 11.8 14.1 18.3 25.2 55.8 U 4.64 4.64 4.64 4.64 4.63 4.59 4.91 U 9.50 8.93 12.4 17.2 19.2 18.1 15.0 19 C 252 252 252 252 252 252 252 2 C 12.1 12.2 12.8 14.6 18.5 25.3 55.8 u 13.3 13.3 13.3 13.3 13.3 13.3 13.3 u 5.95 5.58 5.23 6.58 9.12 11.6 14.4 3 c 14.8 14.8 15.0 16.2 19.4 25.6 55.9 N=40 e U 4.92 4.84 4.47 4.48 5.63 7.91 13.4 4 c 19.1 19.1 19.2 19.8 21.9 27.0 56.0 1 C 40.1 40.4 41.6 44.0 48.3 55.2 85.8 U 4.77 4.76 4.62 4.39 4.61 5.88 12.0 U 40.9 46.9 16.0 6.52 4.29 3.51 3.45 7 c 53.9 53.9 53.9 53.9 54.1 54.8 68.2 2 C 40.2 40.4 41.6 44.0 48.3 55.2 85.8 U 7.70 7.70 7.70 7.69 7.64 7.54 8.81 U 16.2 19.4 13.1 6.46 4.24 3.51 3.45 9 c 145 145 145 145 145 145 147 3 c 40.3 40.4 41.6 44.0 48.3 55.2 85.8 U 16.1 16.1 16.1 16.1 16.1 16.1 16.1 U li.7 11.2 10.9 6.37 4.29 3.51 3.45 4 C 40.4 40.5 41.6 44.0 48.3 55.2 85.8 U 9.56 8.21 8.94 6.21 4.28 3.51 3.45 10 C 42.8 42.8 42.9 44.4 48.3 55.2 85.8 U 4.28 4.27 3.87 3.74 3.89 3.49 3.45 20 C 55.4 55.4 55.4 55.4 55.6 58.0 85.9 U 2.77 2.77 2.77 2.77 2.68 2.59 3.43 30 C 99.8 99.8 . 
99.8 99.8 99.8 99.8 103 U 3.33 3.33 3.33 3.33 3.33 3.33 3.28 39 C 400 400 400 400 400 400 400 U 10.2 10.2 10.2 10.2 10.2 10.2 10.2 71 e Table 3 (continued) M=.5 J = 1000 P .05 .1 .2 .3 .4· .5 .7 P .05 .1 .2 .3 .4 .5 .7 S .00263 .0111 .0498 .127 .261 .480 1.45 S .•00263 .Olll .0498 .127 .261 .480 1.45 N=5 N=20 R R 1 C 85.4 88.2 110 170 288 497 .1460 1 C 26.3 33.0 70.1 147 281 500 1470 U 82.3 79.2 92.0 157 316 602 1390 U 21.6 33.9 79.5 93.9 86.4 87.9 135 2 C 310 310 317 347 424 588 1480 2 C 36.2 40.1 72.2 148 281 5C0 1470 U 155 153 148 150 179 260 742 U 16.5 16.3 35.1 63.0 77.4 86.7 135 3 C 832 832 834 842 875 966 1650 3 C 53.5 55.2 79.0 149 281 500 1470 U 277 277 275 271 270 286 501 U 17.4 16.2 22.2 45.2 68.8 85.0 135 4 C 2120 2120 2120 2120 2120 2150 2520 4 C 78.9 79.5 94.5 155 282 501 1470 U 529 529 529 528 525 524 587 U 19.7 '18.9 19.4 33.9 59.2 81.9 135 10 C 514 514 514 515 534 627 1480 N=10 U 51.4 51.4 51.3 50.7 49.5 54.8 126 14 C 1440 1440 1440 1440 1440 1450 1770 1 C 30.7 35.4 66.7 140 272 491 1460 U 103 103 103 103 103 102 115 U 27.4 29.3 70.0 170 285 352 392 19 C 7360 7360 7360 7360 7360 7360 7370 2 C 77.0 78. 6 97.0 156 278 493 1460 U 388 388 388 388 388 388 388 U 37.8 36.0 39.8 70.2 137 226 375 3 C", 161 .... . 161 169 207 306 504 1460 N=40 tr '"' 53:6'" 52.8 50.2 57.2 89.0 156 351 a 4 C 298' .3~8,... 300 319 385 547 1460 1 C 43.1 51.2 89.8 167 301 520 11.90 • U 71..4' 74.3 72.3 70.8 81.3 119 313 U 44.0 59.5 34.5 24.8 26.8 33.1 59.9 7 C 1400 1400'- 1400 ..... 1400 .1400 1430 1850 2 C 44.9 51.8 89.8 167 301 520 11.90 U 200 200 200 200 198 196 239 U 18.1 24.9 28.3 24.5 26.7 33.1 59.9 9 C 4280 4280 4280""42804280. 4280 4350 3 C 48.3 53.4 89.9 167 301 520 1490 U 476 476 476 476 476 475 477 U 14.0 14.8 23.7 24.2 26.7 33.1 59.9 . ---, , .._. " 4 c. 
53.7 56.9 90.3 167 301 520 1490 U 12.7 11.5 19.4 23.6 26.7 33.1 59.9 10 C 128 129 133 178 302 520 1490 U 12.8 12.8 12.0 15.0 24.3 32.9 59.9 20 C 528 528 528 528 535 610 1490 U 26.3 26.3 26.3 26.3 25.7 27.2 59.4 30 C 1930 1930 1930 1930 1930 1930 2030 U 64.4 64.4 64.4 64.4 64.4 64.4 61..7 39 C 11,400 11,400 11,400 11,400 11,400 11,400 11,400 U 293 293 293 293 293 293 293 72 e Table 3 (continued)

M= 1.5, J = 1 P .05 .1 .2 .3 .4 .5 .7 p .05 .1 .2 .3 .4 .5 .7 s .138 .223 .368 .503 .639 .783 1.13 s .138 .223 .368 .503 .639 .783 1.13 N=5 N=20 R R 1 C 5.32 5.35 5.43 5.53 5.65 5.79 6.13 1 c 20.2 20.2 20.4 20.5 20.6 20.8 21.1 U 5.13 4.80 4.53 5.13 6.21 7.01 5.87 U 16.6 20.8 23.1 13.1 6.35 3.65 1.95 2 c 5.56 5.56 5.59 5.63 5.71 5.82 6.14 2 c 20.2 20.3 20.4 20.5 20.6 20.8 21.1 U 2~77 2.74 2.60 2.45 2.41 2.57 ~.08 u 9.20 8.21 9.90 8.76 5.69 3.60 1.9 3 c 5.82 5.82 5.82 5.84 5.87 5.93 .17 3 c 20.3 20.3 20.4 20.5 20.6 20.8 21.1 U 1.94 1.94 1.92 1.87 1.81 1.76 1.87 u 6.61 5.94 5.74 6.21 5.05 3.53 1.9 4 c 6.15 6.15 6.15 6.15 6.16 6.18 6.30 4 c 20.4 20.4 20.4 20.5 20.6 20.8 21.1 U 1.54 1.54 1.54 1.53 1.52 1.50 1.47 u 5.07 4.84 4.18 4.49 4.33 ].40 1.95 10 c 20.8 20.8 20.8 20.8 20.8 20.8 21.1 U 2.08 2.08 2.07 2.04 1.93 1.82 1.80 N=10 14 C 21.1 21.1 21.1 21.1 21.1 21.1 21.2 U 1.51 1.51 1.51 1.51 1.50 1.49 1.38 19 C 21.9 21.9 21.9 21.9 21.9 21.9 21.9 1 c 10.2 10.3 10.4 10.5 10.6 10.8 li.l U 1.15 1.15 1.15 1.15 1.15 1.15 1.15 u 9.il 8.48 10.9 12.8 11.2 7.73 2.99 2 c 10.3 10.4 10.4 10.5 10.6 10.8 11.1 N=40 e U 5.08 4.75 4.27 4.74 5.25 4.94 2.86 3 c 10.5 10.5 10.5 10.6 10.7 10.8 li.l 1 C 40.1 40.2 40.4 40.5 40.6 40.8 41.1 U 3.48 3.43 3.12 2.92 3.10 3.33 2.68 u 40.9 46.7 15.5 6.00 3.61 2.59 1.65 4 c 10.6 10.6 10.6 10.6 10.7 10.8 11.1 2 C 40.2 40.2 40.4 40.5 40.6 40.8 41.1 U 2.65 2.64 2.55 2.36 2.26 2.35 2.38 u 16.2 19.4 12.7 5.94 3.61 2.59 1.65 7 c 11.0 li.O il.O 11.0 il.O 11.1 li.2 3 C 40.2 40.2 40.4 40.5 40.6 40.8 41.1 U 1.58 1.58 1.58 1.57 1.56 1.52 1.45 U 11.7 11.2 10.6 5.86 3.61 2.59 1.65 9 c 11.5 li.5 11.5 11.5 li.5 li.5 11.5 4 c 40.2 40.2 40.4 40.5 40.6 40.8 41.1 U 1.28 1.28 1.28 1.28 1.28 1.28 1.27 U 9.51 8.15 8.67 5.72 3.61 2.59 1.65 10 c 40.4 40.4 40.4 40.5 40.6 40.8 41.1 U 4.04 4.04 3.65 3.42 3.27 2.58 1.65 20 c 40.8 40.8 40.8 40.8 40.8 40.8 41.1 U 2.03 2.03 2.03 2.03 1.96 1.82 1.64 30 c 41.2 41.2 41.2 41.2 41.2 41.2 41.2 U 1.37 1.37 1.37 1.37 1.37 1.37 1.31 39 C 42.2 
42.2 42.2 42.2 42.2 42.2 42.2 U 1.08 1.08 1.08 1.08 1.08 1.08 1.08. 73 e Table 3 (continued) H = 1.5 J = 10 v'IO P .05 .1 .2 .3 .4 .5 .7 p .05 .1 .2 .3 .4 .5 .7 S .138 .223 .368 .503 .639 .783 1.13 s .138 .223 .368 .503 .639 .783 1.13 N=5 N=20 R R 1 C 15.2 16.0 18.5 21.8 25.6 29.9 40.8 1 C 25.2 27.3 31.7 35.9 40.2 44.8 55.8 U 1Lo6 14.4 15.5 20.2 28.1 36.2 39.1 U 20.7 28.1 35.9 22.9 12.4 7.87 5.14 2 C 22.6 22.7 23.5 25.1 27.5 30.9 40.9 2 c 26.9 28.1 31.8 35.9 40.2 44.8 55.8 U 11.2 il.O 11.6 20.6 U ,.. 11.3 10.9 13.7 12.2 11.4 15.4 15.3 11.1 7.76 5.lL. 3 v 30.9 30.9 31.0 31.5 32.5 34.4 41.9 3 c 29.0 29.5 32.1 36.0 40.2 44.8 55.8 U 10.3 10.3 10.2 10.1 10.0 10.2 12.7 U 9.45 8.63 9.04 10.9 9.84 7.60 5.14 4 C 41.2 41.2 41.2 41.3 41.6 42.2 46.1 4 C 31.1 31.3 32.9 36.1 40.2 44.8 55.8 U 10.3 10.3 10.3 10.3 10.3 10.3 10.7 u 7.75 7.44 6.74 7.91 8.1.5 7.33 5.14 10 C 43.9 43.9 43.9 44.0 44.5 46.4 55.8 N=10 U 4.39 4.39 4.39 4.33 4.13 t..05 4.76 14C 54.4 54.4 54.4 54.4 54.4 54.5 57.7 1 C 16.9 18.4 22.0 26.0 30.2 34.8 45.8 U 3.88 3.88 3.88 3.88 3.88 3.8t. 3.77 U 15.1 15.2 23.0 31.6 31.7 24.9 12.3 19 c 79.2 79.2 79.2 79.2 79.2 79.2 79.2 2 C 20.7 21.2 23.2 26.4 30.3 34.8 45.8 U 4.17 4.17 4.17 1..17 4.17 4.17 4.17 u,.. 
10.2 9.71 9.53 il.9 15.0 16.0 11.8 3 v 24.7 24.8 25.7 27.7 30.9 35.0 45.8 N=40 e u 8.24 8.13 7.63 7.65 8.96 10.8 il.O 4 C 28.8 28.8 29.1 30.1 32.2 35.5 45.8 1 C 44.5 47.1 51.6 55.9 60.2 64.8 75.8 U 7.21 7.19 7.01 6.68 6.78 7.73 9.81 U 45.4 54.7 19.8 8.28 5.35 4.11 3.e5 7 c 43.0 43.0 43.0 43.1 43.2 43.7 48.1 2 C 45.0 47.2 51.6 55.9 60.2 6Lo8 75.8 U 6.15 6.15 6.15 6.14 6.10 6.01 6.22 U 18.2 22.7 16.3 8.20 5.35 Le.1l 3.05 9 c 58.2 58.2 58.2 58.2 58.2 58.3 58.9 3 C 45.9 47.4 51.6 55.9 60.2 64.8 75.8 U 6.47 6.47 6.47 6.47 6.47 6.47 6.46 u 13.3 13.2 13.6 8.08 5.35 4.11 3.05 4 C 46.9 47.9 51.7 55.9 60.2 6Lo8 75.8 U 11.1 9.69 11.1 7.89 5.35 4.11 3.05 10 C 53.5 53.5 53.9 56.3 60.2 64.8 75.8 U 5.35 5.34 4.86 4.75 4.85 4.C) 3.05 20 C 64.3 64.3 64.3 64.3 64.5 66.0 75.8 U 3.21 3.21 3.21 3.21 3.10 2.9Le 3.C2 30 C 78.5 78.5 78.5 78.5 78.5 78.5 79.4 U 2.61 2.61 2.61 2.61 2.61 2.61 2.53 39 c 109 109 109 109 109 109 109 u 2.80 2.80 2.80 2.80 2.80 2.80 .2.20 71J. e Table 3 (continued) M= 1.5 J = 1000

P .05 .1 .2 .3 .4 .5 .,7 P .05 .1 .2 .3 .4 .5 .7 S .138 .223 .368 .503 .639 .783 1.13 S .138 .223 .368 .503 .639 .783 1.13 N=5 N=20 !) R 1 c 327 354 433 536 656 793 il40 1 c 185 ~51 388 523 659 803 1150 u 315 318 362 497 720 960 1090 u 152 258 441 333 203 141 106 2 c 562 566 590 640 717 825 1140 2 c 239 276 392 523 659 803 1150 u 280 279 275 278 303 365 573 u 109 il2 191 223 182 139 106 ':\ -' c 824 824 828 842 875 934 1170 3 c 304 320 404 525 659 803 1150 u 275 274 273 271 270 277 356 u 99.1 93.7 il4 159 161 136 106 4 c il50 1150 1150 1150 il60 1180 1300 4 c 372 377 428 531 660 803 1150 U 288 288 288 287 287 287 304 u 92.6 89.6 87.6 116 139 131 106 10 c 776 776 776 779 796 856 1-150 N=10 u 77.6 77.6 77.6 76.7 73.9 74.7 98.3 14 c 1110 1110 1110 ill0 1110 lliO 1210 1 C 229 274 388 515 649 793 il40 U 79.0 79.0 79.0 79.0 79.0 78.3 79.2 u 204 226 408 628 682 569 307 19 C 1890 1890 1890 1890 1890 1890 1890 2 c 349 364 428 530 654 794 1140 u 99.6 99.6 99.6 99.6 99.6 99.6 99.6 u 171 167 176 238 323 364 294 3 c 476 479 506 570 670 799 1140 N=40 U 159 157 150 157 195 247 274 e 4 c 605 606 614 645 711 816 1140 u 151 151 148 1 C 183 264 408 543 679 823 1170 143 150 178 245 U 187 306 157 80.4 60.4 52.3 47.1 7 "v 1050 1050 1050 1060 1060 1070 1220 u 151 151 151 2 C 199 266 408 543 679 823 1170 150 150 148 157 U 80.3 128 124 79.6 60.4 52.3 47.1 9 c 1540 1540 1540 1540 1540 1540 1560 u 171 171 171 171 171 3 C 226 274 408 543 679 823 1170 171 171 u 65.6 76.1 107 78.5 60.3 52.3 L.7.1 4 C 259 288 409 543 679 823 1170 U 61.2 58.4 87.9 76.6 60.3 52.3 47.1 10 C 467 467 480 554 680 823 1170 u 46.7 46.6 43.3 46.8 54.7 52.0 47.1 20 C 810 810 810 810 816 864 1170 U o. 40.4 40.4 40.4 39.3 38.5 46.8 30 C 12 0 12 0 12 0 12 0 12 0 12 0 1280 U 41.9 41.9 41.9 41.9 41.9 41.9 41.0 39 C 2230 2230 2230 2230 2230 2230 2230 U 57.3 57.3 57.3 57.3 57.3 57.3 57.~ 75 e Table 3 (continued)

M = 2.5, J = 1

p .05 .1 .2 .3 .4 .5 .7 p .05 .1 .2 .3 .4 .5 .7 s .305 .407 .549 .662 .764 .864 1.08 S' .305 .407 .549 .662 .764 .864 1.08 N=5 N=20 R R 1 c 5.49 5.52 5.60 5.68 5.77 5.m 6.08 1 C 20.3 20.4 20.5 20.7 20.8 20.9 21.1 u 5.29 4.96 1..67 5.27 6.34 7.11 5.82 U 16.7 21.0 23.3 13.2 6.38 3.67 1.94 .:: C 5. 68 5.69 5.71 5.75 5.81 5.89 6.08 2 C 20.4 20.4 20.6 20.7 20.8 20.9 21.1 u 2.84 2.81 2.66 2.50 2.46 2.60 3.05 u 9.28 8.29 9.99 8.82 5.72 3.62 1.91. 3 C 5.87 5.87 5. eJ1 5.89 5.91 5.95 6.10 3 C 20.5 20.5 20.6 20.7 20.8 20.9 21.1 U 1.96 1.96 1.94 1.89 1.82 1.76 1.85 U 6.67 5.99 5.79 6.26 5.08 3.54 1.94 4 C 6.07 6.07 6.07 6.07 6.08 6.09 6.16 4 C 20.5 20.5 20.6 20.7 20.8· 20.9 21.1 U 1.52 1.52 1.52 1.51 1.50 1.48 1.44 U 5.11 4.88 4.22 4.52 4.36 3.41 1.91- 10 C 20.8 20.8 20.8 20.8 20.9 20.9 21.1 N=10 U 2.08 2.08 2.08 2.05 1.9~ 1.82 1.80 14 C 21.0 21.0 21.0 21.0 21.0 21.0 21.1 1 C 10.4 10.4 10.6 10.7 10.8 10.9 H.l U 1.50 1.50 1.50 1.50 1.50 1.48 1.38 U 9.27 8.63 11.1 13.0 11.3 7.79 2.98 19 C 21.4 21.4 21.4 21.4 21.4 21.4 21.5 2 c 10.5 10.5 10.6 10.7 10.8 10.9 11.1 U 1.13 1.13 1.13 1.'3 1.13 1.13_ 1.13 U. 
5.16 4.83 4.34 1:..81 5.32 4.98 2.85 3 c 10.6 10.6 10.7 10.7 10.8 10.9 11.1 N=40 U 3.53 3.48 3.16 2.96 3.13 3.35 2.66 e 4 c 10.7 10.7 10.7 10.8 10.8 10.9 11.1 1 C 40.3 40.4 40.5 40.7 40.8 40.9 1.1.1 U 2.68 2.67 2.58 2.39 2.28 2.37 2.37 U 41.1 46.9 15.6 6.02 3.62 2.60 1.65 7 c 11.0 11.0 11.0 11.0 11.0 11.0 11.1 2 C 40.3 40.4 40.5 40.7 40.8 40.9 41.1 U 1.57 1.57 1.57 1.57 1.56 1.52 1.44 U 16.3 19.5 12.8 5.96 3.62 2.60 1.65 9 c 11.3 11.3 11.3 11.3 11.3 11.3 11.3 3 c 40.4 40.4 U 40.5 40.7 40.8 40.9 41.1 1.25 1.25 1.25 1.25 1.25 1.25 1.24 U 11.7 11.2 10.7 5.88 3.62 2.60 1.65 4 C 40.4 40.4 40.5 40.7 40.8 40.9 41.1 u 9.55 8.19 8.71 5.74 3.62 2.60 1.65 10 c 40.6 40.6 40.6 40.7 40.8 40.9 41.1 u 4.06 4.05 3.66 3.43 3.28 2.58 1.65 20 c 40.9 40.9 40.9 40.9 40.9 40.9 41.1 U 2.04 2.04 2.04 2.04 1.97 1.82 1.64 30 c 41.1 41.1 41.1 41.1 41.1 41.1 41.1 U 1.37 1.37 1.37 1.37 1.37 1.37 1.31 39 C 41.6 41.6 41.6 41.6 41.6 41.6 41.6 U 1.07 1.07 1.07 1.07 1.07 1.07 i.O?


Table 4. Minimum standardized test duration (S) for various P and M

P:        0    .05     .1     .2     .3     .4     .5     .6    .63213   .7     .8    .95     1

M=1/2     0   .00263  .0111  .0498  .127   .261   .480   .840    1      1.45   2.59  8.97    ∞
M=1       0   .0513   .105   .223   .357   .511   .693   .916    1      1.20   1.61  3.00    ∞
M=1 1/2   0   .138    .223   .368   .502   .639   .783   .943    1      1.13   1.37  2.08    ∞
M=2       0   .226    .325   .472   .597   .715   .833   .957    1      1.10   1.27  1.73    ∞
M=2 1/2   0   .305    .407   .549   .662   .764   .864   .966    1      1.08   1.21  1.55    ∞

(The column P = .63213 = 1 - e^(-1) gives S = 1 for every M.)

S expressed in nondimensional time units; S = [-ln(1-P)]^(1/M) = (-L)^(1/M)

[See Section 9.1, Program (3)]

Table 5. Information (I = 1/D) obtained from estimator (α̂)

P        .05     .1      .2      .3      .4      .5      .7      1

N=5
R=1    1.04    1.11    1.20    1.08    .911    .825    1.04     5
R=2    2       2.03    2.15    2.30    2.37    2.26    1.99     5
R=3    3       3       3.03    3.11    3.24    3.37    3.29     5
R=4    4       4       4       4.01    4.04    4.11    4.29     5
R=5    5       5       5       5       5       5       5        5

N=10
R=1    1.12    1.21    .953    .821    .953    1.39    3.72    10
R=2    2.04    2.18    2.44    2.22    2.03    2.18    3.89    10
R=3    3.01    3.06    3.37    3.62    3.44    3.24    4.16    10
R=4    4       4.01    4.15    4.50    4.74    4.59    4.67    10
R=7    7       7       7       7.01    7.08    7.27    7.73    10
R=9    9       9       9       9       9       9.01    9.12    10
R=10  10      10      10      10      10      10      10       10

N=20
R=1    1.22    .972    .882    1.57    3.25    5.69    10.9    20
R=2    2.20    2.47    2.06    2.34    3.63    5.77    10.9    20
R=3    3.07    3.42    3.55    3.30    4.09    5.89    10.9    20
R=4    4.02    4.21    4.88    4.57    4.76    6.11    10.9    20
R=10  10      10      10      10.2    10.8    11.5    11.7    20
R=14  14      14      14      14      14      14.2    15.3    20
R=19  19      19      19      19      19      19      19      20
R=20  20      20      20      20      20      20      20      20

N=40
R=1    .981    .861    2.60    6.75    11.2    15.7    24.9    40
R=2    2.48    2.08    3.17    6.82    11.3    15.7    24.9    40
R=3    3.44    3.60    3.80    6.92    11.3    15.7    24.9    40
R=4    4.23    4.94    4.65    7.09    11.3    15.7    24.9    40
R=10  10      10      11.1    11.8    12.4    15.8    24.9    40
R=20  20      20      20      20      20.8    22.4    25.1    40
R=30  30      30      30      30      30      30      31.4    40
R=39  39      39      39      39      39      39      39      40
R=40  40      40      40      40      40      40      40      40

I expressed in (nondimensional time units)^(-2) [See Section 8.0]

Table 6. Ordinates of the standardized Weibull density function, w(s; M) = M s^(M-1) e^(-s^M)

         M = 1/2               M = 1      M = 1 1/2                  M = 2           M = 2 1/2
 s    w = e^(-√s)/(2√s)    w = e^(-s)   w = (3/2)√s e^(-s^(3/2))   w = 2s e^(-s²)   w = (5/2)s^(3/2) e^(-s^(5/2))

 0        ∞       1        0        0        0
 .05    1.783   .951     .332     .100     .028
 .1     1.153   .905     .460     .198     .079
 .2     .714    .819     .613     .384     .220
 .3     .527    .741     .697     .548     .390
 .346                                      .470°
 .4     .420    .670     .736     .682     .572
 .481                    .746*
 .5     .349    .607     .745     .779     .741
 .6     .297    .549     .730     .838     .880
 .7     .259    .497     .699     .858     .972
 .707                             .857*
 .8     .229    .449     .656     .845    1.006
 .815                                     1.010*
 .9     .204    .407     .606     .801     .990
1       .184    .368     .552     .736     .920
1.066                    .515°
1.225                             .546°
1.245                                      .611°
1.5     .120    .223     .292     .316     .291
1.731                                      .110
1.75                              .164
2       .086    .135     .125     .073     .024

* maximum;  ° inflection point
[See Section 5.0 and Figure 1]
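The ordinates in Table 6 come straight from w(s; M) = M s^(M-1) e^(-s^M); for M > 1 the starred maximum sits at s = ((M-1)/M)^(1/M). A short sketch of our own (not from the thesis):

```python
import math

def w(s, M):
    """Standardized Weibull density w(s; M) = M * s**(M-1) * exp(-s**M)."""
    return M * s ** (M - 1) * math.exp(-(s ** M))

def mode(M):
    """Location of the maximum of w(s; M) for M > 1, from w'(s) = 0."""
    return ((M - 1) / M) ** (1 / M)

# Reproduce the starred entry for M = 2 in Table 6:
print(round(mode(2.0), 3), round(w(mode(2.0), 2.0), 3))  # 0.707 0.858
```

For M = 1.5 this gives the tabled maximum near s = .481, and for M = 2.5 near s = .815.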

Figure 1. The standardized Weibull density function, w(s; M) = M s^(M-1) e^(-s^M), for M = 1/2, 1, 1 1/2, 2, 2 1/2 and 0 ≤ s ≤ 2. (See Table 6 and Section 5.0.)

Figure 2. Variance (V) of the standardized estimator (α̂) as a function of N for various combinations of small R and small P. (See Section 7.3.)

Figure 3. Mean square error (D) of the standardized estimator (α̂) as a function of P for various (R, N) combinations. (See Table 2, Section 7.4, and Section 9.1.)

Figure 4. Asymptotic mean square error (A) of the standardized estimator (α̂) as a function of P for various (R, N) combinations. (See Section 7.5.)

Figure 5. Minimum standardized test duration (S) as a function of P for various M. (See Table 4 and Section 9.1, Program (3).)

Figure 6. Expected life span (μ) as a function of the scale parameter (α) of the Weibull density function, for various values of the shape parameter M. (See Table 1 and Section 3.0.)

LIST OF REFERENCES

• Chapman, D. G., 1959. Advanced Theory of Statistics. R. C. Taeuber, recorder, North Carolina State College. Mimeo Series No. 214. Cohen, A. Co, Jr., 1950. Estimating the Mean and Variance of Normal Populations from Singly Truncated and Doubly Truncated Samples. Annals Math. Stat. 21, 557-569. Cohen, A. C., Jro, 1951. Estimation of Parameters in Truncated Pearson Distributions. Annals Math. Stat. 22, 256-265. Deemer, W. L., Jr., and Votaw, D. F., Jr., 1955. Estimation of Parameters of Truncated or Censored Exponential Distributions. Annals Math. Stat. 26, 498-504. Dwight, H. B., 1957. Tables of Integrals. 3d Ed. New York. The Macmillan Co. Epstein, B., 1953. Statistical Problems in Life Testing. Proceedings 7th Annual Convention ASQC, 385-398. Epstein, B., 1954. Life Test Estimation Procedures. Unpub. Tech. Rep. No.2, Dept. of Matho, Wayne State Univ.

Epstein, B., and Sobel, M., 1953. Life Testing. J. Am. Stat. Assn. ~, 486-502. Epstein, B., and Sobel, Mo, 1954. Some Theorems Relevant to Life Testing from an Exponential Distribution. Annals Math. Stat. 22, 373-381. Epstein, B.~ 1960. Tests for the Validity of the Assumption that the Underlying Distribution of Life is Exponential. Parts I and II, Technometrics ~, 83-101, 167-183. Grab, Eo L., and Savage, I. Ro, 1954. Tables of the Expected Value of x1 for Positive Bernoulli and Poisson Variables. J. Am. Stat. Assn. .. lt2" 169-177• Gupta, S. S., 19520 Estimation of the Mean and the Standard Deviation of a Norlllal Population from a Censored Sample. Biometrika.2.2., 260-273. Hald, Ao, 1949. Maximum Likelihood Estimation of the Parameters of a Normal Distribution which is Truncated at a Known Point. Skand. Aktuarietidskrift ~, 119-1340 Herd, Go R., 1956. Estimation of the Parameters of a Population from a MUlti Censored Sample. Unpub. Ph.Do dissertation, Iowa State College. IBM Reference Manual, 1959. Random Number Generation and Testing. New York. 88 Jaech, J. L., 1955. New Techniques for Life Testing. Unpub. Hanford Atomic Products Co. Kao, J. H. K., 1956. A New Life Quality Measure for Electron Tubes. IRE Trans. on Reliability and . PGRQC-7, April, 1956. 1-11. Mendenhall, W., 1957. Estimation of Parameters of Mixed Exponentially Dis­ tributed Failure Time Distributions from Censored Life Test Data. • Unpub. Ph.D. dissertation, North Carolina State College• Mendenhall, W., 1958. A Bibliography on Life Testing and Related Topics. Biometrika ~, .521,-543.

Mendenhall, W. p and Hadar, Ro Ho, 1958. Estimation of Parameters of Mixed Exponentially Distributed Failure Time Distributions from Censored Life Test Data. North Carolina State College Reprint Series No. 130. Mendenhall, Wop and Lehman, Eo H., Jr., 1960. An Approximation to the Negative Moments of the Positive Binomial Useful in Life Testing. Technometrics ~ 227-242. Sarhan, Ao Eo, and Greenberg, B. G., 1956. Estimation of Location and Scale Parameters by Order Statistics from Singly and Doubly Censored Samples 0 Annals Math. Stat. gz, 427=451.

Stephan, F. F• .9 1945. The Expected Value and Variance of the Reciprocal and other Negative Powers of a Positive Bernoullian Variate. Annals Math. Stat. ~, 50-61. Tilden, Do Ao, 1957. Life Testing Using the Halfrange. Proceedings All Day Conference QC, Rutgers, Sept. 7, 1957. Wald, Ao, 1948. Asymptotic Properties of the Maximum Likelihood Estimate of an Unknown Parameter of a Discrete Stochastic Processo Annals Matho Stat. 12" 40-46. Weibull, Wo, 1951. A Statistical Distribution Function of Wide Applicability. J. Applo Mech. 18, 293-297.

Zelenll M., 19590 Factorial Experiments in Life Testing. Technometrics 1, 269-2880 Zelen, Mo, 1960. Analysis of Two-Factor Classifications with Respect to Life Test. Contributions to Probability and Statistics. Stanford • University Press, Stanford, Calif., 508-517.

Zelen, M., and Dannemiller, M. C., 1961. The Robustness of Life Testing Procedures Derived from the Exponential Distribution. Technometrics 3, 29-49.

ABSTRACT

LEHMAN, EUGENE H., JR. Estimation of the Scale Parameter in the Weibull Distribution Using Samples Censored by Time and by Failures. (Under the direction of RICHARD LOREE ANDERSON.)

This dissertation presents the results of an investigation to determine the optimum sample size (N), the minimum required failures (R), and the minimum test duration (T) in order to estimate most effectively the scale parameter (α) in the Weibull distribution. To determine the optimum combination of these test constants, the maximum likelihood estimator (â) of α was used.

The properties of the Weibull distribution and the shape of the accompanying density function for various values of the shape parameter

(M) were presented. The mean and variance of the life span (t) of an item were shown to be proportional to α^(1/M) and α^(2/M), respectively:

    μ = E(t) = α^(1/M) Γ(1/M + 1),

    σ² = V(t) = α^(2/M) [Γ(2/M + 1) − Γ(1/M + 1)²].
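These two moments can be checked numerically. The following sketch is illustrative only (it is not part of the dissertation's computations) and assumes the parameterization used throughout, with survival function exp(−t^M/α); the function name is hypothetical:

```python
from math import gamma

def weibull_mean_var(alpha, M):
    """Mean and variance of the life span t when P(t > x) = exp(-x**M / alpha).

    mu     = alpha**(1/M) * Gamma(1/M + 1)
    sigma2 = alpha**(2/M) * (Gamma(2/M + 1) - Gamma(1/M + 1)**2)
    """
    g1 = gamma(1.0 / M + 1.0)
    g2 = gamma(2.0 / M + 1.0)
    mu = alpha ** (1.0 / M) * g1
    sigma2 = alpha ** (2.0 / M) * (g2 - g1 ** 2)
    return mu, sigma2

# Check against the exponential special case M = 1 (mean alpha, variance alpha**2).
mu, sigma2 = weibull_mean_var(2.0, 1.0)
```

For M = 1 the Weibull reduces to the exponential distribution, so the call above should return a mean of 2 and a variance of 4.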

The "hazard" function

    z = M t^(M−1) / α

was defined as the instantaneous tendency to failure. It is proportional to the probability of failure in the next instant of time, given that an item has survived to time t. This probability increases, remains constant, or decreases with time as M exceeds, equals, or is less than 1.

A new method of censoring was presented where N items are placed

on test and the test is stopped only after both R items have failed and T time units have elapsed; the maximum likelihood estimator â of α for this method was derived. The bias (B), the variance (V), and the mean square error (D) of the standardized estimator (a = â/α) were investigated. It was noted that B, V, and D are not monotonic as functions of P and N, where P is the probability of any item failing before time T. B, V, and D are, however, monotonically decreasing as functions of R. It was proved that â satisfies the four conditions set up by Wald in 1948 for consistency and asymptotic efficiency. The information (I) derived from â was defined as the reciprocal of D. The cost (C) of the estimator was defined as

    C = N + J E(d) / α^(1/M),

where J is the ratio "cost per unit time of continuing the test" to "cost per item placed on test", and E(d) is the expected duration of the test. The price (U) of the estimator was defined as the cost per unit information,

U = C/I = CD.

Values of U and C for various combinations of M, N, P, R, and J were calculated and tabulated.

With the aid of the tables, an experimenter who has knowledge of the M and J which apply to his product may design a life test experiment, selecting N, R, and S (where S is the standardized time variate, S = T/α^(1/M)), which will be optimum for him: giving either a minimum price, a maximum information, a cost not exceeding some value, or a best test for a given sample size. Included also are graphs and tables presenting the Weibull density function of the standardized time variate,

    w(s; M) = M s^(M−1) exp(−s^M);

D and the asymptotic mean square error (A) as functions of P; V as a function of N; â as a function of α; and P as a function of S.
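As a final illustrative sketch (hypothetical helper names, same assumed parameterization), the standardized density and the relation between P and S can be written directly; P = 1 − exp(−S^M) follows from F(T) = 1 − exp(−T^M/α) with S = T/α^(1/M):

```python
from math import exp

def w(s, M):
    """Standardized Weibull density, w(s; M) = M * s**(M - 1) * exp(-s**M)."""
    return M * s ** (M - 1.0) * exp(-(s ** M))

def P_of_S(S, M):
    """Probability that an item fails before standardized time S."""
    return 1.0 - exp(-(S ** M))

# At M = 1 (exponential case), w(1; 1) = 1/e and P(1) = 1 - 1/e.
```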