
ESTIMATION BY LEAST SQUARES AND BY MAXIMUM LIKELIHOOD

JOSEPH BERKSON
MAYO CLINIC AND MAYO FOUNDATION*

* The Mayo Foundation, Rochester, Minnesota, is a part of the Graduate School of the University of Minnesota.

We are concerned with a functional relation:

(1)  $P_i = F(x_i, \alpha, \beta) = F(Y_i)$

(2)  $Y_i = \alpha + \beta x_i$

where $P_i$ represents a true value corresponding to $x_i$, $\alpha$, $\beta$ represent parameters, and $Y_i$ is the linear transform of $P_i$. At each of $r \ge 2$ values of $x$, we have an observation $p_i$ which at $x_i$ is distributed as a random variable around $P_i$ with variance $\sigma_i^2$. We are to estimate the parameters as $\hat\alpha$, $\hat\beta$ for the predicting equation

(3)  $\hat p_i = F(x_i, \hat\alpha, \hat\beta)$.

By a least squares estimate of $\alpha$, $\beta$ is generally understood one obtained by minimizing

(4)  $\sum_i (p_i - \hat p_i)^2$.

Although statements to the contrary are often made, application of the principle of least squares is not limited to situations in which $p$ is normally distributed. The Gauss-Markov theorem is to the effect that, among unbiased estimates which are linear functions of the observations, those yielded by least squares have minimum variance, and the independence of this property from any assumption regarding the form of distribution is just one of the striking characteristics of the principle of least squares.

The principle of maximum likelihood, on the other hand, requires for its application a knowledge of the probability distribution of $p$. Under this principle one estimates the parameters $\alpha$, $\beta$ so that, were the estimates the true values, the probability of the total set of observations of $p$ would be maximum. This principle has great intuitive appeal, is probably the oldest existing rule of estimate, and has been widely used in practical applications under the name of "the most probable value." If the $p_i$'s are normally distributed about $P_i$ with variance $\sigma^2$ independent of $P_i$, the principle of maximum likelihood yields the same estimate as does least squares, and Gauss is said to have derived least squares from this application.

In recent years, the principle of maximum likelihood has been strongly advanced under the influence of the teachings of Sir Ronald Fisher, who in a renowned paper of 1922 and in later writings [1] outlined a comprehensive and unified system of mathematical statistics as well as a philosophy of statistical inference that has had profound and wide development. Neyman [2] in a fundamental paper in 1949 defined a family of estimates, the R.B.A.N. estimates, based on the principle of minimizing a quantity asymptotically distributed as $\chi^2$, which have the same asymptotic properties as those of maximum likelihood.

F. Y. Edgeworth [3] in an article published in 1908 presented in translation excerpts from a letter of Gauss to Bessel, in which Gauss specifically repudiated the principle of maximum likelihood in favor of minimizing some function of the difference between estimate and observation: the square, the cube, or perhaps some other power of the difference. Edgeworth scolded Gauss for considering the cube or any other power than the square, and advocated the square on the basis of considerations that he advanced himself as well as on the basis of Gauss's own developments in the theory of least squares. Fisher's revival of maximum likelihood in 1922 is thus seen to be historically a retrogression. Whether scientifically it was also a retrogression or an advance awaits future developments of statistical theory for answer, for I do not think the question is settled by what is now known.
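Before turning to what has actually been proved, it may help to fix ideas with a concrete instance of (1) and (2); the logistic form used here for illustration is the one that figures later in connection with the logit transform:

$$P_i = F(Y_i) = \frac{1}{1 + e^{-Y_i}}, \qquad Y_i = \alpha + \beta x_i, \qquad Y_i = \ln\frac{P_i}{1 - P_i}.$$

With $\alpha = -2$ and $\beta = 1$, for example, a dose $x_i = 3$ gives $Y_i = 1$ and $P_i = 1/(1 + e^{-1}) \approx 0.73$.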
When one looks at what actually has been proved respecting the variance properties of maximum likelihood estimates, one finds that it comes to little or nothing, except in some special cases in which maximum likelihood and least squares estimates coincide, as in the case of the normal distribution or the estimate of the binomial parameter. What has been mathematically proved in regard to the variance of maximum likelihood estimates almost entirely concerns asymptotic properties, and no one has been more unequivocal than Sir Ronald Fisher himself in emphasizing that this does not apply to real statistical samples. I hasten to note that, from what has been proved, there is a great deal that reasonably can be inferred as respects approximate minimum variance of the maximum likelihood estimate in large samples. But these are reasonable guesses, not mathematical proof; and sometimes the application in any degree, and always the measure of approximation, is in question.

Of greatest importance is this: the maximum likelihood estimate is not unique in possession of the property of asymptotic efficiency. The members of Neyman's class of minimum $\chi^2$ estimates have these properties, and he introduced a new estimate in this class, the estimate of minimum reduced $\chi^2$. Taylor's [4] proof that the minimum logit $\chi^2$ estimate for the logistic function and the minimum normit $\chi^2$ estimate for the normal function advanced by me [5], [6] fall in this class directs attention to the possibility of its extension.

In this paper is presented such an extension applying to a particular situation in which $P_i = 1 - Q_i$ is the conditional probability, given $x_i$, of some event such as death, and where $Y_i = \alpha + \beta x_i$ is the linear transform of $P_i$. This is the situation of bio-assay as it has been widely discussed. We define a class of least squares estimates either by the minimization of

(5)  (A)  $\sum_i w_i (p_i - \hat p_i)^2$

where $p_i$ is an observed relative frequency at $x_i$, distributed binomially about $P_i$, $\hat p_i$ is the estimate of $P_i$, and $1/w_i$ is any consistent estimate of the variance of $p_i$; or by the minimization of

(6)  (B)  $\sum_i W_i (y_i - \hat y_i)^2$

where $y_i$ is the value of the linear transform $Y_i$ corresponding to $p_i$, $\hat y_i = \hat\alpha + \hat\beta x_i$ is the estimated value of the linear transform in terms of $\hat\alpha$, $\hat\beta$, the estimates of $\alpha$, $\beta$, respectively, and $1/W_i$ is any consistent estimate of the variance of $y_i$. The quantities (5) and (6) which are minimized are asymptotically distributed as $\chi^2$.

The minimum logit $\chi^2$ estimate and the minimum normit $\chi^2$ estimate fall in the defined class of least squares estimates (B), and, as I mentioned previously, Taylor proved that these are R.B.A.N. estimates. Recently Le Cam kindly examined the class of estimates given by the extended definition and in a personal communication informed me that, on the basis of what is demonstrated in the paper of Neyman previously referred to and in Taylor's paper, this whole class of least squares estimates can be shown to have the properties of R.B.A.N. estimates. They are therefore asymptotically efficient.

The defined class contains an infinity of different specific estimates, of which a particular few suggest themselves for immediate consideration; a small numerical sketch of one member of class (B), given next, may make the definition concrete.
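As a minimal sketch of one member of class (B), the following computes the minimum logit $\chi^2$ estimate for the logistic case by weighted least squares of the observed logits on $x$, with $1/W_i$ estimated by $1/(n_i p_i q_i)$. The bioassay data and variable names are hypothetical, and Python with numpy is used simply for concreteness.

```python
import numpy as np

# Hypothetical bioassay data: dose x_i, group size n_i, observed deaths r_i.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
n = np.array([40, 40, 40, 40, 40])
r = np.array([3, 9, 19, 30, 36])

p = r / n            # observed relative frequencies p_i
q = 1.0 - p
y = np.log(p / q)    # observed logits y_i, the linear transform of p_i

# 1/W_i = consistent estimate of var(y_i), roughly 1/(n_i p_i q_i),
# so the weights are W_i = n_i p_i q_i.
W = n * p * q

# Weighted least squares: minimize sum_i W_i (y_i - a - b x_i)^2.
X = np.column_stack([np.ones_like(x), x])
XtWX = X.T @ (W[:, None] * X)
XtWy = X.T @ (W * y)
a_hat, b_hat = np.linalg.solve(XtWX, XtWy)

print(f"minimum logit chi-square estimates: alpha = {a_hat:.3f}, beta = {b_hat:.3f}")
```

Because the weights depend only on the observed frequencies, the minimization reduces to a single weighted linear regression; no iteration is required.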
Suppose we minimize

(7)  $\sum_i \frac{n_i}{\hat p_i \hat q_i}\,(p_i - \hat p_i)^2$

where $n_i$ is the number in the sample on which the observed $p_i$ is based and the $\hat p_i$ of the weight $w_i$ is constrained to be the same value as the estimate $\hat p_i$. If $\hat p_i$ is a consistent estimate of $P_i$, then $\hat p_i \hat q_i / n_i$ is a consistent estimate of the variance of $p_i$, and this estimate falls in the defined class. Now the expression (7) is identically equal to the classic $\chi^2$ of Pearson, so that this particular least squares estimate is identical with the minimum $\chi^2$ estimate.

Suppose we have some other consistent estimate of $P_i$, which we shall symbolize as $p^\circ = 1 - q^\circ$ (omitting the subscripts $i$), and we minimize

(8)  $\sum_i \frac{n_i}{p_i^\circ q_i^\circ}\,(p_i - \hat p_i)^2$;

then this is a least squares estimate as defined. The weights $w_i = n_i / p_i^\circ q_i^\circ$ are now known constants, and to minimize (8) we set the first derivatives equal to zero and obtain the equations of estimate

(9)  $\sum_i \frac{n_i}{p_i^\circ q_i^\circ}\,(p_i - \hat p_i)\,\frac{\partial \hat p_i}{\partial \hat\alpha} = 0$

(10)  $\sum_i \frac{n_i}{p_i^\circ q_i^\circ}\,(p_i - \hat p_i)\,\frac{\partial \hat p_i}{\partial \hat\beta} = 0$.

If now we specify that in the conditional equations (9), (10), $p_i^\circ = \hat p_i$, that is, that the values yielded as the estimates shall be the same as those used in the coefficients, then the equations of estimate become

(11)  $\sum_i \frac{n_i}{\hat p_i \hat q_i}\,(p_i - \hat p_i)\,\frac{\partial \hat p_i}{\partial \hat\alpha} = 0$

(12)  $\sum_i \frac{n_i}{\hat p_i \hat q_i}\,(p_i - \hat p_i)\,\frac{\partial \hat p_i}{\partial \hat\beta} = 0$.

The equations (11) and (12) are just the equations of estimate of the M.L.E. Therefore the M.L.E. is also a particular member of the defined class of least squares estimates.

This may be presented more directly in a way that emphasizes an interesting point. Suppose the independently determined consistent estimate $p_i^\circ$ to be used in the constant weights $w_i$ for minimizing (8) is in fact the one obtained as the solution of (11) and (12). Then $\hat p_i$, the estimate obtained, will be the same as was used in the weights, and this is the M.L.E. This is clear if we observe that we should obtain these least squares estimates as the solution of (9), (10), and we already have noted that these are satisfied with $p_i^\circ = \hat p_i$ if $\hat p_i$ is the M.L.E. The estimate obtained by minimizing (8) is consistent with the estimate used in the weights only if the estimate appearing in the weights in equation (8) is the M.L.E. For instance, if we use for $p_i^\circ$ in the weights $w_i$ not the M.L.E. but the minimum $\chi^2$ estimate, the estimate which will be obtained is not the minimum $\chi^2$ estimate, nor is it the M.L.E., but another estimate which is neither, although it too is asymptotically efficient.
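The fixed-point character of the M.L.E. just described can be illustrated numerically. The sketch below assumes the logistic form for $F$ and reuses the hypothetical data of the earlier sketch: each round minimizes (8) with constant weights $n_i/(p_i^\circ q_i^\circ)$ taken from the previous round's fitted values, and the iteration is continued until the estimate reproduces itself in the weights, at which point the likelihood equations (11), (12) are satisfied. The use of scipy's general-purpose minimizer for the inner minimization is only a convenience.

```python
import numpy as np
from scipy.optimize import minimize

# Same hypothetical bioassay data as in the earlier sketch.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
n = np.array([40, 40, 40, 40, 40])
p_obs = np.array([3, 9, 19, 30, 36]) / n

def fitted(theta):
    """Logistic form of F: p_hat_i = 1 / (1 + exp(-(alpha + beta * x_i)))."""
    a, b = theta
    return 1.0 / (1.0 + np.exp(-(a + b * x)))

def minimize_8(p_circ, theta0):
    """Minimize (8) over (alpha, beta) with constant weights n_i / (p_circ_i q_circ_i)."""
    w = n / (p_circ * (1.0 - p_circ))
    obj = lambda theta: np.sum(w * (p_obs - fitted(theta)) ** 2)
    return minimize(obj, theta0, method="Nelder-Mead").x

# Start from any consistent estimate, e.g. the observed frequencies themselves,
# and iterate: the fitted values of one round supply the weights of the next.
theta = np.array([0.0, 0.0])
p_circ = p_obs.copy()
for _ in range(50):
    theta = minimize_8(p_circ, theta)
    p_circ = fitted(theta)

# At the fixed point the likelihood equations (11), (12) hold: for the logistic,
# d p_hat / dY = p_hat * q_hat, so they reduce to
# sum n_i (p_i - p_hat_i) = 0 and sum n_i (p_i - p_hat_i) x_i = 0.
resid = n * (p_obs - p_circ)
print("alpha, beta at fixed point:", theta)
print("score checks (should be ~0):", resid.sum(), (resid * x).sum())
```

Holding the weights fixed instead at some other consistent estimate, say the minimum $\chi^2$ solution, and minimizing (8) once would yield the "neither" estimate described in the closing paragraph above.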