A Simple Resampling Method by Perturbing the Minimand
Author(s): Zhezhen Jin, Zhiliang Ying and L. J. Wei
Source: Biometrika, Vol. 88, No. 2 (Jun., 2001), pp. 381-390
Published by: Biometrika Trust
Stable URL: http://www.jstor.org/stable/2673486
Accessed: 15/12/2013 19:56


Biometrika (2001), 88, 2, pp. 381-390. © 2001 Biometrika Trust. Printed in Great Britain.

A simple resampling method by perturbing the minimand

BY ZHEZHEN JIN
Department of Biostatistics, Harvard School of Public Health, 655 Huntington Avenue, Boston, Massachusetts 02115, U.S.A.
[email protected]

ZHILIANG YING
Department of Statistics, Columbia University, 2990 Broadway, New York, New York 10027, U.S.A.
[email protected]

AND L. J. WEI
Department of Biostatistics, Harvard School of Public Health, 655 Huntington Avenue, Boston, Massachusetts 02115, U.S.A.
[email protected]

SUMMARY

Suppose that under a semiparametric setting an estimator of a vector of parameters of interest is obtained by optimising an objective function which has a U-process structure. The covariance matrix of the estimator is generally a function of the underlying density function, which may be difficult to estimate well by conventional methods. In this paper, we present a simple resampling method by perturbing the objective function repeatedly. Inferences of the parameters can then be made based on a large collection of the resulting optimisers. We illustrate our proposal by three examples with a heteroscedastic regression model.

Some key words: Bootstrap; Heteroscedastic regression; $L_p$ norm; Resampling method; Truncated regression; U-process.

1. INTRODUCTION

Let $\theta$ be an $r \times 1$ vector of parameters of interest for the distribution of a random vector $Z$. With $n$ independent copies of $Z$, a consistent estimator $\hat\theta$ for $\theta_0$, the true value of $\theta$, can often be obtained by minimising an objective function $U_n(\theta)$. If the minimand is smooth enough in $\theta$, generally a large-sample approximation to the distribution of $(\hat\theta - \theta_0)$ can be obtained easily. If $U_n(\theta)$ is not smooth, however, it may be difficult to estimate the covariance of $\hat\theta$ well by conventional methods, especially under the semiparametric setting with unknown underlying density functions. Resampling methods, such as the bootstrap, jackknife and so on, are particularly useful in dealing with this situation (Efron & Tibshirani, 1993; Shao & Tu, 1995; Davison & Hinkley, 1997). Generally a resampling method generates 'artificial' samples from the observed $Z$'s and, for each generated sample, we obtain an estimate of $\theta_0$ by minimising the corresponding minimand. Inferences about $\theta_0$ can

then be made based on a large collection of these estimates. Theoretical justification of the existing resampling methods is generally nontrivial and has to be made on a case by case basis.
In this paper, we consider a large class of objective functions and propose a simple resampling method by perturbing the minimand directly. As with the bootstrap method, the new procedure does not involve any complicated and subjective nonparametric functional estimation. Our proposal is illustrated with three examples and, for each example, we show how the new method can be implemented easily with existing statistical software.
Recently, Parzen et al. (1994) and Hu & Kalbfleisch (2000) proposed resampling methods by perturbing the 'score function' of the minimand directly. Often, however, the estimating function may not be continuous and solving the corresponding equations numerically can be rather challenging, especially when the dimension of the parameter vector is large.

2. A GENERAL RESAMPLING METHOD

Suppose that $Z_i$ $(i = 1, \ldots, n)$ are independent and identically distributed random vectors and that $\theta$ belongs to a compact space $\Theta \subset \mathbb{R}^r$. Let $\hat\theta$ be a minimiser of a U-process of degree $K$,

$$U_n(\theta) = \binom{n}{K}^{-1} \sum_{1 \le i_1 < i_2 < \cdots < i_K \le n} h(Z_{i_1}, \ldots, Z_{i_K}; \theta),$$
where $h(\cdot)$ is symmetric in the $Z$'s, and the summation is over subsets of $K$ integers $(i_1, \ldots, i_K)$ from $\{1, \ldots, n\}$. For fixed $\theta$, $U_n(\theta)$ is simply a standard U-statistic with kernel $h(\cdot)$. Most of the minimands for the estimation procedures in the literature are U-processes. In Propositions A1 and A2 of the Appendix we show that, if, for large $n$, $U_n(\theta)$ has a 'good' quadratic approximation around $\theta_0$, $\hat\theta$ is strongly consistent and asymptotically normal. When $h(\cdot\,; \theta)$ is not twice differentiable with respect to $\theta$, it is difficult to estimate the covariance matrix of $\hat\theta$, which generally involves the unknown underlying density functions.
Let $\{z_i\}$ be the observed value of $\{Z_i\}$ and let $\{V_i\}$ be $n$ independent, identical copies of a nonnegative, completely known random variable $V$ with mean $\mu$ and variance $K^2\mu^2$. Consider a stochastic perturbation $U_n^*(\theta)$ of the observed $U_n(\theta)$, where

$$U_n^*(\theta) = \binom{n}{K}^{-1} \sum_{1 \le i_1 < i_2 < \cdots < i_K \le n} (V_{i_1} + \cdots + V_{i_K})\, h(z_{i_1}, \ldots, z_{i_K}; \theta).$$
Let $\theta^*$ be the minimiser of $U_n^*(\theta)$. Note that the only random quantities in $U_n^*(\theta)$ are the $\{V_i\}$. In Proposition A3 of the Appendix, we show that, if $U_n^*(\theta)$ also has a 'good' quadratic expansion around $\theta_0$, the distribution of $n^{1/2}(\theta^* - \hat\theta)$ can be used to approximate the distribution of $n^{1/2}(\hat\theta - \theta_0)$, where $\hat\theta$ is the observed value of the estimator.
In practice, the distribution of $\theta^*$ can be estimated by generating a large number, $M$, say, of random samples $\{V_i, i = 1, \ldots, n\}$. For the $j$th realised sample from $\{V_i\}$, we obtain $\theta_j^*$ by minimising $U_n^*(\theta)$ $(j = 1, \ldots, M)$. The theoretical distribution of $\theta^*$ or $\hat\theta$ can then be approximated by the usual empirical distribution function based on $\{\theta_j^*, j = 1, \ldots, M\}$. The covariance matrix of $\hat\theta$ can then be estimated by the sample covariance matrix constructed from $\{\theta_j^*, j = 1, \ldots, M\}$.
To make our resampling method practically useful, one needs reliable and efficient optimisation algorithms for minimising $U_n(\theta)$ and $U_n^*(\theta)$. In § 3, we consider three examples to illustrate our proposal. For each case, easily accessible software exists for solving the above optimisation problem.
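As a concrete illustration of the scheme (not part of the paper's original software), the degree-1 case can be sketched in Python. Here the minimand is $n^{-1}\sum_i |z_i - \theta|$, whose minimiser is the sample median; each resampling draw reweights the summands by Ga(1, 1) variables (mean 1, variance 1, matching $K = 1$) and re-minimises. The function names are our own.

```python
import numpy as np

def weighted_median(z, v):
    """Minimiser of sum_i v_i |z_i - theta|: a weighted sample median."""
    order = np.argsort(z)
    cum = np.cumsum(v[order])
    # first index at which the cumulative weight reaches half the total
    return z[order][np.searchsorted(cum, 0.5 * cum[-1])]

def perturb_minimand(z, M=500, seed=0):
    """Generate M resampled minimisers theta*_j by redrawing Ga(1,1)
    weights and re-minimising the perturbed objective."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z, dtype=float)
    return np.array([weighted_median(z, rng.gamma(1.0, 1.0, size=len(z)))
                     for _ in range(M)])

# The sample covariance of the theta*_j estimates the variance of the
# estimator; quantiles of theta*_j give confidence intervals.
```

The same skeleton applies to any degree-1 minimand: only the inner minimisation step changes.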


3. EXAMPLES

3.1. Model and dataset

Consider a heteroscedastic linear regression model

$$Y_i = \alpha + \beta^T X_i + \varepsilon_i, \qquad (3.1)$$

where $\theta^T = (\alpha, \beta^T)$, $E\varepsilon_i = 0$, $Z_i = (X_i^T, Y_i)^T$, $X_i$ is bounded, and the components of $X_i$ are linearly independent in a nonnull set, $i = 1, \ldots, n$. For $X = x$, let $f(\cdot \mid x)$ be the density function of $\varepsilon$, which may depend on $x$. Under this model, we consider three cases to illustrate our resampling method using a well-known dataset from 41 prepubescent boys (Hettmansperger & McKean, 1998, p. 204). For each individual subject in the dataset, the response is the level of free fatty acid while the independent variables are age, weight and skin-fold thickness.

3.2. Estimation based on the $L_p$ norm

The estimator $\hat\theta$ using the $L_p$ norm for model (3.1) is a minimiser of the U-process
$$U_n(\theta) = n^{-1} \sum_{i=1}^n |Y_i - \alpha - \beta^T X_i|^p.$$
Here, the degree $K$ of the U-process is 1. The corresponding $U_n^*(\theta)$ is

$$U_n^*(\theta) = n^{-1} \sum_{i=1}^n V_i\, |y_i - \alpha - \beta^T x_i|^p,$$

where $(y_i, x_i)$ is the observed value of $(Y_i, X_i)$. When $p \ge 1$, $U_n^*(\theta)$ is a convex function of $\theta$. The program RLLP in the IMSL statistical package (1991) can be used to estimate the regression coefficients of (3.1) with the general $L_p$ norm. For the case with $p = 1$, minimisation of $U_n^*(\theta)$ can be efficiently handled by linear programming techniques, and an efficient algorithm developed by Koenker & D'Orey (1987) is available in S-Plus to obtain $\hat\theta$. Furthermore, when $p \ge 2$, $U_n(\theta)$ is twice differentiable and $\hat\theta$ can be obtained trivially.
Note that when $p < 2$ all the existing estimators for the covariance matrix of $\hat\theta$ are derived under the assumption that the distribution of the error term $\varepsilon$ is free of the covariate $X$. Otherwise, further parametric structures on the error term or high-dimensional nonparametric function estimates are needed for the variance estimation.
For the resampling method proposed in § 2, when $p \ne 2$, assume that, with respect to $t$, $f(t \mid x)$ satisfies the usual Lipschitz condition and is symmetric about 0. Also, assume that $f(0 \mid x) > 0$ and $E|\varepsilon|^{(p-2)} < \infty$. Under these mild assumptions, all the conditions in Propositions A1-A3 of the Appendix are satisfied. The details of the proof are given in an unpublished report by Z. Jin, Z. Ying and L. J. Wei. It follows that, for the general $L_p$ norm, the distribution of $\hat\theta$ can be approximated by that of $\theta^*$. Furthermore, the aforementioned software can also be used to obtain $\theta^*$ numerically. It is important to note that our procedure is valid without imposing any parametric structure on $f(\cdot \mid x)$.
For the free fatty acid example, estimates for the regression coefficients of model (3.1) were obtained with various $L_p$ norms. For each case, 1000 $\theta^*$'s were generated to estimate the covariance matrix of $\hat\theta$ with several different types of random variable $V$ in $U_n^*$. In Table 1(a), we report the results with $p = 1$, 1.5, 2 and 2.5 and with $V$ being the gamma distribution Ga(1, 1) and the beta distribution Be($\sqrt{2} - 1$, 1). For comparison, we also analysed the data with the subroutine RLLP from IMSL (1991) and with the heteroscedastic bootstrap method (Efron, 1982, p. 36). The standard error estimates under the heading


[Table 1. Estimates and standard error estimates for the free fatty acid data under (a) $L_p$-norm estimation, (b) Wilcoxon rank estimation and (c) truncated median regression; the table entries are not recoverable from this scan.]

'Bootstrap' in Table 1(a) are based on 1000 bootstrap samples. The results from both methods are fairly similar to those obtained from our resampling procedure. The standard error estimates of the regression coefficients used in IMSL were obtained under a strong assumption, namely that the errors are independent of $\{X_i\}$.
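For $p = 1$ the perturbed objective only reweights the residuals, so one resampling draw costs one linear-program solve. A minimal sketch in Python (our own illustration, using scipy in place of the RLLP or S-Plus routines cited above; splitting each residual into two nonnegative parts is the standard LP formulation of least absolute deviations):

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_fit(y, X, v):
    """Minimise sum_i v_i |y_i - a - x_i' b| by linear programming.
    Each residual is written as u_i - w_i with u_i, w_i >= 0, so the
    objective sum_i v_i (u_i + w_i) is linear in all variables."""
    n, p = X.shape
    Xd = np.hstack([np.ones((n, 1)), X])             # intercept column
    c = np.concatenate([np.zeros(p + 1), v, v])      # zero cost on theta
    A_eq = np.hstack([Xd, np.eye(n), -np.eye(n)])    # Xd theta + u - w = y
    bounds = [(None, None)] * (p + 1) + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p + 1]                             # (a, b)

# One resampling draw: refit with v drawn as Ga(1, 1); repeating M times
# and taking the sample covariance of the fits estimates cov(theta-hat).
```

With $v_i \equiv 1$ this is the ordinary $L_1$ fit, so the same routine serves for both $\hat\theta$ and the $\theta_j^*$.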

3.3. Estimation based on the Wilcoxon statistic

The rank estimator based on the Wilcoxon test statistic for the linear regression model (3.1) is a minimiser of

$$U_n(\beta) = \binom{n}{2}^{-1} \sum_{1 \le i_1 < i_2 \le n} |Y_{i_1} - Y_{i_2} - (X_{i_1} - X_{i_2})^T\beta|,$$
a U-process with degree 2, which is a convex function of $\beta$. Note that the intercept term in (3.1) is not estimable with this type of rank estimation. The corresponding $U_n^*(\beta)$ is

$$U_n^*(\beta) = \binom{n}{2}^{-1} \sum_{1 \le i_1 < i_2 \le n} (V_{i_1} + V_{i_2})\, |y_{i_1} - y_{i_2} - (x_{i_1} - x_{i_2})^T\beta|.$$
Minimisation of $U_n(\beta)$ or $U_n^*(\beta)$ is a linear programming problem. If $f(\cdot \mid x)$ and its derivative $f'(\cdot \mid x)$ are bounded, one can show that all the conditions in Propositions A1-A3 of the Appendix are satisfied. It is important to note that, for our resampling method, there is no need to assume that the distribution of $\varepsilon$ is free of $X$. On the other hand, this assumption is crucial for existing methods, which involve nonparametric density function estimation, for estimating the variance of $\hat\beta$.
For the free fatty acid example, we generated 1000 $\beta^*$'s with various random variables $V$. In Table 1(b), we report the results with $V$ being Ga(0.25, 0.5) and Be(0.125, 1.125). For comparison, we also report the standard error estimates of $\hat\beta$ using nonparametric density function estimates with the software Minitab (Hettmansperger & McKean, 1998, p. 181), and those using the heteroscedastic bootstrap method.
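This degree-2 minimand is again piecewise linear in $\beta$, so each perturbed fit is a weighted $L_1$ regression through the origin on pairwise differences. A sketch under that reduction (our own illustration; for $n = 41$ there are only 820 pairs, so the linear program stays small):

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def perturbed_wilcoxon_fit(y, X, v):
    """One resampling draw of the rank estimator: minimise
    sum_{i<j} (v_i + v_j) |(y_i - y_j) - (x_i - x_j)' beta|
    as a weighted-L1 fit, with no intercept, on pairwise differences."""
    pairs = list(combinations(range(len(y)), 2))
    dy = np.array([y[i] - y[j] for i, j in pairs])
    dX = np.array([X[i] - X[j] for i, j in pairs])
    w = np.array([v[i] + v[j] for i, j in pairs])
    m, p = dX.shape
    c = np.concatenate([np.zeros(p), w, w])          # |residual| = u + t
    A_eq = np.hstack([dX, np.eye(m), -np.eye(m)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * m)
    res = linprog(c, A_eq=A_eq, b_eq=dy, bounds=bounds, method="highs")
    return res.x[:p]

# For degree K = 2 the V_i should have mean 1/2 and variance 1, e.g.
# v = rng.gamma(0.25, 2.0, size=n)  -- the Ga(0.25, 0.5) of the text,
# rewritten in numpy's shape/scale parameterisation.
```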

3.4. Truncated median regression

Suppose that $\varepsilon$ in (3.1) has a unique median at 0 for any given $x$. Let $c < d$ be two known constants. It is not unusual that, because of the limitation of instruments, the response variable $Y$ may not be accurately measured when it is above $d$ or below $c$. This results in a truncated variable $T$ from $Y$ such that

$$T_i = \begin{cases} c, & \text{if } Y_i < c, \\ Y_i, & \text{if } c \le Y_i \le d, \\ d, & \text{if } Y_i > d. \end{cases}$$
Recently, Fitzenberger (1997) proposed an estimation procedure defined by minimising
$$U_n(\theta) = n^{-1} \sum_{i=1}^n |T_i - \min\{d, \max(c, \alpha + \beta^T X_i)\}|,$$
and showed that the resulting estimator $\hat\theta$ is consistent and asymptotically normal under some regularity conditions. However, it is quite difficult to estimate well the covariance matrix of $\hat\theta$. For our resampling method, $U_n^*(\theta)$ is

$$U_n^*(\theta) = n^{-1} \sum_{i=1}^n V_i\, |t_i - \min\{d, \max(c, \alpha + \beta^T x_i)\}|,$$

where $t_i$ is the observed value of $T_i$. Minimisation of $U_n(\theta)$ or $U_n^*(\theta)$ can be implemented by the algorithm developed by Koenker & Park (1996), or by an adaptation of the Barrodale-Roberts algorithm for one-sided censored quantile regression (Fitzenberger, 1997). In the econometrics literature, the one-sided truncated regression model, with either $c = -\infty$ or $d = \infty$, has been extensively studied, for example by Powell (1984, 1986) and Pollard (1990). If

$$\mathrm{pr}\{X^T\theta_0 \in (c, d)\} > 0, \quad \mathrm{pr}(X^T\theta_0 = c) = 0, \quad \mathrm{pr}(X^T\theta_0 = d) = 0,$$
one can show that the conditions in Propositions A1-A3 of the Appendix are satisfied. For the free fatty acid example, we artificially set $c = 0.325$ and $d = 1.016$ to create a dataset with 20% truncation for the response variable. The results, using the algorithm by Koenker & Park (1996), are reported in Table 1(c) with $V$ being Ga(1, 1) and Be($\sqrt{2} - 1$, 1). For comparison, the estimates based on 1000 bootstrap samples are also reported in Table 1(c).
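The truncated objective is piecewise linear but no longer convex, which is why the specialised algorithms above are used in the paper. For a small illustration, a derivative-free search started from a sensible point also works; the following is our own sketch, standing in for the Koenker-Park interior-point routine, not a reimplementation of it:

```python
import numpy as np
from scipy.optimize import minimize

def perturbed_truncated_median(t, X, v, c, d, theta0):
    """One resampling draw for truncated median regression: minimise
    n^{-1} sum_i v_i |t_i - min{d, max(c, a + x_i' b)}| by Nelder-Mead,
    started at theta0 (e.g. the unperturbed estimate)."""
    Xd = np.hstack([np.ones((len(t), 1)), X])
    def objective(theta):
        fitted = np.clip(Xd @ theta, c, d)   # min{d, max(c, .)}
        return np.mean(v * np.abs(t - fitted))
    return minimize(objective, theta0, method="Nelder-Mead",
                    options={"xatol": 1e-6, "fatol": 1e-9}).x
```

A good starting value matters here: with a nonconvex minimand the local search can stall on the flat clipped regions if started far from the truth.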

4. REMARKS

In this paper, we present a rather simple resampling method by perturbing the minimand directly. In generating an approximation to the distribution of $\hat\theta$, the problem of choosing an appropriate number $M$ of samples $\{V_i\}$ is similar to that of choosing the number of bootstrap samples. One may start with 500 $\theta^*$'s, say, for constructing a confidence interval of the parameter of interest and then repeat the same kind of analysis with 500 additional $\theta^*$'s. If the discrepancy between the two intervals is practically insignificant, no further resampling is needed.
Several extensive simulation studies were conducted to evaluate the adequacy of the new proposals. The results indicate that the new resampling method behaves well even for small sample sizes. For example, in one of the numerical studies, we mimicked the set-up of the free fatty acid example to examine if the new interval estimation procedure based on the $L_1$ norm for the regression coefficients of model (3.1) has correct coverage probabilities. To this end, we generated 1000 samples $\{(y_i, z_i), i = 1, \ldots, 41\}$ from the model
$$Y = 1.1702 - 0.002 \times \mathrm{Age} - 0.015 \times \mathrm{Weight} + 0.205 \times \mathrm{SFT} + \varepsilon,$$
where SFT is the skin-fold thickness, the regression coefficients are the estimates from the free fatty acid data and $\varepsilon$ is a normal error with various variance structures. For each of these 1000 samples, the covariates were fixed and taken from the free fatty acid data, and 1000 $\theta^*$'s were generated for estimating the covariance matrix of $\hat\theta$ with several distinct random variables $V$. The empirical coverage probabilities and estimated average lengths for the resulting intervals are summarised in Table 2 with $V$ being Ga(1, 1) and Be($\sqrt{2} - 1$, 1). For comparison, we also report the results based on the unconditional bootstrap method and the procedure in RLLP from IMSL.
We find that, when the variance of the error term in the model is constant, the intervals obtained from the bootstrap method are similar to ours in terms of the empirical coverage probabilities and estimated average lengths. The procedure used in the commercial software IMSL is slightly better than these two resampling methods. This superiority, however, vanishes when the sample size is moderate or large, $n \ge 100$, say. If the variance of the error term is not constant, for example if the variance is proportional to the square of the standardised weight, the empirical coverage probabilities of the existing procedure obtained from the IMSL can be

extremely low. On the other hand, the new method performs well; see Table 2. Through our simulation studies, we also find that the choice of $V$ for the resampling method is rather robust.
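The informal stopping rule described above, comparing an interval from one batch of resampled minimisers with the interval after adding a further batch, can be written down directly. In this sketch of ours, `draw_thetas(m)` is a hypothetical user-supplied function returning $m$ resampled estimates $\theta_j^*$:

```python
import numpy as np

def stable_interval(draw_thetas, level=0.95, batch=500, tol=0.01,
                    max_batches=20):
    """Confidence interval from resampled minimisers, enlarging the pool
    batch by batch until two successive intervals agree to within tol."""
    alpha = (1.0 - level) / 2.0
    draws = draw_thetas(batch)
    lo, hi = np.quantile(draws, [alpha, 1.0 - alpha])
    for _ in range(max_batches):
        draws = np.concatenate([draws, draw_thetas(batch)])
        lo2, hi2 = np.quantile(draws, [alpha, 1.0 - alpha])
        if abs(lo - lo2) < tol and abs(hi - hi2) < tol:
            break
        lo, hi = lo2, hi2
    return lo2, hi2
```

The `max_batches` cap is our own guard; the text leaves the total number of resamples to the analyst's judgement.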

Table 2. Empirical coverage probabilities and estimated mean lengths of confidence intervals for various procedures based on the $L_1$ norm with $n = 41$

(a) Gaussian error with mean 0 and constant variance 0.0464

Confidence               Gamma        Beta      Bootstrap      IMSL
level                  ECP  EML    ECP  EML    ECP  EML    ECP  EML
0.95   Intercept      0.97 1.92   0.98 2.03   0.98 2.00   0.95 1.76
       Age            0.98 0.02   0.99 0.02   0.98 0.02   0.94 0.02
       Weight         0.98 0.03   0.99 0.03   0.98 0.03   0.95 0.03
       SFT            0.98 1.01   0.99 1.10   0.99 1.08   0.94 0.90
0.90   Intercept      0.94 1.61   0.95 1.71   0.95 1.68   0.91 1.48
       Age            0.95 0.02   0.96 0.02   0.96 0.02   0.90 0.02
       Weight         0.96 0.02   0.97 0.02   0.97 0.02   0.91 0.02
       SFT            0.95 0.84   0.96 0.92   0.96 0.90   0.91 0.75
0.85   Intercept      0.91 1.41   0.93 1.49   0.92 1.47   0.86 1.30
       Age            0.89 0.01   0.93 0.02   0.92 0.01   0.86 0.01
       Weight         0.91 0.02   0.94 0.02   0.93 0.02   0.87 0.02
       SFT            0.92 0.74   0.94 0.80   0.94 0.79   0.87 0.66

(b) Gaussian error with mean 0 and heteroscedastic variance

Confidence               Gamma        Beta      Bootstrap      IMSL
level                  ECP  EML    ECP  EML    ECP  EML    ECP  EML
0.95   Intercept      0.95 1.86   0.97 1.95   0.96 1.92   0.71 0.91
       Age            0.99 0.01   0.99 0.01   0.99 0.01   0.94 0.01
       Weight         0.97 0.02   0.98 0.03   0.97 0.03   0.76 0.01
       SFT            0.99 0.75   1.00 0.81   1.00 0.79   0.90 0.46
0.90   Intercept      0.91 1.56   0.92 1.64   0.92 1.61   0.61 0.77
       Age            0.96 0.01   0.97 0.01   0.97 0.01   0.89 0.01
       Weight         0.93 0.02   0.94 0.02   0.94 0.02   0.68 0.01
       SFT            0.97 0.63   0.98 0.68   0.98 0.66   0.83 0.39
0.85   Intercept      0.87 1.36   0.89 1.44   0.88 1.41   0.55 0.67
       Age            0.94 0.01   0.95 0.01   0.95 0.01   0.85 0.01
       Weight         0.89 0.02   0.91 0.02   0.91 0.02   0.62 0.01
       SFT            0.93 0.55   0.95 0.60   0.95 0.58   0.79 0.34

ECP, empirical coverage probabilities; EML, estimated mean lengths; IMSL, method based on program RLLP in the IMSL package; SFT, skin-fold thickness.

The estimating function bootstrap method proposed by Hu & Kalbfleisch (2000) is valid only for the case with a linear estimating equation with independent terms. Using similar techniques to those presented in this paper, one may be able to generalise their method to the case in which the estimating function has a U-process structure.

ACKNOWLEDGEMENT

We are very grateful to the editor and referees for helpful comments on the paper. This research was partially supported by grants from the US National Institutes of Health.


APPENDIX

Large-sample properties of $\hat\theta$ and $\theta^*$

PROPOSITION A1. Assume that $\{h(z_1, \ldots, z_K; \theta): \theta \in \Theta\}$ is Euclidean (Nolan & Pollard, 1987, Definition 8, p. 789) and that there exists a function $H(Z_1, \ldots, Z_K)$ such that $EH(Z_1, \ldots, Z_K) < \infty$ and $|h(Z_1, \ldots, Z_K; \theta)| \le H(Z_1, \ldots, Z_K)$ almost surely for all $\theta \in \Theta$. Furthermore, assume that $Eh(Z_1, \ldots, Z_K; \theta)$ is continuous and has a unique minimum at $\theta_0$. Then $\hat\theta \to \theta_0$ almost surely as $n \to \infty$.

Proof. Since $\{h(z_1, \ldots, z_K; \theta): \theta \in \Theta\}$ is Euclidean, by Theorem 3.1 of Arcones & Giné (1993), $\sup_{\theta \in \Theta} |U_n(\theta) - EU_n(\theta)| \to 0$ almost surely. This, coupled with the fact that $Eh(Z_1, \ldots, Z_K; \theta)$ has a unique minimum at $\theta_0$, implies that $\hat\theta \to \theta_0$ almost surely. $\square$

Note that '$h(\cdot)$ is Euclidean' is not a strong assumption (Nolan & Pollard, 1987, Lemma 22, p. 797). For the weak convergence of $\hat\theta$, assume that there exists an $r$-dimensional function $q(z_1, \ldots, z_K; \theta)$, the gradient of $h(z_1, \ldots, z_K; \theta)$ with respect to $\theta$ if it exists, such that

$$Eq(Z_1, Z_2, \ldots, Z_K; \theta) = \partial\{Eh(Z_1, \ldots, Z_K; \theta)\}/\partial\theta,$$
and $E\|q(Z_1, \ldots, Z_K; \theta_0)\|^2 < \infty$. Furthermore, assume that $Eq(Z_1, \ldots, Z_K; \theta)$ is continuously differentiable and

$$D = \partial E\{q(Z_1, \ldots, Z_K; \theta)\}/\partial\theta \,\big|_{\theta = \theta_0}$$
is nonsingular. Let

$$W_n(\theta) = n^{1/2} \binom{n}{K}^{-1} \sum_{1 \le i_1 < \cdots < i_K \le n} q(Z_{i_1}, \ldots, Z_{i_K}; \theta).$$

PROPOSITION A2. Assume that all the conditions listed in Proposition A1 are satisfied. In addition, assume that, almost surely,
$$U_n(\theta_1) - U_n(\theta_2) = n^{-1/2} W_n^T(\theta_2)(\theta_1 - \theta_2) + (\theta_1 - \theta_2)^T D (\theta_1 - \theta_2)/2 + o(\|\theta_1 - \theta_2\|^2 + n^{-1}) \quad (\mathrm{A}1)$$
holds uniformly in $\|\theta_1 - \theta_0\| \le d_n$ and $\|\theta_2 - \theta_0\| \le d_n$, where $\{d_n\}$ is any sequence of positive random variables converging to 0 almost surely. Then
$$n^{1/2}(\hat\theta - \theta_0) = -D^{-1} W_n(\theta_0) + o(1 + \|W_n(\theta_0)\| + n^{1/2}\|\hat\theta - \theta_0\|) \quad (\mathrm{A}2)$$
almost surely.

Proof. Since $\hat\theta$ is strongly consistent, by (A1),
$$U_n(\hat\theta) - U_n(\theta_0) = n^{-1/2} W_n^T(\theta_0)(\hat\theta - \theta_0) + (\hat\theta - \theta_0)^T D (\hat\theta - \theta_0)/2 + o(\|\hat\theta - \theta_0\|^2 + n^{-1}). \quad (\mathrm{A}3)$$
Also, since $n^{-1/2} W_n(\theta_0)$ converges to 0, again by (A1),
$$U_n\{\theta_0 - n^{-1/2} D^{-1} W_n(\theta_0)\} - U_n(\theta_0) = -n^{-1} W_n^T(\theta_0) D^{-1} W_n(\theta_0)/2 + o\{n^{-1}(1 + \|W_n(\theta_0)\|^2)\}. \quad (\mathrm{A}4)$$
Furthermore, since $\hat\theta$ is a minimiser of $U_n(\theta)$, $U_n(\hat\theta) \le U_n\{\theta_0 - n^{-1/2} D^{-1} W_n(\theta_0)\}$. It follows from (A3) and (A4) that
$$\{\hat\theta - \theta_0 + n^{-1/2} D^{-1} W_n(\theta_0)\}^T D \{\hat\theta - \theta_0 + n^{-1/2} D^{-1} W_n(\theta_0)\}/2 + o(\|\hat\theta - \theta_0\|^2 + n^{-1}\|W_n(\theta_0)\|^2 + n^{-1}) \le 0$$
almost surely. This implies (A2). $\square$

For the resampling part, we consider the unconditional perturbed objective function
$$U_n^*(\theta) = \binom{n}{K}^{-1} \sum_{1 \le i_1 < \cdots < i_K \le n} (V_{i_1} + \cdots + V_{i_K})\, h(Z_{i_1}, \ldots, Z_{i_K}; \theta).$$
Now let $\theta^*$ be the minimiser of $U_n^*(\theta)$. Without loss of generality, we assume that the mean and variance of $V$ are $1/K$ and 1, respectively. Since $U_n^*$ is also a U-process of degree $K$, exactly the same arguments as for Propositions A1 and A2 are applicable to obtain the following proposition, but with $W_n(\theta)$ replaced by
$$W_n^*(\theta) = n^{1/2} \binom{n}{K}^{-1} \sum_{1 \le i_1 < \cdots < i_K \le n} (V_{i_1} + \cdots + V_{i_K})\, q(Z_{i_1}, \ldots, Z_{i_K}; \theta).$$

PROPOSITION A3. Assume that the conditions in Proposition A1 are satisfied. Then $\theta^*$ is strongly consistent. Assume furthermore that, almost surely,
$$U_n^*(\theta_1) - U_n^*(\theta_2) = n^{-1/2} W_n^{*T}(\theta_2)(\theta_1 - \theta_2) + (\theta_1 - \theta_2)^T D (\theta_1 - \theta_2)/2 + o(\|\theta_1 - \theta_2\|^2 + n^{-1}), \quad (\mathrm{A}5)$$
uniformly in $\|\theta_1 - \theta_0\| \le d_n$ and $\|\theta_2 - \theta_0\| \le d_n$. Then, in the probability space generated by $\{V, Z\}$,
$$n^{1/2}(\theta^* - \hat\theta) = -D^{-1} W_n^*(\hat\theta) + o(1 + \|W_n^*(\hat\theta)\| + n^{1/2}\|\theta^* - \hat\theta\|) \quad (\mathrm{A}6)$$
almost surely.

Note that, since $Eh(Z_1, \ldots, Z_K; \theta) = E\{(V_1 + \cdots + V_K) h(Z_1, \ldots, Z_K; \theta)\}$, the matrix $D$ in (A5) is the same as that in (A1). It follows from Proposition A2 that the asymptotic distribution of $n^{1/2}(\hat\theta - \theta_0)$ is the same as that of $-D^{-1} W_n(\theta_0)$. Thus, in view of (A6), to show that for every realisation of $\{Z\}$ the conditional distribution of $n^{1/2}(\theta^* - \hat\theta)$ converges to the same limiting distribution as that of $n^{1/2}(\hat\theta - \theta_0)$, it suffices to show that for every realisation of $\{Z\}$ the conditional distribution of $W_n^*(\hat\theta)$ converges to normal with mean 0 and covariance matrix $\Gamma$, the limiting covariance matrix of $W_n(\theta_0)$.

Let $\theta_1 = \hat\theta - n^{-1/2} D^{-1} W_n(\hat\theta)$ and $\theta_2 = \hat\theta$ in (A1). Since $\hat\theta$ is a minimiser, it follows that
$$0 \le -n^{-1} W_n^T(\hat\theta) D^{-1} W_n(\hat\theta)/2 + o\{n^{-1}(1 + \|W_n(\hat\theta)\|^2)\}$$
almost surely, which implies $W_n(\hat\theta) = o(1)$ almost surely in the space of $\{Z\}$. Thus, up to an almost surely negligible term,
$$W_n^*(\hat\theta) = n^{-1/2} \sum_{i=1}^n \Bigl(V_i - \frac{1}{K}\Bigr) \frac{K}{(n-1)\cdots(n-K+1)} \sum q(Z_i, Z_{i_2}, \ldots, Z_{i_K}; \hat\theta), \quad (\mathrm{A}7)$$
where the last summation is over $i_2, \ldots, i_K$ such that no pair of indices among $i, i_2, \ldots, i_K$ are equal. By the strong law of large numbers for U-processes (Arcones & Giné, 1993, Theorem 3.1) and Proposition A1, the conditional covariance matrix of (A7) converges almost surely to $\Gamma$. Moreover, by the usual multivariate central limit theorem (Serfling, 1980, p. 30), for every realisation of $\{Z\}$ the conditional distribution of (A7) converges to normal with mean 0 and covariance matrix $\Gamma$. Hence, $n^{1/2}(\hat\theta - \theta_0)$ has the same asymptotic distribution as that of $n^{1/2}(\theta^* - \hat\theta)$.

REFERENCES

ARCONES, M. A. & GINÉ, E. (1993). Limit theorems for U-processes. Ann. Prob. 21, 1494-542.
DAVISON, A. C. & HINKLEY, D. V. (1997). Bootstrap Methods and Their Application. New York: Cambridge University Press.
EFRON, B. (1982). The Jackknife, the Bootstrap and Other Resampling Plans. Philadelphia, PA: SIAM.
EFRON, B. & TIBSHIRANI, R. J. (1993). An Introduction to the Bootstrap. New York: Chapman and Hall.


FITZENBERGER, B. (1997). A guide to censored quantile regressions. In Handbook of Statistics, 15, Ed. G. S. Maddala and C. R. Rao, pp. 405-37. Amsterdam: Elsevier Science.
HETTMANSPERGER, T. P. & McKEAN, J. W. (1998). Robust Nonparametric Statistical Methods. New York: John Wiley.
HU, F. & KALBFLEISCH, J. D. (2000). The estimating function bootstrap (with Discussion). Can. J. Statist. 28, 449-99.
IMSL (1991). STAT/LIBRARY User's Manual, Version 2.0. Houston, TX: IMSL.
KOENKER, R. & D'OREY, V. (1987). Computing regression quantiles. Appl. Statist. 36, 383-93.
KOENKER, R. & PARK, B. J. (1996). An interior point algorithm for nonlinear quantile regression. J. Economet. 71, 265-83.
NOLAN, D. & POLLARD, D. (1987). U-processes: rates of convergence. Ann. Statist. 15, 780-99.
PARZEN, M. I., WEI, L. J. & YING, Z. (1994). A resampling method based on pivotal estimating functions. Biometrika 81, 341-50.
POLLARD, D. (1990). Empirical Processes: Theory and Applications, Regional Conference Series in Probability and Statistics 2. Hayward, CA: Institute of Mathematical Statistics.
POWELL, J. L. (1984). Least absolute deviations estimation for the censored regression model. J. Economet. 25, 303-25.
POWELL, J. L. (1986). Censored regression quantiles. J. Economet. 32, 143-55.
SERFLING, R. J. (1980). Approximation Theorems of Mathematical Statistics. New York: John Wiley.
SHAO, J. & TU, D. S. (1995). The Jackknife and Bootstrap. New York: Springer-Verlag.

[Received February 2000. Revised August 2000]
