Interval Estimation for Drop-The-Losers Designs


Interval estimation for drop-the-losers designs. Biometrika (2010), 97, 2, pp. 405–418. doi: 10.1093/biomet/asq003. © 2010 Biometrika Trust. Advance Access publication 14 April 2010. Printed in Great Britain.

BY SAMUEL S. WU, Department of Epidemiology and Health Policy Research, University of Florida, Gainesville, Florida 32610, U.S.A. ([email protected])

WEIZHEN WANG, Department of Mathematics and Statistics, Wright State University, Dayton, Ohio 45435, U.S.A. ([email protected])

AND MARK C. K. YANG, Department of Statistics, University of Florida, Gainesville, Florida 32610, U.S.A. ([email protected])

SUMMARY

In the first stage of a two-stage, drop-the-losers design, a candidate for the best treatment is selected. At the second stage, additional observations are collected to decide whether the candidate is actually better than the control. The design also allows the investigator to stop the trial for ethical reasons at the end of the first stage if there is already strong evidence of futility or superiority. Two types of tests have recently been developed, one based on the combined means and the other based on the combined p-values, but corresponding interval estimators are unavailable except in special cases. The problem is that, in most cases, the interval estimators depend on the mean configuration of all treatments in the first stage, which is unknown. In this paper, we prove a basic stochastic ordering lemma that enables us to bridge the gap between hypothesis testing and interval estimation. The proposed confidence intervals achieve the nominal confidence level in certain special cases. Simulations show that decisions based on our intervals are usually more powerful than those based on existing methods.

Some key words: Adaptive design; Clinical trial; Drop-the-losers design; p-value combination; Two-stage test.
1. INTRODUCTION

The purpose of this paper is to construct confidence limits for the mean difference between the selected treatment and the control in a two-stage, drop-the-losers clinical trial. As the name implies, when there are k treatments to be compared with a control, we wish to pick the most promising treatment using a two-stage design, in which we drop all other treatments after the first stage and confirm that the selected treatment is indeed better than the control in the second stage. The investigator can also stop the trial for ethical reasons at the end of the first stage if there is already strong evidence for or against the null hypothesis that no treatment is better than the control. Multiple interim looks may be incorporated after the treatment selection. In drug testing, the selection stage corresponds to a Phase II trial and the confirmation stage corresponds to a Phase III study. Traditionally, these two trials have been conducted separately, with the Phase III study ignoring all the Phase II information except the selection. A seamless Phase II/III design combines the data from both stages to make the final decision. Since the primary goal of most clinical trials is to make sure that the selected treatment is indeed better than the control, most two-stage designs are investigated using hypothesis testing. Thall et al. (1988) considered the two-stage, drop-the-losers strategy for dichotomous responses with an early futility stop, but no early rejection. Bischoff & Miller (2005) constructed an optimal test based on the likelihood ratio principle for k = 2, but an extension of their result for k > 2 was not explored. Sampson & Sill (2005) proposed a uniformly most powerful conditional test for general k conditioned on the first-stage results. Unfortunately, this test is not an unconditionally uniformly most powerful test.
Wang (2006) constructed a rigorous level-α test under the assumption that all treatments are not inferior to the control. The methods in the four papers mentioned so far use the likelihood ratio principle on data from the two stages; i.e. the decision is based on combined means. Alternatively, one could use p-values from the two stages to integrate information. Popular methods of combining p-values include Fisher's product of p-values (Bauer & Köhne, 1994; Bretz et al., 2006), weighted linear p-value combination (Chang, 2007) and weighted linear p-value combination under the inverse transformation of the normal distribution function (Lehmacher & Wassmer, 1999). In contrast to likelihood-based methods, the techniques that combine p-values do not exploit all of the available information and are therefore less efficient (Tsiatis & Mehta, 2003; Jennison & Turnbull, 2006). On the other hand, the p-value methods offer more flexibility. For example, the statistical methods that lead to the p-values at the two different stages do not have to be the same, and the second-stage design may depend on all first-stage data. Once we have confirmed that the candidate treatment is indeed better than the control, it is important from a practical standpoint to quantify the difference. In some cases, an interval estimator may be obtained as a by-product of the hypothesis-testing procedure. For example, Liu et al. (2002) describe how this can be accomplished in a situation where there exists a pivotal quantity for a univariate parameter. Unfortunately, the parameter of interest in our problem is random due to selection. There seems no clear way to extend the method of Liu et al. (2002) in this case. To be more specific, inverting a test in our context would require the least favourable configuration for the coverage probability of the interval estimator.
Lehmacher & Wassmer (1999) obtained a confidence interval for the k = 1 case and Brannath et al. (2006) provided a comprehensive discussion of confidence interval construction under various conditions, such as designs with upper and lower bounds on the second-stage sample size, but they also considered only the k = 1 case. Bischoff & Miller (2005) considered the k = 2 case but did not provide an interval estimator. Sampson & Sill (2005) were able to convert their conditional test into a confidence interval because, by conditioning on the first-stage data, the final decision is a one-stage problem. Since the Sampson and Sill method seems to be the only confidence interval for the general k-arm drop-the-losers design, we use it as the standard reference for comparison. In this paper, we develop interval estimators for the mean difference between the selected treatment and the control. Our methods work for both types of testing; i.e. both mean combination and p-value combination methods. While the unknown mean configuration prevents us from constructing the tightest possible lower confidence limit, we are able to prove that our interval estimators do achieve the nominal confidence level in certain special cases. Moreover, our simulation study shows that the new intervals usually perform better than Sampson and Sill's intervals.

2. BASIC FORMULATION AND DECISION RULES

2·1. Basic formulation

Let the response of the jth patient from the ith treatment at stage t be X_tij, with

X_tij = μ_i + ε_tij   (j = 1, ..., n_ti; i = 0, 1, ..., k; t = 1, 2),

where μ_i stands for the ith treatment mean, with i = 0 for the control, and the ε_tij ~ N(0, σ²) are independent errors. Furthermore, we assume that the larger the mean the better.
Let X̄_ti and S²_ti be the sample mean and sample variance from the ith treatment at stage t. From the first stage, let S²_1 be the pooled estimate of the common variance as defined in Appendix 1, and let Y_i = (X̄_1i − X̄_10)/a_i be the standardized differences between X̄_1i and X̄_10 for i = 1, ..., k, where a_i = (1/n_1i + 1/n_10)^{1/2}. Let τ be the selected treatment; i.e. τ = i if Y_i = max_{j=1,...,k} Y_j. We assume that the n_2i (i = 0, ..., k) are prespecified functions of S_1 for the mean combination method, and functions of both S_1 and the X̄_1i (i = 0, ..., k) for the p-value combination method; hence, they are known at the end of the first stage. Also, only the selected treatment and the control are continued after the first stage; i.e. the second-stage data from the dropped treatments are not observed. Since the selected treatment τ for the second stage is random, we are actually estimating the difference between the means of an unprespecified treatment and the control, Δ = μ_τ − μ_0. It is clear that for continuous error ε_tij, τ and μ_τ are uniquely defined with probability one.

2·2. The mean combination decision rule

The difference between the sample means of the selected treatment and the control is

D = (n_1τ X̄_1τ + n_2τ X̄_2τ)/(n_1τ + n_2τ) − (n_10 X̄_10 + n_20 X̄_20)/(n_10 + n_20),

based on the data from both stages. When the sample sizes n_2i (i = 1, ..., k) are different, the distribution of the usual sample variance depends on τ; consequently, it depends on the X̄_1i (i = 1, ..., k). Therefore, we elect to use a modified variance estimate S² defined in Appendix 1. In most practical situations, the n_2i are all equal and S² is the usual variance estimator.
Let b_τ = {1/(n_1τ + n_2τ) + 1/(n_10 + n_20)}^{1/2} and T = D/b_τ. For a given set (d_0, d_1, d_2) such that d_0 ≤ d_1, our decision rule is:

at stage 1: if Y_τ/S_1 < d_0, stop for futility; if Y_τ/S_1 ≥ d_1, stop for superiority of treatment τ; if d_0 ≤ Y_τ/S_1 < d_1, continue to stage 2;

at stage 2: if T/S < d_2, claim futility; if T/S ≥ d_2, claim superiority of treatment τ.   (1)
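To make rule (1) concrete, the following sketch simulates one run of a drop-the-losers trial with equal per-arm sample sizes and a known common σ. The thresholds d_0, d_1, d_2, the sample sizes, and the crude pooled variance estimates are illustrative stand-ins, not the paper's Appendix 1 definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_the_losers_trial(mu, n1, n2, sigma=1.0, d0=0.0, d1=2.5, d2=2.0):
    """One simulated run of the two-stage rule in (1).

    mu[0] is the control mean, mu[1:] the k treatment means. Thresholds and
    the crude pooled variance estimates are illustrative stand-ins.
    """
    k = len(mu) - 1
    stage1 = [rng.normal(mu[i], sigma, n1) for i in range(k + 1)]
    s1 = np.sqrt(np.mean([np.var(arm, ddof=1) for arm in stage1]))  # pooled SD
    a = np.sqrt(2.0 / n1)                       # equal stage-1 sample sizes
    y = np.array([(stage1[i].mean() - stage1[0].mean()) / a
                  for i in range(1, k + 1)])
    tau = int(np.argmax(y)) + 1                 # selected treatment
    if y[tau - 1] / s1 < d0:
        return tau, 'futility at stage 1'
    if y[tau - 1] / s1 >= d1:
        return tau, 'superiority at stage 1'
    # Stage 2: only the selected treatment and the control continue.
    x2_tau = rng.normal(mu[tau], sigma, n2)
    x2_0 = rng.normal(mu[0], sigma, n2)
    m_tau = (stage1[tau].sum() + x2_tau.sum()) / (n1 + n2)
    m_0 = (stage1[0].sum() + x2_0.sum()) / (n1 + n2)
    b_tau = np.sqrt(2.0 / (n1 + n2))            # equal combined sample sizes
    s = np.sqrt(np.mean([np.var(np.concatenate([stage1[tau], x2_tau]), ddof=1),
                         np.var(np.concatenate([stage1[0], x2_0]), ddof=1)]))
    t_stat = (m_tau - m_0) / b_tau
    if t_stat / s >= d2:
        return tau, 'superiority at stage 2'
    return tau, 'futility at stage 2'

tau, verdict = drop_the_losers_trial([0.0, 0.1, 0.8, 0.2], n1=30, n2=60)
print(tau, verdict)
```

With one arm clearly best, the simulation usually selects it and most runs end in a superiority claim at one of the two stages.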
Recommended publications
  • Confidence Interval Estimation in System Dynamics Models: Bootstrapping vs. Likelihood Ratio Method
    Confidence Interval Estimation in System Dynamics Models: Bootstrapping vs. Likelihood Ratio Method. Gokhan Dogan, MIT Sloan School of Management, 30 Wadsworth Street, E53-364, Cambridge, Massachusetts 02142, [email protected]. Abstract: In this paper we discuss confidence interval estimation for system dynamics models. Confidence interval estimation is important because without confidence intervals, we cannot determine whether an estimated parameter value is significantly different from 0 or any other value, and therefore we cannot determine how much confidence to place in the estimate. We compare two methods for confidence interval estimation. The first, the "likelihood ratio method," is based on maximum likelihood estimation. This method has been used in the system dynamics literature and is built into some popular software packages. It is computationally efficient but requires strong assumptions about the model and data. These assumptions are frequently violated by the autocorrelation, endogeneity of explanatory variables and heteroskedasticity properties of dynamic models. The second method is called "bootstrapping." Bootstrapping requires more computation but does not impose strong assumptions on the model or data. We describe the methods and illustrate them with a series of applications from actual modeling projects. Considering the empirical results presented in the paper and the fact that the bootstrapping method requires fewer assumptions, we suggest that bootstrapping is a better tool for confidence interval estimation in system dynamics models.
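As a minimal illustration of the bootstrapping idea discussed in this abstract (a generic percentile bootstrap for a sample mean, not the system dynamics application itself):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=200)   # skewed sample, true mean 2.0

# Percentile bootstrap: resample with replacement, take the 2.5/97.5 percentiles.
boot_means = np.array([rng.choice(data, size=data.size, replace=True).mean()
                       for _ in range(2000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap CI for the mean: ({lo:.3f}, {hi:.3f})")
```

No normality assumption is used: the interval is read directly off the empirical distribution of resampled means, which is why the method carries over to settings where likelihood-ratio assumptions fail.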
  • What Did Fisher Mean by an Estimate?
    Submitted to the Annals of Applied Probability. WHAT DID FISHER MEAN BY AN ESTIMATE? By Esa Uusipaikka, University of Turku. Fisher's Method of Maximum Likelihood is shown to be a procedure for the construction of likelihood intervals or regions, instead of a procedure of point estimation. Based on Fisher's articles and books it is justified that by estimation Fisher meant the construction of likelihood intervals or regions from an appropriate likelihood function, and that an estimate is a statistic, that is, a function from a sample space to a parameter space, such that the likelihood function obtained from the sampling distribution of the statistic at the observed value of the statistic is used to construct likelihood intervals or regions. Thus the Problem of Estimation is how to choose the 'best' estimate. Fisher's solution for the problem of estimation is the Maximum Likelihood Estimate (MLE). Fisher's Theory of Statistical Estimation is a chain of ideas used to justify the MLE as the solution of the problem of estimation. The construction of confidence intervals by the delta method from the asymptotic normal distribution of the MLE is based on Fisher's ideas but is in conflict with his 'logic of statistical inference'. Instead, the construction of confidence intervals from the profile likelihood function of a given interest function of the parameter vector is considered as a solution more in line with Fisher's 'ideology'. A new method of calculation of profile likelihood-based confidence intervals for general smooth interest functions in general statistical models is considered. 1. Introduction. 'Collected Papers of R.A.
  • A Framework for Sample Efficient Interval Estimation with Control Variates
    A Framework for Sample Efficient Interval Estimation with Control Variates. Shengjia Zhao, Christopher Yeh, Stefano Ermon (Stanford University). Abstract: We consider the problem of estimating confidence intervals for the mean of a random variable, where the goal is to produce the smallest possible interval for a given number of samples. While minimax optimal algorithms are known for this problem in the general case, improved performance is possible under additional assumptions. In particular, we design an estimation algorithm to take advantage of side information in the form of a control variate, leveraging order statistics. Under certain conditions on the quality of the control variates, we show improved asymptotic efficiency compared to existing estimation algorithms. Empirically, we demonstrate superior performance...

    ...estimates, e.g. for each individual district or county, meaning that we need to draw a sufficient number of samples for every district or county. Another difficulty can arise when we need high accuracy (i.e. a small confidence interval), since confidence intervals produced by typical concentration inequalities have size O(1/√(number of samples)). In other words, to reduce the size of a confidence interval by a factor of 10, we need 100 times more samples. This dilemma is generally unavoidable because concentration inequalities such as Chernoff or Chebychev are minimax optimal: there exist distributions for which these inequalities cannot be improved. No-free-lunch results (Van der Vaart, 2000) imply that any alternative estimation algorithm that performs better (i.e. outputs a confidence interval with smaller size) on some problems must perform worse on other problems.
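The control variate idea can be sketched on a toy problem: estimating E[e^X] for X ~ Uniform(0, 1), using X itself (known mean 0.5) as the control variate. This is a generic illustration of variance reduction via control variates, not the paper's order statistics construction.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
x = rng.uniform(0, 1, n)
f = np.exp(x)      # target: E[e^X] = e - 1
g = x              # control variate with known mean 0.5

c = np.cov(f, g)[0, 1] / np.var(g)        # estimated optimal coefficient
cv_est = f.mean() - c * (g.mean() - 0.5)  # control-variate-adjusted estimate

naive_var = np.var(f, ddof=1) / n
cv_var = np.var(f - c * g, ddof=1) / n    # variance after adjustment
print(cv_est, cv_var / naive_var)          # ratio well below 1
```

Because e^X and X are highly correlated on (0, 1), the adjusted estimator's variance is a small fraction of the naive one, so a much smaller confidence interval is possible from the same sample.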
  • Likelihood Confidence Intervals When Only Ranges Are Available
    Article: Likelihood Confidence Intervals When Only Ranges Are Available. Szilárd Nemes, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg 405 30, Sweden; [email protected]. Received: 22 December 2018; Accepted: 3 February 2019; Published: 6 February 2019. Abstract: Research papers represent an important and rich source of comparative data. The challenge is to extract the information of interest. Herein, we look at the possibilities to construct confidence intervals for sample averages when only ranges are available, using maximum likelihood estimation with order statistics (MLEOS). Using Monte Carlo simulation, we looked at the confidence interval coverage characteristics for likelihood ratio and Wald-type approximate 95% confidence intervals. We saw indications that the likelihood ratio interval had better coverage and narrower intervals. For single-parameter distributions, MLEOS is directly applicable. For location-scale distributions, it is recommended that the variance (or a combination of parameters) be estimated using standard formulas and used as a plug-in. Keywords: range, likelihood, order statistics, coverage. 1. Introduction. One of the tasks statisticians face is extracting and possibly inferring biologically/clinically relevant information from published papers. This aspect of applied statistics is well developed, and one can choose from many easy-to-use and performant algorithms that aid problem solving. Often, these algorithms aim to aid statisticians/practitioners to extract the variability of different measures or biomarkers that is needed for power calculation and research design [1,2]. While these algorithms are efficient and easy to use, they mostly are not probabilistic in nature, thus they do not offer means for statistical inference.
    Yet another field of applied statistics, one that aims to help practitioners extract relevant information when only partial data are available, proposes a probabilistic approach based on order statistics.
  • 32. Statistics 1 32
    32. STATISTICS. Revised September 2007 by G. Cowan (RHUL). This chapter gives an overview of statistical methods used in High Energy Physics. In statistics, we are interested in using a given sample of data to make inferences about a probabilistic model, e.g., to assess the model's validity or to determine the values of its parameters. There are two main approaches to statistical inference, which we may call frequentist and Bayesian. In frequentist statistics, probability is interpreted as the frequency of the outcome of a repeatable experiment. The most important tools in this framework are parameter estimation, covered in Section 32.1, and statistical tests, discussed in Section 32.2. Frequentist confidence intervals, which are constructed so as to cover the true value of a parameter with a specified probability, are treated in Section 32.3.2. Note that in frequentist statistics one does not define a probability for a hypothesis or for a parameter. Frequentist statistics provides the usual tools for reporting objectively the outcome of an experiment without needing to incorporate prior beliefs concerning the parameter being measured or the theory being tested. As such they are used for reporting most measurements and their statistical uncertainties in High Energy Physics. In Bayesian statistics, the interpretation of probability is more general and includes degree of belief (called subjective probability). One can then speak of a probability density function (p.d.f.) for a parameter, which expresses one's state of knowledge about where its true value lies. Bayesian methods allow for a natural way to input additional information, such as physical boundaries and subjective information; in fact they require as input the prior p.d.f.
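The frequentist coverage property described here, that intervals are constructed to cover the true value with a specified probability, is easy to check by simulation; the parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# Repeatedly construct 95% known-sigma intervals and count how often
# they cover the true mean.
true_mu, sigma, n, trials = 5.0, 1.0, 25, 4000
covered = 0
for _ in range(trials):
    x = rng.normal(true_mu, sigma, n)
    half = 1.959964 * sigma / np.sqrt(n)
    covered += (x.mean() - half) <= true_mu <= (x.mean() + half)
rate = covered / trials
print(f"empirical coverage: {rate:.3f}")   # should be close to 0.95
```

Note that it is the interval endpoints that are random here, not the parameter: each replication produces a new interval, and about 95% of them contain the fixed true mean.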
  • Interval Estimation Statistics (OA3102)
    Module 5: Interval Estimation. Statistics (OA3102). Professor Ron Fricker, Naval Postgraduate School, Monterey, California. Reading assignment: WM&S chapters 8.5–8.9. Revision: 1-12.

    Goals for this Module
    • Interval estimation, i.e., confidence intervals
    – Terminology
    – Pivotal method for creating confidence intervals
    • Types of intervals
    – Large-sample confidence intervals
    – One-sided vs. two-sided intervals
    – Small-sample confidence intervals for the mean, differences in two means
    – Confidence interval for the variance
    • Sample size calculations

    Interval Estimation
    • Instead of estimating a parameter with a single number, estimate it with an interval
    • Ideally, the interval will have two properties:
    – It will contain the target parameter θ
    – It will be relatively narrow
    • But, as we will see, since the interval endpoints are a function of the data,
    – They will be variable
    – So we cannot be sure θ will fall in the interval

    Objective for Interval Estimation
    • So, we can't be sure that the interval contains θ, but we will be able to calculate the probability that the interval contains θ
    • Interval estimation objective: find an interval estimator capable of generating narrow intervals with a high probability of enclosing θ

    Why Interval Estimation?
    • As before, we want to use a sample to infer something about a larger population
    • However, samples are variable
    – We'd get different values with each new sample
    – So our point estimates are variable
    • Point estimates do not give any information about how far
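The large-sample interval built from the pivotal method can be computed directly from summary statistics; the numbers below are made up for illustration:

```python
import math

# Large-sample 95% CI from the pivot (xbar - theta)/(s/sqrt(n)) ~ N(0, 1).
# Summary statistics below are illustrative values.
xbar, s, n = 16.4, 1.8, 50
z = 1.959964                      # 97.5th percentile of the standard normal
half = z * s / math.sqrt(n)
print(f"95% CI for the mean: ({xbar - half:.3f}, {xbar + half:.3f})")
```

The pivot's distribution is free of θ, which is exactly what lets the fixed quantile z be converted into random endpoints around the sample mean.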
  • Confidence Intervals in Analysis and Reporting of Clinical Trials Guangbin Peng, Eli Lilly and Company, Indianapolis, IN
    Confidence Intervals in Analysis and Reporting of Clinical Trials. Guangbin Peng, Eli Lilly and Company, Indianapolis, IN. ABSTRACT: Regulatory agencies around the world have recommended reporting confidence intervals for treatment differences along with the results of significance tests. SAS provides easy and convenient ways to produce confidence intervals using procedures such as PROC GLM and PROC UNIVARIATE in conjunction with ODS (output delivery system). In this paper, I will discuss the relationship between significance tests and confidence intervals, summarize the types of confidence intervals used in clinical study reports, and provide examples from clinical trials to illustrate the computation of distribution-dependent confidence intervals for the mean treatment difference and distribution-free confidence intervals for the median response within each treatment group using SAS. INTRODUCTION: Confidence interval estimation and significance testing (hypothesis testing) ... for more frequent use of confidence intervals (Simon, 1993). There is a close relationship between confidence intervals and significance tests (Hahn and Meeker, 1991); in fact, a confidence interval can often be used to test a hypothesis. If the 100(1-α)% confidence interval for the mean treatment difference in a clinical trial does not contain zero, there is evidence to indicate a treatment difference at the 100α% significance level. This strategy is equivalent to the hypothesis test that rejects the null hypothesis of no mean treatment difference at the level of α. Compared to p-values, confidence intervals are generally more informative. They provide quantitative bounds that express the uncertainty inherent in estimation, instead of merely an accept or reject statement. The length of a confidence interval depends on the sample size; this influence of sample size is evident from observing the length of the interval, while this is not the case for a significance test.
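The duality described here, that a 100(1-α)% interval for the treatment difference excludes zero exactly when the level-α test rejects, can be demonstrated with plain NumPy (simulated two-arm data; the critical value is an approximate t quantile used for illustration):

```python
import math
import numpy as np

rng = np.random.default_rng(3)
a = rng.normal(1.0, 1.0, 40)   # treatment arm
b = rng.normal(0.0, 1.0, 40)   # control arm

diff = a.mean() - b.mean()
se = math.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
t_crit = 1.994                  # roughly the 97.5% t quantile here (illustrative)
lo, hi = diff - t_crit * se, diff + t_crit * se

# Duality: the 95% CI excludes zero exactly when |t| exceeds the critical value.
reject = abs(diff / se) > t_crit
print((round(lo, 3), round(hi, 3)), bool(reject))
```

The equivalence holds by construction, since both the interval and the test are built from the same statistic and the same critical value.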
  • Interval Estimation, Point Estimation, and Null Hypothesis Significance Testing Calibrated by an Estimated Posterior Probability of the Null Hypothesis (David R. Bickel)
    Interval estimation, point estimation, and null hypothesis significance testing calibrated by an estimated posterior probability of the null hypothesis. Preprint hal-02496126, https://hal.archives-ouvertes.fr/hal-02496126, submitted 2 Mar 2020. David R. Bickel, Ottawa Institute of Systems Biology, Department of Biochemistry, Microbiology and Immunology, Department of Mathematics and Statistics, University of Ottawa, 451 Smyth Road, Ottawa, Ontario, K1H 8M5, +01 (613) 562-5800, ext. 8670, [email protected]. Abstract: Much of the blame for failed attempts to replicate reports of scientific findings has been placed on ubiquitous and persistent misinterpretations of the p value. An increasingly popular solution is to transform a two-sided p value to a lower bound on a Bayes factor. Another solution is to interpret a one-sided p value as an approximate posterior probability.
  • Statistical Inference II: Interval Estimation and Hypothesis Tests
    Statistical Inference II: The Principles of Interval Estimation and Hypothesis Testing. 1. Introduction. In Statistical Inference I we described how to estimate the mean and variance of a population, and the properties of those estimation procedures. In Statistical Inference II we introduce two more aspects of statistical inference: confidence intervals and hypothesis tests. In contrast to a point estimate of the population mean β, like b = 17.158, a confidence interval estimate is a range of values which may contain the true population mean. A confidence interval estimate contains information not only about the location of the population mean but also about the precision with which we estimate it. A hypothesis test is a statistical procedure for using data to check the compatibility of a conjecture about a population with the information contained in a sample of data. Continuing the example from Statistical Inference I, suppose airplane designers have been basing seat designs on the assumption that the average hip width of U.S. passengers is 16 inches. Is the information contained in the random sample of 50 hip measurements compatible with this conjecture, or not? These are the issues we consider in Statistical Inference II. 2. Interval Estimation for the Mean of a Normal Population When σ² is Known. Let Y be a random variable from a normal population. That is, assume Y ~ N(β, σ²). Assume that we have a random sample of size T from this population, Y_1, Y_2, ..., Y_T. The least squares estimator of the population mean is b = (1/T) Σ_{i=1}^{T} Y_i (2.1). This estimator has a normal distribution if the population is normal, b ~ N(β, σ²/T) (2.2). For the present, let us assume that the population variance σ² is known.
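With σ² known, as assumed here, the interval b ± z·σ/√T follows directly from (2.2); a quick numerical sketch with illustrative population values:

```python
import math
import numpy as np

rng = np.random.default_rng(4)
beta, sigma, T = 16.0, 2.0, 50           # illustrative population values
y = rng.normal(beta, sigma, T)

b = y.mean()                             # b = (1/T) * sum(Y_i), so b ~ N(beta, sigma**2 / T)
half = 1.959964 * sigma / math.sqrt(T)   # known-sigma 95% half-width
print(f"b = {b:.3f}, 95% CI: ({b - half:.3f}, {b + half:.3f})")
```

Because σ is treated as known, the half-width is a fixed number; only the interval's center moves from sample to sample.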
  • Parameter and Interval Estimation
    Probability and Statistics (part 3): Parameter and Interval Estimation (by Evan Dummit, 2020, v. 1.25). Contents: 3 Parameter and Interval Estimation; 3.1 Parameter Estimation (3.1.1 Maximum Likelihood Estimates; 3.1.2 Biased and Unbiased Estimators; 3.1.3 Efficiency of Estimators); 3.2 Interval Estimation (3.2.1 Confidence Intervals; 3.2.2 Normal Confidence Intervals; 3.2.3 Binomial Confidence Intervals). In the previous chapter, we discussed random variables and developed the notion of a probability distribution, and then established some fundamental results such as the central limit theorem that give strong and useful information about the statistical properties of a sample drawn from a fixed, known distribution. Our goal in this chapter is, in some sense, to invert this analysis: starting instead with data obtained by sampling a distribution or probability model with certain unknown parameters, we would like to extract information about the most reasonable values for these parameters given the observed data. We begin by discussing pointwise parameter estimates and estimators, analyzing various properties that we would like these estimators to have such as unbiasedness and efficiency, and finally establish the optimality of several estimators that arise from the basic distributions we have encountered such as the normal and binomial distributions. We then broaden our focus to interval estimation: from finding the best estimate of a single value to finding such an estimate along with a measurement of its expected precision. We treat in scrupulous detail several important cases for constructing such confidence intervals, and close with some applications of these ideas to polling data.
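A binomial confidence interval of the kind listed under 3.2.3 can be sketched with the standard Wald formula around the maximum likelihood estimate p̂ = x/n (the counts are illustrative):

```python
import math

# Wald 95% confidence interval around the binomial MLE p_hat = x/n.
# Counts are illustrative.
x, n = 42, 100
p_hat = x / n
half = 1.959964 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"p_hat = {p_hat}, 95% CI: ({p_hat - half:.3f}, {p_hat + half:.3f})")
```

The Wald interval is the simplest normal-approximation choice; its coverage degrades for p̂ near 0 or 1, which is one motivation for the alternative binomial intervals such texts go on to discuss.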
  • Applied Resampling Methods
    APPLIED RESAMPLING METHODS. Michael A. Martin, PhD, AStat; Steven E. Stern, PhD, AStat. © 2001 Michael A. Martin and Steven E. Stern. [email protected], [email protected]

    3 MAJOR GOALS OF STATISTICS: 1. What data should I collect? How do I collect my data (sampling)? What assumptions am I making about my data? 2. How do I organise and analyse my data? Graphically? Numerically? 3. What do my data analyses and summaries mean? How "accurate" are my estimators? How "precise" are my estimators? What do "accurate" and "precise" even mean? Statistical inference promises answers to 3.

    A simple example: Does aspirin prevent heart attacks? Experimental situation: the population consists of healthy, middle-aged men. Type of study: Controlled (half of the subjects were given aspirin, the other half a placebo, the control group); Randomised (subjects were randomly assigned to the aspirin and placebo groups); Double-blind (neither patients nor doctors knew to which group patients belonged).

    The Data:
    Group          Subjects   Heart attacks
    Aspirin Group  11037      104
    Placebo Group  11034      189

    How might we analyse this data? One possible approach (there are others): Rate of heart attacks in the aspirin group: 104/11037 = 0.0094. Rate of heart attacks in the placebo group: 189/11034 = 0.0171. A reasonable statistic of interest is θ̂ = (Rate in Aspirin Group)/(Rate in Placebo Group) = 0.55. This is a ratio estimator. We could also take logs to produce a location-type estimator based on differences.

    Now what? How did we do? Data collection: the use of a control group allows meaningful comparison ✓; randomisation reduces the risk of systematic bias ✓; the double-blind study removes the potential for "psychological" effects or the tendency for doctors to treat higher-risk patients ✓. Analysing data: the statistic of interest makes sense, since it directly compares the rate of heart attacks between the two groups.
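Continuing the aspirin example, the ratio estimate and a log-scale interval of the kind hinted at by "take logs" can be computed as follows; the delta-method standard error used here is a standard rate-ratio approximation, not necessarily the authors' choice:

```python
import math

# Heart-attack rates from the aspirin study and their ratio.
aspirin_events, aspirin_n = 104, 11037
placebo_events, placebo_n = 189, 11034

r1 = aspirin_events / aspirin_n          # aspirin rate, ~0.0094
r2 = placebo_events / placebo_n          # placebo rate, ~0.0171
theta = r1 / r2                          # ratio estimator, ~0.55

# Approximate 95% interval on the log scale (standard rate-ratio delta method).
se_log = math.sqrt(1 / aspirin_events - 1 / aspirin_n
                   + 1 / placebo_events - 1 / placebo_n)
lo = theta * math.exp(-1.959964 * se_log)
hi = theta * math.exp(1.959964 * se_log)
print(f"ratio = {theta:.3f}, 95% CI: ({lo:.3f}, {hi:.3f})")
```

Working on the log scale keeps the interval's endpoints positive and roughly symmetric around log θ̂, which is why ratio estimators are usually interval-estimated this way.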
  • Resampling for Estimation and Inference for Variances
    Resampling for Estimation and Inference for Variances. Int. Statistical Inst.: Proc. 58th World Statistical Congress, 2011, Dublin (Session IPS056), p. 1022. Thompson, Mary (1st author), University of Waterloo, Department of Statistics and Actuarial Science, 200 University Avenue West, Waterloo N2L 3G1, Canada, e-mail: [email protected]. Wang, Zilin (2nd author), Wilfrid Laurier University, Department of Mathematics, 75 University Avenue West, Waterloo N2L 3C5, Canada, e-mail: [email protected]. 1 Introduction. In survey sampling, estimating and making inferences on population variances can be challenging when sampling designs are complicated. Most of the literature limits the investigation to the case where a random sample is obtained without replacement (for example, Thompson (1997) and Cho and Cho (2008)). In this paper, we pursue a computational approach to inference for population variances and, ultimately, variance components with complex survey data. This computational approach is based on a resampling technique, artificial population bootstrapping (APB), introduced in Wang and Thompson (2010). The APB procedure departs from the common application of resampling procedures in survey sampling. It belongs to the category of bootstrapping without replacement, with unequal probability sampling being used to select the resamples. In particular, using a set of complex survey data, many artificial populations are built, and within each of those populations, resamples are selected by using the same probability sampling design as the original sample. The APB procedure entails not only mimicking the sampling design, but also creating artificial populations to resemble as closely as possible the population from which the original sample is selected. As a result, inferences on parameters for the artificial population using resamples would resemble inferences on parameters from the finite population using the original sample.
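A toy sketch of the artificial-population idea (greatly simplified: equal design weights N/n and simple random sampling without replacement, not the unequal-probability designs the paper targets):

```python
import numpy as np

rng = np.random.default_rng(5)

# Build an artificial population by replicating each sampled unit by its
# design weight N/n, then redraw without-replacement samples from it,
# mimicking the original design.
N, n = 1000, 100
population = rng.gamma(2.0, 2.0, N)
sample = rng.choice(population, size=n, replace=False)

artificial_pop = np.repeat(sample, N // n)
boot_vars = [np.var(rng.choice(artificial_pop, size=n, replace=False), ddof=1)
             for _ in range(500)]
lo, hi = np.percentile(boot_vars, [2.5, 97.5])
print(f"bootstrap 95% interval for the population variance: ({lo:.2f}, {hi:.2f})")
```

The key point the sketch preserves is that resamples are drawn without replacement from a constructed population using the same design as the original sample, rather than with replacement from the sample itself.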