Analysis of Covariance (ANCOVA) with Two Groups
16 pages, PDF, 1,020 KB
Recommended publications
A Simple Sample Size Formula for Analysis of Covariance in Randomized Clinical Trials (George F. Borm et al.)
Journal of Clinical Epidemiology 60 (2007) 1234–1238. Original article by George F. Borm, Jaap Fransen, and Wim A.J.G. Lemmens (Department of Epidemiology and Biostatistics and Department of Rheumatology, Radboud University Nijmegen Medical Centre, Geert Grooteplein 21, PO Box 9101, NL-6500 HB Nijmegen, The Netherlands). Accepted 15 February 2007.

Abstract. Objective: Randomized clinical trials that compare two treatments on a continuous outcome can be analyzed using analysis of covariance (ANCOVA) or a t-test approach. We present a method for the sample size calculation when ANCOVA is used.

Study Design and Setting: We derived an approximate sample size formula. Simulations were used to verify the accuracy of the formula and to improve the approximation for small trials. The sample size calculations are illustrated in a clinical trial in rheumatoid arthritis.

Results: If the correlation between the outcome measured at baseline and at follow-up is r, ANCOVA comparing groups of (1 − r²)n subjects has the same power as a t-test comparing groups of n subjects. When ANCOVA is used instead of a t-test on the same data, the precision of the treatment estimate is increased, and the length of the confidence interval is reduced by a factor √(1 − r²).

Conclusion: ANCOVA may considerably reduce the number of patients required for a trial. © 2007 Elsevier Inc. All rights reserved.

Keywords: Power; Sample size; Precision; Analysis of covariance; Clinical trial; Statistical test.
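The relation quoted in the abstract lends itself to a quick calculation. The sketch below pairs the standard normal-approximation sample-size formula for a two-sample t-test with the (1 − r²) reduction; the effect size, standard deviation, and correlation values are illustrative assumptions, and the paper's own formula (which adds a small-sample correction) may differ in detail.

```python
# Sketch of the Borm et al. relation: an ANCOVA on groups of (1 - r^2) * n
# subjects has roughly the power of a t-test on groups of n subjects.
# The t-test sample size uses the standard normal-approximation formula;
# delta, sd, rho, alpha, and power values are illustrative assumptions.
from scipy.stats import norm

def t_test_n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample t-test (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * (z_a + z_b) ** 2 * (sd / delta) ** 2

def ancova_n_per_group(delta, sd, rho, alpha=0.05, power=0.80):
    """Apply the (1 - rho^2) reduction for ANCOVA with a baseline covariate."""
    return (1 - rho ** 2) * t_test_n_per_group(delta, sd, alpha, power)

n_t = t_test_n_per_group(delta=5, sd=10)
n_a = ancova_n_per_group(delta=5, sd=10, rho=0.6)
print(f"t-test:  {n_t:.1f} per group")   # ~62.8
print(f"ANCOVA:  {n_a:.1f} per group")   # ~40.2, a ~36% reduction at rho = 0.6
```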
Analysis of Covariance (ANCOVA) in Randomized Trials: More Precision, Less Conditional Bias, and Valid Confidence Intervals, Without Model Assumptions
Johns Hopkins University, Dept. of Biostatistics Working Papers, Working Paper 292, October 19, 2018. Bingkai Wang, Elizabeth Ogburn, and Michael Rosenblum ([email protected]), Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, 615 North Wolfe Street, Baltimore, Maryland 21205, USA.

Suggested citation: Wang, Bingkai; Ogburn, Elizabeth; and Rosenblum, Michael, "Analysis of Covariance (ANCOVA) in Randomized Trials: More Precision, Less Conditional Bias, and Valid Confidence Intervals, Without Model Assumptions" (October 2018). Johns Hopkins University, Dept. of Biostatistics Working Papers, Working Paper 292. https://biostats.bepress.com/jhubiostat/paper292. This working paper is hosted by The Berkeley Electronic Press (bepress) and may not be commercially reproduced without the permission of the copyright holder.

Summary: "Covariate adjustment" in the randomized trial context refers to an estimator of the average treatment effect that adjusts for chance imbalances between study arms in baseline variables (called "covariates"). The baseline variables could include, e.g., age, sex, disease severity, and biomarkers. According to two surveys of clinical trial reports, there is confusion about the statistical properties of covariate adjustment. We focus on the ANCOVA estimator, which involves fitting a linear model for the outcome given the treatment arm and baseline variables, and trials with equal probability of assignment to treatment and control. …
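As a concrete reading of "fitting a linear model for the outcome given the treatment arm and baseline variables", here is a minimal numpy sketch of an ANCOVA-style adjusted estimate on simulated trial data. The covariates, effect sizes, and data generation are illustrative assumptions, not the authors' estimator code.

```python
# Minimal ANCOVA-style estimate: regress the outcome on an intercept, the
# treatment indicator, and baseline covariates, and read off the treatment
# coefficient as the adjusted effect. Simulated data for illustration only.
import numpy as np

rng = np.random.default_rng(8)
n = 200
treat = rng.integers(0, 2, size=n)          # 1:1 randomization
age = rng.normal(50, 12, size=n)            # hypothetical baseline covariates
severity = rng.normal(0, 1, size=n)
y = 1.5 * treat + 0.05 * age + 0.8 * severity + rng.normal(0, 1, size=n)

X = np.column_stack([np.ones(n), treat, age, severity])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
unadjusted = y[treat == 1].mean() - y[treat == 0].mean()
print(f"ANCOVA estimate: {beta[1]:.3f}   unadjusted difference: {unadjusted:.3f}")
```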
Chapter 5: Contrasts for One-Way ANOVA
Chapter outline: 1. What is a contrast? 2. Types of contrasts. 3. Significance tests of a single contrast. 4. Brand name contrasts. 5. Relationships between the omnibus F and contrasts. 6. Robust tests for a single contrast. 7. Effect sizes for a single contrast. 8. An example. Advanced topics in contrast analysis: 9. Trend analysis. 10. Simultaneous significance tests on multiple contrasts. 11. Contrasts with unequal cell size. 12. A final example. (© 2006 A. Karpinski)

1. What is a contrast?
• A focused test of means: a weighted sum of means.
• Contrasts allow you to test your research hypothesis (as opposed to the omnibus statistical hypothesis).

Example: You want to investigate whether a college education improves SAT scores. You obtain five groups with n = 25 in each group: high school seniors, and college seniors majoring in mathematics, chemistry, English, and history. All participants take the SAT and scores are recorded. Label the group means μ1 (high school seniors), μ2 (math majors), μ3 (chemistry majors), μ4 (English majors), and μ5 (history majors). The omnibus F-test examines the hypotheses H0: μ1 = μ2 = μ3 = μ4 = μ5 versus H1: not all μi are equal. But the questions you want to answer are more focused:
• Do college seniors score differently than high school seniors?
• Do natural science majors score differently than humanities majors?
• Do math majors score differently than chemistry majors?
• Do English majors score differently than history majors? …
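A minimal sketch of the single-contrast test this chapter builds toward: simulated SAT-like scores for the five groups above, contrast weights comparing high-school seniors with the average of the four college groups, and the usual t statistic ψ̂ / sqrt(MSE · Σc²/n). The data and group means below are made up for illustration.

```python
# Single-contrast t test on simulated data (illustrative values only).
# Weights compare high-school seniors (group 1) with the average of the
# four college-senior groups (groups 2-5).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 25
means = [1050, 1120, 1110, 1080, 1075]           # HS, Math, Chem, English, History
groups = [rng.normal(m, 100, n) for m in means]  # n = 25 per group, as in the example

c = np.array([1, -0.25, -0.25, -0.25, -0.25])    # HS vs. average of college majors
ybar = np.array([g.mean() for g in groups])
mse = np.mean([g.var(ddof=1) for g in groups])   # pooled error variance (equal n)
df_error = sum(len(g) - 1 for g in groups)

psi_hat = c @ ybar                               # estimated contrast value
se = np.sqrt(mse * np.sum(c ** 2 / n))
t = psi_hat / se
p = 2 * stats.t.sf(abs(t), df_error)
print(f"psi_hat = {psi_hat:.1f}, t({df_error}) = {t:.2f}, p = {p:.4f}")
```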
On the Meaning and Use of Kurtosis
Psychological Methods, 1997, Vol. 2, No. 3, 292–307. Copyright 1997 by the American Psychological Association, Inc. Lawrence T. DeCarlo, Fordham University.

Abstract: For symmetric unimodal distributions, positive kurtosis indicates heavy tails and peakedness relative to the normal distribution, whereas negative kurtosis indicates light tails and flatness. Many textbooks, however, describe or illustrate kurtosis incompletely or incorrectly. In this article, kurtosis is illustrated with well-known distributions, and aspects of its interpretation and misinterpretation are discussed. The role of kurtosis in testing univariate and multivariate normality; as a measure of departures from normality; in issues of robustness, outliers, and bimodality; in generalized tests and estimators; as well as limitations of and alternatives to the kurtosis measure β₂, are discussed.

It is typically noted in introductory statistics courses that distributions can be characterized in terms of central tendency, variability, and shape. With respect to shape, virtually every textbook defines and illustrates skewness. On the other hand, another aspect of shape, which is kurtosis, is either not discussed or, worse yet, is often described or illustrated incorrectly. Kurtosis is also frequently not reported in research articles, in spite of the fact that virtually every statistical package provides a measure of kurtosis.

The normal distribution has a kurtosis of 3, and β₂ − 3 is often used so that the reference normal distribution has a kurtosis of zero (β₂ − 3 is sometimes denoted as γ₂). A sample counterpart to β₂ can be obtained by replacing the population moments with the sample moments, which gives

b₂ = [Σ(Xᵢ − X̄)⁴ / n] / [Σ(Xᵢ − X̄)² / n]² …
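The moment definition of b₂ is easy to verify numerically. The short check below computes b₂ − 3 for simulated normal, heavy-tailed, and light-tailed samples and compares it with SciPy's kurtosis function; the sample sizes and distributions are illustrative choices.

```python
# b2 = [sum((x - xbar)^4)/n] / [sum((x - xbar)^2)/n]^2 is about 3 for normal
# data, so b2 - 3 (the excess kurtosis, gamma_2) is about 0. Simulated samples.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)

def b2(x):
    x = np.asarray(x)
    m2 = np.mean((x - x.mean()) ** 2)
    m4 = np.mean((x - x.mean()) ** 4)
    return m4 / m2 ** 2

samples = {
    "normal": rng.normal(size=100_000),        # excess kurtosis ~ 0
    "t(5)": rng.standard_t(5, size=100_000),   # heavy tails, excess kurtosis ~ 6
    "uniform": rng.uniform(size=100_000),      # light tails, excess kurtosis ~ -1.2
}
for name, x in samples.items():
    print(name, round(b2(x) - 3, 2), round(kurtosis(x, fisher=True, bias=True), 2))
```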
The Probability Lifesaver: Order Statistics and the Median Theorem
Steven J. Miller, December 30, 2015. Contents: 1. Order Statistics and the Median Theorem: 1.1 Definition of the Median; 1.2 Order Statistics; 1.3 Examples of Order Statistics; 1.4 The Sample Distribution of the Median; 1.5 Technical Bounds for Proof of Median Theorem; 1.6 The Median of Normal Random Variables.

Greetings again! In this supplemental chapter we develop the theory of order statistics in order to prove the Median Theorem. This is a beautiful result in its own right, but also extremely important as a substitute for the Central Limit Theorem, and it allows us to say non-trivial things when the CLT is unavailable.

Chapter 1: Order Statistics and the Median Theorem. The Central Limit Theorem is one of the gems of probability. It's easy to use and its hypotheses are satisfied in a wealth of problems. Many courses build towards a proof of this beautiful and powerful result, as it truly is 'central' to the entire subject. Not to detract from the majesty of this wonderful result, however: what happens in those instances where it's unavailable? For example, one of the key assumptions that must be met is that our random variables need to have finite higher moments, or at the very least a finite variance. What if we were to consider sums of Cauchy random variables? Is there anything we can say? This is not just a question of theoretical interest, of mathematicians generalizing for the sake of generalization. The following example from economics highlights why this chapter is more than just of theoretical interest. …
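The Cauchy question raised above can be illustrated in a few lines: sample means of Cauchy data never settle down (the CLT's moment assumptions fail), while sample medians concentrate around the true median, which is the behavior the Median Theorem formalizes. The simulation below illustrates the motivation only, not the theorem's proof; the sample sizes and seed are arbitrary.

```python
# For Cauchy data the CLT does not apply (no finite mean), so sample means
# stay erratic, while sample medians concentrate around the true median (0).
import numpy as np

rng = np.random.default_rng(2)
for n in (100, 10_000, 1_000_000):
    x = rng.standard_cauchy(n)
    print(f"n={n:>9,}  mean={x.mean():>10.3f}  median={np.median(x):>8.4f}")
# Typical output: the mean wanders (it can be huge), the median shrinks toward 0.
```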
Theoretical Statistics. Lecture 5. Peter Bartlett
1. U-statistics.

Outline of today's lecture: We'll look at U-statistics, a family of estimators that includes many interesting examples. We'll study their properties: unbiased, lower variance, concentration (via an application of the bounded differences inequality), asymptotic variance, asymptotic distribution. (See Chapter 12 of van der Vaart.) First, we'll consider the standard unbiased estimate of variance, a special case of a U-statistic.

Variance estimates:

s_n² = (1/(n−1)) Σ_{i=1..n} (X_i − X̄_n)²
     = (1/(2n(n−1))) Σ_{i=1..n} Σ_{j=1..n} [ (X_i − X̄_n)² + (X_j − X̄_n)² ]
     = (1/(2n(n−1))) Σ_{i=1..n} Σ_{j=1..n} [ (X_i − X̄_n) − (X_j − X̄_n) ]²
     = (1/(n(n−1))) Σ_{i=1..n} Σ_{j=1..n} ½ (X_i − X_j)²
     = (1/(n choose 2)) Σ_{i<j} ½ (X_i − X_j)².

This is unbiased for i.i.d. data:

E s_n² = E[ ½ (X_1 − X_2)² ]
       = ½ E[ ((X_1 − EX_1) − (X_2 − EX_2))² ]
       = ½ [ E(X_1 − EX_1)² + E(X_2 − EX_2)² ]
       = E(X_1 − EX_1)².

U-statistics. Definition: A U-statistic of order r with kernel h is

U = (1/(n choose r)) Σ_{i ⊆ [n], |i| = r} h(X_{i1}, …, X_{ir}),

where h is symmetric in its arguments. (If h is not symmetric in its arguments, we can also average over permutations.) "U" is for "unbiased." Introduced by Wassily Hoeffding in the 1940s.

Theorem [Halmos]: θ (a parameter, i.e., a function defined on a family of distributions) admits an unbiased estimator (i.e., for all sufficiently large n, some function of the i.i.d. …
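The displayed identity, s_n² = (1/(n choose 2)) Σ_{i<j} ½ (X_i − X_j)², says the unbiased sample variance is the order-2 U-statistic with kernel h(x, y) = ½ (x − y)². A brief numerical check on simulated data (illustrative values only):

```python
# Check: the order-2 U-statistic with kernel h(x, y) = (x - y)^2 / 2 equals
# the usual unbiased sample variance.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
x = rng.normal(loc=2.0, scale=3.0, size=200)

pairs = combinations(range(len(x)), 2)
u_stat = np.mean([(x[i] - x[j]) ** 2 / 2 for i, j in pairs])  # kernel averaged over all pairs
print(u_stat, np.var(x, ddof=1))  # the two values agree (up to floating point)
```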
Lecture 14 Testing for Kurtosis
9/8/2016, CHE384, From Data to Decisions: Measurement, Uncertainty, Analysis, and Modeling. Lecture 14: Testing for Kurtosis. Chris A. Mack, Adjunct Associate Professor. http://www.lithoguru.com/scientist/statistics/ (© Chris Mack, 2016)

Kurtosis: For any distribution, the kurtosis (sometimes called the excess kurtosis) is defined as γ₂ = μ₄/σ⁴ − 3 (old notation: β₂ − 3). For a unimodal, symmetric distribution, a positive kurtosis means "heavy tails" and a more peaked center compared to a normal distribution; a negative kurtosis means "light tails" and a more spread center compared to a normal distribution.

Kurtosis examples: For the Student's t distribution, the excess kurtosis is 6/(ν − 4) for DF > 4 (for DF ≤ 4 the kurtosis is infinite). For a uniform distribution, the excess kurtosis is −6/5.

One impact of excess kurtosis: For a normal distribution, the sample variance will have an expected value of σ² and a variance of 2σ⁴/(n − 1). For a distribution with excess kurtosis γ₂, the variance of the sample variance becomes σ⁴ [2/(n − 1) + γ₂/n].

Sample kurtosis: For a sample of size n, the sample excess kurtosis is

g₂ = [ (1/n) Σ(xᵢ − x̄)⁴ ] / [ (1/n) Σ(xᵢ − x̄)² ]² − 3.

An unbiased estimator of the sample excess kurtosis is

G₂ = [(n − 1) / ((n − 2)(n − 3))] [(n + 1) g₂ + 6],

with standard error

SE(G₂) = sqrt[ 24 n (n − 1)² / ((n − 3)(n − 2)(n + 3)(n + 5)) ].

For large n, the sampling distribution of the sample excess kurtosis approaches Normal with mean 0 and variance 24/n. For small samples, this estimator is biased. Reference: D. N. Joanes and C. A. Gill, "Comparing Measures of Sample Skewness and Kurtosis", The Statistician, 47(1), 183–189 (1998).
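A minimal version of the large-sample test summarized above: compute the sample excess kurtosis and compare it with its approximate null standard error sqrt(24/n). The simulated heavy-tailed sample is an illustrative assumption, and the small-sample bias correction is ignored here.

```python
# Large-sample kurtosis test: under normality the sample excess kurtosis is
# roughly Normal(0, 24/n), so z = g2 / sqrt(24/n). Simulated data only.
import numpy as np
from scipy.stats import kurtosis, norm

rng = np.random.default_rng(4)
x = rng.standard_t(df=6, size=2_000)      # heavy-tailed sample (excess kurtosis 3)

g2 = kurtosis(x, fisher=True, bias=True)  # sample excess kurtosis
se = np.sqrt(24 / len(x))                 # large-n standard error under normality
z = g2 / se
p = 2 * norm.sf(abs(z))
print(f"g2 = {g2:.3f}, z = {z:.2f}, two-sided p = {p:.4g}")
```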
Covariances of Two Sample Rank Sum Statistics
JOURNAL OF RESEARCH of the National Bureau of Standards – B. Mathematical Sciences, Volume 76B, Nos. 1 and 2, January–June 1972. Peter V. Tryon, Institute for Basic Standards, National Bureau of Standards, Boulder, Colorado 80302 (November 26, 1971).

This note presents an elementary derivation of the covariances of the c(c − 1)/2 two-sample rank sum statistics computed among all pairs of samples from c populations. Key words: c-sample problem; covariances; Mann-Whitney-Wilcoxon statistics; rank sum statistics; statistics.

Mann-Whitney or Wilcoxon rank sum statistics, computed for some or all of the c(c − 1)/2 pairs of samples from c populations, have been used in testing the null hypothesis of homogeneity of distribution against a variety of alternatives [1, 3, 4, 5]. This note presents an elementary derivation of the covariances of such statistics under the null hypothesis. The usual approach to such an analysis is the rank sum viewpoint of the Wilcoxon form of the statistic. Using this approach, Steel [3] presents a lengthy derivation of the covariances. In this note it is shown that thinking in terms of the Mann-Whitney form of the statistic leads to an elementary derivation. For comparison and completeness the rank sum derivation of Kruskal and Wallis [2] is repeated in obtaining the means and variances.

Let x_r^(i), r = 1, 2, …, n_i, i = 1, 2, …, c, be the rth item in the sample of size n_i from the ith of c populations. Let M_ij be the Mann-Whitney statistic between the ith and jth samples, defined by

M_ij = Σ_{r=1..n_i} Σ_{s=1..n_j} z_rs,   (1)

where z_rs = 1 if x_s^(j) > x_r^(i) and z_rs = 0 if x_s^(j) ≤ x_r^(i). Thus M_ij is the number of times items in the jth sample exceed items in the ith sample. …
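Equation (1) is just a double sum of indicators, so it can be computed directly and checked against SciPy's Mann-Whitney U statistic, which counts the same pairs when there are no ties. The two samples below are simulated for illustration.

```python
# Direct computation of M_ij (the number of times items in sample j exceed
# items in sample i), checked against SciPy's U statistic. With continuous
# simulated data there are no ties, so the two counts coincide.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(5)
x_i = rng.normal(0.0, 1.0, size=12)   # "ith" sample
x_j = rng.normal(0.5, 1.0, size=15)   # "jth" sample

m_ij = sum(1 for xr in x_i for xs in x_j if xs > xr)   # double sum of z_rs
u_j = mannwhitneyu(x_j, x_i, alternative="two-sided").statistic
print(m_ij, u_j)   # identical when there are no ties
```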
What Is Statistic?
OPRE 6301. In today's world we are constantly being bombarded with statistics and statistical information, for example: customer surveys, medical news, demographics, political polls, economic predictions, marketing information, sales forecasts, stock market projections, the Consumer Price Index, and sports statistics. How can we make sense out of all this data? How do we differentiate valid from flawed claims?

What is Statistics? "Statistics is a way to get information from data." Data: facts, especially numerical facts, collected together for reference or information. Information: knowledge communicated concerning some particular fact. Statistics is a tool for creating an understanding from a set of numbers. Humorous definitions: the science of drawing a precise line between an unwarranted assumption and a foregone conclusion; the science of stating precisely what you don't know.

An example: stats anxiety. A business school student is anxious about their statistics course, since they've heard the course is difficult. The professor provides last term's final exam marks to the student. What can be discerned from this list of numbers? Data: the list of last term's marks (95, 89, 70, 65, 78, 57, …). Information: new knowledge about the statistics class, e.g., the class average, the proportion of the class receiving A's, the most frequent mark, the marks distribution, etc.

Key statistical concepts. Population: a population is the group of all items of interest to a statistics practitioner; frequently very large, sometimes infinite (e.g., all 5 million Florida voters, per Example 12.5). Sample: a sample is a set of data drawn from the population. …
Parametric ANCOVA vs. Rank Transform ANCOVA when Assumptions of Conditional Normality and Homoscedasticity Are Violated
DOCUMENT RESUME: ED 231 882, TM 830 499. Authors: Olejnik, Stephen F.; Algina, James. Title: Parametric ANCOVA vs. Rank Transform ANCOVA when Assumptions of Conditional Normality and Homoscedasticity Are Violated. Pub date: April 1983. Note: 33 pp.; paper presented at the Annual Meeting of the American Educational Research Association (67th, Montreal, Quebec, April 11–15, 1983); Table 3 contains small print. Pub type: Speeches/Conference Papers (150); Reports – Research/Technical (143). EDRS price: MF01/PC02 plus postage. Descriptors: Analysis of Covariance; Control Groups; Data Collection; Error of Measurement; Pretests Posttests; Research Design; Sample Size; Sampling. Identifiers: Computer Simulation; Heteroscedasticity (Statistics); Homoscedasticity (Statistics); Parametric Analysis; Power (Statistics); Robustness; Type I Errors.

Abstract: Parametric analysis of covariance was compared to analysis of covariance with data transformed using ranks. Using a computer simulation approach, the two strategies were compared in terms of the proportion of Type I errors made and statistical power when the conditional distributions of errors were: (1) normal and homoscedastic, (2) normal and heteroscedastic, (3) non-normal and homoscedastic, and (4) non-normal and heteroscedastic. The results indicated that parametric ANCOVA was robust to violations of either normality or homoscedasticity. However, when both assumptions were violated, the observed alpha levels underestimated the nominal alpha level when sample sizes were small and alpha = .05. Rank ANCOVA led to a slightly liberal test of the hypothesis when the covariate was non-normal and the errors were heteroscedastic. Practically significant power differences favoring the rank ANCOVA procedure were observed with moderate sample sizes and skewed conditional error distributions. (Author)
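A rough sketch of the two strategies being compared, on simulated data: parametric ANCOVA on the raw scores versus ANCOVA after rank-transforming both the outcome and the covariate. The data-generating choices and the use of statsmodels are assumptions for illustration, not the report's original simulation code.

```python
# Parametric ANCOVA vs. rank-transform ANCOVA (rank both y and the covariate,
# then fit the same linear model). Simulated pretest/posttest-style data.
import numpy as np
import pandas as pd
from scipy.stats import rankdata
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(6)
n = 40
group = np.repeat(["treatment", "control"], n)
x = rng.normal(50, 10, size=2 * n)                       # covariate (e.g., pretest)
y = 5 + 0.8 * x + (group == "treatment") * 4 \
    + rng.standard_t(3, size=2 * n)                      # heavy-tailed errors

df = pd.DataFrame({"y": y, "x": x, "group": group})
df["y_rank"] = rankdata(df["y"])
df["x_rank"] = rankdata(df["x"])

parametric = ols("y ~ C(group) + x", data=df).fit()
rank_based = ols("y_rank ~ C(group) + x_rank", data=df).fit()
print(anova_lm(parametric, typ=2))
print(anova_lm(rank_based, typ=2))
```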
Analysis of Covariance (ANCOVA)
PASS Sample Size Software (NCSS.com), Chapter 591: Analysis of Covariance (ANCOVA).

Introduction: A common task in research is to compare the averages of two or more populations (groups). We might want to compare the income level of two regions, the nitrogen content of three lakes, or the effectiveness of four drugs. The one-way analysis of variance compares the means of two or more groups to determine if at least one mean is different from the others. The F test is used to determine statistical significance. Analysis of Covariance (ANCOVA) is an extension of the one-way analysis of variance model that adds quantitative variables (covariates). When used, it is assumed that their inclusion will reduce the size of the error variance and thus increase the power of the design.

Assumptions: Using the F test requires certain assumptions. One reason for the popularity of the F test is its robustness in the face of assumption violation. However, if an assumption is not even approximately met, the significance levels and the power of the F test are invalidated. Unfortunately, in practice it often happens that several assumptions are not met. This makes matters even worse. Hence, steps should be taken to check the assumptions before important decisions are made. The following assumptions are needed for a one-way analysis of variance:
1. The data are continuous (not discrete).
2. The data follow the normal probability distribution. Each group is normally distributed about the group mean.
3. The variances of the populations are equal.
4. The groups are independent. There is no relationship among the individuals in one group as compared to another. …
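Since the chapter stresses checking assumptions before trusting the F test, here is a small sketch of two common checks (equal variances via Levene's test, normality via Shapiro-Wilk) followed by the one-way F test, using simulated "three lakes" data; the group names and values are made up for illustration.

```python
# Quick assumption checks before a one-way F test: Levene for equal variances,
# Shapiro-Wilk for per-group normality. Simulated nitrogen-content data.
import numpy as np
from scipy.stats import levene, shapiro, f_oneway

rng = np.random.default_rng(7)
lake_a = rng.normal(3.0, 0.5, 30)
lake_b = rng.normal(3.4, 0.5, 30)
lake_c = rng.normal(2.9, 0.8, 30)   # larger spread on purpose

print("Levene (equal variances):", levene(lake_a, lake_b, lake_c))
for name, g in [("A", lake_a), ("B", lake_b), ("C", lake_c)]:
    print(f"Shapiro-Wilk, lake {name}:", shapiro(g))
print("One-way ANOVA F test:", f_oneway(lake_a, lake_b, lake_c))
```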
Contrast Coding in Multiple Regression Analysis: Strengths, Weaknesses, and Utility of Popular Coding Structures
Journal of Data Science 8 (2010), 61–73. Matthew J. Davis, Texas A&M University.

Abstract: The use of multiple regression analysis (MRA) has been on the rise over the last few decades, in part due to the realization that analysis of variance (ANOVA) statistics can be advantageously completed using MRA. Given the limitations of ANOVA strategies, it is argued that MRA is the better analysis; however, in order to use ANOVA in MRA, coding structures must be employed by the researcher, which can be confusing to understand. The present paper attempts to simplify this discussion by providing a description of the most popular coding structures, with emphasis on their strengths, limitations, and uses. A visual analysis of each of these strategies is also included along with all necessary steps to create the contrasts. Finally, a decision tree is presented that can be used by researchers to determine which coding structure to utilize in their current research project. Key words: Analysis of variance, contrast coding, multiple regression analysis.

According to Hinkle and Oliver (1986), multiple regression analysis (MRA) has begun to become one of the most widely used statistical analyses in educational research, and it can be assumed that this popularity and frequent usage is still rising. One of the reasons for its large usage is that analysis of variance (ANOVA) techniques can be calculated by the use of MRA, a principle described …
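As a minimal illustration of running ANOVA-style comparisons inside a regression, the sketch below builds contrast-coded predictors for three hypothetical groups and shows that the regression coefficients reproduce the intended mean comparisons; the specific codes and data are assumptions for illustration, not the paper's examples.

```python
# Contrast coding in regression: with a saturated design, the coefficients on
# the contrast columns equal the corresponding comparisons of group means.
# Three hypothetical groups; c1 compares A vs. mean(B, C), c2 compares B vs. C.
import numpy as np

rng = np.random.default_rng(9)
n = 30
means = {"A": 10.0, "B": 12.0, "C": 13.0}
y = np.concatenate([rng.normal(m, 2.0, n) for m in means.values()])

c1 = np.concatenate([np.full(n, 2 / 3), np.full(n, -1 / 3), np.full(n, -1 / 3)])
c2 = np.concatenate([np.zeros(n), np.full(n, 0.5), np.full(n, -0.5)])
X = np.column_stack([np.ones(3 * n), c1, c2])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# intercept ~ grand mean; beta[1] ~ mean(A) - mean(B, C); beta[2] ~ mean(B) - mean(C)
print(beta)
```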