Estimation of Moments and Quantiles Using Censored Data


WATER RESOURCES RESEARCH, VOL. 32, NO. 4, PAGES 1005-1012, APRIL 1996

Charles N. Kroll and Jery R. Stedinger
School of Civil and Environmental Engineering, Cornell University, Ithaca, New York

Abstract. Censored data sets are often encountered in water quality investigations and streamflow analyses. A Monte Carlo analysis examined the performance of three techniques for estimating the moments and quantiles of a distribution using censored data sets. These techniques include a lognormal maximum likelihood estimator (MLE), a log-probability plot regression estimator, and a new log-partial probability-weighted moment estimator. Data sets were generated from a number of distributions commonly used to describe water quality and water quantity variables. A "robust" fill-in method, which circumvents transformation bias in the real space moments, was implemented with all three estimation techniques to obtain a complete sample for computation of the sample mean and standard deviation. Regardless of the underlying distribution, the MLE generally performed as well as or better than the other estimators, though the moment and quantile estimators using all three techniques had comparable log-space root mean square errors (rmse) for censoring at or below the 20th percentile for sample sizes of n = 10, the 40th percentile for n = 25, and the 60th percentile for n = 50. Comparison of the log-space rmse and real-space rmse indicated that a log-space rmse was a better overall metric of estimator precision.

Introduction

When a data set contains some observations within a restricted range of values but otherwise not measured, it is called a censored data set [Cohen, 1991]. Censored data sets are commonly found in the field of water quality, where laboratory measurements of contaminant concentrations are often reported as "less than the detection limit." Censored data sets are also found in water quantity analyses when river discharges less than a measurement threshold level are reported as zero. In some regions, historical river discharge records report over half the annual minimum flows as zero [Hammett, 1984]. These discharges may have been zero, or they may have been between zero and the measurement threshold and thus reported as zero. Of concern is how to efficiently estimate moments, quantiles, and other descriptive statistics of the underlying continuous distribution using such censored data sets. The situation where all data below a fixed value are censored is referred to as type I censoring. With type I censoring, the number of values censored is a random variable. With type II censoring, a fixed number of data points are always censored and the censoring threshold is a random variable [David, 1981]. Censored water quality and water quantity data should resemble type I censoring because the censoring threshold is fixed by the measurement technology and the physical setting.

A number of studies have suggested the use of simple "replacement" techniques for estimating the mean and standard deviation of type I censored data sets [Cohen and Ryan, 1989; Newman et al., 1990]. These techniques replace all the censored observations with some value between zero and the detection limit. Gilliom and Helsel [1986] and Helsel and Gilliom [1986] examined the performance of a variety of techniques to estimate the mean, standard deviation, median, and interquartile range using type I censored water quality data. They showed that more sophisticated statistical techniques performed better than these simple "replacement" methods. In particular, the log-probability plot regression method provided the best estimators of the mean and standard deviation, while the lognormal maximum likelihood method provided the best estimators of the median and interquartile range. Estimation of quantiles other than the median was not considered by Gilliom and Helsel. Helsel and Cohn [1988] extended Gilliom and Helsel's work to data sets with several censoring thresholds.

This study extends the work of Gilliom and Helsel to the estimation of several quantiles and considers new estimators. The log-probability plot regression method (LPPR) and the lognormal maximum likelihood method (MLE) are evaluated along with a new method based on partial probability-weighted moments (PPWM). As with the MLE and LPPR estimators, our PPWM estimator assumes that the data are described by a lognormal distribution. It applies to the logarithms of the flow data the censored-sample probability-weighted moment (PWM) estimators derived by Wang [1990] to obtain the parameters of a lognormal distribution. Wang employed his estimators in real space to fit a generalized extreme value (GEV) distribution. The performance of probability-weighted moment estimators with complete samples has been examined for a number of distributions, and in many cases PWM estimators of the higher moments and quantiles of a distribution have performed favorably compared with product-moment and maximum likelihood estimators [Landwehr et al., 1979; Hosking et al., 1985; Hosking and Wallis, 1987]. PWM estimators are linear combinations of the observations and thus are less sensitive to the largest observations in a sample than product-moment estimators that square and cube the observations. The merit of probability-weighted moment estimators with censored samples has yet to be analyzed.
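The simple "replacement" techniques critiqued above are easy to reproduce. The sketch below is a hypothetical illustration in Python (the paper supplies no code; the seed, sample size, and detection limit are arbitrary choices): it generates a type I censored lognormal sample and shows how strongly the substituted value drives the estimated mean.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "contaminant concentration" population: lognormal with
# log-space mean 0.0 and log-space standard deviation 1.0.
mu_y, sigma_y, n = 0.0, 1.0, 50
x = rng.lognormal(mu_y, sigma_y, size=n)

# Type I censoring: a fixed detection limit T; values below it are only
# reported as "< T", so the number censored is a random variable.
T = 0.8
detected = x[x >= T]
c = n - detected.size           # number of censored observations

# Simple "replacement" estimators of the mean: substitute 0, T/2, or T
# for every censored value and average the completed sample.
true_mean = np.exp(mu_y + sigma_y**2 / 2)   # lognormal mean = e^(mu + s^2/2)
for sub in (0.0, T / 2, T):
    filled = np.concatenate([np.full(c, sub), detected])
    print(f"substitute {sub:4.1f}: mean estimate = {filled.mean():.3f} "
          f"(true mean = {true_mean:.3f})")
```

The three substitution choices bracket a noticeable range of mean estimates, which is the bias problem that motivates the more formal estimators discussed in this paper.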
This study focuses on estimation of the mean, standard deviation, and interquartile range of a distribution, as well as quantiles with nonexceedance probabilities of 10% and 90%. A "robust" fill-in method is implemented with each estimation technique to obtain a complete sample for the computation of the sample mean and variance. Gilliom and Helsel [1986] used this "robust" fill-in method only with a log-probability plot regression estimator. In this study the method is also used with the lognormal maximum likelihood and partial probability-weighted moment estimators. Two different metrics are used to compare estimators. Data are generated from distributions commonly observed in the water quality and water quantity fields, including three distributions not considered by Gilliom and Helsel; their extreme case for the gamma distribution (coefficient of variation = 2.0) was omitted.

Copyright 1996 by the American Geophysical Union.
Paper number 95WR03294.
0043-1397/96/95WR-03294$05.00

Estimation Techniques

All three estimation techniques make the assumption that the underlying distribution of the data is lognormal. Helsel and Hirsch [1992, p. 360] observe that the lognormal distribution has a flexible shape and provides a reasonable description of many positive random variables with positively skewed distributions. The lognormal distribution has been shown to be a good descriptor of low river flows [Vogel and Kroll, 1989] and water quality data [Gilliom and Helsel, 1986].

Lognormal Maximum Likelihood Estimator

Consider an ordered censored data set X_1 ≤ X_2 ≤ ⋯ ≤ X_c ≤ X_{c+1} ≤ ⋯ ≤ X_n, where the first c observations are censored and reported only as below some fixed measurement threshold. Let Y_i = ln(X_i) and let T be the log of the measurement threshold. Assuming that X is lognormally distributed and independent, the likelihood function for the data is

    L = [n! / (c! (n − c)!)] [Φ((T − μ_Y)/σ_Y)]^c ∏_{i=c+1}^{n} (1/σ_Y) φ((Y_i − μ_Y)/σ_Y)        (1)

where Φ and φ are the distribution and density functions of a standard normal variate, μ_Y is the mean of the log-transformed data, and σ_Y is the standard deviation of the log-transformed data. By taking the logarithm of (1) and setting the partial derivatives with respect to μ_Y and σ_Y to zero, one can solve for the maximum likelihood estimators (MLE) μ̂_Y and σ̂_Y [Cohen, 1991].

Log-Probability Plot Regression Estimator

For the data above the threshold, the logarithms of the ordered values, Y_(i), are regressed against the corresponding "normal scores" according to the model

    Y_(i) = μ_Y + σ_Y Φ⁻¹(p_i)        (3)

where Φ⁻¹(p_i) is the inverse cumulative normal distribution function evaluated at p_i, and the resulting μ̂_Y and σ̂_Y, obtained using ordinary least squares regression, are estimators of the mean and standard deviation of the log-transformed data. These LPPR estimators are similar to those derived by Gupta [1952] and have been implemented in a number of studies of estimation with censored data sets [Gilliom and Helsel, 1986; Helsel and Gilliom, 1986; Helsel and Cohn, 1988; Helsel, 1990].

Partial Probability Weighted Moments

For a variable Y, probability-weighted moments are defined as

    β_r = E{Y [F(Y)]^r}        (4)

where F(Y) is the cumulative distribution function (CDF) for Y. For a continuous random variable, PWMs can be written

    β_r = ∫_0^1 Y(F) F^r dF        (5)

where Y(F) is the inverse CDF of Y evaluated at the probability F. For a censored sample, Wang [1990] defined a PPWM as

    β′_r = ∫_{P_T}^1 Y(F) F^r dF        (6)

where P_T = F(T), the probability of censoring, and T is the censoring threshold. Assuming the data X are lognormally distributed and Y = log(X), then Y is normally distributed and T′, the log of the censoring threshold, is the corresponding threshold for Y. For the normal distribution the inverse CDF for a random variable Y is Y(F) = μ_Y + σ_Y Φ⁻¹(F).
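To make the MLE concrete, here is a minimal Python sketch (not from the paper; scipy, the numerical optimizer, and the true parameters, seed, and threshold are all assumptions of this illustration) that maximizes the censored log-likelihood corresponding to (1) numerically rather than solving the score equations in closed form.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)

# Hypothetical sample in log space: true mu_Y = 1.0, sigma_Y = 0.5.
mu_true, sigma_true, n = 1.0, 0.5, 100
y = rng.normal(mu_true, sigma_true, size=n)
T = 0.8                                  # log of the measurement threshold
y_obs = y[y >= T]                        # uncensored log observations
c = n - y_obs.size                       # only the count of censored values

def neg_log_lik(theta):
    """Negative log of likelihood (1), dropping the combinatorial constant."""
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    cens = c * stats.norm.logcdf((T - mu) / sigma)
    obs = np.sum(stats.norm.logpdf((y_obs - mu) / sigma) - np.log(sigma))
    return -(cens + obs)

res = optimize.minimize(neg_log_lik, x0=[y_obs.mean(), y_obs.std()],
                        method="Nelder-Mead")
mu_hat, sigma_hat = res.x
print(f"MLE: mu_Y = {mu_hat:.3f}, sigma_Y = {sigma_hat:.3f}")
```

Starting from the naive above-threshold mean and standard deviation, the optimizer pulls both estimates back toward the true values by accounting for the c censored observations through the Φ((T − μ_Y)/σ_Y)^c term.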
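The LPPR fit of model (3) reduces to ordinary least squares of the above-threshold log values on their normal scores. A hedged sketch follows, assuming Blom plotting positions p_i = (i − 3/8)/(n + 1/4); the paper's exact plotting-position formula is not recoverable from this text, so that choice is an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical censored log-space sample: true mu_Y = 1.0, sigma_Y = 0.5.
mu_true, sigma_true, n = 1.0, 0.5, 100
y = np.sort(rng.normal(mu_true, sigma_true, size=n))
T = 0.8
above = y >= T
ranks = np.arange(1, n + 1)[above]       # ranks within the full sample of n

# Blom plotting positions (an assumed choice) and their normal scores.
p = (ranks - 0.375) / (n + 0.25)
z = stats.norm.ppf(p)

# OLS fit of model (3): Y_(i) = mu_Y + sigma_Y * z_i.
sigma_hat, mu_hat = np.polyfit(z, y[above], deg=1)
print(f"LPPR: mu_Y = {mu_hat:.3f}, sigma_Y = {sigma_hat:.3f}")
```

Note that the plotting positions use the ranks in the full sample of size n, so the censored observations still inform the fit through their count even though their values never enter the regression.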
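Definition (6) can be checked numerically. The sketch below is illustrative only: Wang [1990] derives sample estimators of the PPWMs, whereas here the population integrals are evaluated by quadrature for a normal Y(F) = μ_Y + σ_Y Φ⁻¹(F), and the parameters are recovered from the fact that β′_r is linear in μ_Y and σ_Y, so two partial moments determine them through a 2×2 linear system.

```python
import numpy as np
from scipy import integrate, stats

mu_y, sigma_y = 1.0, 0.5      # assumed normal (log-space) parameters
P_T = 0.3                     # probability of censoring, P_T = F(T)

def ppwm(r, mu, sigma):
    """beta'_r = integral from P_T to 1 of Y(F) F^r dF, eq. (6)."""
    val, _ = integrate.quad(
        lambda F: (mu + sigma * stats.norm.ppf(F)) * F**r, P_T, 1)
    return val

b0, b1 = ppwm(0, mu_y, sigma_y), ppwm(1, mu_y, sigma_y)

# beta'_r = mu * int F^r dF + sigma * int ppf(F) F^r dF over (P_T, 1),
# so the two partial moments give a linear system in (mu, sigma).
a = [[integrate.quad(lambda F: F**r, P_T, 1)[0],
      integrate.quad(lambda F: stats.norm.ppf(F) * F**r, P_T, 1)[0]]
     for r in (0, 1)]
mu_rec, sigma_rec = np.linalg.solve(np.array(a), np.array([b0, b1]))
print(f"recovered mu_Y = {mu_rec:.3f}, sigma_Y = {sigma_rec:.3f}")
```

Because only the range F > P_T enters the integrals, the censored portion of the distribution never needs to be observed, which is precisely what makes PPWMs usable with type I censored samples.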
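The "robust" fill-in step used with all three estimators can be sketched as follows. This is a hypothetical illustration: the fitted log-space parameters are stand-in values (in practice they would come from the MLE, LPPR, or PPWM fit), and the rule for spacing fill-in quantiles below the threshold is an assumption of this sketch, not the paper's formula.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical censored lognormal sample.
n, T = 50, 2.0
x = rng.lognormal(1.0, 0.5, size=n)
detected = x[x >= T]
c = n - detected.size
mu_hat, sigma_hat = 1.0, 0.5      # stand-ins for fitted log-space parameters

# "Robust" fill-in: replace the c censored values with quantiles of the
# fitted lognormal below the censoring probability P_T, then compute
# real-space moments from the completed sample.  This avoids the
# transformation bias of exponentiating log-space moment estimates.
P_T = stats.norm.cdf((np.log(T) - mu_hat) / sigma_hat)
p_fill = P_T * (np.arange(1, c + 1) - 0.5) / c      # assumed spacing rule
filled = np.exp(mu_hat + sigma_hat * stats.norm.ppf(p_fill))
complete = np.concatenate([filled, detected])
print(f"fill-in mean = {complete.mean():.3f}, sd = {complete.std(ddof=1):.3f}")
```

Every filled value lies below T by construction, so the completed sample honors the censoring while letting the ordinary sample mean and standard deviation be computed in real space.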