Model selection curves for survival analysis with accelerated failure time models

Karami, J. H.*, Luo, K., Fung, T.
Macquarie University, Sydney, Australia – [email protected], [email protected], [email protected]

Abstract

Many model selection processes involve minimizing a loss function of the data over a set of models. A recently introduced approach is model selection curves, in which the selection criterion is expressed as a function of the penalty multiplier in a linear (mixed) model or generalized linear model. In this article, we adapt model selection curves to accelerated failure time (AFT) models of survival data. In our simulation study, we found that for data with a small sample size and a high proportion of censoring, the model selection curves approach outperformed the traditional model selection criteria, the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). In the other sample size and censoring settings considered, the model selection curves correctly identified the true model, whether it was a full or a reduced model. Moreover, through bootstrapping, we show that model selection curves can be used to enhance the model selection process in AFT models.

Keywords: log-likelihood function; penalty function; penalty multiplier; survival data.

1. Introduction

Many model selection strategies are based on minimizing $loss + penalty$ over a set of models. The penalty term usually consists of a penalty multiplier $\lambda$ and a penalty function $f_n(p_\alpha)$, where $p_\alpha$ is the number of parameters in model $\alpha$ and $n$ is the sample size. For a fixed penalty function, say $f_n(p_\alpha) = p_\alpha$, restricting ourselves to a single value of $\lambda$ yields a specific model selection criterion. For example, $\lambda = 2$ gives AIC (Akaike, 1974) and $\lambda = \log(n)$ gives BIC (Schwarz, 1978). Other model selection criteria can be obtained using other values of $\lambda$, or other penalty functions.

Recently, Müller and Welsh (2010) studied model selection criteria as a function of the penalty multiplier, which resulted in what are known as model selection curves. Their study illustrated the model selection curves approach for linear regression models, using the residual sum of squares as the loss function. This approach allows researchers to study and rank candidate models before selecting one, and the stability of the selected model can be assessed over a range of $\lambda$ values using the curves. It therefore has the potential to outperform model selection criteria that are based on a single value of the penalty multiplier.

For many models, including the accelerated failure time (AFT) model in survival analysis, it is common to use the log-likelihood to compare models; see, for example, Liang and Zou (2008) and Murray et al. (2012). Our article concentrates on using the log-likelihood ($l$) to measure the descriptive ability of an AFT model, so the log-likelihood is a natural choice of loss function. Other loss functions, such as the mean squared error, could also be used.

Suppose we wish to choose one model $\alpha$ from all candidate models in $\mathcal{A}$ using model selection curves. The $loss + penalty$ function can be expressed as

$$\mathcal{M}(\lambda; \alpha) = -2l + \lambda p_\alpha, \quad \lambda \ge 0, \; \alpha \in \mathcal{A}. \quad (1)$$

For each specified $\lambda$, the model chosen is the one that minimizes $\mathcal{M}(\lambda; \alpha)$ over $\alpha \in \mathcal{A}$ in (1).
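As a minimal illustration of (1) (our sketch, not part of the paper; the $-2l$ values, parameter counts and sample size below are hypothetical), the following Python snippet evaluates $\mathcal{M}(\lambda; \alpha)$ on a grid of $\lambda$ values and shows that AIC and BIC arise as the criterion at the single points $\lambda = 2$ and $\lambda = \log(n)$, while the preferred model can change as $\lambda$ varies:

```python
import numpy as np

# Illustrative fitted models (hypothetical values, not from the paper):
# each entry is -2 * maximized log-likelihood, paired with a parameter count.
neg2ll = np.array([210.4, 203.1, 202.8, 202.5])
p_alpha = np.array([2, 3, 4, 5])
n = 50  # sample size

lam = np.linspace(0.0, 4 * np.log(n), 401)             # lambda grid
M = neg2ll[None, :] + lam[:, None] * p_alpha[None, :]  # M(lambda; alpha), eq. (1)

# AIC and BIC are single points on these curves:
print("AIC choice:", np.argmin(neg2ll + 2 * p_alpha))          # lambda = 2
print("BIC choice:", np.argmin(neg2ll + np.log(n) * p_alpha))  # lambda = log(n)
print("models preferred along the grid:", np.unique(np.argmin(M, axis=1)))
```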
Now we can define a rank function:

$$r(\lambda; \alpha) = \mathrm{rank}\,(\mathcal{M}(\lambda; \alpha)), \quad (2)$$

which is computed at each $\lambda$. Note that rank functions are step functions, pairs of which have jumps at the values of $\lambda$ where the ranks of models change. Assuming that no ties of $\mathcal{M}(\lambda; \alpha)$ occur for $\alpha \in \mathcal{A}$, the rank-$k$ ($1 \le k \le m$, where $m$ is the number of models in $\mathcal{A}$) model selection curves can be defined by

$$C^{(k)}(\lambda; \mathcal{A}) = \max\{\mathcal{M}(\lambda; \alpha) : \alpha \in \mathcal{A} \,\wedge\, r(\lambda; \alpha) \le k\}. \quad (3)$$

This definition can be extended to tied $\mathcal{M}(\lambda; \alpha)$'s for $\alpha \in \mathcal{A}$ by considering the continuous locus of $C^{(k)}(\lambda; \mathcal{A})$ over $\lambda > 0$. The rank-1 model selection curve is the lower enveloping curve, defined as

$$C^{(1)}(\lambda; \mathcal{A}) = \min\{\mathcal{M}(\lambda; \alpha) : \alpha \in \mathcal{A}\}. \quad (4)$$

This will be referred to as the model selection curve. If perpendiculars are drawn from the vertices of the lower enveloping curve to the horizontal axis, the resulting line segments on the horizontal axis are the catheti corresponding to the models that appear on the model selection curve. Model $\alpha$ should therefore be selected if it has the longest cathetus in $C^{(1)}(\lambda; \mathcal{A})$, i.e., in the rank-1 model selection curve; a code sketch of this selection rule follows below.

Let us now illustrate how to construct and use model selection curves with a simple example involving only four models, i.e., $\alpha = 1, 2, 3, 4$. The plot of $\mathcal{M}(\lambda; \alpha)$ in equation (1) against $\lambda$ is shown on the top left of Figure 1, where $M_1$ to $M_4$ correspond to models $\alpha = 1$ to $4$, respectively. The plot of the ranks $r(\lambda; \alpha)$ in (2) against $\lambda$ is on the top right, i.e., Figure 1(ii). Figure 1(iii) shows the four rank selection curves, based on (3), and the model selection curve, obtained via (4), is presented on the bottom right of Figure 1. According to the model selection curve, Figure 1(iv), and also with reference to Figure 1(ii), model 2 (i.e., $M_2$) has the longest cathetus and is chosen from the four models considered.

In this paper, we apply model selection curves to accelerated failure time models of right-censored survival data using a simulation study. The rest of the paper is organized as follows. In Section 2, we briefly discuss model selection curves in the case of Weibull AFT models. In Section 3, a simulation study with different combinations of coefficients, censoring proportions and sample sizes is presented and discussed. In Section 4, we conclude with our findings.

2. Model selection curves in the Weibull AFT model

A general form of an AFT model is given by

$$\log T = \alpha_0 + \alpha_1 x_1 + \cdots + \alpha_p x_p + \tau W, \quad (5)$$

where $T$ is the survival time with a specific distribution, $W$ is a random error term with a known density (e.g., the extreme value distribution), the $x_j$'s are predictors and the $\alpha_j$'s are the coefficients of the $x_j$ ($j = 1, 2, \cdots, p$), $\alpha_0$ is the model intercept and $\tau$ ($\tau > 0$) is a scale parameter. Let $\{(y_i, \delta_i), i = 1, 2, \cdots, n\}$ denote the survival data with right censoring, where $y_i = \min(T_i, C_i)$, $C_i$ is the censoring time for the $i$th individual and $\delta_i = I(T_i \le C_i)$ is the censoring indicator.
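The construction in (1)-(4) translates directly into code. The sketch below is an illustrative Python implementation (not the authors' software): it computes the lower enveloping curve $C^{(1)}(\lambda; \mathcal{A})$ on a $\lambda$ grid and selects the model with the longest cathetus, i.e., the widest interval of $\lambda$ over which that model minimizes $\mathcal{M}(\lambda; \alpha)$. Cathetus lengths are approximated by counting grid points, which suffices for a reasonably fine grid.

```python
import numpy as np

def model_selection_curve(neg2ll, p_alpha, lam_grid):
    """Lower envelope C^(1)(lambda; A) of eq. (4) and longest-cathetus choice.

    neg2ll[j]  : -2 * maximized log-likelihood of model j
    p_alpha[j] : number of parameters of model j
    Returns (index of selected model, rank-1 model per grid point, envelope).
    """
    neg2ll, p_alpha = np.asarray(neg2ll), np.asarray(p_alpha)
    M = neg2ll[None, :] + lam_grid[:, None] * p_alpha[None, :]  # eq. (1)
    winner = np.argmin(M, axis=1)                  # rank-1 model at each lambda
    envelope = M[np.arange(len(lam_grid)), winner] # eq. (4)
    # Cathetus of model j: total lambda-length on which it is rank 1
    # (approximated by grid-point counts; assumes an equally spaced grid).
    step = lam_grid[1] - lam_grid[0]
    cathetus = np.array([step * np.sum(winner == j) for j in range(len(neg2ll))])
    return int(np.argmax(cathetus)), winner, envelope

# Usage with the illustrative values from the previous sketch:
n = 50
lam_grid = np.linspace(0.0, 4 * np.log(n), 401)
chosen, winner, env = model_selection_curve([210.4, 203.1, 202.8, 202.5],
                                            [2, 3, 4, 5], lam_grid)
print("Model selected by longest cathetus:", chosen)
```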
Assuming the pairs $(y_i, \delta_i)$, $i = 1, 2, \cdots, n$, are independent, a general form of the likelihood function for the survival data is

$$L = \prod_{i=1}^{n} f(y_i)^{\delta_i} S(y_i)^{1-\delta_i},$$

and thus the log-likelihood function is

$$l(\boldsymbol{\theta}) = \sum_{i=1}^{n} \{\delta_i \log f(y_i; x_i) + (1-\delta_i) \log S(y_i)\},$$

where $\boldsymbol{\theta} = (\alpha_0, \alpha^T, \tau)^T$, $\alpha = (\alpha_1, \alpha_2, \cdots, \alpha_p)^T$, $f(\cdot)$ is the density function and $S(\cdot)$ is the survival function. The $l(\boldsymbol{\theta})$ can be expressed in terms of the density function $g_0(w)$ and the survival function $G_0(w)$ of $W$ as

$$l(\boldsymbol{\theta}) = \sum_{i=1}^{n} \left[\delta_i \log\left\{\frac{1}{y_i \tau}\, g_0(w_i)\right\} + (1-\delta_i) \log G_0(w_i)\right],$$

or

$$l(\boldsymbol{\theta}) = -\log\tau \sum_{i=1}^{n} \delta_i + \sum_{i=1}^{n} \{\delta_i \log g_0(w_i) + (1-\delta_i) \log G_0(w_i)\} - \sum_{i=1}^{n} \delta_i \log y_i, \quad (6)$$

where $w_i = (\log y_i - \alpha_0 - \alpha^T x_i)/\tau$.

A Weibull AFT model arises if the error term $W$ in model (5) follows an extreme value distribution, with density and survival functions $g_0(w) = \exp[w - \exp(w)]$ and $G_0(w) = \exp[-\exp(w)]$, $w \in \mathbb{R}$, respectively. Plugging these two functions into equation (6) gives the log-likelihood for the Weibull AFT model as

$$l(\boldsymbol{\theta}) = \left(\frac{1-\tau}{\tau}\right)\sum_{i=1}^{n} \delta_i \log y_i - \sum_{i=1}^{n} \exp\left(\frac{\log y_i - \alpha_0 - \alpha^T x_i}{\tau}\right) - \sum_{i=1}^{n} \delta_i \left(\log\tau + \frac{\alpha_0 + \alpha^T x_i}{\tau}\right).$$

The maximum likelihood estimates (MLE) of the parameters can be obtained from the above equation using an iterative procedure (e.g., Newton-Raphson), and the model selection curves for the Weibull AFT model can then be constructed from

$$\mathcal{M}(\lambda; \alpha) = -2l(\hat{\boldsymbol{\theta}}) + \lambda p_\alpha, \quad \lambda > 0, \quad (7)$$

where $\hat{\boldsymbol{\theta}}$ is the MLE of $\boldsymbol{\theta}$. As expected, both AIC and BIC appear as single points on the model selection curves. To construct the curves, we choose $\lambda \in [0, 4\log(n)]$ in (7), as in Müller and Welsh (2010). This range of $\lambda$ covers most existing model selection criteria that are based on single values of $\lambda$, such as AIC and BIC.

3. A simulation study

For our simulation study, we consider a Weibull AFT model with four predictors, i.e.,

$$\log T_i = \alpha_0 + \alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_3 + \alpha_4 x_4 + \tau W_i, \quad i = 1, 2, \cdots, n.$$

Observations for the (weakly correlated) predictors $x_1, x_2, x_3, x_4$ are drawn from a multivariate normal distribution $MN(\mathbf{0}, \Sigma_0)$ with zero mean vector and dispersion matrix

$$\Sigma_0 = \begin{bmatrix} 1 & 0.001 & 0.001 & 0.001 \\ 0.001 & 1 & 0.001 & 0.001 \\ 0.001 & 0.001 & 1 & 0.001 \\ 0.001 & 0.001 & 0.001 & 1 \end{bmatrix}.$$

We considered different sample sizes ranging from 30 to 300 at various censoring proportions, such as 10% and 50%, along with different sets of coefficients: coefs1 = (0.1, 0, 0, 0.9, 0.8), coefs2 = (0.1, 0, 0, 0.9, 0), and coefs3 = (0.1, 1, 0.7, 0.9, 0.8). Both the uncensored and censored observations (i.e., $T_i$ and $C_i$) are drawn separately from Weibull distributions with specified shape and scale parameters.
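A compact Python sketch of this pipeline is given below, under the setup of Section 3. It is illustrative only: it uses a general-purpose quasi-Newton optimizer (BFGS) rather than Newton-Raphson, the censoring-time scale is a hypothetical tuning choice made here just to induce some censoring, and the two candidate models compared are arbitrary examples. It exploits the fact that if $E \sim \mathrm{Exp}(1)$ then $\log E$ has exactly the extreme value distribution with $G_0(w) = \exp[-\exp(w)]$.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def negloglik(theta, y, delta, X):
    """Negative Weibull AFT log-likelihood; tau is log-parameterized so tau > 0."""
    a0, a, tau = theta[0], theta[1:-1], np.exp(theta[-1])
    w = (np.log(y) - a0 - X @ a) / tau
    # l(theta) = sum_i delta_i * (w_i - log(y_i * tau)) - sum_i exp(w_i)
    return -(np.sum(delta * (w - np.log(y * tau))) - np.sum(np.exp(w)))

def fit(y, delta, X):
    res = minimize(negloglik, np.zeros(X.shape[1] + 2),
                   args=(y, delta, X), method="BFGS")
    # p_alpha counts intercept, slopes and scale; 2 * res.fun is -2 * l(theta-hat).
    return 2 * res.fun, X.shape[1] + 2

# Simulate right-censored data as in Section 3 (coefs1).
n, tau = 100, 0.5
Sigma0 = np.full((4, 4), 0.001) + np.diag(np.full(4, 0.999))
X = rng.multivariate_normal(np.zeros(4), Sigma0, size=n)
a0, a = 0.1, np.array([0.0, 0.0, 0.9, 0.8])
W = np.log(rng.exponential(1.0, n))            # standard extreme-value errors
T = np.exp(a0 + X @ a + tau * W)
C = rng.weibull(1.5, n) * np.quantile(T, 0.9)  # scale chosen ad hoc for censoring
y, delta = np.minimum(T, C), (T <= C).astype(float)

# Fit, e.g., the full model and a reduced model (predictors 3 and 4 only);
# the resulting (-2l, p_alpha) pairs feed the curve construction shown earlier.
print("full   (-2l, p):", fit(y, delta, X))
print("reduced(-2l, p):", fit(y, delta, X[:, [2, 3]]))
```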