
Meta-analysis of time-to-event data
Catrin Tudur Smith
[email protected]
MRC North West Hub for Trials Methodology Research, University of Liverpool
SMG course, Cardiff 2010

Contents
• Introduction to time-to-event data
• Meta-analysis of time-to-event data
• Estimating log(HR) and its variance
• Practical
• Other issues

Time-to-event data
• Data arising when we measure the length of time taken for an event to occur
• The event might be:
  – discharge from hospital
  – recurrence of tumour
  – remission of a disease
• The time starting point might be:
  – time of diagnosis
  – time of surgery
  – time of randomisation

Censoring
• The event is often not observed on all subjects
• Reasons for this might be:
  – drop-out
  – the study ends before the event has occurred
• However, we do know how long they were followed up for without the event being observed
• Individuals for whom the event is not observed are called censored

Examples of censoring
• A patient with cancer is still alive at the time of analysis. Time to death would be censored.
• A patient with epilepsy who has had no seizures since entry moves GP and is lost to follow-up. Time to first seizure would be censored.
• A patient with epilepsy who has had no seizures since entry dies from a non-epilepsy-related cause. Time to first seizure would be censored.

Why special methods of analysis?
• Why not analyse the time to event as a continuous response variable?
  – Assuming censored observations are uncensored will underestimate the average survival time
  – Ignoring censored observations is inefficient

Why special methods of analysis?
• Why not analyse the time to event as a binary response variable?
• May be reasonable if:
  – the event is likely to occur very early on (e.g. acute liver failure)
  – the event is rare
  – lengths of follow-up are similar between patients
  – we are interested in whether the event occurs at all rather than the time to event
• But if:
  – an appreciable proportion of the patients do die, and
  – death may take a considerable time,
  then looking not only at how many patients died, but also at how long after treatment they died, gives a more sensitive assessment

Kaplan-Meier curves
• A Kaplan-Meier plot gives a graphical display of the survival function estimated from a set of data
• The curve starts at 1 (or 100%) at time 0: all patients are 'alive'
• The curve steps down each time an event occurs, and so tails off towards 0

Plotting the results

  Patient   Died?   Survival time   No. at risk   Survival proportion
  6         No      118
  10        Yes     152             11            0.91
  11        No      297
  4         No      336
  8         No      549
  7         Yes     614             7             0.78
  12        Yes     641             6             0.65
  2         No      752
  9         No      854
  1         Yes     910             3             0.43
  5         No      923
  3         Yes     1006            1             0

[Kaplan-Meier curve; numbers at risk at times 0, 152, 614, 641, 910 and 1006 are 12, 11, 7, 6, 3 and 1]

Interpreting the results
Median survival:
• The median survival time is the time beyond which 50% of the individuals in the population under study are expected to survive
• Read off the survival time corresponding to a cumulative survival of 0.5
Shape of the curve:
• Poor survival is reflected by a curve that drops relatively rapidly towards 0
• The curve will only reach 0 if the patient with the longest follow-up was not censored
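The survival proportions in the table above follow from the Kaplan-Meier product-limit rule: at each event time the current estimate is multiplied by (1 minus deaths divided by the number at risk). Below is a minimal sketch of that calculation using the 12-patient example; the function name and the use of plain tuples are illustrative choices, not taken from the slides.

```python
# Kaplan-Meier product-limit estimate for the 12-patient example above.
# Each record is (survival time in days, died?), as in the "Plotting the results" table.
data = [
    (118, False), (152, True), (297, False), (336, False), (549, False),
    (614, True), (641, True), (752, False), (854, False), (910, True),
    (923, False), (1006, True),
]

def kaplan_meier(records):
    """Return a list of (event time, number at risk, survival proportion)."""
    records = sorted(records)              # order by follow-up time
    at_risk = len(records)
    survival = 1.0
    steps = []
    for time, died in records:
        if died:
            survival *= 1 - 1 / at_risk    # curve steps down at each event
            steps.append((time, at_risk, round(survival, 2)))
        at_risk -= 1                       # event or censored, one fewer remains at risk
    return steps

for time, n, s in kaplan_meier(data):
    print(f"t={time:4d}  at risk={n:2d}  S(t)={s}")
# Reproduces the table: 0.91 at 152, 0.78 at 614, 0.65 at 641, 0.43 at 910, 0 at 1006.
```

Reading off the time at which the estimated survival first drops below 0.5 gives the median survival time described above.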
Accuracy of the K-M estimates
• The curve is only an estimate of 'true' survival
• We can calculate confidence intervals for the curve
• These give an area in which the true survival curve lies
• Note that the confidence intervals get wider towards the right-hand side of the graph
• This is because the sample size gets smaller as subjects either die or drop out
• The larger the sample size, the more accurately the K-M curve will estimate true survival

[Figure: Kaplan-Meier curve with 95% confidence intervals; survival probability against survival time (days)]

The Logrank test
• The Logrank test is a simple statistical test to compare the time to event of two groups
• It takes censoring into account, is non-parametric, and compares the groups over the whole time period

The logrank test cont'd…
• The logrank test compares the total number of deaths observed with the number of deaths we would expect assuming that there is no group effect
• If deaths occur in the sample at the time points t_1, …, t_k, then the expected number of deaths e_j at time t_j in group A is:

  e_j = (no. of deaths in sample at t_j) × (no. at risk in group A at t_j) / (no. at risk in sample at t_j)

• Then the total number of deaths expected for group A, E_A, is:

  E_A = e_1 + e_2 + … + e_k

• The logrank test looks at whether E_A is significantly different from the observed number of deaths in group A. If it is, this provides evidence that group is associated with survival. (A small worked sketch of this calculation follows the Cox model slides below.)

The hazard ratio
• The hazard is the chance that, at any given moment, the event will occur, given that it hasn't already done so
• The hazard ratio (HR) is a measure of the relative survival in two groups
• It is a ratio of the hazard for one group compared to another
• Suppose that we wish to compare group B relative to group A:
  – 0 < HR < 1: group B are at a decreased hazard compared to group A
  – HR = 1: the hazard is the same for both groups
  – HR > 1: group B are at an increased hazard compared to group A
• Note that since the HR is a ratio, a HR of 0.5 means a halving of risk, and a HR of 2 means a doubling of risk

More on the hazard ratio
• The Cox proportional hazards (PH) regression model is the most commonly used regression model
• The hazard is modelled with the equation:

  h(t) = h_0(t) exp(b_1 x_1 + b_2 x_2 + … + b_k x_k)

  where h_0(t) is the underlying hazard, x_1, …, x_k are risk factors (covariates), and b_1, …, b_k are parameters to be estimated, related to the effect sizes
• So, we assume that the hazard function is partly described by an underlying hazard, and partly by the contribution of certain risk factors

Interpretation of b_1, b_2 …
• Find the hazard of death for a person on Treatment (x_1 = 1) compared to Control (x_1 = 0), assuming they are alike for all other covariates (x_2, x_3, etc.)
  – Hazard rate (risk of death) in the Treatment group at time t: h(t) = h_0(t) exp(b_1 × 1) = h_0(t) exp(b_1)
  – Hazard rate (risk of death) in the Control group at time t: h(t) = h_0(t) exp(b_1 × 0) = h_0(t) × 1
  – The hazard ratio is therefore:

    HR = h_0(t) exp(b_1) / (h_0(t) × 1) = exp(b_1)
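The logrank expectation above can be computed directly from two sets of follow-up times. The sketch below does this in Python for two invented groups (the data, the function name and the return values are illustrative only, not from the slides): it applies the e_j formula at each event time and sums to get E_A, which is then compared with the observed count.

```python
# Expected deaths under "no group effect", following the e_j formula on the logrank slide:
# e_j = (deaths at t_j) x (at risk in group A at t_j) / (at risk in sample at t_j).
# The two small groups below are made-up data used purely for illustration.
group_a = [(5, True), (8, False), (12, True), (20, False), (25, True)]
group_b = [(6, True), (9, True), (11, True), (15, False), (18, True)]

def logrank_expectation(group_a, group_b):
    """Return (observed deaths in A, expected deaths in A, E_A)."""
    everyone = [(t, d, "A") for t, d in group_a] + [(t, d, "B") for t, d in group_b]
    event_times = sorted({t for t, d, _ in everyone if d})
    observed_a = sum(d for t, d in group_a)
    e_a = 0.0
    for tj in event_times:
        deaths_at_tj = sum(1 for t, d, _ in everyone if d and t == tj)
        at_risk_all = sum(1 for t, _, _ in everyone if t >= tj)
        at_risk_a = sum(1 for t, _, g in everyone if g == "A" and t >= tj)
        e_a += deaths_at_tj * at_risk_a / at_risk_all   # e_j for group A
    return observed_a, e_a

obs_a, exp_a = logrank_expectation(group_a, group_b)
print(f"Group A: observed deaths = {obs_a}, expected under no group effect = {exp_a:.2f}")
# A large gap between observed and expected suggests that group is associated with survival.
```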
Meta-analysis of TTE data
• Suppose that there are K trials, and for each trial i = 1, 2, …, K, an estimate of the log hazard ratio, log(HR_i), and its variance, v_i, are available
• An estimate of the overall log hazard ratio across trials and its variance are given by the inverse-variance weighted average:

  log(HR) = Σ_i [log(HR_i) / v_i] / Σ_i [1 / v_i],   Var[log(HR)] = 1 / Σ_i [1 / v_i]

Meta-analysis of TTE data: direct estimates

Cox model coefficient and SE
• The report may present results from the Cox regression model
• A direct estimate of log(HR_i) and its variance (or standard error) can then be used:

  log(HR_i) = b_i,   v_i = SE(b_i)²     (1)

HR with confidence interval
• log(HR_i) can be taken directly from the reported hazard ratio and its variance estimated from the width of the confidence interval:

  v_i = [ (UPPCI_i − LOWCI_i) / (2 Φ⁻¹(1 − α/2)) ]²     (2)

  where UPPCI_i and LOWCI_i are the upper and lower confidence limits for log(HR_i), α is the significance level of the interval, and Φ is the cumulative distribution function of the Normal distribution

P-value
• Indirectly, log(HR_i) and its variance can be estimated from the reported p-value of the logrank test via the observed minus expected number of events, O_ri − E_ri, and its variance, V_ri:

  O_ri − E_ri = √V_ri × Φ⁻¹(1 − p_i / 2),   log(HR_i) = (O_ri − E_ri) / V_ri,   v_i = 1 / V_ri

  where p_i is the reported (two-sided) p-value associated with the Mantel-Haenszel version of the logrank statistic, Φ is the cumulative distribution function of the Normal distribution, and the sign of O_ri − E_ri follows the direction of the observed effect

p-value (balanced randomisation)
• With balanced randomisation, V_ri can be approximated from the total observed number of events across both groups, O_i:

  V_ri ≈ O_i / 4     (3)

• If the number of events in each group is reported:

  V_ri ≈ O_ri O_ci / (O_ri + O_ci)     (4)

  where O_ri and O_ci are the observed numbers of events in the research and control groups

p-value (unequal randomisation)
• If randomisation was unequal, V_ri can be approximated using the numbers of patients in the research and control groups, n_ri and n_ci:

  V_ri ≈ O_i n_ri n_ci / (n_ri + n_ci)²     (5)

Choice of V_ri
• Approximations (3) and (4) are identical when the numbers of events are equal in both arms
• Approximations (3) and (5) are identical when the numbers randomised are equal in both arms
• Approximation (4) requires the number of events in each group, which may not always be given
• Collette et al (1998) compared the three approximations by simulation:
  – all three provide estimates very close to those from IPD
  – approximation (4) is the most precise for trials with a low percentage of censoring
  – approximation (5) is preferred for trials with unequal sample sizes

Published survival curves
1. Estimating numbers at risk: Parmar et al, Statistics in Medicine 1998, 17:2815-34
2. Incorporating numbers at risk: Williamson et al, Statistics in Medicine 2002, 21:3337-51
• Example: Chemotherapy in pancreatic cancer: results of a controlled prospective randomised multicentre study. BMJ 1980; 281

Survival curves – Parmar et al
• Step 1: for each trial, split the time axis into T non-overlapping time intervals, chosen to limit the number of events within any time interval
• Step 2: for each arm and each time point, read off the corresponding survival probability from the published curve
  [Example values read from the curves: 0.65 and 0.42 in one arm, 0.39 and 0.24 in the other]
• Step 3: from reading the manuscript, estimate the minimum and maximum follow-up of patients
  – may be given directly
  – censoring tick marks on the curves
  – estimated from the dates of accrual and the date of submission, or perhaps publication, of the manuscript
• Step 4: calculate the number at risk, R(t_s), at the start of each interval [t_s, t_e] for each treatment group
  – for the first interval, R(0) = number of patients analysed in the relevant treatment group
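As a rough illustration of how the pieces above fit together, the sketch below converts each trial's reported summary, either a hazard ratio with confidence interval or a logrank p-value with event and patient counts, into log(HR_i) and v_i, and then pools with inverse-variance weights. The function names, the favours_research sign argument and all the trial numbers are assumptions made for illustration; they are not taken from the slides.

```python
# Pooling log hazard ratios across trials, as in the meta-analysis slides.
# Trial data below are invented, purely to illustrate the calculations.
from math import exp, log, sqrt
from statistics import NormalDist

def loghr_from_pvalue(p, events_total, n_research, n_control, favours_research=True):
    """Indirect estimate of log(HR) and its variance from a two-sided logrank p-value,
    using V = O * n_r * n_c / (n_r + n_c)**2, the unequal-randomisation approximation."""
    v = events_total * n_research * n_control / (n_research + n_control) ** 2
    o_minus_e = sqrt(v) * NormalDist().inv_cdf(1 - p / 2)
    if favours_research:                     # sign set by the reported direction of effect
        o_minus_e = -o_minus_e
    return o_minus_e / v, 1 / v              # log(HR_i), v_i

def loghr_from_ci(hr, low, upp, alpha=0.05):
    """Direct estimate of log(HR) and its variance from a reported HR and its CI."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return log(hr), ((log(upp) - log(low)) / (2 * z)) ** 2

# One trial reporting HR (95% CI), one reporting only a p-value with event counts.
estimates = [
    loghr_from_ci(hr=0.75, low=0.58, upp=0.97),
    loghr_from_pvalue(p=0.04, events_total=120, n_research=100, n_control=100),
]

weights = [1 / v for _, v in estimates]
pooled = sum(w * lhr for (lhr, _), w in zip(estimates, weights)) / sum(weights)
pooled_var = 1 / sum(weights)
print(f"Pooled HR = {exp(pooled):.2f}, "
      f"95% CI {exp(pooled - 1.96 * sqrt(pooled_var)):.2f} "
      f"to {exp(pooled + 1.96 * sqrt(pooled_var)):.2f}")
```

In practice each trial would use whichever of approximations (3) to (5) its report allows, following the guidance under "Choice of V_ri" above.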