Final Research Report

New Methods and Software for Designing Adaptive Clinical Trials of New Medical Treatments

Michael Rosenblum,1 PhD; Jon Arni Steingrimsson,2 PhD; Josh Betz,1 MS

Affiliations:
1 Department of Biostatistics, Johns Hopkins University, Baltimore, MD 21205, USA
2 Department of Biostatistics, Brown University, Providence, RI 02903, USA

Original Project Title: Innovative Randomized Trial Designs to Generate Stronger Evidence about Subpopulation Benefits and Harms
PCORI ID: ME-1306-03198
HSRProj ID: 20143600
Institution: Johns Hopkins University

To cite this document, please use: Rosenblum M, Steingrimsson JA, Betz J. (2019). New Methods and Software for Designing Adaptive Clinical Trials of New Medical Treatments. Washington, DC: Patient-Centered Outcomes Research Institute (PCORI). https://doi.org/10.25302/10.2019.ME.130603198

Table of Contents

Abstract
Background
    Specific Aims
Participation of patients and/or other stakeholders in the design and conduct of research and dissemination of findings
Methods
    Aim 1: Develop and evaluate adaptive enrichment designs for time-to-event and other delayed endpoints
    Aim 2: Conduct extensive simulation studies
    Aim 3: Produce user-friendly, free, open-source software to optimize our adaptive enrichment designs and compare performance versus standard designs
Discussion
    Study results in context
    Uptake of study results
    Study limitations
    Future research
Conclusions
References
Related Publications: Published Manuscripts
Acknowledgments
Abstract

Background: Standard clinical trial designs aim to determine whether a treatment is beneficial, on average, for a target population. Such trials can have low power if the treatment benefits only a subpopulation, e.g., one defined by disease severity, a biomarker, or a risk score at baseline. Randomized trial designs that adaptively change enrollment criteria during a trial, called adaptive enrichment designs, have the potential to provide improved information about which subpopulations benefit from new treatments.

Objectives: We aimed to (i) develop new adaptive enrichment designs and prove their key statistical properties; (ii) conduct simulations that mimic features of completed trial data sets in order to evaluate the new trial designs' performance (such as sample size, duration, power, and bias), using data sets from trials involving treatments for HIV, stroke, and heart failure; and (iii) develop user-friendly software for optimizing the performance of our new adaptive designs and comparing them with standard designs. The goal was to construct designs that satisfy power and Type I error requirements at the minimum cost in terms of expected sample size, i.e., the average sample size over a set of plausible scenarios. We also considered the maximum sample size, i.e., the number of participants enrolled if there is no early stopping.

Methods: We constructed new adaptive trial designs (including new rules for modifying enrollment and new procedures for testing multiple hypotheses) and proved key statistical properties such as control of the study-wide Type I error rate.

Results: In the simulation study involving stroke, the new adaptive design reduced expected sample size by 32% compared with standard designs; the tradeoff is that the maximum sample size was 22% larger for the adaptive design. In the simulation study involving the cardiac resynchronization device for treating heart failure, the adaptive design yielded a 25% reduction in expected sample size but an 8% increase in maximum sample size versus standard designs. In the simulation study involving HIV, the adaptive designs did not provide substantial benefits.

Conclusions: Optimized adaptive enrichment designs can reduce expected sample size compared with standard designs in some settings. For adaptive enrichment to add substantial value, a sufficient number of primary outcomes must be observed before enrollment is exhausted; this depends on the enrollment rate and the time from enrollment to observation of the primary outcome. Adaptive designs often involve tradeoffs, such as reduced expected sample size at the price of a greater maximum sample size compared with standard designs. Our software can reveal these tradeoffs and determine whether particular adaptive enrichment designs add substantial value for a given trial design problem, enabling trial statisticians to make informed decisions among design options. Our designs assumed that subpopulations are defined before the trial starts, which requires prior data and scientific understanding of who may be more likely to benefit from the treatment. The sample size required to determine treatment effects for subpopulations can be substantially greater than that required for the overall population.

Background

Adaptive designs involve preplanned rules for modifying how a trial is conducted based on accruing data. For example, adaptations could be made to the number of participants enrolled, the probability of being randomized to treatment or control, the inclusion criteria, or the length of follow-up. According to the Patient-Centered Outcomes Research Institute (PCORI) Methodology Report,1 "Adaptive designs are particularly appealing for PCOR because they could maintain many of the advantages of randomized clinical trials while minimizing some of the disadvantages." We focus on one type of adaptive design: adaptive enrichment designs. Adaptive enrichment designs involve preplanned rules for modifying enrollment criteria in an ongoing trial.2 They typically involve multiple preplanned stages, each ending with an analysis of the cumulative data and a decision about enrollment in the subsequent stage. These designs have the potential to reveal more about treatment effects in subpopulations.3 For example, enrollment of a subpopulation for which there is sufficient evidence of treatment efficacy, futility, or harm could be stopped, while enrollment continues for the remaining subpopulations.
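To make this kind of interim decision concrete, the following is a minimal sketch of a stage 1 enrollment rule for 2 subpopulations. It is not the authors' actual (optimized) decision rule: it uses only a single futility threshold on subpopulation-specific z-statistics, and the threshold value is an illustrative placeholder.

```python
# Hedged sketch of a stage-1 enrollment decision in a 2-subpopulation
# adaptive enrichment design. The futility bound is an illustrative
# placeholder, not an optimized boundary from the report.

def stage1_decision(z1: float, z2: float,
                    futility_bound: float = 0.0) -> str:
    """Decide which subpopulations to enroll in stage 2.

    z1, z2: stage-1 z-statistics for the treatment effect in
    subpopulations 1 and 2 (larger values favor the treatment).
    A subpopulation whose z-statistic falls at or below the futility
    bound is dropped; if both are dropped, the trial stops.
    """
    continue_1 = z1 > futility_bound
    continue_2 = z2 > futility_bound
    if continue_1 and continue_2:
        return "enroll both subpopulations"   # option 1
    if continue_1:
        return "enroll only subpopulation 1"  # option 2
    if continue_2:
        return "enroll only subpopulation 2"  # option 3
    return "stop the trial"                   # option 4

# Example: subpopulation 2 shows no sign of benefit at the interim look.
print(stage1_decision(z1=1.8, z2=-0.4))  # enroll only subpopulation 1
```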
Figure 1, based on a similar figure in our paper,4 gives a schematic of a 2-stage adaptive enrichment design in the context of a trial evaluating a surgical treatment for stroke in 2 subpopulations.

Figure 1. Schematic of a 2-stage adaptive enrichment design.4 [The original figure is a flow diagram of enrollment and decisions: in stage 1, both subpopulations are enrolled; in stage 2, the options are to enroll both subpopulations (option 1), enroll only subpopulation 1 (option 2), enroll only subpopulation 2 (option 3), or stop the trial (option 4).]

A decision is made after stage 1 to (1) continue enrolling both subpopulations, (2) enroll only subpopulation 1, (3) enroll only subpopulation 2, or (4) stop the trial. The adaptive enrichment designs considered in this paper generally involve more than 2 stages, with decisions similar to those in the figure made at the interim analysis after each stage using the cumulative data available.

We developed new statistical methodology and an open-source, freely available software tool that optimizes new adaptive enrichment designs and compares their performance (via simulation) with that of standard designs. Our designs aimed to determine treatment benefits and harms for subpopulations defined by a risk factor, such as age, disease severity, or a biomarker measured at baseline. We also assessed the tradeoffs involved in using adaptive enrichment designs versus standard designs. Our project addressed research priorities of the US Food and Drug Administration.
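As a rough illustration of the tradeoff the software is designed to quantify, the sketch below uses Monte Carlo simulation to contrast expected and maximum sample size for a 2-stage design that may drop a subpopulation after stage 1. All parameters (per-stage sample size, true effect sizes, futility bound) are hypothetical and do not correspond to the optimized designs or simulation scenarios in the report.

```python
# Hedged sketch: Monte Carlo estimate of expected vs. maximum sample
# size for a 2-stage adaptive enrichment design. All design parameters
# below are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)

n_stage = 100               # participants per subpopulation per stage (hypothetical)
effect = {1: 0.3, 2: 0.0}   # true standardized effects; subpopulation 2 has none
futility_bound = 0.0        # drop a subpopulation if its z-statistic is below this
n_sims = 10_000

total_n = np.empty(n_sims)
for s in range(n_sims):
    n = 2 * n_stage  # stage 1 enrolls both subpopulations
    for subpop in (1, 2):
        # Approximate stage-1 z-statistic for a balanced two-arm comparison:
        # mean effect * sqrt(n/4) plus standard normal noise.
        z = effect[subpop] * np.sqrt(n_stage / 4) + rng.standard_normal()
        if z > futility_bound:  # subpopulation continues into stage 2
            n += n_stage
    total_n[s] = n

print(f"expected sample size: {total_n.mean():.0f}")   # averages over scenarios
print(f"maximum sample size:  {4 * n_stage}")          # if there is no early stopping
```

Under these invented parameters, the expected sample size is well below the maximum because the subpopulation with no benefit is often dropped at the interim analysis; this is the kind of comparison the report's software performs across a set of plausible scenarios.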