A Systematic Review and Meta-Analysis of Clinical Trials on Physical Interventions for Lateral Epicondylalgia
Recommended publications
Adaptive Clinical Trials: an Introduction
Adaptive clinical trial design is becoming a hot topic in healthcare research, with some researchers arguing that adaptive trials have the potential to get new drugs to market quicker. In this article, we explain what adaptive trials are and why they were developed, and we explore both the advantages of adaptive designs and the concerns being raised by some in the healthcare community.
What are adaptive clinical trials? Adaptive clinical trials enable researchers to change an aspect of a trial design at an interim assessment, while controlling the rate of type 1 errors [1]. Interim assessments can help to determine whether a trial design is the most ... [excerpt truncated]
How and why were adaptive clinical trials developed? In 2004, the FDA published a report on the problems faced by the scientific community in developing new medical treatments [2]. The report highlighted that the pace of innovation in biomedical science is outstripping the rate of advances in the available technologies and tools for evaluating new treatments. Outdated tools are being used to assess new treatments and there is a critical need to improve the effectiveness and efficiency of clinical trials [2]. The current process for developing new treatments is expensive, takes a long time and, in some cases, the development process has to be stopped after significant amounts of time and resources have been invested [2]. In 2006, the FDA published a "Critical Path Opportunities ... [excerpt truncated]
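The interim-testing idea in this excerpt can be made concrete with a small simulation. The sketch below is not from the article; the sample sizes, thresholds, and function names are illustrative assumptions. It compares a two-look trial that naively reuses p < 0.05 at both looks with one that applies a stricter Pocock-style per-look threshold, showing how the latter keeps the overall type 1 error rate near 5%.

```python
# Illustrative sketch: a two-look group-sequential trial simulated under the
# null hypothesis. Naive testing at p < 0.05 at both looks inflates the type 1
# error; a Pocock-style per-look threshold (about 0.0294 for two looks) does not.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def trial_rejects(alpha_per_look, n_per_arm=100, n_sims=20_000):
    """Fraction of null trials rejected at either the interim or the final look."""
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(0, 1, n_per_arm)   # control arm, no true effect
        b = rng.normal(0, 1, n_per_arm)   # treatment arm, no true effect
        half = n_per_arm // 2
        # Interim look after half the data, final look with all of it.
        for n in (half, n_per_arm):
            _, p = stats.ttest_ind(a[:n], b[:n])
            if p < alpha_per_look:
                rejections += 1
                break
    return rejections / n_sims

print("Naive 0.05 at both looks :", trial_rejects(0.05))    # roughly 0.08, inflated
print("Pocock-style 0.0294      :", trial_rejects(0.0294))  # roughly 0.05, controlled
```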
Creating an Academic Research Organization to Efficiently Design, Conduct, Coordinate, and Analyze Clinical Trials: The Center for Clinical Trials & Data Coordination
Kaleab Z. Abebe, Andrew D. Althouse, Diane Comer, Kyle Holleran, Glory Koerbel, Jason Kojtek, Joseph Weiss, Susan Spillane. Center for Clinical Trials & Data Coordination, Division of General Internal Medicine, Department of Medicine, University of Pittsburgh, Pittsburgh, PA, USA. Opinion paper, Contemporary Clinical Trials Communications 16 (2019) 100488. Keywords: data coordinating center; clinical trial management.
Abstract: When properly executed, the randomized controlled trial is one of the best vehicles for assessing the effectiveness of one or more interventions. However, numerous challenges may emerge in the areas of study startup, recruitment, data quality, cost, and reporting of results. The use of well-run coordinating centers could help prevent these issues, but very little exists in the literature describing their creation or the guiding principles behind their inception. The Center for Clinical Trials & Data Coordination (CCDC) was established in 2015 through institutional funds with the intent of 1) providing relevant expertise in clinical trial design, conduct, coordination, and analysis; 2) advancing the careers of clinical investigators and CCDC-affiliated faculty; and 3) obtaining large data coordinating center (DCC) grants. We describe the organizational structure of the CCDC as well as the homegrown clinical trial management system integrating nine crucial elements: electronic data capture, eligibility and randomization, drug and external data tracking, safety reporting, outcome adjudication, data and safety monitoring, statistical analysis and reporting, data sharing, and regulatory compliance.
Quasi-Experimental Studies in the Fields of Infection Control and Antibiotic Resistance, Ten Years Later: a Systematic Review
Rotana Alsaggaf, MS; Lyndsay M. O'Hara, PhD, MPH; Kristen A. Stafford, PhD, MPH; Surbhi Leekha, MBBS, MPH; Anthony D. Harris, MD, MPH; CDC Prevention Epicenters Program. Department of Epidemiology and Public Health, University of Maryland School of Medicine, Baltimore, Maryland. Infect Control Hosp Epidemiol 2018;39(2):170-176. doi:10.1017/ice.2017.296.
OBJECTIVE: A systematic review of quasi-experimental studies in the field of infectious diseases was published in 2005. The aim of this study was to assess improvements in the design and reporting of quasi-experiments 10 years after the initial review. We also aimed to report the statistical methods used to analyze quasi-experimental data.
DESIGN: Systematic review of articles published from January 1, 2013, to December 31, 2014, in 4 major infectious disease journals.
METHODS: Quasi-experimental studies focused on infection control and antibiotic resistance were identified and classified based on 4 criteria: (1) type of quasi-experimental design used, (2) justification of the use of the design, (3) use of correct nomenclature to describe the design, and (4) statistical methods used.
RESULTS: Of 2,600 articles, 173 (7%) featured a quasi-experimental design, compared to 73 of 2,320 articles (3%) in the previous review (P<.01). Moreover, 21 articles (12%) utilized a study design with a control group; 6 (3.5%) justified the use of a quasi-experimental design; and 68 (39%) identified their design using the correct nomenclature. [excerpt truncated]
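As an illustration of the kind of statistical method such a review tallies, the sketch below fits a segmented regression to an interrupted time series, a common analysis for quasi-experimental infection-control data. It is a minimal example with fabricated monthly rates and assumed variable names, not data or code from the study.

```python
# Illustrative sketch: segmented regression for an interrupted time series.
# The 'post' coefficient estimates the immediate level change at the
# intervention; 'time_after' estimates the change in slope afterwards.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

months = np.arange(24)                      # 12 months pre, 12 months post
post = (months >= 12).astype(int)           # indicator for the intervention period
rate = 10 - 0.05 * months - 2.5 * post + np.random.default_rng(1).normal(0, 0.4, 24)

df = pd.DataFrame({
    "rate": rate,                            # e.g. infections per 1,000 patient-days (fabricated)
    "time": months,                          # underlying secular trend
    "post": post,                            # level change at the intervention
    "time_after": np.where(post == 1, months - 12, 0),  # slope change after it
})

model = smf.ols("rate ~ time + post + time_after", data=df).fit()
print(model.params)   # 'post' approximates the level drop attributable to the intervention
```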
A Guide to Systematic Review and Meta-Analysis of Prediction Model Performance
Thomas P A Debray, Johanna A A G Damen, Kym I E Snell, Joie Ensor, Lotty Hooft, Johannes B Reitsma, Richard D Riley, Karel G M Moons. Cochrane Netherlands and Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, Netherlands; Research Institute for Primary Care and Health Sciences, Keele University, Staffordshire, UK. Research Methods and Reporting article.
Validation of prediction models is highly recommended and increasingly common in the literature. A systematic review of validation studies is therefore helpful, with meta-analysis needed to summarise the predictive performance of the model being validated across different settings and populations. This article provides guidance for researchers systematically reviewing ... [standfirst truncated]
Compared to therapeutic intervention and diagnostic test accuracy studies, there is limited guidance on the conduct of systematic reviews and meta-analysis of primary prognosis studies. A common aim of primary prognostic studies concerns the development of so-called prognostic prediction models or indices. These models estimate the individualised probability or risk that a certain condition will occur in the future by combining information from multiple prognostic factors from an individual. Unfortunately, there is often conflicting evidence about the predictive performance of developed prognostic prediction models. For this reason, there is a growing demand for evidence synthesis of (external validation) ... [excerpt truncated]
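One synthesis step such a review typically involves is pooling a model's discrimination across validation studies. The sketch below is a minimal, assumed example (the c-statistics and standard errors are invented, and this is only one possible approach): a DerSimonian-Laird random-effects pooling of logit-transformed c-statistics.

```python
# Illustrative sketch: random-effects pooling of a prediction model's
# c-statistic across validation studies, on the logit scale.
import numpy as np

c = np.array([0.72, 0.68, 0.75, 0.70])       # reported c-statistics per validation study (invented)
se_c = np.array([0.02, 0.03, 0.025, 0.02])   # their standard errors (invented)

# Transform to the logit scale (delta method for the standard errors).
logit = np.log(c / (1 - c))
se_logit = se_c / (c * (1 - c))

# DerSimonian-Laird estimate of between-study variance (tau^2).
w = 1 / se_logit**2
q = np.sum(w * (logit - np.sum(w * logit) / np.sum(w))**2)
tau2 = max(0.0, (q - (len(c) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled estimate, back-transformed to the c-statistic scale.
w_star = 1 / (se_logit**2 + tau2)
pooled_logit = np.sum(w_star * logit) / np.sum(w_star)
print("Pooled c-statistic:", 1 / (1 + np.exp(-pooled_logit)))
```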
Is It Worthwhile Including Observational Studies in Systematic Reviews of Effectiveness?
Catriona Mc Daid, Suzanne Hartley, Anne-Marie Bagnall, Gill Ritchie, Kate Light, Rob Riemsma. Centre for Reviews and Dissemination, University of York, UK.
Background: Retinoblastoma is a rare malignant tumour of the retina and usually occurs in children under two years old. It is an aggressive tumour that can lead to loss of vision and, in extreme cases, death. The prognoses for vision and survival have significantly improved with the development of more timely diagnosis and improved treatment methods. Important clinical factors associated with prognosis are age and stage of disease at diagnosis. Patients with the hereditary form of retinoblastoma may be predisposed to significant long-term complications. Historically, enucleation was the standard treatment for unilateral retinoblastoma. In bilateral retinoblastoma, the eye with the most advanced tumour was commonly removed and the contralateral eye treated with external beam radiotherapy (EBRT).
Overall there were considerable problems with quality (see Table). Without randomised allocation there was a high risk of selection bias in all studies. Studies were also susceptible to detection and performance bias, with the retrospective studies particularly susceptible as they were less likely to have a study protocol specifying the intervention and outcome assessments. Due to the considerable limitations of the evidence identified, it was not possible to make meaningful and robust conclusions about the relative effectiveness of different treatment approaches for childhood retinoblastoma.
[Figure 1: Mapping of included studies. Legend: EBRT; Chemotherapy; EBRT with Chemotherapy; Enucleation; Local Treatments.]
Adaptive Enrichment Designs in Clinical Trials
Peter F. Thall, Department of Biostatistics, M.D. Anderson Cancer Center, University of Texas, Houston, Texas 77030, USA. Annual Review of Statistics and Its Application 2021;8:393-411. https://doi.org/10.1146/annurev-statistics-040720-032818. Keywords: adaptive signature design, Bayesian design, biomarker, clinical trial, group sequential design, precision medicine, subset selection, targeted therapy, variable selection.
Abstract: Adaptive enrichment designs for clinical trials may include rules that use interim data to identify treatment-sensitive patient subgroups, select or compare treatments, or change entry criteria. A common setting is a trial to compare a new biologically targeted agent to standard therapy. An enrichment design's structure depends on its goals, how it accounts for patient heterogeneity and treatment effects, and practical constraints. This article first covers basic concepts, including treatment-biomarker interaction, precision medicine, selection bias, and sequentially adaptive decision making, and briefly describes some different types of enrichment. Numerical illustrations are provided for qualitatively different cases involving treatment-biomarker interactions. Reviews are given of adaptive signature designs; a Bayesian design that uses a random partition to identify treatment-sensitive biomarker subgroups and assign treatments; and designs that enrich superior treatment sample sizes overall or within subgroups, make subgroup-specific decisions, or include outcome-adaptive randomization.
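To illustrate the treatment-biomarker interaction that motivates enrichment, the short simulation below (not from the article; all effect sizes and variable names are invented) shows how a benefit confined to biomarker-positive patients is diluted when estimated in the overall population.

```python
# Illustrative sketch: a simulated treatment-biomarker interaction in which the
# targeted agent only benefits biomarker-positive patients, the situation that
# enrichment designs are built to detect and exploit.
import numpy as np

rng = np.random.default_rng(2)
n = 400
biomarker = rng.integers(0, 2, n)            # 1 = biomarker-positive
treatment = rng.integers(0, 2, n)            # 1 = targeted agent, 0 = standard therapy
effect = 0.8 * treatment * biomarker          # benefit only in marker-positive patients
outcome = effect + rng.normal(0, 1, n)        # continuous outcome with unit noise

overall = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()
positive = (outcome[(treatment == 1) & (biomarker == 1)].mean()
            - outcome[(treatment == 0) & (biomarker == 1)].mean())
print(f"Effect overall: {overall:.2f}; in biomarker-positive patients: {positive:.2f}")
# The diluted overall effect (about 0.4) versus the subgroup effect (about 0.8)
# is what motivates enriching enrolment toward biomarker-positive patients.
```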
The α7 Nicotinic Agonist ABT-126 in the Treatment of Cognitive Impairment Associated with Schizophrenia in Nonsmokers: Results from a Randomized Controlled Phase 2b Study
George Haig, Deli Wang, Ahmed A Othman, Jun Zhao. AbbVie Inc., North Chicago, IL, USA; Faculty of Pharmacy, Cairo University, Cairo, Egypt. Neuropsychopharmacology (2016) 41, 2893-2902.
A double-blind, placebo-controlled, parallel-group, 24-week, multicenter trial was conducted to evaluate the efficacy and safety of 3 doses of ABT-126, an α7 nicotinic receptor agonist, for the treatment of cognitive impairment in nonsmoking subjects with schizophrenia. Clinically stable subjects were randomized in 2 stages: placebo, ABT-126 25 mg, 50 mg or 75 mg once daily (stage 1) and placebo or ABT-126 50 mg (stage 2). The primary analysis was the change from baseline to week 12 on the MATRICS Consensus Cognitive Battery (MCCB) neurocognitive composite score for ABT-126 50 mg vs placebo using a mixed model for repeated measures. A key secondary measure was the University of California Performance-based Assessment-Extended Range (UPSA-2ER). A total of 432 subjects were randomized and 80% (344/431) completed the study. No statistically significant differences were observed in either the change from baseline for the MCCB neurocognitive composite score (+2.66 [±0.54] for ABT-126 50 mg vs +2.46 [±0.56] for placebo at week 12; P>0.05) or the UPSA-2ER. A trend for improvement was seen at week 24 on the 16-item Negative Symptom Assessment Scale total score for ABT-126 50 mg (change from baseline −4.27 [±0.58] vs −3.00 [±0.60] for placebo; P=0.059). [excerpt truncated]
How to Get Started with a Systematic Review in Epidemiology: an Introductory Guide for Early Career Researchers
Hayley J Denison, Richard M Dodds, Georgia Ntani, Rachel Cooper, Cyrus Cooper, Avan Aihie Sayer, Janis Baird. Archives of Public Health 2013, 71:21. http://www.archpublichealth.com/content/71/1/21. Open Access methodology article.
Background: Systematic review is a powerful research tool which aims to identify and synthesize all evidence relevant to a research question. The approach taken is much like that used in a scientific experiment, with high priority given to the transparency and reproducibility of the methods used and to handling all evidence in a consistent manner. Early career researchers may find themselves in a position where they decide to undertake a systematic review, for example it may form part or all of a PhD thesis. Those with no prior experience of systematic review may need considerable support and direction getting started with such a project. Here we set out in simple terms how to get started with a systematic review.
Discussion: Advice is given on matters such as developing a review protocol, searching using databases and other methods, data extraction, risk of bias assessment and data synthesis including meta-analysis. Signposts to further information and useful resources are also given.
Conclusion: A well-conducted systematic review benefits the scientific field by providing a summary of existing evidence and highlighting unanswered questions. For the individual, undertaking a systematic review is also a great opportunity to improve skills in critical appraisal and in synthesising evidence.
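As a toy illustration of the data-synthesis step mentioned above, the sketch below pools log risk ratios from three invented trials with a fixed-effect inverse-variance model and reports an I-squared heterogeneity statistic. It is a minimal assumed example, not code or data from the guide.

```python
# Illustrative sketch: fixed-effect inverse-variance meta-analysis of risk
# ratios with an I^2 heterogeneity statistic, using fabricated 2x2 counts.
import numpy as np

# (events, total) in the treatment and control arms of three hypothetical trials
studies = [(12, 100, 20, 100), (8, 80, 15, 82), (30, 250, 45, 248)]

log_rr, var = [], []
for et, nt, ec, nc in studies:
    log_rr.append(np.log((et / nt) / (ec / nc)))
    var.append(1/et - 1/nt + 1/ec - 1/nc)     # variance of the log risk ratio
log_rr, var = np.array(log_rr), np.array(var)

w = 1 / var                                    # inverse-variance weights
pooled = np.sum(w * log_rr) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
q = np.sum(w * (log_rr - pooled)**2)           # Cochran's Q
i2 = max(0.0, (q - (len(studies) - 1)) / q) * 100

lo, hi = np.exp(pooled + np.array([-1.96, 1.96]) * se)
print(f"Pooled RR {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")
```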
Guidelines for Reporting Meta-Epidemiological Methodology Research
Mohammad Hassan Murad, Zhen Wang. Evidence-Based Practice Center, Mayo Clinic, Rochester, Minnesota, USA. EBM Primer, Evid Based Med, first published 12 July 2017. doi:10.1136/ebmed-2017-110713.
Abstract: Published research should be reported to evidence users with clarity and transparency that facilitate optimal appraisal and use of evidence and allow replication by other researchers. Guidelines for such reporting are available for several types of studies but not for meta-epidemiological methodology studies. Meta-epidemiological studies adopt a systematic review or meta-analysis approach to examine the impact of certain characteristics of clinical studies on the observed effect and provide empirical evidence for hypothesised associations. The unit of analysis in meta-epidemiological studies is a study, not a patient. The outcomes of meta-epidemiological studies are usually not clinical outcomes.
From the main text: The goal is generally broad but often focuses on examining the impact of certain characteristics of clinical studies on the observed effect, describing the distribution of research evidence in a specific setting, examining heterogeneity and exploring its causes, identifying and describing plausible biases and providing empirical evidence for hypothesised associations. Unlike classic epidemiology, the unit of analysis for meta-epidemiological studies is a study, not a patient. The outcomes of meta-epidemiological studies are usually not clinical outcomes [6-8]. In this guide, we adapt the items used in the PRISMA statement [9] for reporting systematic reviews and meta-analysis to fit the setting of meta-epidemiological ... [excerpt truncated]
Adaptive Designs in Clinical Trials: Why Use Them, and How to Run and Report Them Philip Pallmann1* , Alun W
Philip Pallmann, Alun W. Bedding, Babak Choodari-Oskooei, Munyaradzi Dimairo, Laura Flight, Lisa V. Hampson, Jane Holmes, Adrian P. Mander, Lang'o Odondi, Matthew R. Sydes, Sofía S. Villar, James M. S. Wason, Christopher J. Weir, Graham M. Wheeler, Christina Yap, Thomas Jaki. BMC Medicine (2018) 16:29. https://doi.org/10.1186/s12916-018-1017-7. Open Access correspondence article.
Abstract: Adaptive designs can make clinical trials more flexible by utilising results accumulating in the trial to modify the trial's course in accordance with pre-specified rules. Trials with an adaptive design are often more efficient, informative and ethical than trials with a traditional fixed design since they often make better use of resources such as time and money, and might require fewer participants. Adaptive designs can be applied across all phases of clinical research, from early-phase dose escalation to confirmatory trials. The pace of the uptake of adaptive designs in clinical research, however, has remained well behind that of the statistical literature introducing new methods and highlighting their potential advantages. We speculate that one factor contributing to this is that the full range of adaptations available to trial designs, as well as their goals, advantages and limitations, remains unfamiliar to many parts of the clinical community. Additionally, the term adaptive design has been misleadingly used as an all-encompassing label to refer to certain methods that could be deemed controversial or that have been inadequately implemented. We believe that even if the planning and analysis of a trial is undertaken by an expert statistician, it is essential that the investigators understand the implications of using an adaptive design, for example, what the practical challenges are, what can (and cannot) be inferred from the results of such a trial, and how to report and communicate the results.
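One adaptation of the kind discussed in such papers is an interim futility assessment. The sketch below is illustrative only: it assumes a normally distributed test statistic and a "current trend" projection, and all numbers are made up. It computes conditional power at an interim look, a quantity often used to justify stopping a trial early for futility.

```python
# Illustrative sketch: conditional power at an interim analysis of a two-arm
# trial, i.e. the probability of a significant final result if the effect
# implied by the interim data continues for the rest of the trial.
import numpy as np
from scipy.stats import norm

def conditional_power(z_interim, info_frac, alpha=0.025):
    """Probability of crossing the final critical value under the current trend."""
    z_alpha = norm.ppf(1 - alpha)
    drift = z_interim / np.sqrt(info_frac)                    # standardized effect implied by interim data
    mean = z_interim * np.sqrt(info_frac) + drift * (1 - info_frac)  # expected final z-statistic
    sd = np.sqrt(1 - info_frac)                                # residual variability to the final look
    return 1 - norm.cdf((z_alpha - mean) / sd)

# Interim z = 0.5 at 50% of the planned information: conditional power is only
# a few percent, which a pre-specified rule might treat as grounds for futility stopping.
print(round(conditional_power(0.5, 0.5), 3))
```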
Clinical Trials in COVID-19 Management & Prevention: A Meta-Epidemiological Study Examining Methodological Quality
Kimia Honarmand, Jeremy Penn, Arnav Agarwal, Reed Siemieniuk, Romina Brignardello-Petersen, Jessica J Bartoszko, Dena Zeraatkar, Thomas Agoritsas, Karen Burns, Shannon M Fernando, Farid Foroutan, Long Ge, Francois Lamontagne, Mario A Jimenez-Mora, Srinivas Murthy, Juan Jose Yepes Nuñez, Per O Vandvik, Zhikang Ye, Bram Rochwerg. medRxiv preprint, doi: https://doi.org/10.1101/2020.11.29.20237875, posted November 30, 2020 (CC-BY-NC 4.0).
Author affiliations include Western University (London, Canada); McMaster University (Hamilton, Canada); University of Toronto; Harvard Medical School; University Hospitals of Geneva; Unity Health Toronto, St. Michael's Hospital, Li Ka Shing Knowledge Institute; University of Ottawa; Ted Rogers Centre for Heart Research, University Health Network, Toronto General Hospital; and Lanzhou University. The affiliation list is truncated in this excerpt.
Exploring the Application of a Multicenter Study Design in the Preclinical Phase of Translational Research
Victoria Hunniford. Thesis submitted to the University of Ottawa in partial fulfillment of the requirements for the Master of Science in Health Systems, Telfer School of Management. © Victoria Hunniford, Ottawa, Canada, 2019.
Abstract: Multicenter preclinical studies have been suggested as a method to improve potential clinical translation of preclinical work by testing reproducibility and generalizability of findings. In these studies, multiple independent laboratories collaboratively conduct a research experiment using a shared protocol. The use of a multicenter design in preclinical experimentation is a recent approach and only a handful of these studies have been published. In this thesis, I aimed to provide insight into preclinical multicenter studies by 1) systematically synthesizing all published preclinical multicenter studies; and 2) exploring the experiences of, barriers and enablers to, and the extent of collaboration within preclinical multicenter studies. In Part One, I conducted a systematic review of preclinical multicenter studies. The database searches identified 3150 citations and 13 studies met inclusion criteria. The multicenter design was applied across a diverse range of diseases including stroke, heart attack, and traumatic brain injury. The median number of centers was 4 (range 2-6) and the median sample size was 133 (range 23-384). Most studies had lower risk of bias and higher completeness of reporting than typically seen in single-center studies. Only five of the thirteen studies produced results consistent with previous single-center studies, highlighting a central concern of preclinical research: irreproducibility and poor generalizability of findings from single laboratories.