What Is a Systematic Review?

What is...? series, second edition: Evidence-based medicine. Supported by sanofi-aventis.
For further titles in the series, visit: www.whatisseries.co.uk
Date of preparation: April 2009. NPR09/1111

Pippa Hemingway PhD BSc (Hons) RGN RSCN, Research Fellow in Systematic Reviewing, School of Health and Related Research (ScHARR), University of Sheffield
Nic Brereton PhD BSc (Hons), Health Economist, NB Consulting Services, Sheffield

• Systematic reviews have increasingly replaced traditional narrative reviews and expert commentaries as a way of summarising research evidence.
• Systematic reviews attempt to bring the same level of rigour to reviewing research evidence as should be used in producing that research evidence in the first place.
• Systematic reviews should be based on a peer-reviewed protocol so that they can be replicated if necessary.
• High quality systematic reviews seek to:
  – Identify all relevant published and unpublished evidence
  – Select studies or reports for inclusion
  – Assess the quality of each study or report
  – Synthesise the findings from individual studies or reports in an unbiased way
  – Interpret the findings and present a balanced and impartial summary, with due consideration of any flaws in the evidence.
• Many high quality peer-reviewed systematic reviews are available in journals as well as from databases and other electronic sources.
• Systematic reviews may examine quantitative or qualitative evidence; put simply, when two or more types of evidence are examined within one review, it is called a mixed-method systematic review.
• Systematic reviewing techniques are in a period of rapid development. Many systematic reviews still look at clinical effectiveness, but methods now exist to enable reviewers to examine issues of appropriateness, feasibility and meaningfulness.
• Not all published systematic reviews have been produced with meticulous care, so their findings may sometimes mislead. Interrogating published reports by asking a series of questions can uncover deficiencies.

Why systematic reviews are needed

The explosion in medical, nursing and allied healthcare professional publishing in the latter half of the 20th century (perhaps 20,000 journals and upwards of two million articles per year), which continues well into the new millennium, makes keeping up with primary research evidence an impossible feat. There has also been an explosion in internet access to articles, sometimes creating an awe-inspiring number of hits to explore. In addition, there is the challenge of building and maintaining the skills to use the wide variety of electronic media that allow access to large amounts of information.

Moreover, clinicians, nurses, therapists, healthcare managers, policy makers and consumers have wide-ranging information needs: they need good quality information on the effectiveness, meaningfulness, feasibility and appropriateness of a large number of healthcare interventions, not just one or two. For many, this need conflicts with a busy clinical or professional workload. For consumers, the amount of information can be overwhelming, and a lack of expert knowledge can lead to false belief in unreliable information, which in turn may raise health professional workload and patient safety issues.

Even in a single area, it is not unusual for the number of published studies to run into hundreds or even thousands (before they are sifted for inclusion in a review). Some of these studies, once read in full text, may give unclear, confusing or contradictory results; some may not be published in our own language, or it may be unclear whether their findings can be generalised to our own country. Looked at individually, each article may offer little insight into the problem at hand; the hope is that, when taken together within a systematic review, a clearer (and more consistent) picture will emerge.

If the need for information is to be fulfilled, there must also be an evidence translation stage. This is 'the act of transferring knowledge to individual health professionals, health facilities and health systems (and consumers) by means of publications, electronic media, education, training and decision support systems. Evidence transfer is seen to involve careful development of strategies that identify target audiences – such as clinicians, managers, policy makers and consumers – and designing methods to package and transfer information that is understood and used in decision-making'.1

Failings in traditional reviews

Reviews have always been a part of the healthcare literature. Experts in their field have sought to collate existing knowledge and publish summaries on specific topics. Traditional reviews may, for instance, be called literature reviews, narrative reviews, critical reviews or commentaries within the literature. Although often very useful background reading, they differ from a systematic review in that they are not led by a peer-reviewed protocol, so it is not usually possible to replicate their findings. In addition, such attempts at synthesis have not always been as rigorous as might have been hoped. In the worst case, reviewers may not have begun with an open mind as to the likely recommendations, and may then have built a case in support of their personal beliefs, selectively citing suitable studies along the way. Indeed, those involved in developing a review may well have started it (or have been commissioned to write it) precisely because of their accumulated experience and professional opinions. Even if the reviewer does begin with an open mind, traditional reviews are rarely explicit about how studies are selected, assessed and integrated. Thus, the reader is generally unable to assess the likelihood of prior beliefs, or of selection or publication biases, clouding the review process. Despite all this, such narrative reviews were and are widespread and influential.

The lack of rigour in the creation of traditional reviews went largely unremarked until the late 1980s, when several commentators exposed the inadequacies of the process and the consequent bias in recommendations.2,3 Not least of the problems was that small but important effects were being missed, different reviewers were reaching different conclusions from the same research base and, often, the findings reported had more to do with the specialty of the reviewer than with the underlying evidence.4

The inadequacy of traditional reviews, and the need for a rigorous systematic approach, were emphasised in 1992 with the publication of two landmark papers.5,6 In these papers, Elliot Antman, Joseph Lau and colleagues reported two devastating findings.
• First, if the original studies of the effects of clot busters after heart attacks had been systematically reviewed, the benefits of therapy would have been apparent as early as the mid-1970s.
• Second, narrative reviews were woefully inadequate in summarising the current state of knowledge: they either omitted mention of effective therapies or suggested that the treatments should be used only as part of an ongoing investigation – when in fact the evidence, had it been collated, was near incontrovertible.

These papers showed that there was much knowledge to be gained from collating existing research, but that traditional approaches had largely failed to extract this knowledge. What was needed was the same rigour in secondary research (research where the objects of study are other research studies) as is expected of primary research (an original study).

When systematic reviews are needed

Most systematic reviews address questions of the clinical effectiveness of an intervention or drug. Increasingly, however, they are required to establish whether an intervention or activity is feasible, whether it is appropriate (ethically or culturally), or how it relates to evidence of the experiences, values, thoughts or beliefs of clients and their relatives.1

Systematic reviews are also:
• Needed to propose a future research agenda7 when the way forward may be unclear or existing agendas have failed to address a clinical problem
• Increasingly required by authors who wish to secure substantial grant funding for primary healthcare research
• Increasingly part of student dissertations or postgraduate theses
• Central to the National Institute for Health and Clinical Excellence health technology assessment process for multiple technology appraisals and single technology appraisals.

However, systematic reviews are most needed whenever there is a substantive question, several primary studies – perhaps with disparate findings – and substantial uncertainty. One famous case is described by The Cochrane Library:8 a single research paper, published in 1998 and based on 12 children, cast doubt on the safety of the mumps, measles and rubella (MMR) vaccine by implying that the vaccine might cause the development of problems such as Crohn's disease and autism. The paper, by Wakefield et al,9 has since been retracted by most of the original authors because of potential bias, but before that it triggered a worldwide scare, which in turn resulted in reduced uptake of the vaccine.10 A definitive systematic review by Demicheli et al of MMR vaccines in children concluded that exposure to MMR was unlikely to be associated with Crohn's disease, autism or other conditions.11 Here, then, is an area where a systematic review helped clarify a vital issue for the public.
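The synthesis step of a quantitative review is often a meta-analysis, in which precision-weighted pooling lets many small, individually inconclusive studies yield a clearer combined picture. As a rough illustration of the arithmetic only – the effect estimates below are made up, not taken from any study discussed here – a fixed-effect inverse-variance pooling can be sketched in a few lines of Python:

```python
import math

def fixed_effect_pool(effects, variances):
    """Fixed-effect inverse-variance meta-analysis.

    effects:   per-study effect estimates (e.g. log odds ratios)
    variances: variances of those estimates
    Returns (pooled_effect, pooled_standard_error).
    """
    weights = [1.0 / v for v in variances]  # more precise studies weigh more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))      # standard error of pooled estimate
    return pooled, se

# Three hypothetical trials reporting log odds ratios
effects = [-0.30, -0.10, -0.25]
variances = [0.04, 0.02, 0.08]
pooled, se = fixed_effect_pool(effects, variances)
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se  # 95% confidence interval
print(f"pooled log OR = {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

Real reviews use dedicated tools (for example, Cochrane's RevMan, or the R meta and metafor packages), and must also assess heterogeneity and risk of bias before pooling; this sketch shows only the core weighting idea.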