Critical Appraisal of the Evidence: Part I. An Introduction to Gathering, Evaluating, and Recording the Evidence

By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN

Critical Appraisal of the Evidence: Part I
An introduction to gathering, evaluating, and recording the evidence.

This is the fifth article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we've scheduled "Chat with the Authors" calls every few months to provide a direct line to the experts to help you resolve questions. Details about how to participate in the next call will be published with September's Evidence-Based Practice, Step by Step.

In May's evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, and Carlos A., her hospital's expert EBP mentor, learned how to search for the evidence to answer their clinical question (shown here in PICOT format): "In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?" With the help of Lynne Z., the hospital librarian, Rebecca and Carlos searched three databases: PubMed, the Cumulative Index of Nursing and Allied Health Literature (CINAHL), and the Cochrane Database of Systematic Reviews. They used keywords from their clinical question, including ICU, rapid response team, cardiac arrest, and unplanned ICU admissions, as well as the following synonyms: failure to rescue, never events, medical emergency teams, rapid response systems, and code blue. Whenever terms from a database's own indexing language, or controlled vocabulary, matched the keywords or synonyms, those terms were also searched. At the end of the database searches, Rebecca and Carlos chose to retain 18 of the 18 studies found in PubMed, six of the 79 studies found in CINAHL, and the one study found in the Cochrane Database of Systematic Reviews, because they best answered the clinical question. As a final step, at Lynne's recommendation, Rebecca and Carlos conducted a hand search of the reference lists of each study they retained, looking for any relevant studies they hadn't found in their original search; this process is also called the ancestry method. The hand search yielded one additional study, for a total of 26.
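For readers who want to see the search step in a more concrete form, the sketch below (in Python) shows one hypothetical way the keywords and synonyms above could be combined into a single boolean query, and how the retained studies add up to 26. The query syntax, the build_query helper, and the retained dictionary are illustrative assumptions, not the exact strategy Rebecca and Carlos used.

```python
# A minimal, hypothetical sketch: combine the article's keywords and synonyms
# into one boolean query string and tally the studies retained from each source.
# The term lists and counts come from the article; the query format is illustrative.

keywords = ["ICU", "rapid response team", "cardiac arrest", "unplanned ICU admissions"]
synonyms = ["failure to rescue", "never events", "medical emergency teams",
            "rapid response systems", "code blue"]

def build_query(terms):
    """Join search terms with OR, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return " OR ".join(quoted)

query = f"({build_query(keywords)}) OR ({build_query(synonyms)})"
print(query)

# Studies retained from each source, as reported in the article.
retained = {
    "PubMed": 18,
    "CINAHL": 6,
    "Cochrane Database of Systematic Reviews": 1,
    "hand search (ancestry method)": 1,
}
print(sum(retained.values()))  # 26 studies go forward to rapid critical appraisal
```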
RAPID CRITICAL APPRAISAL

The next time Rebecca and Carlos meet, they discuss the next step in the EBP process: critically appraising the 26 studies. They obtain copies of the studies by printing those that are immediately available as full text through library subscription or those flagged as "free full text" by a database or journal's Web site. Others are available through interlibrary loan, when another hospital library shares its articles with Rebecca and Carlos's hospital library.

Carlos explains to Rebecca that the purpose of critical appraisal isn't solely to find the flaws in a study, but to determine its worth to practice. In this rapid critical appraisal (RCA), they will review each study to determine
• its level of evidence.
• how well it was conducted.
• how useful it is to practice.
Once they determine which studies are "keepers," Rebecca and Carlos will move on to the final steps of critical appraisal: evaluation and synthesis (to be discussed in the next two installments of the series). These final steps will determine whether overall findings from the evidence review can help clinicians improve patient outcomes.

Rebecca is a bit apprehensive because it's been a few years since she took a research class. She shares her anxiety with Chen M., a fellow staff nurse, who says she never studied research in school but would like to learn; she asks if she can join Carlos and Rebecca's EBP team. Chen's spirit of inquiry encourages Rebecca, and they talk about the opportunity to learn that this project affords them. Together they speak with the nurse manager on their medical–surgical unit, who agrees to let them use their allotted continuing education time to work on this project, after they discuss their expectations for the project and how its outcome may benefit the patients, the unit staff, and the hospital.

Learning research terminology. At the first meeting of the new EBP team, Carlos provides Rebecca and Chen with a glossary of terms so they can learn basic research terminology, such as sample, independent variable, and dependent variable. The glossary also defines some of the study designs the team is likely to come across in doing their RCA, such as systematic review, randomized controlled trial, and cohort, qualitative, and descriptive studies. (For the definitions of these terms and others, see the glossaries provided by the Center for the Advancement of Evidence-Based Practice at the Arizona State University College of Nursing and Health Innovation [http://nursingandhealth.asu.edu/evidence-based-practice/resources/glossary.htm] and the Boston University Medical Center Alumni Medical Library [http://medlib.bu.edu/bugms/content.cfm/content/ebmglossary.cfm#R].)

Determining the level of evidence. The team begins to divide the 26 studies into categories according to study design. To help in this, Carlos provides a list of several different study designs (see Hierarchy of Evidence for Intervention Studies). Rebecca, Carlos, and Chen work together to determine each study's design by reviewing its abstract. They also create an "I don't know" pile of studies that don't appear to fit a specific design. When they find studies that don't actively answer the clinical question but may inform thinking, such as descriptive research, expert opinions, or guidelines, they put them aside. Carlos explains that they'll be used later to support Rebecca's …

Hierarchy of Evidence for Intervention Studies

Level I. Systematic review or meta-analysis: a synthesis of evidence from all relevant randomized controlled trials.
Level II. Randomized controlled trial: an experiment in which subjects are randomized to a treatment group or control group.
Level III. Controlled trial without randomization: an experiment in which subjects are nonrandomly assigned to a treatment group or control group.
Level IV. Case-control or cohort study. Case-control study: a comparison of subjects with a condition (case) with those who don't have the condition (control) to determine characteristics that might predict the condition. Cohort study: an observation of a group(s) (cohort[s]) to determine the development of an outcome(s) such as a disease.
Level V. Systematic review of qualitative or descriptive studies: a synthesis of evidence from qualitative or descriptive studies to answer a clinical question.
Level VI. Qualitative or descriptive study. Qualitative study: gathers data on human behavior to understand why and how decisions are made. Descriptive study: provides background information on the what, where, and when of a topic of interest.
Level VII. Expert opinion or consensus: authoritative opinion of expert committee.

Adapted with permission from Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice [forthcoming]. 2nd ed. Philadelphia: Wolters Kluwer Health/Lippincott Williams and Wilkins.
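To make the categorizing step concrete, here is a minimal Python sketch of how the hierarchy above could be applied: a lookup from study design to level of evidence, with anything unrecognized landing in the team's "I don't know" pile. The LEVEL_BY_DESIGN mapping mirrors the table; the assign_level helper and the example study records are hypothetical, not part of the article.

```python
# Assign a level of evidence to each study by its design, following the
# Hierarchy of Evidence for Intervention Studies table above.
# Example study records are hypothetical and for illustration only.

LEVEL_BY_DESIGN = {
    "systematic review or meta-analysis": "I",
    "randomized controlled trial": "II",
    "controlled trial without randomization": "III",
    "case-control or cohort study": "IV",
    "systematic review of qualitative or descriptive studies": "V",
    "qualitative or descriptive study": "VI",
    "expert opinion or consensus": "VII",
}

def assign_level(design: str) -> str:
    """Return the level of evidence for a design, or "I don't know" if unrecognized."""
    return LEVEL_BY_DESIGN.get(design.lower(), "I don't know")

studies = [
    {"citation": "Study A (hypothetical)", "design": "Randomized controlled trial"},
    {"citation": "Study B (hypothetical)", "design": "Case-control or cohort study"},
    {"citation": "Study C (hypothetical)", "design": "Before-and-after audit"},  # goes to the "I don't know" pile
]

for study in studies:
    study["level"] = assign_level(study["design"])
    print(f'{study["citation"]}: level {study["level"]}')
```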
… or a meta-analysis, is the most reliable and the best evidence to answer their clinical question. Using a critical appraisal guide. …

Critical Appraisal Guide for Quantitative Studies

1. Why was the study done?
• Was there a clear explanation of the purpose of the study and, if so, what was it?
2. What is the sample size?
• Were there enough people in the study to establish that the findings did not occur by chance?
3. Are the instruments of the major variables valid and reliable?
• How were variables defined? Were the instruments designed to measure a concept valid (did they measure what the researchers said they measured)? Were they reliable (did they measure a concept the same way every time they were used)?
4. How were the data analyzed?
• What statistics were used to determine if the purpose of the study was achieved?
5. Were there any untoward events during the study?
• Did people leave the study and, if so, was there something special about them?
6. How do the results fit with previous research in the area?
• Did the researchers base their work on a thorough literature review?
7. What does this research mean for clinical practice?
• Is the study purpose an important clinical issue?

Adapted with permission from Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice [forthcoming]. 2nd ed. Philadelphia: Wolters Kluwer Health/Lippincott Williams and Wilkins.

… appraisal process (to appear in future installments of this series). Creating a study evaluation table. Carlos provides an online …
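As a rough illustration of how the seven questions above might be recorded for each study and collected into a simple evaluation table, the Python sketch below defines one possible appraisal record. The RapidCriticalAppraisal class and its field names are assumptions for illustration only; the online evaluation table Carlos provides is discussed in later installments of the series.

```python
# A minimal sketch of a per-study rapid critical appraisal record, with one
# boolean field per question in the Critical Appraisal Guide for Quantitative
# Studies. Field names and the example row are hypothetical.

from dataclasses import dataclass, asdict

@dataclass
class RapidCriticalAppraisal:
    citation: str
    level_of_evidence: str
    purpose_clear: bool               # 1. Why was the study done?
    adequate_sample: bool             # 2. What is the sample size?
    instruments_valid_reliable: bool  # 3. Are the instruments valid and reliable?
    analysis_described: bool          # 4. How were the data analyzed?
    untoward_events_noted: bool       # 5. Any untoward events during the study?
    fits_previous_research: bool      # 6. Do results fit with previous research?
    clinically_relevant: bool         # 7. What does it mean for clinical practice?
    keeper: bool                      # overall judgment: keep for evaluation and synthesis?

appraisals = [
    RapidCriticalAppraisal(
        citation="Study A (hypothetical)", level_of_evidence="II",
        purpose_clear=True, adequate_sample=True, instruments_valid_reliable=True,
        analysis_described=True, untoward_events_noted=True,
        fits_previous_research=True, clinically_relevant=True, keeper=True,
    ),
]

for row in appraisals:
    print(asdict(row))  # each record becomes one row of the study evaluation table
```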