An Updated Review of Published Simulation Evaluation Instruments
Clinical Simulation in Nursing (2013) 9, e393-e400
www.elsevier.com/locate/ecsn

Featured Article

Katie Anne Adamson, PhD, RN (a,*); Suzan Kardong-Edgren, PhD, RN (b); Janet Willhaus, MSN, RN (c)

(a) University of Washington Tacoma, Tacoma, WA 98402-3100, USA
(b) Boise State University, Boise, ID, USA
(c) Washington State University, Spokane, WA 99210, USA
* Corresponding author: [email protected] (K. A. Adamson)

KEYWORDS: evaluation; instrument; simulation

Abstract: Interest in simulation as a teaching and evaluation strategy in nursing education continues to grow. Mirroring this growth, we have seen a proliferation of instruments designed to evaluate simulation participant performance. This article describes two frameworks for categorizing simulation evaluation strategies and provides a review of recent simulation evaluation instruments. The review focuses on four instruments that have been used extensively in the literature, objective structured clinical examinations (OSCEs), including four OSCE instruments, and an extensive list of new instruments for simulation evaluation.

Cite this article: Adamson, K. A., Kardong-Edgren, S., & Willhaus, J. (2013, September). An updated review of published simulation evaluation instruments. Clinical Simulation in Nursing, 9(9), e393-e400. http://dx.doi.org/10.1016/j.ecns.2012.09.004

© 2013 International Nursing Association for Clinical Simulation and Learning. Published by Elsevier Inc. All rights reserved.

Simulation use continues to grow and develop in nursing and other programs educating health care providers around the world. DeVita (2009) argues that simulation should be a core educational strategy because it is "measurable, focused, reproducible, mass producible, and importantly, very memorable" (p. 46). Both the National Council of State Boards of Nursing and the National League for Nursing are conducting research about the use of simulation as a teaching and evaluation method (Hayden, 2011; Rizzolo, Oermann, Jeffries, & Kardong-Edgren, 2011). However, Tanner (2011) recently noted how "little investment there has been in developing suitable measures for the assessment of learning outcomes, particularly those relevant for a practice discipline" (p. 491). Recent reviews of the literature in nursing (Davis & Kimble, 2011; Yuan, Williams, Fang, & Ye, 2012), pharmacy (Bray, Schwartz, Odegard, Hammer, & Seybert, 2011), and medicine (Kogan, Holmboe, & Hauer, 2009) echo a continued quest for meaningful ways to evaluate participants in simulation activities.

In response to repeated requests for an updated and expanded list of evaluation instruments, this article provides a follow-up to the original instrument review article (Kardong-Edgren, Adamson, & Fitzgerald, 2010). The purposes for this article include (a) discussing existing frameworks for categorizing simulation evaluation strategies and (b) using an adaptation of these frameworks to provide the following:

1. An update on four instruments from our original review that have been cited extensively in the literature
2. A review of objective structured clinical examinations (OSCEs), including the development of four OSCE instruments in undergraduate nursing education
3. A report on instruments that are either new or were not included in the original instrument review article (Kardong-Edgren et al., 2010) and that are appropriate for simulation evaluation

Key Points
- Translational science research and Kirkpatrick's (1994) levels of evaluation are two useful frameworks for categorizing simulation evaluations.
- Existing evaluation instruments continue to be adapted and applied.
- New evaluation instruments continue to emerge.

Frameworks for Categorizing Evaluation Strategies

Two useful frameworks that have emerged to categorize various evaluation strategies are translational science research (TSR; McGaghie, Draycott, Dunn, Lopez, & Stefanidis, 2011) and Kirkpatrick's (1994) levels of evaluation. The following is a brief overview of these frameworks, which will be used to categorize instruments in following sections.

TSR

The National Institutes of Health (2011, 2012) describe translational research as a continuum on which scientific discoveries move from preclinical (or bench) research to practical applications in patient care at the bedside and ultimately affect health care outcomes. In short, TSR can be thought of as research that takes new knowledge from "bench to bedside and beyond." Nomenclature in the field of TSR is somewhat contested (Woolf, 2008). However, the concept is highly applicable to simulation evaluation research. For the purposes of this article, we are adopting the overview provided by Dougherty and Conway (2008) and applied to simulation evaluation by McGaghie et al. (2011).

Translation Phase 1 designates preclinical activities (Woolf, 2008) that are meant to assess the efficacy of care. Relating this to simulation, we might say that this level of research demonstrates, in the simulation lab, whether students have learned something. Translation Phase 2 designates activities meant to assess who benefits from care. Relating this to simulation, we might say that these activities demonstrate whether what students learned in the simulation lab carries over to the actual patient care setting. Finally, Translation Phase 3 designates activities that are meant to assess whether improved care yields improved outcomes in the broader health care arena. Relating this to simulation, we might say that these activities demonstrate whether what was learned in the simulation lab and demonstrated in the patient care setting results in improved health outcomes. Phases 1 to 3 help describe the quality and applicability of simulation evaluation activities, with Phase 3 activities being the pinnacle of research because they describe how simulation affects health outcomes.

Kirkpatrick's Levels of Evaluation

In a similar fashion, Kirkpatrick's (1994) four levels of evaluation are helpful in describing what type of evidence different simulation evaluation strategies produce. The four levels, reaction, learning, behavior, and outcomes, are described in Figure 1, using language from Boulet, Jeffries, Hatala, Korndorffer, Feinstein, and Roche (2011, p. S50), along with the corresponding TSR nomenclature. In this combination of the TSR and Kirkpatrick frameworks for describing types of simulation evaluation evidence, learning at Level 2 (Translation Phase 1) may be subdivided into affective, cognitive, and psychomotor learning. Also, Kirkpatrick's Level 1, reaction, is not applicable to translational research.

Figure 1. T-1, Translation Phase 1; T-2, Translation Phase 2; T-3, Translation Phase 3; T-0, not applicable to translational research.

Review of Simulation Evaluation Instruments

The following sections include an update regarding four instruments from our original review, a review of OSCEs, and a report on instruments that either are new or were not included in the original instrument review article. They refer to Tables 1 through 3, which indicate TSR phases and Kirkpatrick's levels.

Update on Instruments From Original Review

Four instruments from our original review have been used repeatedly to evaluate performance and learning and are reported in the literature: the Sweeny-Clark Simulation Performance Evaluation Tool (Clark (2006) Tool©); the Clinical Simulation Evaluation Tool (CSET; Radhakrishnan, Roche, & Cunningham, 2007); the Lasater Clinical Judgment Rubric (LCJR©; Lasater, 2007); and the Creighton Simulation Evaluation Instrument (C-SEI™; Todd, Manz, Hawkins, Parsons, & Hercinger, 2008), subsequently modified to create the Creighton Competency Evaluation Instrument (Hayden, Kardong-Edgren, Smiley, & Keegan, 2012, "Reliability and validity testing of the Creighton Competency Evaluation Instrument [CCEI] for use in the NCSBN National Simulation Study," unpublished data). Table 1 provides an overview of each of these instruments.

The Sweeny-Clark Simulation Performance Evaluation Tool (Clark (2006) Tool©) has been modified for specific evaluation needs, but the authors have not described additional reliability or validity assessments of these modifications. Similarly, the CSET, developed and originally published by Radhakrishnan et al. (2007), has been modified to suit diverse performance evaluation needs, including simulated and standardized patient encounters involving congestive heart failure (Adamson, in press), myocardial infarction, and a chest wound (Grant, Moss, Epps, & Watts, 2010). Grant et al. (2010) report interrater reliability findings from their modification of the CSET. The LCJR (Lasater, 2007) has been used for a variety of purposes, including debriefing (Mariani, Cantrell, Meakim, Prieto, & Dreifuerst, in press) and evaluation of technical skills…

…of specific tasks and behaviors. Students rotate through each station, where performance is assessed with checklists and rating scales. Stations may include the use of task trainers, standardized patients, manikins, screen-based simulation, and role playing. Four of the articles reviewed for…