Supplement

NCSBN Regulatory Guidelines and Evidence-Based Quality Indicators for Nursing Education Programs

Nancy Spector, PhD, RN, FAAN, Director, Regulatory Innovations, National Council of State Boards of Nursing

Josephine Silvestre, MSN, RN, Senior Associate, Regulatory Innovations, National Council of State Boards of Nursing

Maryann Alexander, PhD, RN, FAAN, Chief Officer, Nursing Regulation, National Council of State Boards of Nursing

Brendan Martin, PhD, Associate Director, Research, National Council of State Boards of Nursing

Janice I. Hooper, PhD, RN, FRE, CNE, FAAN, ANEF, Lead Nursing Consultant for Education, Texas Board of Nursing

Allison Squires, PhD, RN, FAAN, Distinguished Nurse Scholar of the National Academy of Medicine 2019-2020; Associate Professor, Rory Meyers College of Nursing, New York University

Melissa Ojemeni, PhD, RN, Leadership Fellow, Partners in Health, Boston, MA

July 2020 • Volume 11 • Issue 2 Supplement

CONTENTS

EXECUTIVE SUMMARY ...... S3

PART I
Literature Review ...... S7

PART II
A National Mixed-Methods Study to Identify Quality Indicators and Warning Signs of Nursing Education Program Performance ...... S15
  A National Delphi Study to Determine Quality Indicators and Warning Signs of Nursing Education Program Performance ...... S15
    Methods ...... S15
      Defining Consensus ...... S15
      Study Sample Selection ...... S15
      Procedure ...... S17
      Statistical Analysis ...... S19
    Results ...... S19
    Discussion ...... S20
      Quality Indicators ...... S20
      Warning Signs ...... S21
      Performance Metrics ...... S21
      Achieving Consensus ...... S22
    Limitations ...... S22
    Conclusions ...... S22
    Acknowledgment ...... S22
  A Quantitative Analysis of 5 Years of BONs Annual Report Documents ...... S23
    Methods ...... S23
      Data Collection ...... S23
      Variables ...... S23
      Statistical Methods ...... S25
    Results ...... S25
      Program Demographic Characteristics ...... S25
      Faculty Characteristics Related to Full Approval ...... S27
      Program Characteristics Related to Full Approval ...... S27
      Relationship Between NCLEX Pass Rates and Faculty Characteristics ...... S28
      Program Characteristics Related to NCLEX Pass Rates ...... S29
    Discussion ...... S30
    Limitations ...... S31
    Conclusion ...... S31
  A Qualitative Analysis of 5 Years of BONs Site Visit Documents ...... S32
    Methods ...... S32
      Document Sample Inclusion and Exclusion Criteria ...... S32
      Data Analysis ...... S33
    Results ...... S34
      Emerging Themes ...... S35
      Categorical Findings Associated With the Theoretical Framework ...... S36
    Limitations ...... S38
    Conclusion ...... S38
  SUMMARY ...... S39

PART III
NCSBN Guidelines for Nursing Education Program Approval ...... S42
  Quality Indicators ...... S43
  Warning Signs ...... S44
  Supportive Evidence for the Approval Guidelines ...... S45

REFERENCES ...... S47

APPENDICES ...... S50
  Appendix A: Definition of Terms ...... S50
  Appendix B1: The Johns Hopkins Evidence Levels and Quality Ratings ...... S51
  Appendix B2: Evidence-Based Publications and Key Findings for Nursing Education Performance Metrics ...... S52
  Appendix C: Final Codebook for Program Analysis ...... S60
  Appendix D: Site Visit Template ...... S60
  Appendix E: Annual Report Core Data Template ...... S62

Mission
The Journal of Nursing Regulation provides a worldwide forum for sharing research, evidence-based practice, and innovative strategies and solutions related to nursing regulation, with the ultimate goal of safeguarding the public. The journal maintains and promotes National Council of State Boards of Nursing's (NCSBN's) values of integrity, accountability, quality, vision, and collaboration in meeting readers' knowledge needs.

Manuscript Information
The Journal of Nursing Regulation accepts timely articles that may advance the science of nursing regulation, promote the mission and vision of NCSBN, and enhance communication and collaboration among nurse regulators, educators, practitioners, and the scientific community. Manuscripts must be original and must not have been nor will be submitted elsewhere for publication. See www.journalofnursingregulation.com for author guidelines and manuscript submission information.

Letters to the Editor
Send to Maryann Alexander at [email protected].


EXECUTIVE SUMMARY
Board of nursing (BON) approval of nursing education programs is an integral part of BONs' mission of public protection. In the United States, nursing education programs are required to be approved by the BON* in the state where the program is officially located. The purpose of program approval is to ensure the program comprehensively covers the knowledge and skills students need to be licensed as nurses and to practice safely as new graduate nurses, thereby providing society a competent nurse workforce. To obtain BON approval, nursing programs must meet the nursing education standards established by their BON. Only students graduating from officially recognized and approved programs are permitted to take the NCLEX, the official nursing licensure examination in the United States and Canada (Spector & Woods, 2013). To determine whether graduates are eligible to take the NCLEX, BONs rely on verification from the nursing education program that each student has successfully completed all program requirements, including successfully meeting clinical learning objectives.

BONs offer two types of nursing education program approval: initial approval of new programs before they open for enrollment and ongoing monitoring and continued approval of established programs. For a new program, the approval process begins with an initial application and proposal to the BON. The BON conducts an extensive evaluation to ensure that the program has the proper facilities, resources, administration and faculty, curriculum, clinical agreements, policies, and procedures, among other requirements set forth in state regulations. The process for continued approval of established programs is based upon monitoring the programs' performance outcomes and compliance with BON rules over time (Spector et al., 2018).

BONs use different models for approving nursing programs, and nursing education rules and regulations are not always consistent across all jurisdictions. Most BONs hire graduate-prepared education consultants with experience in nursing education to make recommendations on the approval status of the nursing programs in their state. In a few states, the BON's executive officer and board members from the BON's education committee (or educators on the board) may make these recommendations. About half of the BONs make site visits as needed, while the other half make regular visits (National Council of State Boards of Nursing [NCSBN], 2016a). In most states, the approval status will be designated as (a) full approval when all requirements are met; (b) conditional or probationary approval when some, but not all, of the requirements are met; or (c) approval removal when programs fail to meet requirements or to correct cited deficiencies (Spector et al., 2018). The three most common performance outcome measures used by BONs and other health profession accreditors are employment rates, graduation rates, and NCLEX pass rates (Spector et al., 2018).

Although BONs use different models for approving nursing programs, the approval process is well recognized overall. Questions have arisen from nursing education experts regarding valid measures of nursing program quality. One such question concerns the use of NCLEX pass rates (Bernier et al., 2005; Giddens, 2009; Taylor et al., 2014). Of the 36 states that require a minimum first-time pass rate,** 64% require an 80% pass rate (NCSBN, 2019).
BONs are also questioning whether this method of measuring program performance is appropriate and asking whether other metrics exist that should amend or replace the current regulatory standards, which are set by each state. Starting in 2017, NCSBN conducted a large mixed-methods study to answer these questions and, more specifically, to identify the quality indicators of approved nursing education programs and the warning signs indicating a program may be falling below required standards for approval.

* In Mississippi, the registered nurse programs are approved by the Mississippi Institutions of Higher Learning and the practical nursing programs are approved by the BON. In New York, the programs are approved by the New York Board of Regents. In Idaho, programs are approved as long as they are accredited by a national nursing accrediting agency recognized by the U.S. Department of Education, though the BON takes over if that accreditation is lost.
** Fifteen states require the national pass rate or a percentage thereof.

Methods and Selected Findings
The study consists of a comprehensive literature review; a national Delphi study providing data on consensus of experts in nursing education, regulation, and practice; an analysis of 5 years' worth of BON annual reports of nursing programs; and an analysis of 5 years' worth of BON site visit documents.

Literature Review The literature review yielded 65 relevant published articles that were reviewed and graded using the Johns Hopkins Levels of Evidence and Quality Guide. Overall, the literature review revealed a number of quality indicators and warning signs that may serve as metrics for evaluating higher education programs, although the identified articles did not provide the levels of evidence needed for making policy decisions (Loversidge, 2016).

National Delphi Study Identifying Quality Indicators and Warning Signs of Nursing Education Program Performance
For our national Delphi study, experts in nursing education, regulation, and practice provided consensus data on nursing education quality indicators, warning signs that programs are beginning to fall below standards, and performance outcome measures of nursing education programs. Consensus among the experts was reached after 2 rounds of discussion. This Delphi study identified 18 quality indicators (characteristics of nursing programs that graduate safe and competent students), 11 warning signs that nursing programs are beginning to fall below standards, and 8 program performance outcomes that nursing regulatory bodies could measure. The quality indicators fall into the categories of (a) school leadership and faculty support; (b) consistent and competent faculty; (c) quality, hands-on clinical experiences with meaningful collaboration with clinical partners; and (d) an evidence-based curriculum emphasizing quality and safety and critical thinking/clinical reasoning. Although most warning signs mirror the quality indicators, additional ones are of interest, including over-reliance on simulation to replace clinical experiences and refusal of clinical facilities to host clinical experiences. There were few surprises among the outcomes that were identified (NCLEX pass rates, graduation rates, employment rates, etc.).

A Quantitative Analysis of 5 Years of BONs Annual Report Documents
This retrospective cohort study examined 11,378 annual reports collected over a 5-year period (2012-2017) by 43 BONs. This quantitative analysis examined data contained within the BONs' annual reports to identify indicators associated with full approval of nursing education programs and those associated with programs that have lost approval. Statistically significant characteristics of approved programs and those with ≥80% NCLEX pass rates included (a) national accreditation, (b) traditional or hybrid modalities, (c) longer-standing programs, (d) higher enrollment capacity, (e) multiple program sites, (f) private nonprofit or public institutions, (g) a program director with a PhD, (h) practical nurse/licensed vocational nurse (LPN/LVN) and bachelor of science in nursing (BSN) programs, and (i) no more than three program directors in 5 years. A marginally significant finding was that programs with more than 35% full-time faculty had ≥80% first-time NCLEX pass rates and full approval. Importantly, the NCLEX was viewed as a lagging indicator in this study, meaning that lower licensure examination performance was considered indicative of other program deficiencies, not vice versa.

A Qualitative Analysis of 5 Years of BONs Site Visit Documents
This qualitative study of 5 years of BONs' site visit documents was conducted to better understand the qualitative descriptors of why programs become at risk of failing or do fail. After the inclusion/exclusion criteria were applied, 1,278 site visit reports for LPN/LVN and registered nurse (RN) programs were eligible for the analysis, which included documents from programs that were on probation, under review, or did not have full approval. Two researchers used MAXQDA qualitative data analysis software to analyze the documents. The site visit documents yielded considerable, specific data on what causes nursing programs to begin to fail or to fail outright. The main signal for a "site visit trigger" was NCLEX pass rates ≤80% for four or more quarters, though the length of time it took to trigger a site visit related to NCLEX performance concerns varied by state regulations. Administrative process problems, such as a lack of policies and procedures, were found to be problematic for nursing programs. High faculty turnover and the inability to recruit qualified faculty were linked to poor NCLEX performance. Faculty with little training in basic pedagogy were a persistent theme in failing programs. Similarly, heavy faculty workloads and limited faculty development opportunities were also identified. Many failing programs had no overarching philosophy and curricular framework tying the curriculum together; this gap resulted in task-oriented curricula masquerading as "competency-based" curricula. The issues identified in this study align closely with the data found in the literature, our Delphi study, and our quantitative study of annual reports.

Guideline Development
After all the evidence was collected, NCSBN invited experts from nursing regulation, education, research, and law to review the data and findings and to develop guidelines, incorporating evidence-based criteria, quality indicators, and warning signs, for BONs to use when approving nursing education programs.

Conclusion
This study provides substantial evidence-based criteria for identifying quality indicators of successful nursing education programs as well as warning signs of high-risk programs. The quality indicators and warning signs can serve as the basis for legally defensible and evidence-based guidelines for nursing education approval. It is hoped that these guidelines will enhance collaboration between educators and regulators. Together, they will be able to use the quality indicators to guide nursing programs toward approval and to identify warning signs when a nursing program is beginning to fall below standards. This early intervention will assist nursing programs in acting before BON sanctions or program closures occur, thus continuing to graduate safe and competent nurses, in adequate numbers, to care for patients.


Keywords: Nursing education program approval, nursing education quality indicators and predictors, NCLEX pass rates.

In the United States, prelicensure nursing education programs are required to be approved by the BON in the state where the program is officially located.* This approval process begins with an initial application and extensive proposal to the BON, which performs an extensive evaluation ensuring the program has the proper facilities, resources, administration and faculty, curriculum, clinical agreements, policies, and procedures, among many other requirements set forth in state regulations. Once the program is approved, the BON continually monitors the program. The monitoring process consists of overseeing NCLEX pass rates and may include other metrics such as student retention and/or graduation rates. Additionally, many BONs periodically conduct formal site visits to the program. A common cause of conditional approval or loss of approval is a drop below the required NCLEX pass rate.

Although the approval process is well recognized at the state level, there are questions regarding the prelicensure nursing program approval criteria, particularly the use of NCLEX pass rates as the sole criterion for ongoing approval. Of the 57 BONs that regulate nursing education programs surveyed in NCSBN's Member Board Profiles, 36 use first-time NCLEX pass rates; of those, 64% use the 80% pass rate as their standard. Six do not use pass rates as a performance measure, and another 15 use the national pass rate or a percentage thereof (NCSBN, 2019). Questions remain among nurse educators as to whether first-time NCLEX pass rates, when used alone, are valid measures of nursing education program quality (Bernier et al., 2005; Giddens, 2009; Taylor et al., 2014). BONs also are asking whether other evidence-based metrics exist that should amend or replace the current regulatory standards set by each state. Therefore, in 2017, NCSBN embarked on a 3-year journey to identify the evidence needed to answer these questions.

This report presents a literature review that found there is currently little evidence with the rigor needed to support quality indicators of nursing education. Additionally, it details a three-part, mixed-methods national study that NCSBN conducted to identify quality indicators of nursing education programs, as well as warning signs that programs are beginning to fall below standards. From this large study, consisting of three national studies using very different research methodologies (Delphi, quantitative, and qualitative), nursing education approval guidelines were developed. This groundbreaking work provides nursing regulators with evidence-based and legally defensible tools for approving programs. In their mission of public protection, regulators will be able to first identify warning signs when nursing programs are beginning to fall below standards. They can then use the quality indicators to guide programs before sanctions or program closures occur, thus continuing to graduate safe and competent nurses, in adequate numbers, to care for patients. Additionally, nurse educators will find this evidence valuable as they plan for and evaluate their programs. Definitions of terms used in this study are provided in Appendix A.

* In Mississippi, the registered nurse programs are approved by the Mississippi Institutions of Higher Learning and the practical nurse programs are approved by the BON. In New York, the programs are approved by the New York Board of Regents. In Idaho, programs are approved as long as they are accredited by a national nursing accrediting agency recognized by the U.S. Department of Education, though the BON takes over if that accreditation is lost.

Part I

Literature Review
The following three criteria relative to nursing program approval formed the basis of the literature review:
⦁ Use of NCLEX pass rates as a performance measure of prelicensure nursing programs.
⦁ Additional metrics used to measure performance of higher education programs and the supporting evidence.
⦁ Warning signs indicating a nursing program is falling below standards and at risk of losing BON approval.
Medline, PsycINFO, ERIC (Education Resources Information Center), and CINAHL (Cumulative Index of Nursing and Allied Health Literature) Complete were queried using the following keywords: (a) nursing education outcomes (and higher education outcomes); (b) nursing education (and higher education) quality indicators; and (c) predictors of nursing education (and higher education) quality. Because of the lack of literature in these areas, we also searched gray literature, which included publications related to nationally recognized expert reports from organizations, governmental agencies (e.g., U.S. Department of Education [USDE]), international nursing regulatory bodies, national healthcare regulatory and accreditation bodies, and national education workshops. Additionally, literature reviews, case reports, and opinions of nationally recognized experts based on experiential evidence were included. Our literature search also included U.S. and international databases from education and other related fields.

Citations from 65 relevant articles and reports were retrieved and reviewed. These articles and reports are summarized by type of publication, purpose, and key findings (Appendix B2) and rated according to the Johns Hopkins Levels of Evidence and Quality Guide (Dang & Dearholt, 2017) (Appendix B1). Two researchers (one external to NCSBN) rated the evidence levels separately and then came to consensus on the final rating.

Use of NCLEX Pass Rates as a Performance Measure of Prelicensure Nursing Programs
Regulatory bodies in other professions use licensure or certification pass rates to assess program performance, though not as the sole measure (Association of Specialized and Professional Accreditors, 2016; Barrett et al., 2016; The National Academies of Sciences, Engineering, and Medicine [NASEM], 2018). Barrett et al. (2016) stated that the Fundamentals of Engineering licensing examination should not be used to determine program content because the examination is meant to measure competency for licensure and the criterion is too broad to be effective in program improvement. They instead assert that more specific measures are needed (Barrett et al., 2016). Often, too much attention on examination pass rates leads to "teaching to the examination," which has been reported in nursing as well (Hickerson et al., 2016, p. 2).

The Association of Specialized and Professional Accreditors (2016) surveyed 45 specialized and professional accreditors and found that not all professions use licensure or certification examinations to measure education outcomes. They reported that while 84% of the professions have certification or licensure examinations, 64% of those accreditors require education programs to use pass rates as part of their self-assessments.

In 2018, NASEM held a workshop on graduate medical education outcomes and metrics (NASEM, 2018). The workshop participants agreed that test results are being used as outcomes to assess graduate medical education. However, they also acknowledged that measuring outcomes in graduate medical programs is complex, and they encouraged institutions to pilot other criteria and innovative ways to provide feedback to the programs and trainees.

Similarly, how does program use of standardized examinations, such as admission and course-related progression tests, relate to pass rates? Odom-Maryon et al. (2018), in a large national study of 832 nursing programs, found that higher NCLEX pass rates were associated with programs that did not use standardized examinations for either admission or progression. The investigators caution that they did not assess overall performance of the programs and that the standardized examinations may have been implemented by low-performing schools, in which case the association would have nothing to do with the influence of standardized tests on NCLEX pass rates. Similarly, Randolph's (2016) statewide study of 34 nursing programs found that when programs required a set score on an NCLEX predictor examination for graduation, NCLEX pass rates and on-time graduation rates were statistically significantly lower than in programs that did not have cut scores on predictor examinations. Randolph theorized that low-performing programs use predictor examinations to eliminate students who would fail the NCLEX and lower their pass rates. She further concluded that if BONs use NCLEX pass rates as the only metric, nursing programs that use predictor examinations as exit examinations could be falsely elevating their NCLEX pass rates because they are preventing lower performing students from taking the examination.

Many questions from faculty exist regarding the use of first-time pass rates as the primary metric. Foreman (2017), in a study of NCLEX pass rates from 2010-2014 for 1,792 programs across the United States, found that 28% of the programs that failed to meet their states' pass rate standards were within the 95% confidence interval (CI), meaning their 95% CIs included, and at times surpassed, the passing threshold. He concluded that it was perhaps by chance that these programs fell below the pass rate standard. For this reason, most BONs take action only after 2 or more years of below-standard pass rates.
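Foreman's reasoning can be made concrete with a short calculation. The following sketch is a minimal illustration of the general idea rather than the study's actual method: it treats a cohort's first-time results as a binomial sample and computes a Wilson score interval (one common choice) around the observed pass rate.

from math import sqrt

def wilson_ci(passes: int, candidates: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a program's true pass
    rate, treating observed first-time results as a binomial sample."""
    p_hat = passes / candidates
    denom = 1 + z**2 / candidates
    center = (p_hat + z**2 / (2 * candidates)) / denom
    half = (z / denom) * sqrt(p_hat * (1 - p_hat) / candidates
                              + z**2 / (4 * candidates**2))
    return center - half, center + half

# A cohort of 40 candidates with 30 passing: the observed rate (75%)
# falls below an 80% standard, yet the interval still includes 80%.
low, high = wilson_ci(30, 40)
print(f"observed 75.0%, 95% CI {low:.1%} to {high:.1%}")  # ~59.8% to 85.8%

On this logic, a single below-threshold result from a small cohort is weak evidence of a true decline in quality, which is consistent with BONs waiting for 2 or more years of below-standard rates before acting.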

The USDE recommends that higher education use employment rates and graduation rates in addition to licensure or certification examination pass rates because these metrics are necessary for graduates to enter the workforce (The Secretary's Recognition of Accrediting Agencies, 2009). The national nursing accreditors (Accreditation Commission for Education in Nursing, 2017; Commission on Collegiate Nursing Education, 2018; National League for Nursing, 2016) and other U.S. healthcare accreditors use the USDE's requirements for their outcome metrics. While these outcomes have face validity, as new nurses must graduate, pass the NCLEX, and become employed in order to enter the workforce, there is little evidence that these outcomes are indicators of program quality (Spector et al., 2018). Table 1 provides a comparison of standards from nursing and other healthcare accreditors.

TABLE 1
Comparison of Healthcare Professions' Accreditation Standards by Accrediting Agency

Accreditation Commission for Education in Nursing (2019)
⦁ Licensure examination: ≥80% first-time pass rate on NCLEX in a 12-month period
⦁ Completion/graduation/retention rates: Unique to program; faculty set target rates based on program demographics
⦁ Student feedback: Previously required but no longer specified
⦁ Job placement rate: Unique to program; faculty set target rates based on program demographics
⦁ Employer evaluation: No longer required due to difficulty in collecting data
⦁ Length: Initial accreditation is 5 y; continuing accreditation is 8 y; annual reports and substantive change reports must be filed

Commission on Collegiate Nursing Education (2018)
⦁ Licensure examination: ≥80% pass rate on NCLEX
⦁ Completion/graduation/retention rates: ≥70% completion rate
⦁ Student feedback: Student satisfaction data optional
⦁ Job placement rate: ≥70% in 12-month period
⦁ Employer evaluation: Employer satisfaction data optional
⦁ Length: Initial accreditation is up to 5 y; continuing accreditation is up to 10 y; annual reports and substantive change reports must be filed

National League for Nursing Commission for Nursing Education Accreditation (2016)
⦁ Licensure examination: ≥80% first-time pass rate on NCLEX over 3-y period
⦁ Completion/graduation/retention rates: Programs set target rates based on unit, demographics, etc.
⦁ Student feedback: Students express satisfaction with program effectiveness
⦁ Job placement rate: Programs set target rates based on unit, demographics, etc.
⦁ Employer evaluation: Employers express satisfaction with program effectiveness
⦁ Length: Initial approval is 6 y with mid-cycle report due after first 3 y; continuing accreditation is granted for up to 10 y with mid-cycle report due at 5 y; annual reports and substantive change reports must be filed

Liaison Committee on Medical Education (2019)
⦁ Licensure examination: Performance on USMLE compared to national data for all medical schools and medical students
⦁ Completion/graduation/retention rates: Required but no rate specified
⦁ Student feedback: AAMC Graduation Questionnaire
⦁ Job placement rate: Residency matching through the NRMP
⦁ Employer evaluation: Assessment of graduates' full residency performance
⦁ Length: 8-y cycle after second survey visit

Accreditation Council for Occupational Therapy Education (2019)
⦁ Licensure examination: ≥80% pass rate on NBCOT examination over 3-y period for graduates attempting the examination within 12 months of graduation
⦁ Completion/graduation/retention rates: Required in 3-y reporting period but no rate specified
⦁ Student feedback: Student satisfaction with the program
⦁ Job placement rate: Required but rate not specified
⦁ Employer evaluation: Graduates' performance as determined by employer satisfaction
⦁ Length: Initial approval is 5 y followed by 7-y cycle

Commission on Accreditation in Physical Therapy Education (2017)
⦁ Licensure examination: ≥85% pass rate on NPTE averaged over 2 y
⦁ Completion/graduation/retention rates: ≥80% graduation rate averaged over 2 y
⦁ Student feedback: Not addressed
⦁ Job placement rate: ≥90% employment rate averaged over 2 y
⦁ Employer evaluation: Not addressed
⦁ Length: Initial approval is 5 y followed by 10-y cycle

Accreditation Council for Pharmacy Education (2015)
⦁ Licensure examination: First-time performance on NAPLEX compared to national average first-time pass rate
⦁ Completion/graduation/retention rates: Required but no rate specified
⦁ Student feedback: AACP Graduating Student Survey
⦁ Job placement rate: Not addressed
⦁ Employer evaluation: Not addressed
⦁ Length: Initial approval is 2 y followed by 8-y cycle

Note. USMLE = United States Medical Licensing Examination; AAMC = Association of American Medical Colleges; NRMP = National Resident Matching Program; NBCOT = National Board for Certification of Occupational Therapy; NPTE = National Physical Therapy Examination; NAPLEX = North American Pharmacist Licensure Examination; AACP = American Association of Colleges of Pharmacy.
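To make the differences in Table 1 concrete, the following sketch encodes a few of the headline licensure-examination standards as a threshold plus an averaging window and checks a hypothetical program record against each. The yearly rates are invented, and details such as ACEN's 12-month reporting period are simplified to a 1-year window.

from statistics import mean

# Hypothetical yearly first-time pass rates for one program, oldest first.
yearly_rates = [0.82, 0.79, 0.86]

# Simplified standards from Table 1: (threshold, averaging window in years).
STANDARDS = {
    "ACEN (NCLEX)": (0.80, 1),
    "NLN CNEA (NCLEX)": (0.80, 3),
    "CAPTE (NPTE)": (0.85, 2),
    "ACOTE (NBCOT)": (0.80, 3),
}

for agency, (threshold, window) in STANDARDS.items():
    avg = mean(yearly_rates[-window:])  # average over the agency's window
    verdict = "meets" if avg >= threshold else "falls below"
    print(f"{agency}: {avg:.1%} over {window} y {verdict} {threshold:.0%}")

The same record can satisfy one agency's standard while missing another's, which is precisely the variation the table documents.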

Additional Metrics Used to Measure Performance of Higher Education Programs
Academia is changing what it considers performance metrics in undergraduate education. The NASEM workshop also explored the quality of undergraduate education and found that student performance outcomes, rather than program inputs (i.e., student, faculty, and program characteristics), were the most appropriate indicators of education quality (NASEM, 2016, 2018). In the past, input measures such as faculty-student ratios, expenditures, or student test scores were used as metrics of education; however, many data elements measuring performance outcomes are not comparable across institutions due to different conceptual definitions and populations (NASEM, 2016, pp. 57-80). Institutional or program quality is multidimensional and subjective because students and the public expect different results.

Employment Rates
Although widely used as measures of institutional quality, the validity of employment rates as a metric of education performance has been debated (Ferrante, 2017; NASEM, 2016, pp. 57-80; Spector et al., 2018; Taylor et al., 2014). Personal communications with representatives of national accreditors (February 28 and March 1, 2017) confirmed that employment rates are viewed as the least valid measure of quality, despite widespread use. While it may be presumed that a higher quality program produces a higher number of employed graduates, there is no sound method of using employment rates as a proxy for program performance without accounting for geographic differences in labor markets (NASEM, 2016, pp. 57-80). Additionally, a graduate can be newly employed only to be terminated after a few weeks for incompetence (Spector et al., 2018).

In a descriptive study of 5,182 engineering students in Italy, researchers measured both incoming quality and outgoing performance and determined that if employment rates are used to measure institutional quality, then assessors need to collect data on the average regional unemployment rate for the age group, in both the region of student residence and the region of the institution (Ferrante, 2017). Ferrante suggested that such data collection requires a significant investment of time and resources, one that yields little return because employment rates are mostly used with other quality measures.

Feeg and Mancino (2016, 2018) provided evidence that the changing job market, which nursing programs cannot control and which varies by region and with the U.S. economy, has a large impact on employment rates. For example, the U.S. economy negatively affected the job market from 2009 to 2012; the decreasing rates of new graduate employment from 2008 to 2010 (Feeg & Mancino, 2016) reflected the economy and not the quality of the nursing programs. Similarly, new graduate employment rates in the West and Northeast regions of the United States tend to be significantly lower than those in the South and Midwest (Feeg & Mancino, 2016, 2018). These variables need to be accounted for if nursing relies on employment rates as a measure of quality.

Currently, only seven U.S. nursing regulatory bodies report using employment data in their approval processes (Nursing Education Outcomes and Metrics Committee, 2017). Given the difficulty of obtaining the data and the fact that employment rates reflect regional economics and job availability, using employment rates as a metric for BON approval is not recommended because it is burdensome and without evidence of validity as a measure of education performance (Spector & Woods, 2013).

Graduation Rates
Another common metric used to measure institutional quality is graduation rates, which are considered a more valid assessment tool than employment rates (Cohen & Ibrahim, 2008; Giddens, 2009; NASEM, 2016, pp. 57-80; Randolph, 2013; Reyna, 2010; Wellman et al., 2012). In nursing, the Commission on Collegiate Nursing Education (2018) requires a 70% graduation (or completion) rate with some exceptions. The National League for Nursing Commission for Nursing Education Accreditation (2016) allows programs to set their own benchmarks and reach them for 3 averaged academic years. Similarly, the Accreditation Commission for Education in Nursing (2019) allows the faculty to establish an expected level of completion that reflects student demographics (Table 1). According to an NCSBN survey of education consultants (Nursing Education Outcomes and Metrics Committee, 2017), 17 U.S. nursing regulatory bodies use graduation rates in their approval requirements.

How graduation rates are calculated is a point of debate. A common calculation is that of the Integrated Postsecondary Education Data System (IPEDS), which is also used to measure the quality of an institution. The IPEDS calculation counts only those students who enroll in an institution as full-time degree-seekers and finish a degree at the same institution within a prescribed period. IPEDS ignores certain students, such as nontraditional (i.e., older) students, transfer students, part-time students, and students who enroll mid-year (Cook & Hartle, 2011; NASEM, 2016). This calculation gap risks incentivizing programs to implement more selective admissions policies, prioritizing students who are more likely to maintain the full-time, 6-year-or-less graduation track. Conversely, institutions that admit student populations not captured by the current calculation are at a disadvantage because the system does not count all graduates. Graduation rates should account for all students if they are going to be used as a metric of program performance.

Another method for calculating graduation rates, proposed by Cohen and Ibrahim (2008), is the graduation efficiency metric. This metric emerged out of the perceived problems with standard graduation rates and attempts to capture more of the student population, including part-time students and transfer students. To capture these students in the calculation, the graduation efficiency metric measures an institution's "production of graduates" in relation to the size of its full-time equivalent undergraduate student body, adjusted for the balance of beginning and transfer students. Graduation rate calculations that account for different types of students (e.g., students who leave programs for maternity leave, illness, family issues, etc.) quickly become much more complex than the standard calculation. The graduation efficiency metric, however, is relatively straightforward and can be calculated from existing data that schools already collect. Moreover, this metric does not encourage institutions to turn away part-time students and transfer students in order to increase raw graduation rates under the traditional calculation.

With a goal of holding institutions accountable for their graduation rates, DeAngelo et al. (2011) studied 210,056 students in 356 nonprofit, 4-year-degree institutions, merging data from the National Student Clearinghouse and freshman surveys.
They found that student characteristics, rather than the institution's characteristics, had an impact on student outcomes. Using multiple regression models, they could predict graduation rates based on students' high school grades, SAT scores, race/ethnicity, and gender, as well as a variety of other indicators obtained from the freshman surveys (e.g., volunteer work in high school, student finances, parental background, working full-time). Students living off campus during their first year had 35% lower odds of finishing their degree than students living in campus residence halls. Additionally, they found that 4-year degree attainment for public universities would increase to 140% if they admitted students with the same characteristics as those admitted to private universities. They concluded that differences in graduation rates among higher education institutions are largely attributable to the profiles and characteristics of their incoming students. This study demonstrates that graduation rates may have little to do with a school's performance but rather are impacted by student characteristics.
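The two graduation-rate calculations described above differ mainly in who gets counted. The following sketch contrasts an IPEDS-style cohort rate with a simple graduates-to-FTE ratio in the spirit of the graduation efficiency metric; it illustrates the idea only, since Cohen and Ibrahim's actual formula additionally adjusts for the balance of beginning and transfer students, and the numbers here are invented.

def ipeds_style_rate(ftft_entrants: int, ftft_grads_within_period: int) -> float:
    """Share of the first-time, full-time cohort finishing a degree at
    the same institution within the prescribed period."""
    return ftft_grads_within_period / ftft_entrants

def efficiency_style_ratio(total_grads: int, fte_undergrads: float) -> float:
    """Annual 'production of graduates' relative to the full-time
    equivalent undergraduate student body."""
    return total_grads / fte_undergrads

# A school admitting many transfer and part-time students: the cohort
# rate looks poor, while the efficiency ratio counts graduates whom the
# IPEDS calculation ignores.
print(ipeds_style_rate(200, 110))          # 0.55
print(efficiency_style_ratio(180, 900.0))  # 0.20 graduates per FTE per year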

Retention
Another way of measuring program performance is to shift the focus from raw rates of completion (using the number of degrees awarded to the population of first-time, full-time students who graduate from the institution that admitted them) to rates of student retention (Cohen & Ibrahim, 2008). The rate of retention is defined by the National Governors Association (Reyna, 2010) as the number and percentage of entering undergraduate students who enroll consecutively from fall to spring and fall to fall at an institution of higher education. Retention rates are also known as persistence rates (Papes & Lopez, 2007).

As Papes and Lopez (2007) suggested, rates of retention should be approached by asking the general question, "What proportion of a university's nursing students graduate with a nursing degree within the typical time frame plus 50%?" The additional 50% is added to account for students who take 6 years to graduate. The typical time frame can be adjusted depending on the type of nursing program and track option (e.g., baccalaureate, accelerated). Furthermore, the time frame can be set according to the date a given student took their first nursing course at the institution and the date the degree was awarded. Students who do not graduate should also be counted. How to set the standard for each nursing program option requires discussion and modification among stakeholders until the data accurately report retention/persistence. Papes and Lopez also suggested that, from an assessment perspective, a single low persistence rate may be considered an aberration, but a declining rate should be taken seriously and considered a warning sign.
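Papes and Lopez's question translates into a simple calculation. The sketch below is a minimal reading rather than the authors' specification: it counts graduates who finish within the typical program length plus 50% and keeps non-completers in the denominator. The dates, program lengths, and record structure are illustrative assumptions.

from datetime import date

TYPICAL_YEARS = {"baccalaureate": 4, "associate": 2}  # assumed typical lengths

def within_window(first_course: date, degree_awarded: date, option: str) -> bool:
    window_years = TYPICAL_YEARS[option] * 1.5  # typical time frame plus 50%
    elapsed = (degree_awarded - first_course).days / 365.25
    return elapsed <= window_years

# Records: (date of first nursing course, degree date or None if no degree).
records = [
    (date(2014, 9, 1), date(2018, 5, 15)),  # finished in under 4 years
    (date(2014, 9, 1), date(2021, 5, 15)),  # finished, but outside 6 years
    (date(2014, 9, 1), None),               # never graduated; still counted
]
completers = sum(
    1 for start, end in records
    if end is not None and within_window(start, end, "baccalaureate")
)
print(f"persistence rate: {completers / len(records):.0%}")  # 33%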

In a study of 489 public and 820 private nonprofit universities, researchers investigated retention rates from 2003 to 2013 (Eberle-Sudré et al., 2015) and found that universities with students of similar profiles had differing retention rates. The researchers concluded that what universities do above and beyond traditional teaching methods can influence retention rates. For example, San Diego State University employed several strategies to improve retention rates: it partnered with local junior high and high schools to connect students to college earlier, pushed all students to carry a minimum of 15 credit hours, instituted proactive advising and degree planning, fostered communities for first-year students, and used data to improve curricula. As a result, San Diego State University vastly improved retention of underrepresented students (Eberle-Sudré et al., 2015). These results add another perspective to DeAngelo et al.'s (2011) graduation rate findings previously discussed. Student profiles and characteristics, as well as strategies that supplement traditional teaching methods, influence retention and graduation rates.

Odom-Maryon et al. (2018) also found multiple factors not directly related to teaching that influenced outcomes on the NCLEX. They conducted a national study of 832 nursing programs in the United States and compared program, faculty, and curriculum characteristics to NCLEX pass rates. Using multilevel modeling and regression analyses and controlling for variables, they found a statistically significant increase in NCLEX first-time pass rates with public schools, semester schedules, larger admission cohorts, more students per didactic faculty, and a higher percentage of full-time faculty. There were no statistically significant findings associated with the use of simulation, integrated curricula (i.e., specialties not offered as separate courses), online learning environments, individual course grades, minimal course grades, clinical evaluations, or allowing students to repeat courses (Odom-Maryon et al., 2018).

To this point, studies and experts have examined program quality in terms of licensure examination passage and the employment, graduation, and retention rates of students. The literature examining clinical experiences, however, shifts the quality discussion to the production of graduates who are prepared to safely care for patients.

Quality Clinical Experiences
There is consensus in the international literature that quality, direct-care clinical experiences with actual patients are the foundation of quality nursing education (Beauvais et al., 2017; Benner et al., 2010; Candela & Bowles, 2008; El Haddad et al., 2017; Hungerford et al., 2019; Jamshidi et al., 2016; Kavanagh & Szweda, 2017; Killam et al., 2011; NCSBN, 2005; Spector et al., 2018) and of other professional programs (Luhanga et al., 2014). Similarly, nurse regulators recognize clinical experiences with actual patients as an integral part of the program approval process in the United States and Canada (Alexander, 2019; College of Nurses of Ontario, 2018; Hooper & Ayars, 2017).

A suggested quality indicator for nursing education is the number of clinical hours required in a nursing curriculum, although this indicator has not been extensively studied. In an integrative literature review of 50 articles, Hickerson et al. (2016) reported that novice nurses believed nursing programs should devote more hours to clinical experiences. Hungerford et al. (2019) conducted a scoping review of four countries' clinical hours and found disparity among clinical hour requirements, with little evidence to support any of them.* They suggest further research to determine the number of practice experience hours needed and how to ensure the practice hours experienced are of a high quality (Hungerford et al., 2019, p. 39).

Benner et al. (2010) recommend integrating clinical and classroom experiences. As part of a series of studies known as the Carnegie Foundation's Preparation for the Professions Program, Benner et al. used an ethnographic, interpretive, and evaluative design to study all aspects of nursing education. They researched nine programs they deemed to be excellent** at all prelicensure levels. The findings demonstrated that nursing programs that provided hands-on, interactive clinical experiences and integrated those experiences into the classroom had higher ratings of student satisfaction. A fragmented system in which clinical and classroom learning are not linked may not provide for comprehensive understanding and does not allow students to make astute clinical judgments (Benner et al., 2010; Kavanagh & Szweda, 2017).

Researchers in a statewide survey of 352 nurses (12% response rate) found that the majority of respondents reported needing more clinical hours in the nursing program to enhance their readiness for practice (Candela & Bowles, 2008). However, Kumm et al.'s (2016) study evaluating student outcomes under two different models of clinical immersion with senior nursing students had different results. Kumm et al. evaluated the difference between the original 16-week clinical immersion experience and a revised 8-week experience in preparing senior nursing students for practice by using the New Graduate Nurse Performance Survey (later renamed the Nursing Practice Readiness Tool) developed by the Nursing Executive Center. The survey evaluates new graduates in six distinct competency areas (i.e., clinical knowledge, technical skills, critical thinking, communication, professionalism, and management of responsibilities) using 36 items. The researchers found no statistically significant differences in the new graduate nurses' performance between the two clinical immersion models, suggesting that it is the quality of the experience that is important rather than the length of time. Similarly, El Haddad et al. (2017) cited literature from the 1970s (Sax, 1978) asserting that hospital-trained graduates in Australia had too much practice and not enough theory.
Today, the argument related to the education-practice gap seems to be the opposite. This suggests the discourse should change from the quantity of hours to the quality of the direct-care clinical experiences.

A number of studies in the United States (Beauvais et al., 2017; Berkow et al., 2008; Candela & Bowles, 2008; Hayden et al., 2014; Kavanagh & Szweda, 2017; Rusch et al., 2019; Spector et al., 2015) and in other countries (Cantlay et al., 2017; El Haddad et al., 2017; Hsu & Hsieh, 2013; Missen et al., 2016) have addressed preparation for practice by obtaining the input of practicing professionals and nursing graduates. The need for quality clinical hours, either in supervised clinical experiences with actual patients or in simulation, is a major research finding (Alexander et al., 2015; Beauvais et al., 2017; Candela & Bowles, 2008; El Haddad et al., 2017; Hayden et al., 2014; Kavanagh & Szweda, 2017). However, what are quality clinical hours, and how can BONs be sure that clinical experiences are providing the knowledge needed to prepare students for entry to practice? The following are cited in the literature as elements integral to a quality clinical experience:
⦁ Clinical decision making and reasoning (Benner et al., 2010; Candela & Bowles, 2008; Cantlay et al., 2017; Gonzalez, 2018; Kavanagh & Szweda, 2017; Killam et al., 2011, 2012; Rusch et al., 2019)
⦁ Effective delegation (Berkow et al., 2008; NCSBN, 2006)
⦁ Electronic data and information management (Beauvais et al., 2017; Candela & Bowles, 2008)
⦁ Emergency procedures (Cantlay et al., 2017)
⦁ Interprofessional communication (Beauvais et al., 2017; Killam et al., 2011; NCSBN, 2006)
⦁ Pharmacology knowledge (Berkow et al., 2008; Candela & Bowles, 2008; Jamshidi et al., 2016; NCSBN, 2006; Rusch et al., 2019)
⦁ Leadership (Beauvais et al., 2017; Candela & Bowles, 2008; Cantlay et al., 2017)
⦁ Time management and prioritization (Rusch et al., 2019)
⦁ Understanding pathophysiology (Berkow et al., 2008; NCSBN, 2006; Rusch et al., 2019).

* Australia requires 800 hours at the baccalaureate level, not including simulation. New Zealand requires 1,100 hours of clinical experience, with 360 hours in the final semester. The United Kingdom requires 2,300 hours of clinical experience. The United States has no required hours nationally (although a few states have requirements), but the national median is 712 hours for baccalaureate programs, 683 for diploma programs, and 573 for ADN programs (Smiley, 2019).
** Excellence was defined as follows: (a) reputation for teaching and learning; (b) high NCLEX pass rates; (c) recommended by either the NRB or the accrediting body; and (d) additional consideration given to geographic sampling and accommodating the school's calendar (Benner et al., 2010).

Nursing Program Curriculum
Rusch et al. (2019) conducted a descriptive exploratory study of 569 nursing student preceptors to determine student readiness for practice. Their results, along with those of Benner et al. (2010), suggest nursing programs need to place more emphasis on patient safety and integrate pharmacology more meaningfully throughout the program to have a truly high-quality program.

Developed by consensus of national nursing and healthcare experts, including input from nurse regulators, the Quality and Safety Education for Nurses (QSEN) competencies (Cronenwett et al., 2007) have been integrated into many U.S. nursing education programs as a foundation for professional competence and patient safety. No studies, however, have been conducted to determine whether these are quality indicators or whether programs that embed these competencies into their curriculum produce better prepared students than schools that do not.

Similarly, the provincial and territorial nurse regulators in Canada have developed the Jurisdictional Competency Process Entry-Level Registered Nurse Competencies (Canadian Council of Registered Nurse Regulators, 2018).* While the domains have different labels, the QSEN and the jurisdictional competency process entry-level competencies are similar, including patient safety, professional responsibility, evidence-based practice, and knowledge-based care. This suggests there is some regulatory consistency for nursing program quality between the United States and Canada. Furthermore, the College of Nurses of Ontario finds these competencies so essential that it is incorporating them into its nursing education approval process.

Faculty
It is presumed that faculty play a leading role in the overall performance of a nursing education program, yet actual evidence for this assumption is limited. Libner and Kubala (2017) recommended strategies for Illinois programs to improve their NCLEX pass rates based on their observations as regulation board members who conducted site visits to prelicensure programs. Their suggestions included focusing on the appropriate ratio of full-time to part-time faculty (no recommendation given) and whether faculty professional development needs were being met. Additionally, they suggested evaluating the program's administrative support, including leadership of the program and financing, to support ongoing program improvement. Other areas they found important to assess included evaluation tools, teaching/learning methodologies, admission policies, faculty-student ratios, and academic support.

Odom-Maryon et al. (2018) examined whether faculty qualifications, such as a higher percentage of doctorally prepared faculty teaching didactic courses or certification in specialty fields or in nursing education, resulted in higher NCLEX pass rates; the findings were not statistically significant. Only one study (an unpublished master's thesis) of 92 nursing programs in Kansas and Missouri found a positive trend between NCLEX pass rates and faculty with doctoral degrees (Longabach, 2012), and even this finding was not statistically significant. Odom-Maryon et al. (2018) did find a statistically significant difference in NCLEX pass rates when a program had a higher percentage of full-time faculty versus part-time or adjunct faculty.

* Professional responsibility and accountability, knowledge-based practice, ethical practice, service to the public, and self-regulation.

Systematic Program Evaluation
The need for a program evaluation system has been cited as a crucial element for assessing a program by regulators, accreditors, and educators (Hooper & Ayars, 2017; Oermann, 2017; Spector et al., 2018). Oermann (2017, p. 1) defines program evaluation as a systematic process for collecting data for making decisions about the nursing program and assessing its value. This process is also foundational for the national nursing accreditors as they evaluate programs for accreditation (Accreditation Commission for Education in Nursing, 2017; Commission on Collegiate Nursing Education, 2018; National League for Nursing, 2016). No studies have been conducted or data collected as to the most important elements of a program evaluation.

Institution Type
Although there is no specific explanation as to why, there is evidence that the type of institution (i.e., public, nonprofit, for-profit) affects program and student outcomes. As cited previously, DeAngelo et al. (2011) found that public schools would outperform private schools in terms of graduation rates if the schools had similar characteristics; for-profit institutions were not included in DeAngelo et al.'s sample. Pittman et al. (2019) studied 5 years' worth of data from 13,745 nursing programs in 41 states and the District of Columbia using multivariable regression models and controlling for covariates. Public schools outperformed (using NCLEX first-time pass rates as the measure) the nonprofit and for-profit schools, and the performance gap was much larger for the for-profit schools. Similarly, Odom-Maryon et al. (2018) found public schools outperformed private nonprofit and for-profit schools. Both Pittman et al. (2019) and Odom-Maryon et al. (2018) reported that private nonprofit institutions outperformed private for-profit institutions.

National Accreditation
Odom-Maryon et al. (2018) did not find a statistically significant difference between accredited and unaccredited schools; however, only 43 (6%) of the programs they studied were not accredited.* Spector et al. (2018) studied all RN nursing programs in 2016, comparing first-time NCLEX pass rates with accreditation status (i.e., yes/no), and found a statistically significant difference in NCLEX pass rates between accredited and unaccredited programs. Specifically, 741 programs (ADN and BSN) were not accredited and had pass rates of 72%, whereas 1,531 programs (ADN and BSN) were accredited and had pass rates of 87%. More research is needed on the relationship between national nursing accreditation and program outcomes.

In summary, during a national workshop, higher education experts reviewed the literature and found that there are no magical solutions to the long-standing issue of performance measurement in higher education institutions (NASEM, 2016), a conclusion similar to that of this literature review of higher education outcomes and metrics. In fact, none of these components alone may be indicative of a quality program; a combination of factors may be required to produce competent graduates.

Warning Signs Indicating a Nursing Program Is Falling Below Standards and At Risk of Losing BON Approval While the literature does not address warning signs, per se, it does address observations when nursing programs begin to experience difficulties. Failing to address unsafe students in the clinical area could be a warning sign for a nursing program. Docherty and Dieckmann (2015) surveyed 84 faculty in one Western state and found that faculty frequently do not fail nursing students, neither in didactic nor clinical areas, who demonstrate unsatisfactory progression in the program. This poses safety issues for patients and challenges for the practice setting and nurse regulators, especially when the school does not have a remediation process. Furthermore, Luhanga et al. (2014) conducted a qualitative descriptive design with nursing, education, and social work students to learn why faculty have difficulty failing students in clinical experiences. They suggest student failure is a difficult experience for the student and faculty and there are consequences for the program as well. Hooper and Ayars (2017), nurse regulators from Texas, documented some of the weaknesses they encounter when they evaluate nursing programs and provided ideas for early interventions. Areas of weakness include (a) lack of early recognition of at-risk students, (b) inconsistent use of grading policies, (c) insufficient numbers of faculty, (d) lack of faculty development, (e) lack of rigor across the cur- riculum, (f) inadequacies with the testing processes, (g) difficulty in locating clinical placements, and (h) an ineffective program evaluation plan. Alexander (2019) editorialized about what nurse regulators observe when a program abruptly shuts its doors. The warning signs that nursing regulatory bodies observe when programs experience difficulties include: ⦁ Rapid growth in admissions ⦁ High faculty turnover ⦁ Unclear workload policies

* It should be noted that while almost 89% of BSN nursing programs are accredited, only about 53% of associate degree programs and 11% of practical nursing programs are accredited (Silvestre, 2020).

Conclusion
There is an overall lack of evidence regarding the existence of validated metrics that could be used to evaluate a nursing education program, although the number of articles suggests there is a growing body of evidence defining what constitutes a quality education in nursing. We did not critique the quality of the research studies within the text of this report because our goal was to determine the state of the science so we could answer our research questions. However, we did rate the level and quality of the research and reports using the Johns Hopkins evidence levels and quality ratings, which can be found in Appendix B1 and B2.
While many studies examined different components of nursing education, no single quality indicator or warning sign captures a program's quality. Rather, this literature review points to several factors that, in combination, may serve as metrics for evaluating a program. These range from components a program may have little to no control over, such as the type of institutional ownership, to the clinical experiences the school is able to obtain and offer students. The evidence is insufficient to lead us to any conclusion, and more research is needed in this area. To this end, NCSBN embarked upon a three-part national study to further examine the three topics studied within this review. Those data, along with the work herein, may inform the development of a guidance document for program approval.

Part II

A National Mixed-Methods Study to Identify Quality Indicators and Warning Signs of Nursing Education Program Performance
NCSBN conducted a groundbreaking, national, mixed-methods study to identify evidence-based quality indicators and warning signs of nursing program performance. This comprehensive study comprises three national studies using different methodologies: (1) a national Delphi study, (2) a quantitative 5-year annual report study, and (3) a qualitative 5-year site visit study.

A National Delphi Study to Determine Quality Indicators and Warning Signs of Nursing Education Program Performance
The objective of this Delphi study was to provide data on consensus from experts in nursing education, regulation, and practice regarding nursing education quality indicators, warning signs when programs are beginning to fall below standards, and performance of nursing education programs. Specifically, we aimed to answer the following research questions:
⦁ What are characteristics/quality indicators of nursing education programs that graduate safe and competent nurses?
⦁ What are warning signs that indicate a nursing program is falling below the standard of graduating safe and competent nurses?
⦁ What outcome measures could BONs use to determine whether nursing programs are graduating safe and competent nurses?
Institutional review board approval was obtained from the Western Institutional Review Board.

Methods
The Delphi method assumes group opinion is more valid than individual opinion (Keeney et al., 2011). In this method, there are generally two to four rounds of surveys, with the goal that the group comes to consensus on the issues. Round one is a qualitative round in which participants are asked to provide their views on the issues. It is imperative that the questions be clear and understandable to the participants; to this end, it is recommended to pilot the questions with a small group of experts first (Benton et al., 2013; Keeney et al., 2011). In round two, the participants rate the factors identified in round one. If there are areas of disagreement, rounds three and four allow participants to change their minds based on the findings of the group. Benton et al. (2013) indicated that the advantages of this method are (a) gathering expert opinion while providing anonymity to the participants, (b) providing a controlled and structured process, and (c) allowing relatively simple statistics to interpret the results. The Delphi method has been used successfully for answering policy questions (Benton et al., 2013; Linstone & Turoff, 2002; McGeouch et al., 2014; Meskell et al., 2014; Rayens & Hahn, 2000) and in education (Barton et al., 2009; van Houwelingen et al., 2016). For policy questions, Delphi uses a heterogeneous group of experts so that diverse views of the issues can be sought (Benton et al., 2013; Linstone & Turoff, 2002).

Defining Consensus
The literature supports using a 67% threshold for agreement (Benton et al., 2013; Keeney et al., 2011). Benton et al. (2013) explain that, particularly for policy Delphi studies, a 67% agreement is in concert with the threshold vote in Robert's Rules Online (n.d., Art. VII. Debate.) when the vote addresses an important policy issue. Therefore, the agreement threshold for this study was set at 67%. Additionally, the interquartile range, which measures the dispersion of the data and therefore the collective judgments of the respondents, was set at 1.0 or below, which is also supported in the literature (Benton et al., 2013; Keeney et al., 2011).
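To make the decision rule concrete, a minimal R sketch of how both criteria could be checked for each item follows; the `ratings` data frame and its column-per-item layout are hypothetical illustrations, not the study's actual code.

```r
## Minimal sketch of the consensus rule, assuming a hypothetical data frame
## `ratings` with one row per respondent and one numeric column per item
## (4-point Likert scale, 1-4).
consensus_check <- function(item) {
  item <- item[!is.na(item)]
  pct_agree <- mean(item >= 3) * 100  # % rating the item 3 (important) or 4 (very important)
  c(pct_agree = pct_agree,
    iqr = IQR(item),                  # dispersion of the collective judgment
    meets = pct_agree >= 67 && IQR(item) <= 1.0)  # study thresholds
}

t(sapply(ratings, consensus_check))   # one row of results per item
```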

Study Sample Selection
The research team used NCLEX program code data to obtain a list of email addresses for nurse educators of RN-MSN, RN-BSN, RN-ADN, RN-diploma, and licensed practical nurse/licensed vocational nurse (LPN/LVN) programs in the United States. A list of email addresses for clinical nurse educators was purchased from the Association for Nursing Professional Development. The list of experts in nursing regulation was obtained by using the education consultant distribution list from the NCSBN membership email address list.
Inclusion criteria for educators were as follows:
⦁ Taught master's entry, BSN, ADN, diploma, or LPN/LVN for at least 2 years
⦁ If an LPN/LVN educator, must have at least a BSN
⦁ If an RN educator, must have at least a master's degree.
Clinical educators were required to have worked with new graduate LPN/LVNs or RNs for at least 2 years. Education consultants were required to have been hired by the BON to regulate nursing programs. In addition, all participants were required to be willing to complete three rounds of surveys about nursing education programs that graduate students who are competent and safe to practice.
By including regulators, educators, and those who supervise new graduates in practice, we were able to include diverse perspectives in this Delphi study. Additionally, with practice readiness being addressed in the literature related to performance outcomes, we wanted the practice perspective; thus, clinical nurse educators who work with new graduates in hospitals were included.
The demographics of the Delphi study participants are presented in Table 2. The demographics were balanced across the sample, except for the highest level of education attained: whereas 51% of educators and 50% of the regulators had doctorates, only 19% of the clinical nurse educators who work in hospitals did.

TABLE 2
Demographics of Survey Participants in the Delphi Study
Data are presented as n (%).a

Educators (n = 174)
Sex: Female 162 (93); Male 10 (6); Prefer not to say 2 (1)
Age Range: 18–24 0 (0); 25–34 4 (2); 35–44 10 (6); 45–54 33 (19); 55–65 89 (51); > 65 38 (22)
Highest Level of Education Attained: Diploma 0 (0); ADN 0 (0); BSN 7 (4); MS/MSN 68 (39); DNP 20 (11); PhD 79 (45)
Years of Experience in Nursing Educationb: 2 0 (0); 3–5 8 (5); 6–10 19 (11); > 10 147 (84)
Types of Students Taught: LPN/LVN only 24 (14); Diploma only 3 (2); ADN only 27 (16); BSN only 61 (35); Master's entry only 3 (2); LPN/LVN and ADN 26 (15); LPN/LVN and BSN 1 (1); ADN and BSN 6 (3); LPN/LVN, diploma, BSN 1 (1); LPN/LVN, ADN, BSN 3 (2); LPN/LVN, BSN, master's entry 1 (1); LPN/LVN, ADN, BSN, master's entry 1 (1); ADN, BSN, master's entry 4 (2); ADN, diploma, BSN, master's entry 1 (1); BSN and master's entry 12 (7)

Education Consultants (n = 50)
Sex: Female 48 (96); Male 2 (4); Prefer not to say 0 (0)
Age Range: 18–24 0 (0); 25–34 0 (0); 35–44 4 (8); 45–54 10 (20); 55–65 24 (48); > 65 12 (24)
Highest Level of Education Attained: No response 2 (4); Diploma 0 (0); ADN 0 (0); BSN 2 (4); MS/MSN 21 (42); DNP 8 (16); PhD 17 (34)
Years of Experience in Regulation of Nursing Education Programs: 0–2 9 (18); 3–5 14 (28); 6–10 14 (28); > 10 13 (26)

Demographics of Survey Participants in the Delphi Study (continued)

Education Consultants (n = 50)
Types of Programs Regulated: LPN/LVN only 1 (2); LPN/LVN and BSN 1 (2); ADN and BSN entry 3 (6); BSN and BSN entry 1 (2); ADN, BSN, BSN entry 1 (2); ADN, diploma, BSN entry 1 (2); ADN, diploma, BSN, BSN entry 1 (2); LPN/LVN, ADN, BSN entry 4 (8); LPN/LVN, ADN, BSN, BSN entry 6 (12); LPN/LVN, ADN, diploma 1 (2); LPN/LVN, ADN, diploma, BSN 8 (16); LPN/LVN, ADN, diploma, BSN entry 4 (8); LPN/LVN, ADN, diploma, BSN, BSN entry 18 (36)

Clinical Nurse Educators (n = 71)
Sex: Female 68 (96); Male 3 (4); Prefer not to say 0 (0)
Age Range: 18–24 0 (0); 25–34 6 (8); 35–44 12 (17); 45–54 19 (27); 55–65 32 (45); > 65 2 (3)
Highest Level of Education Attained: Diploma 0 (0); ADN 0 (0); BSN 5 (7); MS/MSN 53 (75); DNP 9 (13); PhD 4 (6)
Years of Experience Working With New Graduate Nurses: 0–2 3 (4); 3–5 16 (23); 6–10 12 (17); > 10 40 (56)

Note. ADN = associate degree in nursing; BSN = bachelor of science in nursing; MS = master of science; MSN = master of science in nursing; DNP = doctor of nursing practice; LPN = licensed practical nurse; LVN = licensed vocational nurse. a Percentages may not total 100 due to rounding. b Educators with less than 2 years’ experience were excluded and skipped to the end of the survey.

Procedure
Ten experts in regulation, education, and clinical education (in hospitals) piloted the surveys for clarity, and revisions were made based on their feedback. For example, we originally used the phrase "regulatory quality indicators," and although the educators and education consultants understood the term, the clinical educators did not. We therefore changed it to "characteristics of nursing programs that graduate safe and competent nurses," which was universally understood.
An introductory email describing the Delphi study was sent to the entire list of clinical nurse educators, educators in nursing programs, and education consultants, inviting them to participate. Those who met the inclusion criteria and were interested in participating were directed via hyperlink to the Qualtrics (Utah) online survey platform, where they were asked for demographics and related experience. The educators in nursing programs (n = 293), clinical nurse educators (n = 125), and education consultants (n = 62) who agreed to participate and completed the demographic survey were sent the first round of Delphi surveys (round one).
In round one, the participants were asked a series of open-ended questions, including listing up to 15 variables they would consider for each of the following:
⦁ Characteristics/quality indicators of nursing education programs that graduate safe and competent nurses
⦁ Warning signs that indicate a nursing program is falling below the standard of graduating safe and competent nurses
⦁ Outcome measures BONs could use to determine whether nursing programs are graduating safe and competent nurses.
All participants had the opportunity to provide qualitative feedback during this round and were allowed 15 responses for each question (Keeney et al., 2011). Participants were allowed 2 weeks to respond to the survey, and reminder emails were sent at specific intervals to those who had not completed the surveys.
Major recurring themes from the first round were initially identified using NVivo 12 Plus software (QSR International). Text search and word frequency queries were run within each of the quality indicators, warning signs, and outcomes categories, and then across all three categories, to determine possible themes. Text searches were directed by the top word frequencies and the predominant themes from the published literature. The possible themes were further content-analyzed by the research team. R software (R Foundation for Statistical Computing) was used to help validate the themes obtained manually by the research team and with NVivo. Latent Dirichlet allocation with R (Grün & Hornik, 2011) was used to categorize comments into naturally occurring themes by examining word frequency. Latent Dirichlet allocation is a statistical unsupervised learning technique that assigns comments to topics such that the comments within a topic share more words in common than comments in other topics.
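A minimal sketch of this topic-modeling step is shown below, using the topicmodels package that implements the Grün and Hornik (2011) approach cited above; the `comments` vector, the preprocessing choices, and the number of topics are illustrative assumptions, not the study's actual settings.

```r
## Minimal sketch of round-one topic modeling with latent Dirichlet
## allocation; `comments` is a hypothetical character vector of the
## free-text survey responses.
library(tm)
library(topicmodels)

corpus <- VCorpus(VectorSource(comments))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("english"))

dtm <- DocumentTermMatrix(corpus)                # word-frequency matrix
fit <- LDA(dtm, k = 5, control = list(seed = 1234))  # k would be tuned, not fixed at 5

terms(fit, 10)   # top 10 words per topic, to help name candidate themes
topics(fit)      # most likely topic assignment for each comment
```

In practice, the model's word clusters would be compared against the manual and NVivo-derived themes rather than accepted as-is, which matches the validation role the text assigns to this step.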

A set of major themes emerged from round one (Table 3) for use in round two. In round two, each participant from round one was administered another survey that included the major themes identified. The participants were asked to rate the importance of each theme or variable on a 4-point Likert scale (1 = unimportant, 2 = not too important, 3 = important, and 4 = very important). The 4-point scale is particularly suited to a policy Delphi because it forces the respondents to take a position (Benton et al., 2013). Participants were again allowed 2 weeks to respond to the survey, and reminder emails were sent at specific intervals to those who had not completed the surveys.

TABLE 3 Major Themes Emerging From Delphi Round One

Quality Indicators
1. Evidence-based curriculum that emphasizes quality and safety standards for patient care
2. Evidence-based curriculum that emphasizes critical thinking and clinical reasoning skills
3. Faculty are able to role model professional behaviors
4. Clinical experiences with actual patients that prepare students for the reality of clinical practice
5. Systematic process is in place to address and remediate student practice errors
6. Faculty teaching clinical courses demonstrate current clinical competence
7. Consistent administrative leadership in the nursing program
8. Collaboration between education and practice to enhance readiness for practice
9. Ongoing systematic evaluation of the nursing program
10. Institutional administrative support of the nursing program
11. Consistently has a pattern of NCLEX pass rates that meet set standards
12. Administrative support for ongoing faculty development
13. Significant opportunities for a variety of clinical experiences with diverse populations
14. Consistent full-time faculty, as opposed to reliance on adjunct faculty
15. Quality simulation is used to augment clinical experiences
16. Comprehensive student support services
17. National nursing accreditation
18. Admission criteria emphasize a background in the sciences

Warning Signs
1. Lack of consistent and prepared clinical faculty
2. Limited clinical experiences that do not prepare the students for practice
3. Poor leadership in the nursing program
4. Trend of NCLEX pass rates is inconsistent or decreasing
5. Complaints to the nursing program or board of nursing from employers, students, or faculty
6. Pattern of faculty attrition
7. Pattern of nursing program administrator attrition
8. Unwillingness of institutions to host clinical experiences for the nursing program's students
9. Pattern of student attrition
10. Curriculum is based on "teaching to the NCLEX"
11. Over-reliance on simulation to replace clinical experiences with actual patients

Program Outcome Measures
1. NCLEX pass rates of the nursing program
2. Relationship of the nursing program with its clinical partners
3. Employer satisfaction with the graduates' readiness for practice
4. Graduate preparedness to practice for an interprofessional environment
5. Graduates' satisfaction with the nursing program
6. Graduation rates of students in the nursing program
7. Consistency of graduate employment rates with regional data on nurse employment rates
8. History of board of nursing discipline with the graduates of the nursing program

A third round of Delphi surveys was planned but round two yielded such high agreement (see Results) across all groups for all variables that it was deemed unnecessary.

Statistical Analysis
Statistical analysis was conducted using SPSS version 22.0. Simple descriptive statistics were estimated for each item, and agreement was estimated as the percentage of respondents who rated an item as either important or very important (a Likert rating of 3 or 4). Group differences were examined using one-way analysis of variance, with post hoc comparisons used to determine which group or groups differed in rating an item.
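A minimal R sketch of this group comparison follows; the long-format data frame `df` and its columns are hypothetical, the study itself used SPSS, and Tukey's HSD stands in here for whichever post hoc procedure was actually applied.

```r
## Minimal sketch of the group-difference test, assuming a hypothetical
## long-format data frame `df` with a numeric `rating` and a three-level
## factor `group` (educator, clinical educator, education consultant).
fit <- aov(rating ~ group, data = df)  # one-way analysis of variance
summary(fit)                           # omnibus test of any group difference

TukeyHSD(fit)                          # post hoc pairwise comparisons
```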

Results
Of the 293 educators, 125 clinical educators, and 62 education consultants who were sent the surveys, 174 educators (59% response rate), 71 clinical nurse educators who work with new graduates (57% response rate), and 50 education consultants, who are hired by BONs and approve nursing programs (81% response rate), completed both rounds of the study.
Results from the second round of the Delphi analysis showed excellent agreement and relatively little dispersion in the importance ratings (Table 4). Percent agreement ranged from 78% to 100%, and none of the items had a median rating below 3.00 (important). All of the interquartile ranges (IQRs) were either zero or one. These results met the criteria established for adequate agreement: percent agreement above 67% and all IQRs at one or below.

TABLE 4 Agreement With Regulatory Quality Indicators Among Participants in the Second Round of the Delphi Analysisa

Values are reported as mean (M), median (Mdn), standard deviation (SD), interquartile range (IQR), and percent agreement.

Quality Indicators
1. Evidence-based curriculum that emphasizes quality and safety standards for patient care (M = 3.9, Mdn = 4.0, SD = 0.38, IQR = 0, 99.7% agreement)
2. Evidence-based curriculum that emphasizes critical thinking and clinical reasoning skills (M = 3.9, Mdn = 4.0, SD = 0.37, IQR = 0, 99.3% agreement)
3. Faculty are able to role model professional behaviors (M = 3.8, Mdn = 4.0, SD = 0.43, IQR = 0, 99.3% agreement)
4. Clinical experiences with actual patients that prepare students for the reality of clinical practice (M = 3.6, Mdn = 4.0, SD = 0.55, IQR = 1, 98.7% agreement)
5. Systematic process is in place to address and remediate student practice errors (M = 3.6, Mdn = 4.0, SD = 0.53, IQR = 1, 98.7% agreement)
6. Faculty teaching clinical courses demonstrate current clinical competence (M = 3.7, Mdn = 4.0, SD = 0.50, IQR = 1, 98.7% agreement)
7. Consistent administrative leadership in the nursing program (M = 3.7, Mdn = 4.0, SD = 0.54, IQR = 1, 98.3% agreement)
8. Collaboration between education and practice to enhance readiness for practice (M = 3.7, Mdn = 4.0, SD = 0.53, IQR = 1, 97.7% agreement)
9. Ongoing systematic evaluation of the nursing program (M = 3.7, Mdn = 4.0, SD = 0.54, IQR = 1, 97.7% agreement)
10. Institutional administrative support of the nursing program (M = 3.6, Mdn = 4.0, SD = 0.57, IQR = 1, 97.3% agreement)
11. Consistently has a pattern of NCLEX pass rates that meet set standards (M = 3.5, Mdn = 4.0, SD = 0.59, IQR = 1, 96.3% agreement)
12. Administrative support for ongoing faculty development (M = 3.6, Mdn = 4.0, SD = 0.58, IQR = 1, 96.3% agreement)
13. Significant opportunities for a variety of clinical experiences with diverse populations (M = 3.4, Mdn = 4.0, SD = 0.59, IQR = 1, 95.7% agreement)
14. Consistent full-time faculty, as opposed to reliance on adjunct faculty (M = 3.6, Mdn = 4.0, SD = 0.60, IQR = 1, 95.0% agreement)
15. Quality simulation is used to augment clinical experiences (M = 3.3, Mdn = 3.0, SD = 0.63, IQR = 1, 93.3% agreement)
16. Comprehensive student support services (M = 3.4, Mdn = 3.0, SD = 0.63, IQR = 1, 93.0% agreement)
17. National nursing accreditation (M = 3.3, Mdn = 4.0, SD = 0.79, IQR = 1, 84.0% agreement)
18. Admission criteria that emphasize a background in the sciences (M = 3.1, Mdn = 3.0, SD = 0.77, IQR = 1, 80.3% agreement)

Warning Signs
1. Lack of consistent and prepared clinical faculty (M = 3.77, Mdn = 4.00, SD = 0.422, IQR = 0, 100.00% agreement)
2. Limited clinical experiences that do not prepare the students for practice (M = 3.73, Mdn = 4.00, SD = 0.466, IQR = 1, 99.01% agreement)
3. Poor leadership in the nursing program (M = 3.75, Mdn = 4.00, SD = 0.481, IQR = 0, 98.67% agreement)
4. Trend of NCLEX pass rates is inconsistent or decreasing (M = 3.48, Mdn = 4.00, SD = 0.589, IQR = 1, 96.69% agreement)
5. Complaints to the nursing program or board of nursing from employers, students, or faculty (M = 3.53, Mdn = 4.00, SD = 0.606, IQR = 1, 94.70% agreement)
6. Pattern of faculty attrition (M = 3.36, Mdn = 3.00, SD = 0.614, IQR = 1, 94.02% agreement)
7. Pattern of nursing program administrator attrition (M = 3.38, Mdn = 3.00, SD = 0.640, IQR = 1, 92.72% agreement)
8. Unwillingness of health care institutions to host clinical experiences for the nursing program's students (M = 3.39, Mdn = 3.00, SD = 0.646, IQR = 1, 92.05% agreement)
9. Pattern of student attrition (M = 3.11, Mdn = 3.00, SD = 0.645, IQR = 1, 85.05% agreement)
10. Curriculum is based on "teaching to the NCLEX" (M = 3.19, Mdn = 3.00, SD = 0.740, IQR = 1, 81.73% agreement)
11. Over-reliance on simulation to replace clinical experiences with actual patients (M = 3.08, Mdn = 3.00, SD = 0.766, IQR = 1, 80.13% agreement)

Performance Outcome Measures
1. NCLEX pass rates of the nursing program (M = 3.46, Mdn = 3.00, SD = 0.558, IQR = 1, 97.67% agreement)
2. Relationship the nursing program has with its clinical partners (M = 3.50, Mdn = 4.00, SD = 0.559, IQR = 1, 97.00% agreement)
3. Employer satisfaction with the graduates' readiness for practice (M = 3.43, Mdn = 3.00, SD = 0.619, IQR = 1, 94.68% agreement)
4. Graduate preparedness to practice for an interprofessional environment (M = 3.46, Mdn = 4.00, SD = 0.622, IQR = 1, 93.69% agreement)
5. Graduates' satisfaction with the nursing program (M = 3.04, Mdn = 3.00, SD = 0.611, IQR = 0, 85.38% agreement)
6. Graduation rates of students in the nursing program (M = 3.04, Mdn = 3.00, SD = 0.671, IQR = 0, 80.40% agreement)
7. Consistency of graduate employment rates with regional data on nurse employment rates (M = 3.04, Mdn = 3.00, SD = 0.681, IQR = 1, 79.33% agreement)
8. History of board of nursing discipline with the graduates of the nursing program (M = 3.08, Mdn = 3.00, SD = 0.816, IQR = 1, 78.00% agreement)

Note. IQR = interquartile range.
a A 4-point Likert scale was used: 1 = unimportant, 2 = not too important, 3 = important, and 4 = very important.

There were statistical differences among the clinical nurse educators, academic nurse educators, and education consultants on mean importance for 15 of the 37 characteristics. Most of these differences arose because the clinical nurse educators rated items as more or less important than the other two groups. For only one item was the mean score of the differing group below 2.90: for the item "Admission criteria that emphasize a background in the sciences," the clinical nurse educators rated the importance at 2.83, which was still in the "important" range (2.5 to 3.5) of the Likert scale.

Discussion
This Delphi study identified 18 quality indicators (characteristics of nursing programs that graduate safe and competent students), 11 warning signs that nursing programs are beginning to fall below standards, and eight performance outcomes nursing regulatory bodies could measure. The quality indicators fall into the following categories: (a) school leadership and faculty support, (b) consistent and competent faculty, (c) quality, hands-on clinical experiences with meaningful collaboration with clinical partners, and (d) an evidence-based curriculum emphasizing quality and safety and critical thinking/clinical reasoning. While the warning signs are largely the inverse of the quality indicators, some additional intriguing ones emerged. There were few surprises among the performance outcomes (i.e., NCLEX pass rates, graduation rates, employment rates, etc.), although a few were new.

Quality Indicators
While many of the quality indicators identified are supported by the literature, this national Delphi study lends further credence to the previous findings. Additionally, under each category are some possible indicators that could be used by regulators and educators when evaluating nursing programs. The general category of leadership and faculty support was identified (Table 4, quality indicators 3, 6, 7, 10, and 12). Hooper and Ayars (2017), in their observations when approving nursing programs in Texas, found faculty development to be an important factor in quality programs. Nurse regulators in Illinois (Libner & Kubala, 2017) found administrative support to be an essential quality indicator. Alexander (2019) reports on U.S. nursing regulatory bodies' observations related to program approval, with consistency in program directors being paramount. There is also support in the literature for admission criteria emphasizing the sciences (Benner et al., 2010) and for more rigorous admission policies (Alexander, 2019); an ongoing systematic evaluation plan (Hooper & Ayars, 2017; Oermann, 2017); requiring national nursing accreditation (Hooper & Thomas, 2014; Jones et al., 2012; Spector et al., 2018); a consistent pattern of licensure pass rates (College of Nurses of Ontario, 2018; Hooper & Ayars, 2017; Libner & Kubala, 2017); and a systematic process to address and remediate student practice errors (College of Nurses of Ontario, 2018). However, it should be noted that much of this evidence is observational.
There is a higher order of evidence for faculty ratios and qualifications* (Table 4, quality indicators 3, 6, and 14). Odom-Maryon et al. (2018) found a statistically significant relationship between full-time faculty ratio and NCLEX pass rates. Libner and Kubala (2017), based on their observations regulating nursing programs, found that a focus on appropriate full-time faculty ratios was important when remediating nursing programs that were falling below state standards.
Related to the curriculum, the Delphi participants strongly agreed on an evidence-based curriculum that emphasizes critical thinking and clinical reasoning skills, as well as one that emphasizes quality and safety standards for patient care (Table 4, quality indicators 1 and 2). Odom-Maryon et al. (2018) did not find any curricular factors to be related to NCLEX pass rates. The evidence supporting clinical reasoning as a quality indicator is strong. Benner et al. (2010), in their mixed-methods study of nine prelicensure RN nursing programs (all levels) with excellent reputations for teaching and learning, concluded that to shift toward integrating clinical experience into the classroom, faculty should place more emphasis on clinical reasoning. Others have provided evidence to support clinical reasoning as a quality indicator (Candela & Bowles, 2008; Cantlay et al., 2017; Gonzalez, 2018; Kavanagh & Szweda, 2017; Killam et al., 2011; Pitt et al., 2012; Rusch et al., 2019). Quality and safety education as a quality indicator is supported by the national QSEN initiative (Cronenwett et al., 2007; QSEN, 2019), which is being integrated into undergraduate nursing education.
The Delphi participants identified four indicators for quality clinical experiences in the nursing program (Table 4, quality indicators 4, 8, 13, and 15). Benner et al. (2010), in their national study of nursing education, highlighted quality, hands-on clinical experiences as a strength of the nursing programs in their study. Candela and Bowles (2008) and El Haddad et al. (2017) both called for more hours in clinical experiences, although no studies have linked increased numbers of clinical hours to improved educational outcomes. The evidence does, however, support clinical experiences with actual patients that mirror the reality of practice and opportunities for a variety of clinical experiences (Beauvais et al., 2017; Benner et al., 2010; Berkow et al., 2008; Kavanagh & Szweda, 2017; NCSBN, 2006; Rusch et al., 2019), as well as the quality of simulation (Hayden et al., 2014). Similarly, there is much support in the literature for a more meaningful collaboration between practice and education (Beauvais et al., 2017; Boston-Fleischhauer, 2019; Granger et al., 2012; El Haddad et al., 2017; Kavanagh & Szweda, 2017; Rusch et al., 2019). Specifically, Boston-Fleischhauer (2019) calls for practice to become more innovative in clinical experience planning, with more opportunities in primary and ambulatory care settings, thus preparing new graduates for the cross-continuum practice of the future.
* The highest level of evidence is a randomized controlled trial, followed by a quasi-experimental study, a nonexperimental study, expert opinion, and lastly experiential and nonresearch evidence (Dang & Dearholt, 2017).

Warning Signs
Some of the warning signs the participants identified were the inverse of the quality indicators (Table 4, warning signs 1, 2, 6, 7, 9, and 10). However, participants also identified more specific ones, such as an unwillingness of healthcare institutions to host clinical experiences: if an institution hosts other programs but refuses one program, the problem likely lies with that program. The literature only alludes to this problem (Hooper & Ayars, 2017), where regulators observe that programs beginning to fall below standards are unable to acquire settings for clinical experiences. Complaints to the nursing regulatory bodies were also identified as a warning sign, which is supported by Alexander (2019) in an editorial reporting on observations from U.S. nursing regulatory bodies when nursing programs are beginning to fall below standards. The Delphi participants identified a curriculum heavily based on the NCLEX as a warning sign, which has also been addressed as a problem in the nursing literature (Candela & Bowles, 2008; Kavanagh & Szweda, 2017), as well as in non-nursing literature where licensure examinations are used (Barrett et al., 2016). Additionally, an interesting warning sign was identified that has not been reported in the literature: over-reliance on simulation to replace clinical experiences with actual patients. This item could be affected by a number of intervening variables, such as the program lacking sufficient hands-on clinical experiences or increasing its simulation percentage without adhering to accepted simulation guidelines (Alexander et al., 2015).

Performance metrics
The participants were asked to identify outcomes that nursing regulatory bodies could measure. There were a few new ideas, although some of the items addressed metrics that are already being used and have little evidence relating them to the quality of nursing programs, such as NCLEX pass rates (Bernier et al., 2005; Foreman, 2017; Giddens, 2009; O'Lynn, 2017; Taylor et al., 2014), employment rates (Ferrante, 2017; NASEM, 2016, pp. 57–80), and graduation rates (Cook & Hartle, 2011; NASEM, 2016). History of the U.S. nursing regulatory body's discipline with the graduates was another metric cited by the participants; however, there would be many intervening variables, such as the practice environment. Two other metrics identified were the graduates' satisfaction with the nursing program and employer satisfaction with the graduates. Although some healthcare accreditors evaluate these metrics, nursing accreditors report that these data are often incomplete and difficult for programs to obtain. Two outcomes were reported that have not been reported in the literature: (1) the relationship the nursing program has with its clinical partners, and (2) the graduates' preparedness to practice in an interprofessional environment. These outcomes relate to the quality of clinical experiences, which was identified as an important quality indicator.

Achieving Consensus
We reached consensus in two rounds with this Delphi study. The piloting of the questions was very important for ensuring the questions were understood uniformly across our sample; we piloted the survey with all three groups in our sample and made many revisions based on the feedback.
Some of the educators and education consultants may have resorted to metrics they commonly use related to either accreditation or regulatory standards; we particularly saw that with the outcomes that were identified. This likely was not a major factor, however, because the clinical nurse educators, who work with new graduates in practice, are not tied to the national accreditation standards or to state requirements. They were therefore more apt to come up with innovative factors that have not been used when assessing programs. Some ideas not previously cited did come from the practice educators, and those were then rated as important or very important by the educators and regulators.

Limitations
While these quality indicators, warning signs, and outcomes were identified by experts, it should be noted that expert opinion is the lowest level of evidence. Additionally, while our response rate across the two rounds was good (61% overall), a 70% response rate is recommended by some researchers (Keeney et al., 2011). Currently, however, no specific guidelines exist for acceptable response rates in Delphi studies (Keeney et al., 2011), and reported response rates range from 8% to 100%. The larger the number of participants, the lower the expected response rate (Keeney et al., 2011). Our response rate, therefore, was acceptable given our large sample, and it probably benefited from our sending reminders every 2 days.

Conclusions
NCSBN conducted this Delphi study to learn about expert consensus on quality indicators, warning signs, and performance outcomes. The diverse group of educators, regulators, and clinical educators who work with new graduates agreed on 18 quality indicators, 11 warning signs, and eight outcome measures. While this study lends more support to metrics that have already been studied, some newer ones were also identified, such as collecting evidence on the relationship the nursing program has with the facilities it uses for clinical experiences or on the graduates' preparedness to work in an interprofessional environment. A highlight of this study is that we used three separate methods of qualitative analysis (content analysis done by hand, with the findings verified using NVivo and latent Dirichlet allocation in R), thereby providing a comprehensive and reliable list of quality indicators, warning signs, and performance outcomes. Additionally, by including regulators (education consultants), educators, and those who work with new graduates in practice, our experts provided diverse perspectives and therefore enhanced the breadth of the findings.

Acknowledgment
The researchers would like to acknowledge Lou Fogg, PhD, Associate Professor in the Department of Community, Systems and Mental Health Nursing at Rush College of Nursing in Chicago, for his direction and mentorship in conducting this study and his expertise in statistically analyzing the results.

A Quantitative Analysis of 5 years of BONs Annual Report Documents
This second national study to identify evidence-based quality indicators, warning signs, and performance outcomes is a quantitative analysis examining 5 years' worth of data from BON annual reports. Specifically, we aimed to answer the following research questions:
⦁ What nursing education program performance indicators are associated with full approval of a prelicensure nursing education program?
⦁ What additional factors exist that are associated with prelicensure nursing programs that have lost full program approval?
In a post hoc analysis, in order to uncover all the possible evidence, we asked the same questions using 80% or higher NCLEX pass rates as the outcome.

Methods
This is a quantitative retrospective cohort study examining 5 years' worth of data (2012–2017) from U.S. BONs' annual reports to identify quality indicators and thereby systematically evaluate nursing education program performance. All 55 U.S. BONs that approve nursing programs, as well as the Board of Regents in New York and the Mississippi Institutions of Higher Learning, were invited to participate. In total, 43 U.S. BONs, including one U.S. territory and geographically diverse states of all sizes and demographics (Figure 1), provided 11,378 annual report documents for inclusion in the study. Institutional review board approval was obtained from the American Institutes for Research (AIR) because AIR assisted with collecting the data.

FIGURE 1 Geographic Representation of U.S. Boards of Nursing That Participated in the Study

[U.S. map shading the participating states, the District of Columbia, and territories; legend: data submitted for analysis.]

Data Collection
NCSBN staff sent a detailed request to the U.S. BONs that approve nursing programs in February 2018, and a webinar for the BONs was held on February 20, 2018, to answer questions and provide more information. Annual report and site visit data were requested from all BONs for the academic years 2012-2013, 2013-2014, 2014-2015, 2015-2016, and 2016-2017. Data submissions began after the webinar, with an April 15, 2018, deadline for data collection. Many BONs needed permission from other entities or their attorneys before submitting their data, and some BONs requested that NCSBN send a request to their state agencies. As a result, many BONs could not meet the April 15 deadline, so it was extended to September 24, 2018. Because of the large quantity of submitted data, an outside vendor, AIR, collected it in a secure database. Each state and territory had its own password-protected link for sending the data. In a few cases, the BONs sent in boxes of their documents for NCSBN or AIR to scan and enter into the database. When all the data were collected, AIR transferred the information to a secure database at NCSBN for data analysis and storage.

Variables
More than 40 variables were reviewed to determine initial eligibility for analysis. Of these, 25 factors had sufficient sample size for inclusion (a priori threshold of ≥ 1,000 records). Upon further review, 17 variables were selected for the study based on a range of criteria, including valid response values and sufficiently common or similar tracking procedures across boards. Still, these variables presented with varying levels of completeness (some factors had ≤ 5% missing data; others had as much as 50%). Valid N totals therefore provide important context when interpreting the results presented in the accompanying descriptive table, and each median estimate and proportion is reported only out of the total number of program entries for which the information could be verified.
Extensive recoding was also applied to a majority of the 17 variables, given the disparate means of data collection and coding across the participating BONs. Several quality assurance measures were implemented to ensure accuracy, including redundant coding procedures and substantial peer review. All variable transformations and recode parameters are specified below.

Student Age
Student age was initially tracked as a multi-level ordinal variable. The proportion of enrolled students was reported for each of the following age categories, in years: (a) 17–20, (b) 21–25, (c) 26–30, (d) 31–40, (e) 41–50, (f) 51–60, and (g) ≥ 61. Initial steps were taken to validate the original tracking by ensuring that the reported proportions for each record summed to within a predetermined acceptable range (0.99–1.01, to account for rounding error). Any record that did not meet this eligibility criterion was reviewed and, if it could not be reconciled, was omitted. In addition, to facilitate further modeling, a single response for each program record was selected: the age category into which a majority, or plurality if no majority existed, of students fell. Due to low observed cell counts, the bands were further collapsed until only two age categories remained with sufficient numbers for modeling (17–25 and ≥ 26).
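The validation and collapsing steps described above might look like the following minimal R sketch; the `reports` data frame and its column names are hypothetical stand-ins for the annual report data.

```r
## Minimal sketch of the age-variable checks, with hypothetical column
## names for the seven reported age-band proportions per program record.
age_cols <- c("p17_20", "p21_25", "p26_30", "p31_40", "p41_50", "p51_60", "p61_up")

sums <- rowSums(reports[, age_cols])
reports <- reports[sums >= 0.99 & sums <= 1.01, ]   # drop unreconcilable records

## Majority/plurality band per record (ties resolved arbitrarily here),
## then collapse to the two bands retained for modeling
band <- age_cols[max.col(as.matrix(reports[, age_cols]), ties.method = "first")]
reports$age_group <- ifelse(band %in% c("p17_20", "p21_25"), "17-25", "26+")
```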

Student Race
Student race was initially tracked as a multi-level categorical variable. The proportion of enrolled students was reported for each of the following categories: (a) Asian, (b) Black, (c) Caucasian, (d) Hispanic, (e) Native American, and (f) Native Hawaiian-Pacific Islander. Initial steps were taken to validate the original tracking by ensuring that the reported proportions for each record summed to within a predetermined acceptable range (0.99–1.01, to account for rounding error). Any record that did not meet this eligibility criterion was reviewed and, if it could not be reconciled, was omitted. In addition, to facilitate further modeling, a single response for each program record was selected to indicate whether a plurality (≥ 40%) of students were Caucasian.

Program Director Credentials
Program director credentials were initially tracked as a multi-level categorical variable, which included BA, BS, MSN, DNP, PhD, and various other fields (master of education [MEd], doctor of education [EdD], "non-nursing master's", etc.). To facilitate the analysis, broader "baccalaureate" and "other graduate" fields were created to collapse otherwise related fields with low observed cell counts. The bulk of the other graduate field comprised MEd and EdD recipients.

Faculty Qualifications
Faculty qualifications were initially tracked as a multi-level categorical variable. For each of the following categories, the proportion of faculty members holding the education credential was reported: (a) associate degree, (b) baccalaureate nursing, (c) baccalaureate non-nursing, (d) master's nursing, (e) master's non-nursing, (f) DNP, (g) PhD nursing, and (h) PhD non-nursing. Initial steps were taken to validate the original tracking by ensuring that the proportions in each row summed to within a predetermined acceptable range (0.99–1.01, to account for rounding error). Any record that did not meet this eligibility criterion was reviewed and, if it could not be reconciled, was omitted. In addition, to facilitate further modeling, a single response for each program record was selected: the binned category into which a majority, or plurality if no majority existed, of faculty members fell. Due to low observed cell counts, the bands were further collapsed until only two categories remained: "baccalaureate or lower" and "master's or higher".

Proportion of Full-Time Faculty
The proportion of full-time faculty was originally scored on both a 0–1 and a 0–100 scale. When possible, valid cell values were rescaled (e.g., 34 to 0.34) to a 0–1 scale. Steps were then taken to validate the rescaled values by ensuring that no single cell value fell outside a predetermined acceptable range of 0 to 1.01 (accounting for rounding error). Non-possible values (e.g., > 101 on the original scale) were reviewed and, if they could not be reconciled, were recoded as missing. Given how skewed the raw scores remained, the variable was further collapsed into quartiles.
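A minimal R sketch of this rescaling and binning follows; the `pct_ft` column, which is assumed to mix 0–1 and 0–100 scales, is hypothetical.

```r
## Minimal sketch of the rescale-then-bin steps; `reports$pct_ft` is a
## hypothetical numeric column mixing 0-1 and 0-100 scales.
x <- reports$pct_ft
x <- ifelse(!is.na(x) & x > 1.01, x / 100, x)   # e.g., 34 -> 0.34
x[x < 0 | x > 1.01] <- NA                       # non-possible values become missing

## Quartile bins; unique() guards against duplicate cut points in skewed data
breaks <- unique(quantile(x, probs = seq(0, 1, 0.25), na.rm = TRUE))
reports$ft_quartile <- cut(x, breaks = breaks, include.lowest = TRUE)
```

The same `cut`-on-quantiles pattern would apply to the other skewed variables (program age, enrollment capacity, and estimated graduation rate) that the text says were collapsed into quartiles.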

Student-to-Clinical Faculty Ratio
Raw scores were highly skewed and grouped around a single common value (8); thus, the variable was collapsed into a binary predictor (≤ 8 vs. ≥ 9).

Program Age
Raw scores were highly skewed; thus, the variable was further collapsed into quartiles.

Total Enrollment/Maximum Capacity
Separate variables were present for total student enrollment and maximum program capacity, but most programs did not have data for both. The 28 programs with data for both variables were analyzed, and it was determined that the two measures were sufficiently comparable to be combined into a single variable; for those 28 programs, the total enrollment value was used. Any record containing a zero-cell value for both criteria was recoded as missing. Given how skewed the raw scores were, the data were then binned into quartiles.

Estimated Graduation Rate
Raw scores were highly skewed; thus, the variable was collapsed into quartiles.

NCLEX Pass Rates
NCLEX pass rates were originally tracked separately from all other data elements. As a result, this information was matched to program data for analysis using unique program codes as the primary key. As ≥ 80% is currently the passing standard used by the majority of U.S. BONs, as well as by the national nursing accreditors, that cut point was selected for analysis purposes.
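In code, the matching and dichotomizing steps might look like the following minimal R sketch; the two data frames and the `program_code` key name are hypothetical.

```r
## Minimal sketch of the key-based match and the 80% cut point;
## `reports`, `nclex`, and `program_code` are hypothetical stand-ins.
merged <- merge(reports, nclex, by = "program_code", all.x = TRUE)
merged$pass_80 <- as.integer(merged$pass_rate >= 80)   # 1 = at/above the 80% standard
```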

Number of Program Sites
Given low observed cell counts for higher raw values and the truncated nature of the range, this variable was further collapsed into a binary predictor (1 vs. ≥ 2).

Program Status
This variable was not originally embedded in the supporting program documentation. Supplementary secondary searches were conducted to ascertain whether the programs under review were public, private nonprofit, or private for-profit.

Statistical Methods
A descriptive summary of the data included frequencies and proportions for all categorical characteristics, whereas continuous variables were reported using median and IQR estimates (Table 5). Generalized linear mixed-effects models were used to estimate the odds of full approval as a function of univariable faculty demographic and program characteristics. Post hoc analyses assessing NCLEX pass rates at or above 80% were also investigated. In both instances, full multivariable modeling was pursued only if two criteria were met: (1) specific evidence-driven hypotheses guiding the inquiry and (2) sufficiently robust model samples. In all models, a binomial distribution was specified for the outcome, and logit links were used to estimate the odds ratio for each predictor. To account for the longitudinal structure of the data, random intercepts were allowed for each program and state, accounting for within-program correlation and the possible influence of common state-level regulations. An alpha error rate of p ≤ .05 was considered statistically significant, and all analyses were conducted using SAS 9.4.
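For readers who want to see the model structure, a minimal sketch of one univariable model follows, written in R with lme4 rather than the SAS 9.4 the study actually used; the column names (`full_approval`, `accredited`, `program_id`, `state`) are hypothetical.

```r
## Minimal sketch of one univariable mixed-effects logistic model:
## binomial outcome, logit link, and random intercepts for program
## and state, as described in the text.
library(lme4)

fit <- glmer(full_approval ~ accredited +
               (1 | program_id) + (1 | state),   # within-program correlation; state-level effects
             data = analysis_data,               # hypothetical analysis data frame
             family = binomial(link = "logit"))

exp(fixef(fit))   # exponentiated coefficient = odds ratio for the predictor
```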

Results
A total of 11,378 annual reports from the 43 BONs that participated in the study were analyzed by researchers at NCSBN.

Program Demographics Characteristics
The median program age in the sample was 23 years (IQR = 7–33 years), with a median enrollment capacity of approximately 66 students (IQR = 32–123). Summary outcome measures were strong, with a median graduation rate of 70% (IQR = 51–85) and a median NCLEX pass rate of 87% (IQR = 77–94). Approximately 90% of all programs (n = 9,168) received full program approval during the study period.
A majority of the programs included in the analysis reported more traditional (17–25 years old; n = 497, 61.7%) and largely Caucasian (≥ 40% Caucasian; n = 1,621, 87.3%) student populations. Approximately 90% of programs had directors with evidence of graduate training in place, led by MSN (n = 1,128, 46.2%) and PhD (n = 495, 20.3%) degrees. Similarly, nearly three-quarters of programs (n = 1,115, 72.8%) also reported a majority or, at minimum, a plurality of faculty with an academic credential of MSN or higher. The median rate of full-time faculty across all programs was 50% (IQR = 34–75%). Between 30% and 40% of all programs reported a student-to-clinical faculty ratio of 8, so data were binned accordingly, resulting in a near even split between those programs with a ratio ≤ 8 (n = 682, 46.8%) and those with ≥ 9 (n = 776, 53.2%).
A majority of the programs were accredited (n = 4,738, 68.4%). Overall, there were fairly even distributions of programs by learning modality (in-person n = 1,004, 43.9%; hybrid n = 750, 32.8%; online n = 534, 23.3%) as well as degree type (LPN/LVN n = 2,556, 42.7%; ADN n = 2,077, 34.7%; BSN n = 1,354, 22.6%). Most programs in the sample were public institutions (n = 5,878, 61.7%), and the median number of program directors was one, with a range of one to seven.

TABLE 5 Program Characteristics and Student Demographics as Provided in Annual Report Documents

Student Age, in y (valid N = 805): 17–25 497 (61.7); ≥ 26 308 (38.3)
Student Race (valid N = 1,856): Non-White/Caucasian 235 (12.7); ≥ 40% White/Caucasian 1,621 (87.3)
Full Program Approval Status (valid N = 10,172): No 1,004 (9.9); Yes 9,168 (90.1)
Program Director Credentials (valid N = 3,507): Baccalaureate 367 (10.5); MSN 1,658 (47.3); DNP 197 (5.6); PhD 710 (20.3); Other graduate 575 (16.4)
Faculty Qualifications (valid N = 1,531): Baccalaureate or lower 416 (27.2); MSN or higher 1,115 (72.8)
% Full-Time Faculty (valid N = 4,923): median 50 (IQR 34–75)
Student-to-Clinical Faculty Ratio (valid N = 1,458): median 9 (range 1–22); ≤ 8 682 (46.8); ≥ 9 776 (53.2)
Accreditation (valid N = 6,929): Not accredited 2,191 (31.6); Accredited 4,738 (68.4)
Learning Modality (valid N = 2,288): In-person only 1,004 (43.9); Hybrid 750 (32.8); Online 534 (23.3)
Degree Type (valid N = 5,987): LPN/LVN 2,556 (42.7); RN-ADN 2,077 (34.7); RN-BSN 1,354 (22.6)
Program Age, in y (valid N = 10,831): median 23 (IQR 7–33)
Enrollment Capacity (valid N = 3,677): median 66 (IQR 32–123)
% Estimated Graduation Rate (valid N = 2,060): median 70 (IQR 51–85)
NCLEX Pass Rate (valid N = 9,672): median 87 (IQR 77–94); ≤ 79% 2,943 (30.4); ≥ 80% 6,729 (69.6)
Number of Program Sites (valid N = 1,910): median 1 (range 1–13); 1 site 1,214 (63.6); ≥ 2 sites 696 (36.4)
Program Type (valid N = 9,525): Private nonprofit 1,720 (18.1); Private for-profit 1,927 (20.2); Public 5,878 (61.7)
Number of Program Directors (valid N = 2,957): median 1 (range 1–7)
Note. MSN = master of science in nursing; DNP = doctor of nursing practice; IQR = interquartile range; LPN = licensed practical nurse; LVN = licensed vocational nurse; RN = registered nurse; ADN = associate degree in nursing; BSN = bachelor of science in nursing.
a Total number of program entries: N = 12,107. Valid N is the total number of entries for which information is known and can be verified.
b Data presented as n (%) except where otherwise noted.

Faculty Characteristics Related to Full Approval
Programs with a majority of graduate-educated faculty were marginally more likely (odds ratio [OR] = 1.82, 95% CI = 0.89–3.73, p = .10) to receive full approval compared to programs with a majority of faculty holding a bachelor's or lower degree. Similarly, programs with a larger proportion of full-time faculty were marginally more likely to receive full approval (p = .08) (Table 6). After adjusting for degree type, programs with a majority of graduate-educated faculty were found to be 2.80 times more likely (95% CI = 1.22–6.39, p = .003) to receive full approval compared to programs with a majority of bachelor's or lower educated faculty (Table 6).

TABLE 6 Univariable Binary Logistic Regression Results Examining Faculty Characteristics Related to Program Full Approval Status

Program Director Credentials (n = 3,353; overall p = .39)
Baccalaureate: 1.63 (0.89–2.99), p = .11
MSN: Ref
DNP: 1.17 (0.53–2.51), p = .70
PhD: 1.19 (0.78–1.83), p = .42
Other graduate: 0.88 (0.56–1.37), p = .56
Faculty Qualifications (n = 1,421)
Baccalaureate or lower: Ref
MSN or higher: 1.82 (0.89–3.73), p = .10
% Full-Time Faculty (n = 4,353; overall p = .08)
≤ 34: Ref
35–50: 1.46 (1.06–2.01), p = .02
51–75: 1.46 (1.03–2.06), p = .03
≥ 76: 1.34 (0.93–1.92), p = .11
Student-to-Clinical Faculty Ratio (n = 879)
≤ 8: Ref
≥ 9: 1.51 (0.76–2.99), p = .24
Note. OR = odds ratio; CI = confidence interval; MSN = master of science in nursing; DNP = doctor of nursing practice. Values are OR (95% CI).

Program Characteristics Related to Full Approval
Programs accredited by a national nursing accreditation body were 2.03 times (95% CI = 1.44–2.87) more likely to receive full approval compared to non-accredited programs (p < .001) (Table 7). Online programs were also 55% (OR = 0.45, 95% CI = 0.27–0.73, p = .001) and 51% (OR = 0.49, 95% CI = 0.27–0.73, p = .001) less likely to receive full approval compared to in-person and hybrid programs, respectively. Longer-standing programs and programs with larger enrollment capacity were both more likely to receive full approval (both p < .001). Similarly, programs with more than one site were 70% more likely (OR = 1.70, 95% CI = 1.04–2.77) to receive full approval compared to programs with only a single site (p = .03). Private for-profit programs were 71% (OR = 0.29, 95% CI = 0.22–0.38, p < .001) less likely to receive full approval compared to public programs and 60% (OR = 0.40, 95% CI = 0.28–0.56, p < .001) less likely compared to private nonprofit programs. Private nonprofit programs were also 27% (OR = 0.73, 95% CI = 0.55–0.96, p = .03) less likely to receive full approval compared to public programs. Programs with NCLEX pass rates at or above 80% were 5.34 times more likely (95% CI = 4.36–6.54) to receive full approval compared to programs that fell below that passing threshold (p < .001).
While less pronounced, we also observed several other noteworthy trends. These included an inverse relationship between higher rates of program director attrition and full program approval (OR = 0.86, 95% CI = 0.69–1.07, p = .17). Similarly, there was an observed trend of BSN programs receiving full program approval at higher rates than both LPN/LVN (OR = 0.69, 95% CI = 0.46–1.04, p = .08) and ADN (OR = 0.67, 95% CI = 0.44–1.01, p = .06) programs. These relationships by degree type (LPN/LVN vs. BSN: adjusted OR [AOR] = 0.83, 95% CI = 0.54–1.29, p = .40; ADN vs. BSN: AOR = 0.71, 95% CI = 0.48–1.07, p = .10) held even after controlling for national accreditation.

TABLE 7 Univariable Binary Logistic Regression Results Examining Program Characteristics Related to Program Full Approval Status

Accreditation (n = 5,913)
Not accredited: Ref
Accredited: 2.03 (1.44–2.87), p < .001
Learning Modality (n = 2,156; overall p = .01)
In-person only: Ref
Hybrid: 0.92 (0.62–1.35), p = .66
Online: 0.45 (0.27–0.73), p = .001
Degree Type (n = 4,928; overall p = .13)
LPN/LVN: 0.69 (0.46–1.04), p = .08
RN-ADN: 0.67 (0.44–1.01), p = .06
RN-BSN: Ref
Program Age, in y (n = 9,224; overall p < .001)
≤ 7: Ref
8–23: 1.66 (1.30–2.12), p < .001
24–32: 2.92 (2.24–3.79), p < .001
≥ 33: 2.79 (2.05–3.79), p < .001
Enrollment Capacity (n = 3,371; overall p = .01)
1–32: 0.39 (0.22–0.68), p < .001
33–66: 0.66 (0.38–1.14), p = .14
67–123: 0.58 (0.34–0.99), p = .04
> 123: Ref
Estimated Graduation Rate (n = 1,466; overall p = .62)
≤ 50%: Ref
51%–70%: 0.84 (0.48–1.46), p = .54
71%–85%: 1.06 (0.58–1.93), p = .86
> 85%: 1.23 (0.67–2.28), p = .51
NCLEX Pass Rate (n = 8,035)
≤ 79%: Ref
≥ 80%: 5.34 (4.36–6.54), p < .001
Number of Program Sites (n = 1,172)
1: Ref
≥ 2: 1.70 (1.04–2.77), p = .03
Program Type (n = 8,028; overall p < .001)
Private nonprofit: 0.73 (0.55–0.96), p = .03
Private for-profit: 0.29 (0.22–0.38), p < .001
Public: Ref
Number of Program Directors (n = 2,879): 0.86 (0.69–1.07), p = .17
Note. OR = odds ratio; CI = confidence interval; LPN = licensed practical nurse; LVN = licensed vocational nurse; RN = registered nurse; ADN = associate degree in nursing; BSN = bachelor of science in nursing. Values are OR (95% CI).

Relationship Between NCLEX Pass Rates and Faculty Characteristics
The relationship between NCLEX pass rates ≥ 80% and faculty characteristics is illustrated in Table 8. This was a post hoc analysis carried out to uncover all possible faculty characteristics related to program outcomes. Programs whose director had a PhD were marginally more likely to have NCLEX pass rates ≥ 80% (p = .08). In addition, there was a trend toward programs with a greater proportion of full-time faculty having NCLEX pass rates ≥ 80% compared to programs with a smaller proportion of full-time faculty (p = .11).

TABLE 8 Univariable Binary Logistic Regression Results Examining Faculty Characteristics Related to NCLEX Pass Rates

Program Director Credentials (n = 2,864; overall p = .08)
Baccalaureate: 1.12 (0.75–1.68), p = .57
MSN: Ref
DNP: 1.35 (0.82–2.22), p = .24
PhD: 1.56 (1.14–2.13), p = .01
Other graduate: 1.21 (0.86–1.70), p = .27
Faculty Qualifications (n = 604)
Baccalaureate or lower: Ref
MSN or higher: 1.26 (0.72–2.20), p = .41
% Full-Time Faculty (n = 3,287; overall p = .11)
≤ 34: Ref
35–50: 1.24 (0.93–1.66), p = .14
51–75: 1.17 (0.85–1.60), p = .33
≥ 76: 1.49 (1.07–2.08), p = .02
Student-to-Clinical Faculty Ratio (n = 1,357)
≤ 8: Ref
≥ 9: 0.96 (0.60–1.52), p = .85
Note. OR = odds ratio; CI = confidence interval; MSN = master of science in nursing; DNP = doctor of nursing practice. Values are OR (95% CI).

Program Characteristics Related to NCLEX Pass Rates
Hybrid programs were 51% (OR = 1.51, 95% CI = 1.09–2.10, p = .01) and 64% (OR = 1.64, 95% CI = 1.03–2.56, p = .03) more likely to have an NCLEX pass rate ≥ 80% compared to in-person and online programs, respectively (Table 9). ADN programs were 45% less likely (OR = 0.55, 95% CI = 0.39–0.78, p < .001) to have an NCLEX pass rate ≥ 80% compared to BSN programs. When the outcome was full approval, BSN programs trended toward being more likely to receive full approval than ADN or LPN/LVN programs, though the difference was not statistically significant. As when the outcome was full approval, longer-standing programs were more likely to have an NCLEX pass rate ≥ 80% (p < .001) compared to new programs (≤ 7 years). Similarly, programs with more than one site were 50% more likely (OR = 1.50, 95% CI = 1.04–2.16) to have an NCLEX pass rate ≥ 80% compared to programs with only a single site (p = .03). Private for-profit programs were 82% (OR = 0.18, 95% CI = 0.15–0.23, p < .001) and 73% (OR = 0.27, 95% CI = 0.21–0.36, p < .001) less likely to have an NCLEX pass rate ≥ 80% compared to public and private nonprofit programs, respectively. Private nonprofit programs were also 33% (OR = 0.67, 95% CI = 0.54–0.84, p < .001) less likely to have an NCLEX pass rate ≥ 80% compared to public programs. As director attrition increased, programs were 23% less likely (OR = 0.77, 95% CI = 0.66–0.91) to have an NCLEX pass rate ≥ 80% (p = .002).

TABLE 9 Univariable Binary Logistic Regression Results Examining Program Characteristics Related to NCLEX Pass Rates

Program Characteristics              n        OR (95% CI)         p
Accreditation                        5,148
  Not accredited (Ref)                        -
  Accredited                                  1.12 (0.89–1.41)    .32
Learning Modality                    1,808                        .01
  In-person only (Ref)                        -
  Hybrid                                      1.51 (1.09–2.10)    .01
  Online                                      0.93 (0.60–1.42)    .72
Degree Type                          3,902                        .003
  LPN/LVN                                     0.75 (0.53–1.06)    .11
  RN – ADN                                    0.55 (0.39–0.78)    < .001
  RN – BSN (Ref)                              -
Program Age, in y                    9,060                        < .001
  ≤ 7 (Ref)                                   -
  8–23                                        1.83 (1.51–2.21)    < .001
  24–32                                       3.07 (2.52–3.75)    < .001
  ≥ 33                                        3.83 (3.04–4.82)    < .001
Enrollment Capacity                  2,221                        .20
  1–32                                        0.97 (0.61–1.55)    .90
  33–66                                       0.73 (0.47–1.12)    .15
  67–123                                      0.69 (0.45–1.07)    .10
  > 123 (Ref)                                 -
Estimated Graduation Rate            1,958                        .54
  ≤ 50% (Ref)                                 -
  51%–70%                                     1.26 (0.86–1.86)    .24
  71%–85%                                     1.01 (0.67–1.53)    .97
  > 85%                                       1.16 (0.76–1.77)    .50
Number of Program Sites              1,758
  1 (Ref)                                     -
  ≥ 2                                         1.50 (1.04–2.16)    .03
Program Type                         8,762                        < .001
  Private nonprofit                           0.67 (0.54–0.84)    < .001
  Private for-profit                          0.18 (0.15–0.23)    < .001
  Public (Ref)                                -
Number of Program Directors          2,198    0.77 (0.66–0.91)    .002
Note. OR = odds ratio; CI = confidence interval; LPN = licensed practical nurse; LVN = licensed vocational nurse; RN = registered nurse; ADN = associate degree in nursing; BSN = bachelor of science in nursing.

Discussion
A profile of the nursing programs most likely to secure full approval status emerged in the analysis. Specifically, shared characteristics of approved programs included (a) national accreditation, (b) traditional or hybrid modalities, (c) longer-standing programs, (d) higher enrollment capacity, (e) first-time NCLEX pass rates ≥ 80%, (f) multiple program sites, and (g) private nonprofit or public institutions.

While not statistically significant, there were some observed trends that may also prove noteworthy for BONs. Related to faculty, programs with a majority of graduate-educated faculty were marginally more likely to receive full approval (p = .10), as were those with a greater proportion of full-time faculty (p = .08). Importantly, after adjusting for degree type, programs with a majority of graduate-educated faculty were ultimately found to be more likely to receive full approval (p = .003). There was also a marginal, inverse relationship between program director turnover and full program approval (p = .17), which underscores the potentially important role of administrative stability. Finally, there was evidence that BSN programs received full approval more frequently than either LPN/LVN (p = .08) or ADN programs (p = .06).

When using first-time NCLEX pass rates ≥ 80% as the outcome, many of the findings associated with full program approval were replicated, though there were a few notable differences. For example, in the post hoc NCLEX analysis, programs were marginally (p = .08) more likely to have NCLEX pass rates ≥ 80% when the program director was PhD educated, as compared with program directors holding other graduate or undergraduate credentials. Online programs were significantly less likely to be approved than programs using traditional or hybrid modalities, whereas programs incorporating hybrid learning strategies were significantly more likely to have higher NCLEX pass rates. ADN programs were also significantly less likely to have higher NCLEX pass rates compared with BSN and LPN/LVN programs, as were programs that experienced greater director attrition (> 3 directors in 5 years). In both instances, degree type and program director attrition were only marginally associated with full program approval. By contrast, enrollment capacity was a significant finding when examining approval status but not NCLEX pass rates (p = .20). For both full approval and NCLEX pass rates, the percentage of full-time faculty (> 35%) was a marginal finding (p = .08 and p = .11, respectively).

This quantitative study found that programs using hybrid learning modalities were significantly more likely to have first-time NCLEX pass rates ≥ 80%. Similarly, a seminal, multiyear U.S. Department of Education meta-analysis of more than 1,000 studies with measured student outcomes found that hybrid (or blended) approaches had significantly better outcomes (Means et al., 2010). A rigorous national study of nursing education characteristics and NCLEX outcomes found the percentage of full-time faculty to be a predictor of NCLEX success (Odom-Maryon et al., 2018); our quantitative study observed that trend for both the full approval and NCLEX pass rate outcomes as well.

An interesting finding in two large national studies on NCLEX outcomes related to nursing program characteristics (Odom-Maryon et al., 2018; Pittman et al., 2019), as well as in our quantitative study, was that public nursing programs have significantly better outcomes than private nonprofit or private for-profit programs. In both the Pittman et al. (2019) study and our study, private for-profit programs had significantly poorer outcomes. What is it about for-profit programs that puts their students at risk? This finding needs to be studied in more depth in the future. Somewhat related, our study found that long-standing programs with more than one site and higher capacities have significantly better outcomes.

The evidence for quality clinical experiences and simulation was strong in the Delphi study and the literature. Benner et al. (2010), in a seminal mixed-methods, longitudinal study, found that highly performing programs provided quality clinical experiences emphasizing clinical judgment and reasoning. Similarly, Kavanagh and Szweda (2017), in their national study of more than 5,000 new graduates, found that high-performing nursing programs had competent clinical faculty who focused on the development of clinical judgment. However, these factors were not assessed in our quantitative study because those data are not consistently collected in annual reports and therefore were not available for analysis.

Limitations
While a high number of annual and site reports were obtained from diverse U.S. BONs, the data collected were not consistent across all BONs. For example, many BONs did not report estimated graduation rates, numbers of hours for clinical experiences, simulation percentages, or use of simulation guidelines. Similarly, there were missing or incomplete data for the characteristics that BONs did track. This gap significantly limited the extent of the multivariable modeling that was possible. Guided by specific hypotheses, targeted multivariable models were generated when sufficiently robust model samples could be confirmed. However, full models assessing all potential factors simultaneously were not possible given the sample limitations.
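As an illustration only, a targeted model of the kind described above might be gated on complete-case sample size as follows; the column names (`full_approval`, `program_type`, `program_age_band`) and the threshold are hypothetical, not the study's actual criteria.

```python
# Hypothetical sketch of a "targeted" multivariable logistic model: fit only
# when the complete-case sample for the hypothesized predictors is adequate.
import pandas as pd
import statsmodels.formula.api as smf

def fit_targeted_model(df: pd.DataFrame, min_n: int = 500):
    """Fit one hypothesis-driven multivariable model, but only when enough
    complete cases exist to support it."""
    predictors = ["program_type", "program_age_band"]  # hypothesis-driven subset
    complete = df.dropna(subset=["full_approval", *predictors])
    if len(complete) < min_n:  # illustrative robustness check, not the study's rule
        return None
    return smf.logit(
        "full_approval ~ C(program_type) + C(program_age_band)", data=complete
    ).fit(disp=0)
```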

Conclusion
This quantitative study of 5 years of BON annual reports provides a profile of the nursing programs that meet state approval requirements. Statistically significant characteristics of approved programs and of those with NCLEX pass rates ≥ 80% included (a) national accreditation, (b) traditional or hybrid modalities, (c) long-standing programs, (d) higher enrollment capacity, (e) multiple program sites, (f) private nonprofit or public institutions, (g) a program director with a PhD, (h) LPN/LVN and BSN programs (as opposed to ADN programs), and (i) no more than three program directors in 5 years. A marginally significant finding was that programs with more than 35% full-time faculty had first-time NCLEX pass rates ≥ 80% and full approval.

A Qualitative Analysis of 5 years of BONs Site Visit Documents
A qualitative study of 5 years of BON site visit documents was conducted to better understand, in qualitative terms, why programs become at risk of failing or do fail. Specifically, we aimed to answer the following research question: What are the warning signs when programs become at risk of failing or do fail?

Methods
This qualitative descriptive design blended directed content analysis techniques to generate the findings for this report. Qualitative descriptive designs are the most basic of all approaches to data analysis and seek to identify and describe a phenomenon that is not well understood (Sandelowski, 2000, 2009). Because what leads to program failure in nursing is not well defined, a descriptive approach was the best methodological match. NCSBN and AIR researchers collected the data, while external experts in qualitative research analyzed the data.

Document Sample Inclusion and Exclusion Criteria
The analytical sample provided to the team included 2,853 eligible documents from 40 states (Table 10). The researchers first counted the number of documents per state and then checked whether each file was "readable" (in a format compatible with the MaxQDA software). Documents were then reviewed and sorted according to the inclusion and exclusion criteria, adhering to the principles of best practice for systematic reviews. Documents were included for analysis if they were classified as "site" or "survey" visits. Exclusion criteria were as follows (an illustrative screening sketch follows the list):
⦁ Self-study reports/plans
⦁ Letters (e.g., letters of intent, approval letters)
⦁ Addenda
⦁ Current board status at full approval
⦁ State-level summaries of any kind
⦁ Action plans and responses
⦁ Duplicate files
⦁ Accreditation documents
⦁ Spreadsheets
⦁ Signature pages
⦁ State BON annual reports.
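The sketch below is purely illustrative of how such inclusion/exclusion rules could be applied in code; the study itself screened documents by manual review, and the keyword rules here are invented.

```python
# Hypothetical document-screening sketch: include "site"/"survey" visit
# reports, exclude the categories listed above. Keyword rules are invented;
# the actual study screened documents by manual review.
EXCLUDE_KEYWORDS = [
    "self-study", "letter", "addendum", "full approval", "summary",
    "action plan", "accreditation", "spreadsheet", "signature", "annual report",
]
INCLUDE_KEYWORDS = ["site visit", "survey visit"]

def screen(title: str) -> str:
    t = title.lower()
    if any(k in t for k in EXCLUDE_KEYWORDS):
        return "excluded"
    if any(k in t for k in INCLUDE_KEYWORDS):
        return "included"
    return "needs manual review"

print(screen("2017 Site Visit Report - Example Nursing Program"))  # included
print(screen("Letter of Intent"))                                  # excluded
```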

TABLE 10 Total Site Visit Documents by State

State/Board   Total Received   Incompatible File Format   Excludedᵃ   Total Reviewed
AK                  3                  0                      3              0
AR                207                 34                    101             72
AZ                 35                  4                     20             11
CA-RNᵇ            409                 23                     44            342
CA-VNᵇ             53                  2                      7             44
CO                 16                  0                      6             10
DC                 40                  0                     40              0
GA                  4                  0                      2              2
IA                  7                  0                      0              7
ID                  8                  3                      8              0
IL                 13                  2                      0             11
KS                 66                 10                     52              4
KY                  8                  0                      8              0
LA-RN             225                 13                    197             15
MA                 92                  0                     92              0
MN                 63                  9                     18             36
MO                  4                  3                      0              1
MS                  4                  0                      4              0
MT                 21                  0                     21              0
NC                  1                  0                      1              0
ND                 18                  3                      0             15
NE                 15                  2                      0             13
NH                 14                  0                     10              4
NM                 22                  3                      3             16
OH                299                  0                      0            299
OK                 39                  0                      0             39
OR                 32                  0                      4             28
SC                  4                  0                      4              0
SD                  2                  0                      0              2
TN                 52                 18                      0             34
TX                129                  0                      2            127
VA                 69                  0                      7             62
VT                 15                  0                     12              3
WA                354                 47                    236             71
WI                  7                  0                      7              0
WV-RN             485                 13                    460             12
WY                 18                  5                     12              1
TOTAL           2,853                194                  1,381          1,278
ᵃ Documents were not site visit or survey reports.
ᵇ California has two nursing boards: the California Board of Registered Nursing (CA-RN) and the California Board of Vocational Nursing and Psychiatric Technicians (CA-VN). Both boards submitted documents.

Of the 40 states in the sample, nine had no documents that met the inclusion criteria; therefore, this analysis represents data from 31 states. After the inclusion/exclusion criteria were applied, 1,278 site visit reports for RN and LPN/LVN programs were eligible for the analysis. The final step in the process was to determine which reports were for programs that were on probation, under review, or did not have full approval. Only these reports formed the dataset for analysis.

Data Analysis
Two researchers used MaxQDA qualitative data analysis software to analyze the documents. Coding occurred through content analysis and context analysis. Content analysis is an iterative coding process whereby codes emerge naturally from the data, with a specific intent guiding the process (Miller & Alvarado, 2005). In this case, the intent was to identify issues contributing to low-performing schools (poor NCLEX pass rates and approval downgrades). Documents from fully approved programs were also randomly reviewed to confirm that they did not exhibit the same issues and that these issues were unique to the low-performing schools. Researchers also used context analysis, which considered the geographic location of the school, the state regulatory context (as specified through websites), and whether the school was classified as urban, suburban, rural, or virtual (Miller & Alvarado, 2005). The team developed a codebook to harmonize their coding (Appendix C). RN and LPN/LVN programs were analyzed separately; coding was then harmonized and tracked for similarities and distinctions based on entry-level program type.

Figure 2 illustrates how codes emerged during the data analysis process. Researchers tracked the number of new codes generated with each document to determine when coding saturation occurred, following the methods recommended by Hennink et al. (2017). Our results were similar to those of Hennink et al., who demonstrated that coding saturation occurs between data sources nine and 11, with minimal codes generated after the 16th source. Ours differed only slightly, with saturation occurring later in the coding process, which may reflect the states' differing reporting formats. Themes and categories emerged iteratively from the coding process, and the finalized themes and categories represent consensus agreement among the analysts.
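A minimal sketch of the saturation-tracking logic described above; the code labels are invented for illustration:

```python
# Saturation tracking: count how many previously unseen codes each successive
# document contributes; a sustained run of zeros suggests saturation.
codes_per_document = [  # invented code labels, for illustration only
    {"faculty_turnover", "no_policies", "broken_equipment"},  # document 1
    {"faculty_turnover", "no_self_evaluation"},               # document 2
    {"no_policies"},                                          # document 3 ...
]

seen: set[str] = set()
for i, codes in enumerate(codes_per_document, start=1):
    new_codes = codes - seen
    seen |= codes
    print(f"Document {i}: {len(new_codes)} new code(s)")
```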

FIGURE 2 Number of New Codes Generated by Site Visit Documents
[Bar chart omitted: number of new codes (y-axis, 0–18) by document number (x-axis, 1–30). New codes were most numerous among the earliest documents coded and tapered to zero, with only one new code emerging after the 20th document.]

Results
The findings represent data from 139 survey or site visit reports that formed the final sample for analysis: 52 reports from RN programs and 87 from LPN/LVN programs. Eighteen states had both RN and LPN/LVN programs represented, and 15 states had only RN programs. There were several notable observations in the analysis. First, a large number of for-profit programs received citations, which may merit further exploration. Second, "younger" programs (< 10 years in operation) appeared to be at higher risk for failure. Third, more LPN/LVN programs received citations compared with RN programs.

The data comprising the sample allowed us to achieve a level of data saturation that generated themes and categories. Through a summative content analysis, which examines the frequency with which codes are used (Hsieh & Shannon, 2005), all codes appeared in the documents with a frequency of 50% or greater. The themes and categories aligned well with Bronfenbrenner's socioecological model as an organizing framework for why programs become at risk or fail (Darling, 2007; Christensen, 2010). In the case of nursing education, the student is at the center of development as they are socialized and educated to become nurses within a specific context. Their responses to the environment in which they are shaped affect their developmental processes and outcomes. With the NCLEX as the final quantitative developmental indicator of an entry-level nursing education program, a series of NCLEX failures suggests there may be contextual issues affecting students' performance. The right educational context should be able to ameliorate individual-level issues with students that would otherwise confound the outcomes. Effective educational environments are "person centered," with teaching strategies, student services and support, and faculty qualifications aligned to optimize student success. Themes and associated categories are therefore presented in alignment with this model in Figure 3. All schools had at least two areas in which problems had occurred, with faculty and leadership issues being the most common. Failing programs usually had problems at every level, with the state regulatory context dictating the severity of those issues based on local laws.

FIGURE 3 A Socioecological Perspective on Factors Contributing to At-Risk Status of Nursing Education Programs
[Diagram omitted: nested rings showing, from outermost to innermost, the State Regulatory Context, Organization, Leaders, Faculty, and Student levels.]

The findings are presented first by the three overarching themes that emerged from the analysis, followed by the categorical findings associated with the theoretical framework.

Emerging Themes
Theme 1: Site Visit Triggers
Site visit triggers are defined as the issue or issues that prompted a review of the program via a site or survey visit. The main signal for a site visit trigger was an NCLEX pass rate ≤ 80% for 4 or more quarters; the length of time it took for NCLEX performance concerns to trigger a site visit varied by state regulation. Other site visit triggers were student complaints about the program, clinical site complaints about the program or its students, and formal public complaints about a program or its graduates. What triggered the site visit appeared to set the tone of the visit for the surveyor: reports that involved student or public complaints were more detailed, and assessments appeared more thorough, than those prompted only by a trend of NCLEX failures.
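A minimal sketch of the main trigger rule, under the assumption (not stated explicitly in the reports) that the low quarters must be consecutive; the exact window varies by state:

```python
# Hypothetical sketch: flag a program once its quarterly first-time NCLEX
# pass rate has been at or below 80% for four or more consecutive quarters.
def site_visit_triggered(quarterly_pass_rates: list[float],
                         threshold: float = 80.0,
                         consecutive_quarters: int = 4) -> bool:
    run = 0
    for rate in quarterly_pass_rates:
        run = run + 1 if rate <= threshold else 0
        if run >= consecutive_quarters:
            return True
    return False

print(site_visit_triggered([86, 79, 78, 75, 72]))  # True: four low quarters in a row
```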

Theme 2: Administrative Processes Are a Primary Source of Program Vulnerability
Administrative processes are defined as the operational procedures, policies, and resources needed to ensure adequate record keeping for students and faculty, support faculty productivity, and facilitate program leadership. When programs had failing NCLEX rates for more than a year, it was clear that administrative processes had either (a) never existed, (b) not been revised in more than 5 years, (c) been ignored altogether, or (d) been cut during a reorganization or as part of "efficiency" measures. Notable hallmarks of poor administrative processes leading to citations by the state include:
⦁ Poor record keeping of faculty credentials, course evaluations, and student records
⦁ A general lack of policies and procedures
⦁ A lack of quality improvement processes around program and curricular evaluations
⦁ A lack of faculty and student input into policies, procedures, and processes
⦁ Students failing to receive educational materials (e.g., books, uniforms, software, internet access, syllabi) at the beginning of the semester.
Most failing programs exhibited at least two of these factors when receiving a violation.

Theme 3: Use of Data
Programs that failed to use data to set admission, progression, and student performance standards appeared to have consistent problems. These problems also persisted over time when programs received a citation for a deficiency and data were not used to address it, and when programs failed to conduct adequate self-evaluations. Potential reasons for the lack of data use include:
⦁ Lack of administrative competence with interpreting and using data to guide decision making (e.g., statistical process control)
⦁ Lack of faculty competence with interpreting and using data to set standards
⦁ No internal statistical support to conduct predictive analyses of what factors influence student performance and may predict first-time NCLEX success
⦁ No resources to contract out data analysis to inform decision making.
Key areas where data need to be used to facilitate student success include:
⦁ Student demographics
  ○ Socioeconomic status
  ○ English as a first or other language
  ○ Presence of children younger than 18 years in the home
  ○ Need to work while attending the program
⦁ Program admission
  ○ High school or previous coursework cumulative grade point average
  ○ SAT or ACT scores (when applicable)
  ○ Secondary education in the United States
⦁ Presence of a remediation program prior to the start of coursework
⦁ Program progression
  ○ Minimum grade point average standard
  ○ Minimum passing grade in specific courses
  ○ No pass/fail grades
  ○ Predictive examination scores (e.g., HESI examinations)
Surveyors captured all of these in their reports as factors influencing a program's risk of success or failure.

Categorical Findings Associated With the Theoretical Framework
Student Feedback
Student feedback reported in the summaries provided useful perspectives on sources of problems from a nonfaculty viewpoint. Students were readily able to:
⦁ Differentiate between well-managed and poorly managed schools
⦁ Identify a lack of program director support
⦁ Verbalize fear of retaliation from faculty for discussing program concerns
⦁ Identify a lack of student input into program decision making
⦁ Highlight school-student communication issues.
Student demographics may also be factors. The most consistent faculty-reported student issues associated with NCLEX performance problems were whether English was a student's first or an additional language and undiagnosed learning disabilities. Poverty and family issues were the next most frequently cited factors affecting students' academic performance and NCLEX pass rates. These faculty-cited student issues were reported almost entirely in LPN/LVN program reports.

Faculty Feedback
A persistent theme throughout all reports, and the most consistent characteristic of programs becoming at risk or failing, was that faculty lacked training in basic pedagogical methods. Reasons included (a) a lack of bachelor's or master's degree-prepared faculty who had undergone any kind of teaching training; (b) faculty who transitioned directly from clinical practice roles and had little to no experience with precepting; (c) heavy workloads; (d) a lack of substantive ongoing faculty development; and (e) limited new-faculty mentorship. Where ongoing faculty development existed, the opportunities focused on what leadership deemed important rather than on preparing faculty to become teachers or improve their skills.

Another persistent faculty theme was a lack of recent clinical practice experience. Schools with faculty who had not provided direct patient care within the past 5 years appeared to have outdated teaching approaches, were not in tune with the latest clinical practice, and often relied less on evidence in their curricula. Given the increased use of technology in the workplace and of electronic health records, which have fundamentally changed the hospital nursing role, faculty without recent clinical experience appear to be a liability that places programs at risk.

Faculty curriculum development skills were also lacking in many programs, and this appeared associated with the level of curricular control the surveyor perceived faculty to have. Many failing schools had no overarching philosophy that tied the curriculum together. This resulted in curricular plans that were task centered, often in ways that masked themselves as competency based.

High faculty turnover or an inability to recruit qualified personnel was also a factor in many schools. An extensive reliance on adjunct faculty to teach all classes is a known quality issue in higher education generally and in nursing programs. Programs that lacked full-time faculty saw problems with NCLEX performance, their quality indicator. Poor compensation (in comparison with full-time clinical nursing positions) and a lack of incentives both contributed to turnover and recruitment issues. Faculty in for-profit schools often observed that while administrators had incentives such as stock options in the parent company that owned the program, faculty were rarely offered such incentives.

Teaching and Learning Resources
Teaching and learning resources were a critical subtheme for faculty. Even qualified faculty would have trouble doing their jobs if teaching and learning resources were unavailable or poorly managed. Teaching and learning resources were also tied directly to the leader's ability to procure resources for faculty and to the organization's management. Key teaching and learning resources that appear tied to a program's risk of failure fell into three categories: (a) teaching resources, (b) physical instructional resources, and (c) quality of materials.

Using NCLEX test preparation materials and online supplemental instructional resources alongside classroom and clinical instruction appeared linked to satisfactory NCLEX pass rates. The brand of these materials did not matter. A survey of programs might produce insights as to which brands are most effective, though effectiveness may also be linked to student demographic data.

Physical Instructional Resources
Physical resources include the quality of materials in the simulation laboratory, the quality of other physical instructional resources for teaching and learning, and whether full- or part-time faculty had private office space for student meetings or their own work. Office space for adjunct faculty did not appear significant, but the ability to reserve a conference room to meet with students was important to them. Programs that lacked simulation laboratory accreditation appeared at higher risk for failure. Broken mannequins or equipment, out-of-date materials, and a lack of equipment for medication administration were commonly cited issues.

The quality of materials is defined as whether teaching materials were prepared and managed according to the course outcomes in the syllabi and were consistent in design with internal policies. It was not uncommon for site visitors to find that the content of a class did not match the approved course description or outcomes. The more classes with such issues, the more likely the program was to have prolonged NCLEX performance problems. However, it was not always possible to follow up on NCLEX performance at the school level after a probationary citation because of variations in state transparency around problem-program reporting.

Leadership of the Nursing Program
Nursing program leadership had three dimensions that appeared to affect the risk of a program failing or falling under review. The first arose when the director of the nursing program, through organizational consolidation, was placed in charge of other allied health and/or vocational programs. These added responsibilities often came without the addition of an assistant director who could manage the day-to-day operations of the nursing program. The additional responsibilities detracted from program quality, a factor also reflected in student feedback, and are another symptom of potentially problematic program management practices.

The second leadership dimension appears tied to the degree qualifications of the director. Doctoral-level education appeared to compensate for a lack of academic administrative experience, though exactly why this level of education appeared protective against program failure is not yet clear and merits further exploration. It may be that individuals with doctorates have more diverse work experience in general and that their training provided additional skills that facilitated program management. It was clear that programs without a college or university affiliation and with directors prepared only at the master's level were at greater risk of failure. Because demographic data about these individuals were not available, it was difficult to determine why this finding occurred; however, it was consistently observed.

The final leadership issue that arose frequently was a nurse not being in charge of the program, whether because the position had been vacant for a long period or because higher administration did not think a nurse needed to oversee the program despite the regulatory context dictating otherwise. Both situations were more common in for-profit programs than in other types.

Educational Organizations
Educational organizations had other specific issues that emerged as distinct categories in the analysis, namely organizational changes and resources, both of which could influence program success or failure. Organizational changes occur in schools with other degree-granting programs when the administration decides to make changes based on economic efficiencies. Sometimes these changes masked broader financial problems for the parent institution, and they could add to or decrease responsibilities for nursing faculty. From the reports, it appears that programs are at risk for changes in NCLEX performance 1 to 3 years after such changes, which increases the risk of probation. The longer performance issues persisted after these changes, the more likely the program was to transition from probation to failing. These trends likely reflected the nursing program leadership's ability to navigate existing faculty through the changes, or how leadership managed the higher faculty turnover rates often associated with organizational change of any kind.

Resources provided by organizations to facilitate nursing education were another factor that was often missing. While often monetary in nature, resources include (but are not limited to) student affairs support, administrative support, libraries, and information technology. How resources were allocated toward clinical learning experiences and clinical sites, including simulation and laboratory supplies, was also important. Problematic programs were usually lacking in at least one, and sometimes all, of these areas.

It is important to note that for-profit schools appeared to trigger more site visits than nonprofit or state schools. This was especially true for LPN/LVN programs.

State Regulatory Context
It was clear that the regulatory context of nursing education program approval had a positive effect in holding programs accountable to standards, particularly with respect to minimum requirements for faculty. This held true whether or not the program was accredited by a national accrediting agency. Probationary and failure consequences varied in the length of time schools had to address their deficiencies. Unsurprisingly, shorter periods usually meant an increased chance of failure; a shorter period also meant that schools had to obtain resources to hire consultants to help them address deficiencies.

Without a standardized chart for comparing regulatory contexts for nursing education, our ability to compare between states to determine associations with geographic, socioeconomic, and other factors was limited.

Limitations
Despite the volume of documents that constituted the initial sample, a number of file management problems may have precluded a larger sample size or a complete analysis of all site or survey visits. These include:
⦁ An inconsistent and unstructured file naming and management system
⦁ File types incompatible with the qualitative data analysis software
⦁ Improper file formats for analysis or optical character recognition
⦁ Files that could not be downloaded for analysis
⦁ Missing reports for RN or LPN/LVN programs from 10 states.
While every effort was made to include all site visit or survey report documents, some may have been missed because of how files were named or classified. Nonetheless, the researchers are confident in the results because of coder consensus, because only one new code appeared after the 20th document, and because both coders believed they had achieved saturation at around 40 to 45 coded documents.

Conclusion
Considerable and specific data on what happens when nursing programs begin to fail or do fail were found in the site visit documents. Three overarching themes were found: site visit triggers, administrative processes, and the use of data in continuous improvement. Specific findings in the areas of student, faculty, leadership, organization, and state regulatory context were also presented. These issues coalesce well with the data found in the literature and in our Delphi study. The site visit study was the first to find that BON approval of nursing education programs is an essential process for protecting the public and maintaining nursing education program standards. This was true regardless of whether the programs were accredited by a national nursing accreditation agency.

SUMMARY
This comprehensive literature review and three-part national study provide substantial evidence-based criteria for identifying quality indicators of successful and high-risk nursing education programs, supporting recommended guidelines for nursing education approval. These criteria include quality indicators and warning signs related to (a) organizational requirements and processes, (b) program leadership, (c) faculty quality and requirements, and (d) curriculum and clinical program components.

Organizational Requirements, Policies, and Processes
Problems with administrative processes, such as a lack of policies and procedures, were identified in both the site visit study and the literature review as problematic for nursing programs. The literature review, Delphi study, and site visit study all emphasized the importance of collecting data to establish policies and procedures and of evaluating the nursing program based on the data collected. A major theme identified in the site visit study was that programs that failed to collect data to set admission, progression, and student performance standards received downgraded approval statuses.

The site visit study cited key areas where these data could facilitate student success: (a) student demographics, (b) program admission standards, (c) remediation programs, and (d) program progression. The literature review and Delphi study supported this finding as well. BONs should expect these data to be collected and acted upon. For example, if a program has a high proportion of students who speak English as a second language, it should have resources in place to assist these students. Likewise, if the students' socioeconomic status means they come from disadvantaged homes where the school systems are not up to par, remediation programs should be in place. Additionally, the site visit study found that students in underperforming schools often verbalized communication issues and a lack of input into program decision making. These findings illustrate the importance of regulators meeting with students when making a site visit.

The failure of nursing programs to collect data also meant they were unable to evaluate their programs. The literature review, Delphi study, and site visit study all emphasized the importance of a systematic evaluation of the nursing program based on data collection. Yet one problem identified in the site visit study was that in underperforming programs, even when data were collected, faculty often were not able to interpret or use the data for setting standards. Likewise, faculty may not have access to statistical support when analyzing the data.

In the site visit study, researchers found that in programs that lost approval, policies and procedures had not been revised in more than 5 years or had been ignored (because of "efficiency" measures). Furthermore, there was no faculty or student input into the policies and procedures. In programs that lose full board approval, organizational changes made for "economic efficiencies" sometimes mask larger financial problems. BONs will want to find out what is behind organizational changes because such changes can affect faculty workload and responsibilities. As noted in the site visit study, when organizations make organizational changes because of financial problems, they often lose board approval within 1 to 3 years. This is an important consideration for BONs when evaluating programs. Similarly, when the parent organization does not provide the program with sufficient resources, such as student services, libraries, information technology, and adequate clinical facilities and simulation, the programs and students struggle.

In the site visit study, another issue that often triggered further review by a BON was student, clinical site, or formal public complaints. This finding was further substantiated by the Delphi study, which cited complaints to the BON as a warning sign for the program.
Given the issues uncovered in these site visits, BONs should continue the practice of site visits to programs with a substantial number of complaints. The annual report and site visit studies, along with some of the literature, demonstrated that nursing program ownership may affect outcomes. Negative outcomes can range from low NCLEX pass rates to a program unexpectedly shutting down while students are enrolled. The data indicate that for-profit schools are at the greatest risk. Public schools and those that are well established (≥ 10 years) are the most likely to maintain the educational standards set forth by the state. Therefore, new programs require more oversight, as do online programs.

Leadership
Consistent leadership in a nursing program was found to be crucial. The literature review and Delphi study cited program director attrition as a warning sign, and the annual report study found that frequent director turnover resulted in statistically significant differences in NCLEX pass rates and in program approval status. Both the site visit study and the annual report study found that when the program director was doctorally prepared, programs had higher NCLEX pass rates and were more likely to have full approval. In the site visit study, it appears that a doctoral degree may make up for a lack of academic administrative experience because of the other valuable experiences the individual brings to the role.

The site visit study was able to delve deeper into nursing program leadership and found that when a director is placed in charge of other allied health and/or vocational programs, usually for financial reasons, the program is more at risk because the director's attention is diverted across multiple programs. Additionally, the site visit study discovered that program issues arose when a nurse was not in charge of the program. This often happened when the position had been vacant for some time or when the administration did not think it necessary. These situations occurred more often in for-profit institutions.

Faculty Quality and Requirements
The quality of faculty is at the core of a successful nursing program. Having consistent, full-time faculty (at least 35% full-time, as opposed to adjunct or part-time, faculty) in a nursing program predicted full approval and higher NCLEX pass rates in the annual report study. The literature review also found that the percentage of full-time faculty was linked to higher NCLEX pass rates, and the Delphi study reported consistent, full-time faculty as an essential element of a nursing program. The site visit study found that high faculty turnover and the inability to recruit qualified faculty were linked to poor NCLEX performance. Both the annual report and site visit studies demonstrated that a lack of graduate-degree-prepared faculty was linked to less than full approval status.

Additionally, as seen in the site visit study, faculty with little training in basic pedagogies were a persistent theme in failing programs. Faculty in failing programs often had no training in teaching, having transitioned directly from clinical practice to education. Likewise, they had heavy workloads and limited new-faculty mentorship opportunities. The site visit study cited the lack of substantive and ongoing faculty development opportunities as an important element of failing nursing programs. The literature review and Delphi study also cited faculty development as an important factor in successful nursing programs.

The literature review, Delphi study, and site visit study all identified current clinical experience as a critical element of successful nursing programs. The site visit study found that schools whose faculty had not provided direct patient care in the past 5 years appeared to have outdated teaching approaches and were not teaching the latest technological advances. There are many ways a program could provide its students with clinically competent faculty. It might, for example, develop partnerships with practice, such as dedicated education units, in which the faculty lead the clinical experiences but experienced nurses work directly with the students.

The site visit study also found that in programs that lost approval, faculty did not have the resources needed to teach. For example, faculty lacked the ability to reserve a conference room to meet with students, or equipment in their learning and simulation laboratories was missing or broken. Likewise, the quality of the syllabi was often questionable in underperforming programs; for example, it was typical for the content of classes not to match the course descriptions or outcomes.

Curriculum and Clinical Experiences
The annual report study found that hybrid education predicted NCLEX pass rates of 80% or higher and that online education predicted a lower likelihood of program approval.

Quality and safety concepts, such as the QSEN competencies, were identified in the literature review and Delphi study as important elements of nursing curricula. However, more research is needed on whether integrating QSEN into the curriculum is associated with better outcomes.

According to the site visit study, many failing schools had no overarching philosophy and curricular framework that tied the curriculum together. This resulted in curricula that were task oriented, masking themselves as "competency based." The literature review and Delphi study highlighted that clinical judgment is critical to thread throughout the curriculum but provided little detail on specifically how to do so, though that literature is growing.

The literature review, Delphi study, and site visit study all found quality clinical experiences and simulation to be critical for successful nursing programs. Clinical experiences with actual patients in a variety of clinical settings were found to be important. BONs should evaluate the relationship the program has with its clinical partners, looking for collaboration between the nursing program and practice sites. According to the site visit study, programs that lost BON approval often had a limited number of clinical sites, and their parent organizations did not allocate enough resources (such as clinical faculty) toward clinical learning experiences. Likewise, in weaker programs, supplemental instructional resources (such as videos and online modules) were lacking. The literature review found the following to be important areas to include in clinical experiences: (a) clinical reasoning, (b) delegation, (c) electronic data management, (d) emergency procedures, (e) interprofessional communication, (f) knowledge of pharmacology, (g) leadership, (h) time management, and (i) understanding pathophysiology.

As documented in the literature review, Delphi study, and site visit study, quality simulation is an important element of a successful nursing program and an important curricular component for BONs to evaluate. The site visit study found that the quality of materials in the simulation laboratory was poor in failing programs, with broken or out-of-date materials. Often there was a lack of equipment for teaching medication administration, a critical curricular element. Simulation laboratory accreditation should be mandated for all programs substituting simulation for direct-care clinical experiences.

National nursing accreditation of a program is associated with higher NCLEX pass rates, as seen in the literature review, Delphi study, and annual report study, although we are not sure why. It may be that more seasoned and successful programs seek national nursing accreditation; more research should be conducted to clarify the reasons. While most BSN programs are nationally accredited, only about 53% of ADN programs and 11% of LPN/LVN programs are accredited (Silvestre, 2020). Currently, about half of the BONs require programs to be nationally accredited.

Conclusion
In their mission of public protection, BONs have called for nursing education quality indicators and warning signs to use as they approve nursing programs. This literature review and three-part mixed-methods study have provided robust and specific data for developing evidence-based and legally defensible approval guidelines. From this evidence, a site visit template (Appendix D) was developed for BONs to use when making site visits, and an annual report template (Appendix E) was developed for collecting core data on an annual basis. The annual report template will enable the collection of core, consistent data across BONs, allowing for continuing data analysis and making the guidelines a living document that will change based on new data. Part III presents the approval guidelines.

Part III

NCSBN Guidelines for Nursing Education Program Approval

Introduction Considering the literature and study evidence presented, NCSBN invited a group of research, education, regulatory, and legal experts (Table 11) to analyze the data together and make recommendations for evidence-based, legally defensible guidelines for nursing regulatory bodies (NRBs) and nursing education programs (Figure 4). It is hoped that these guidelines will increase collaboration between regulators and educators, allow for transparency in the approval process, help NRBs avoid antitrust issues, and provide criteria that allow NRBs to intervene prior to programs falling below standards.

FIGURE 4 Evidence-Based Model for Nursing Education Program Approval
[Diagram omitted: four evidence sources (literature review, Delphi study, 5-year site visit documents, and 5-year annual reports) feed into evidence-based approval.]

The guidelines allow NRBs to use the evidence-based quality indicators to provide guidance on where the nursing program needs to act. NRBs will also be able to identify warning signs and high-risk programs, from either site visits (Appendix D) or annual reports (Appendix E), and to take action before a program falls below standards. This will enable the BONs to be proactive rather than reactive. The evidence for the quality indicators and warning signs can be found in Table 12. The site visit template (Appendix D) was developed from the evidence and can be used by NRBs during site visits. Additionally, the annual report core data template (Appendix E) was devised from the quantitative data and can be used by BONs to collect critical nursing education data.

TABLE 11 Expert Panel

Maryann Alexander, PhD, FAAN
  Chief Officer, Nursing Regulation, NCSBN
Janice Brewington, PhD, RN, FAAN
  Director, Center for Transformational Leadership, and Chief Program Officer, National League for Nursing
Rebecca Fotsch, JD
  Director, State Advocacy and Legislative Affairs, NCSBN
Janice I. Hooper, PhD, RN, FRE, CNE, FAAN, ANEF
  Nursing Consultant for Education, Texas Board of Nursing
Nicole Livanos, JD
  Senior Associate, State Advocacy and Legislative Affairs, NCSBN
Elizabeth Lund, MSN, RN
  Executive Director, Tennessee Board of Nursing; NCSBN Board of Directors
Brendan Martin, PhD
  Associate Director, Research, NCSBN
Donna Meyer, MSN, ANEF, FAADN, FAAN
  CEO, Organization of Associate Degree Nursing
Bibi Schultz, MSN, RN, CNE
  Director of Education, Missouri State Board of Nursing
Anne Marie Shin, RN, MN, MSc (QIPS)
  Manager, Education Program, College of Nurses of Ontario
Josephine Silvestre, MSN, RN
  Senior Associate, Regulatory Innovations, NCSBN
Nancy Spector, PhD, RN, FAAN
  Director, Regulatory Innovations, NCSBN
Joan Stanley, CRNP, FAAN, FAANP
  Chief Academic Officer, American Association of Colleges of Nursing
Crystal Tillman, DNP, RN, CPNP, PMHNP-BC, FRE
  Director of Education and Practice, North Carolina Board of Nursing

Quality Indicators The quality indicators are categorized into administrative requirements, program director, faculty, students, curriculum and clinical experiences, and teaching and learning resources. They were developed by the expert panel based on the literature review and the Delphi, annual report, and site visit studies (Table 12).

Administrative Requirements
1. The program has criteria for admission, progression, and student performance.
2. Written policies and procedures are in place and have been vetted by faculty and students.

Program Director
1. The program director of an RN program is doctorally prepared and has a degree in nursing.
2. The program director of an LPN/LVN program has a graduate degree and a degree in nursing.

Faculty
1. At a minimum, 35% of the total faculty (including all clinical adjunct, part-time, or other faculty) are employed at the institution as full-time faculty.
2. In RN programs, faculty hold a graduate degree.
3. In LPN/LVN programs, faculty hold a BSN degree.
4. Faculty can demonstrate they have been educated in basic instruction of teaching and adult learning principles and curriculum development. This may include the following:
   a. Methods of instruction
   b. Teaching in clinical practice settings
   c. Teaching in simulation settings
   d. How to conduct assessments, including test item writing
   e. Managing "difficult" students.
5. Faculty can demonstrate participation in continuing education related to nursing education and adult learning pedagogies.
6. The school provides substantive and periodic workshops and presentations devoted to faculty development.

7. Formal mentoring of new full-time and part-time faculty takes place by established peers.
8. Formal orientation of adjunct clinical faculty occurs.
9. Clinical faculty have up-to-date clinical skills and have had experience in direct patient care in the past 5 years.
10. Simulation faculty are certified.

Students
1. The nursing program should ensure the following are in place to assist students:
   a. English as a second language assistance is provided.
   b. Assistance is available for students with learning disabilities.
   c. All students have the books and resources necessary throughout the program, and strategies are in place to help students who cannot afford books and resources.
   d. Remediation strategies are in place at the beginning of each course, and students are aware of how to seek help. This should include processes to remediate errors and near misses in the clinical setting.

Curriculum and Clinical Experiences
1. At least 50% of the clinical experience in each clinical course is direct care with patients.
2. A variety of clinical settings with diverse patients is used.
3. Opportunities for quality and safety education are integrated into the curriculum, including delegating effectively, emergency procedures, interprofessional communication, and time management.
4. A systematic evaluation plan for the curriculum is in place.

Teaching and Learning Resources
1. The simulation laboratory is accredited.
2. Students have access to a library, technology, and other resources.
3. Programs are able to assess students with learning disabilities and tailor the curriculum to meet their needs.

Warning Signs
NRBs should intervene early when programs exhibit the following warning signs. The evidence indicates these programs could be identified from either site visits or annual reports (Table 12). The warning signs include:
1. Complaints to BONs or other NRBs from students, faculty, clinical sites, or the public.
2. Turnover of program directors (more than three directors in a 5-year period).
3. Frequent faculty turnover or cuts in the number of faculty.
4. Trend of decreasing NCLEX pass rates.

High-Risk Programs That May Need Additional Oversight If a program has been in operation for 7 years or fewer, it may need additional oversight because the NRB does not have a history with that program. This recommendation is supported by the literature review, the annual report study, and the site visit study. Additional oversight may include more frequent progress reports related to the number of students, faculty qualifications, stability of the program director, and NCLEX pass rates, in addition to the regularly collected annual reports. If there is concern, the BON may make a focused visit to the program to further assess and possibly make recommendations.

Supportive Evidence for the Approval Guidelines
The evidence supporting each quality indicator and warning sign is presented in Table 12.

TABLE 12 Evidence Supporting Guidelines for Quality Indicators and Warning Signs

Quality Indicators

Administrative Requirements
1. The program can provide evidence that its admission, progression, and student performance standards are based on data.
   Evidence: Literature review, qualitative 5-year site visit study
2. Policies and procedures are in place and based on data that have been vetted by faculty and students.
   Evidence: Literature review, qualitative 5-year site visit study
Program Director
1. The program director of an RN program has a doctorate and a degree in nursing.
   Evidence: Literature review, qualitative 5-year site visit study, quantitative 5-year annual report study
2. The program director of an LPN/LVN program has a graduate degree and a degree in nursing.
   Evidence: Literature review, quantitative 5-year annual report study
Faculty
1. At a minimum, 35% of the total faculty (including all clinical adjunct, part-time, or other faculty) are employed at the institution as full-time faculty.
   Evidence: Literature review, Delphi study, qualitative 5-year site visit study, quantitative 5-year annual report study
2. In RN programs, faculty hold a graduate degree.
   Evidence: Literature review, qualitative 5-year site visit study, quantitative 5-year annual report study
3. In LPN/LVN programs, faculty hold a BSN degree.
   Evidence: Literature review, quantitative 5-year annual report study
4. Faculty can demonstrate they have been educated in basic instruction of teaching and adult learning principles and curriculum development. This may include methods of instruction; teaching in clinical practice settings; teaching in simulation settings; how to conduct assessments, including test item writing; and managing "difficult" students.
   Evidence: Literature review, qualitative 5-year site visit study
5. Faculty can demonstrate participation in continuing education related to nursing education and adult learning pedagogies.
   Evidence: Literature review, qualitative 5-year site visit study
6. The school provides substantive and periodic workshops and presentations devoted to faculty development.
   Evidence: Literature review, Delphi study, qualitative 5-year site visit study
7. Formal mentoring of new full-time and part-time faculty takes place by established peers.
   Evidence: Literature review, Delphi study, qualitative 5-year site visit study
8. Formal orientation of adjunct clinical faculty.
   Evidence: Literature review, Delphi study, qualitative 5-year site visit study
9. Clinical faculty have up-to-date clinical skills and have had experience in direct patient care in the past 5 years.
   Evidence: Literature review, Delphi study, qualitative 5-year site visit study
10. Simulation faculty are certified.
   Evidence: Literature review, qualitative 5-year site visit study
Students
The nursing program should ensure the following are in place to assist students: English as a second language assistance; assistance for students with learning disabilities; necessary books and resources available throughout the program, as well as strategies to help students who cannot afford them; and remediation strategies at the beginning of each course, with students aware of how to seek help, including processes to remediate errors and near misses in the clinical setting.
   Evidence: Literature review, qualitative 5-year site visit study

Curriculum and Clinical Experiences
1. 50% or more of the clinical experience in each clinical course is direct care with patients.
   Evidence: Literature review, Delphi study
2. Variety of clinical settings with diverse patients.
   Evidence: Literature review, Delphi study, qualitative 5-year site visit study
3. Opportunities for quality and safety education integrated into the curriculum, including delegating effectively, emergency procedures, interprofessional communication, and time management.
   Evidence: Literature review, Delphi study
4. Systematic evaluation plan of the curriculum is in place.
   Evidence: Literature review, Delphi study, qualitative 5-year site visit study
Teaching and Learning Resources
1. The simulation laboratory is accredited.
   Evidence: Literature review, qualitative 5-year site visit study
2. Students have access to a library, technology, and other resources.
   Evidence: Literature review, qualitative 5-year site visit study
3. Programs are able to assess students with learning disabilities and tailor the curriculum to meet their needs.
   Evidence: Literature review, qualitative 5-year site visit study

Warning Signs
1. Complaints to boards of nursing or other nursing regulatory boards from students, faculty, clinical sites, or the public.
   Evidence: Literature review, Delphi study, qualitative 5-year site visit study
2. Turnover of program directors; more than three directors in a 5-year period.
   Evidence: Literature review, Delphi study, qualitative 5-year site visit study, quantitative 5-year annual report study
3. Frequent faculty turnover or cuts in the number of faculty members.
   Evidence: Literature review, Delphi study, qualitative 5-year site visit study, quantitative 5-year annual report study
4. Trend of decreasing NCLEX pass rates.
   Evidence: Delphi study, qualitative 5-year site visit study, quantitative 5-year annual report study
5. High-risk programs needing additional oversight, such as prelicensure programs younger than 7 years.
   Evidence: Literature review, qualitative 5-year site visit study, quantitative 5-year annual report study

References

Accreditation Commission for Education in Nursing. (2019). ACEN accreditation manual: 2017 standards and criteria. https://www.acenursing.org/for-programs/general-resources/resources-acen-accreditation-manual/
Accreditation Council for Occupational Therapy Education. (2019). 2018 Accreditation Council for Occupational Therapy Education (ACOTE) standards and interpretive guide. https://acoteonline.org/accreditation-explained/standards/
Accreditation Council for Pharmacy Education. (2015). Accreditation standards and key elements for the professional program in pharmacy leading to the doctor of pharmacy degree. https://www.acpe-accredit.org/pdf/Standards2016FINAL.pdf
Alexander, M. (2019). How can we best evaluate nursing education programs? [Editorial]. Journal of Nursing Regulation, 9(4), 3. https://doi.org/10.1016/S2155-8256(19)30010-9
Alexander, M., Durham, C. F., Hooper, J. I., Jeffries, P. R., Goldman, N., Kardong-Edgren, S., Kesten, K. S., Spector, N., Tagliareni, E., Radtke, B., & Tillman, C. (2015). NCSBN simulation guidelines for prelicensure nursing programs. Journal of Nursing Regulation, 6(3), 39–42. https://doi.org/10.1016/S2155-8256(15)30783-3
Association of Specialized and Professional Accreditors. (2016). Outcomes: Getting to the core of programmatic education and accreditation. https://www.aspa-usa.org//wp-content/uploads/2016/06/Outcomes-Report-June-2016.pdf
Barrett, S. F., Steadman, J. W., & Whitman, D. L. (2016). Using the fundamentals of engineering (FE) examination as an outcomes assessment tool. National Council of Examiners for Engineering and Surveying. https://ncees.org/wp-content/uploads/FEOAT-booklet-2018.pdf
Barton, A. J., Armstrong, G., Preheim, G., Gelmon, S. B., & Andrus, L. C. (2009). A national Delphi to determine progression of quality and safety competencies in nursing education. Nursing Outlook, 57(6), 313–322. https://doi.org/10.1016/j.outlook.2009.08.003
Beauvais, A. M., Kazer, M. W., Aronson, B., Conlon, S. E., Forte, P., Fries, K. S., Hahn, J. M., Hullstrung, R., Levvis, M., McCauley, P., Morgan, P. P., Perfetto, L., Reveschi, L. M., Solernou, S. B., Span, P., & Sundean, L. J. (2017). After the gap analysis: Education and practice changes to prepare nurses of the future. Nursing Education Perspectives, 38(5), 250–254. https://doi.org/10.1097/01.NEP.0000000000000196
Benner, P., Sutphen, M., Leonard, V., & Day, L. (2010). Educating nurses: A call for radical transformation. Jossey-Bass.
Benton, D. C., González-Jurado, M. A., & Beneit-Montesinos, J. V. (2013). Defining nurse regulation and regulatory body performance: A policy Delphi study. International Nursing Review, 60(3), 303–312.
Berkow, S., Virkstis, K., Stewart, J., & Conway, L. (2008). Assessing new graduate nurse performance. The Journal of Nursing Administration, 38(11), 468–474. https://doi.org/10.1097/01.NNA.0000339477.50219.06
Bernier, S. L., Helfert, K., Teich, C. R., & Viterito, A. (2005). Are we using the right "gold" standard? Community College Journal, 76(1), 36.
Boston-Fleischhauer, C. (2019). Confronting clinical rotations. The Journal of Nursing Administration, 49(1), 6–8. https://doi.org/10.1097/NNA.0000000000000699
Canadian Council of Registered Nurse Regulators. (2018). Entry to practice competencies for the practice of registered nurses. http://www.ccrnr.ca/assets/draft-rn-elc-competencies-july-24-2018_en.pdf
Candela, L., & Bowles, C. (2008). Recent RN graduate perceptions of educational preparation. Nursing Education Perspectives, 29(5), 266–271.
Cantlay, A., Salamanca, J., Golaw, C., Wolf, D., Maas, C., & Nicholson, P. (2017). Self-perception of readiness for clinical practice: A survey of accelerated master's program graduate nurses. Nurse Education in Practice, 24, 34–42.
Christensen, J. (2010). Proposed enhancement of Bronfenbrenner's Development Ecology Model. Education Inquiry, 1(2), 117–126. https://doi.org/10.3402/edui.v1i2.21936
Cohen, H., & Ibrahim, N. (2008). A new accountability metric for a new time: A proposed graduation efficiency measure. Change: The Magazine of Higher Learning, 40(3), 47–52.
College of Nurses of Ontario. (2018). Nursing education program approval guide: Overview of the program approval process. http://www.cno.org/globalassets/3-becomeanurse/educators/nursing-education-program-approval-guide-vfinal2.pdf
Commission on Accreditation in Physical Therapy Education. (2017). Standards and required elements for accreditation of physical therapist education programs. http://www.capteonline.org/AccreditationHandbook/
Commission on Collegiate Nursing Education. (2018). Standards for accreditation of baccalaureate and graduate nursing programs. https://www.aacnnursing.org/Portals/42/CCNE/PDF/Standards-Amended-2018.pdf
Cook, T., & Hartle, T. W. (2011). Why graduation rates matter – and why they don't. American Council on Education. https://www.acenet.edu/Documents/College-Graduation-Rates-Behind-the-Numbers.pdf
Cronenwett, L., Sherwood, G., Barnsteiner, J., Disch, J., Johnson, J., Mitchell, P., Sullivan, D. T., & Warren, J. (2007). Quality and safety education for nurses. Nursing Outlook, 55(3), 122–131. https://doi.org/10.1016/j.outlook.2007.02.006
Dang, D., & Dearholt, S. (2017). Johns Hopkins nursing evidence-based practice: Model and guidelines (3rd ed.). Sigma Theta Tau International.
Darling, N. (2007). Ecological systems theory: The person in the center of the circles. Research in Human Development, 4(3–4), 203–217. https://doi.org/10.1080/15427600701663023
DeAngelo, L., Franke, R., Hurtado, S., Pryor, J. H., & Tran, S. (2011). Completing college: Assessing graduation rates at four-year institutions. Higher Education Research Institute at UCLA. https://www.researchgate.net/profile/Ray_Franke/publication/249644731_Completing_College_Assessing_Graduation_Rates_at_Four-Year_Institutions/links/0046351e5bb5279e3a000000.pdf
Docherty, A., & Dieckmann, N. (2015). Is there evidence of failing to fail in our schools of nursing? Nursing Education Perspectives, 36(4), 226–231. https://doi.org/10.5480/14-1485
Eberle-Sudré, K., Welch, M., & Nichols, A. H. (2015). Rising tide: Do college grad rate gains benefit all students? The Education Trust. https://edtrust.org/wp-content/uploads/2014/09/TheRisingTide-Do-College-Grad-Rate-Gains-Benefit-All-Students-3.7-16.pdf
El Haddad, M., Moxham, L., & Broadbent, M. (2017). Graduate nurse practice readiness: A conceptual understanding of an age old debate. Collegian, 24(4), 391–396. https://doi.org/10.1016/j.colegn.2016.08.004
Feeg, V., & Mancino, D. J. (2016). Trends upward and trends downward reflecting a changing job market for new nursing graduates. Dean's Notes, 37(4–5), 1–5. https://www.ajj.com/sites/default/files/services/publishing/deansnotes/summer16.pdf
Feeg, V., & Mancino, D. J. (2018). New graduates' first jobs and future plans: Debt, employers and education prospects. Dean's Notes, 40(1), 1–5.
Ferrante, F. (2017). Assessing quality in higher education: Some caveats. Social Indicators Research, 131, 727–743.
Foreman, S. (2017). The accuracy of state NCLEX-RN passing standards for nursing programs. Nurse Education Today, 52, 81–86. https://doi.org/10.1016/j.nedt.2017.02.019
Gaberson, K. B., Oermann, M. H., & Shellenbarger, T. (2015). Clinical teaching strategies in nursing (4th ed.). Springer Publishing Company.
Giddens, J. F. (2009). Changing paradigms and challenging assumptions: Redefining quality and NCLEX-RN pass rates. Journal of Nursing Education, 48(3), 123–124. https://doi.org/10.3928/01484834-20090301-04
Gonzalez, L. (2018). Teaching clinical reasoning piece by piece: A clinical reasoning concept-based learning method. Journal of Nursing Education, 57(12), 727–735. https://doi.org/10.3928/01484834-20181119-05
Granger, B. B., Prvu-Bettger, J., Aucoin, J., Fuchs, M. A., Mitchell, P. H., Holditch-Davis, D., Roth, D., Califf, R. M., & Gillis, C. L. (2012). An academic-health service partnership in nursing: Lessons from the field. Journal of Nursing Scholarship, 44(1), 71–79. https://doi.org/10.1111/j.1547-5069.2011.01432.x
Grant, A. R. (2015). NCLEX-RN predictor test scores and NCLEX-RN success [Doctoral dissertation, Walden University]. ScholarWorks. https://scholarworks.waldenu.edu/cgi/viewcontent.cgi?article=2621&context=dissertations
Grün, B., & Hornik, K. (2011). topicmodels: An R package for fitting topic models. Journal of Statistical Software, 40(13), 1–30.
Hayden, J., Smiley, R. A., Alexander, M., Kardong-Edgren, S., & Jeffries, P. R. (2014). The NCSBN national simulation study: A longitudinal, randomized, controlled study replacing clinical hours with simulation in prelicensure nursing education. Journal of Nursing Regulation, 5(2), S1–S40. https://doi.org/10.1016/S2155-8256(15)30062-4
Hennink, M. M., Kaiser, B. N., & Marconi, V. C. (2017). Code saturation versus meaning saturation: How many interviews are enough? Qualitative Health Research, 27(4), 591–608. https://doi.org/10.1177/1049732316665344
Hickerson, K. A., Taylor, L. A., & Terhaar, M. F. (2016). The preparation-practice gap: An integrative literature review. The Journal of Continuing Education in Nursing, 47(1), 17–23. https://doi.org/10.3928/00220124-20151230-06
Hooper, J. I., & Ayars, V. D. (2017). How Texas nursing education programs increased NCLEX pass rates and improved programming. Journal of Nursing Regulation, 8(3), 53–58. https://doi.org/10.1016/S2155-8256(17)30160-6
Hooper, J. I., & Thomas, M. B. (2014). National accreditation as a criterion for ongoing approval of education programs. Journal of Nursing Regulation, 4(4), 48–50. https://doi.org/10.1016/S2155-8256(15)30110-1
Hsieh, H.-F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288. https://doi.org/10.1177/1049732305276687
Hsu, L.-L., & Hsieh, S.-I. (2013). Development and psychometric evaluation of the competency inventory for nursing students: A learning outcome perspective. Nurse Education Today, 33(5), 492–497. https://doi.org/10.1016/j.nedt.2012.05.028
Hungerford, C., Blanchard, D., Bragg, S., Coates, A., & Kim, T. (2019). An international scoping exercise examining practice experience hours completed by nursing students. Journal of Nursing Education, 58(1), 33–41. https://doi.org/10.3928/01484834-20190103-06
Jamshidi, N., Molazem, Z., Sharif, F., Torabizadeh, C., & Kalyani, M. N. (2016). The challenges of nursing students in the clinical learning environment: A qualitative study. The Scientific World Journal, 2016, 1846178. https://doi.org/10.1155/2016/1846178
Jones, A., Foote, J., & Ridgeway, S. (2012). Program approval: Minnesota's case for accreditation requirement. Journal of Nursing Regulation, 2(4), 40–42.
Kavanagh, J. M., & Szweda, C. (2017). A crisis in competency: The strategic and ethical imperative to assessing new graduate nurses' clinical reasoning. Nursing Education Perspectives, 38(2), 57–62. https://doi.org/10.1097/01.NEP.0000000000000112
Keeney, S., Hasson, F., & McKenna, H. (2011). The Delphi technique in nursing and health research. Wiley-Blackwell.
Killam, L. A., Luhanga, F., & Bakker, D. (2011). Characteristics of unsafe undergraduate nursing students in clinical practice: An integrative literature review. Journal of Nursing Education, 50(8), 437–446. https://doi.org/10.3928/01484834-20110517-05
Kumm, S., Godfrey, N., Richards, V., Hulen, J., & Ray, K. (2016). Senior student nurse proficiency: A comparative study of two clinical immersion models. Nurse Education Today, 44, 146–150. https://doi.org/10.1016/j.nedt.2016.05.023
Liaison Committee on Medical Education. (2019). Functions and structure of a medical school: Standards for accreditation of medical education programs leading to the MD degree. https://lcme.org/publications/#Standards
Libner, J., & Kubala, S. (2017). Improving program NCLEX pass rates: Strategies from one state board of nursing. Nursing Education Perspectives, 38(6), 325–329.
Linaker, K. L. (2015). Student evaluations, outcomes, and national licensure examinations in radiology education: A narrative review of the literature. Journal of Chiropractic Humanities, 22(1), 17–21. https://doi.org/10.1016/j.echu.2015.09.002
Linstone, H. A., & Turoff, M. (Eds.). (2002). The Delphi method: Techniques and applications. Authors. https://web.njit.edu/~turoff/pubs/delphibook/delphibook.pdf
Longabach, T. (2012). Number of clinical hours in the nursing programs and National Council Licensure Examination for Registered Nurses (NCLEX-RN) passing rate [Unpublished master's thesis]. University of Kansas.
Loversidge, J. M. (2016). An evidence-informed health policy model: Adapting evidence-based practice for nursing education and regulation. Journal of Nursing Regulation, 7(2), 27–33. https://doi.org/10.1016/S2155-8256(16)31075-4
Luhanga, F. L., Larocque, S., MacEwan, L., Gwekwerere, Y. N., & Danyluk, P. (2014). Exploring the issue of failure to fail in professional education programs: A multidisciplinary study. Journal of University Teaching & Learning Practice, 11(2), 3.
McGeoch, M., Brunetto, Y., & Brown, K. (2014). The policy Delphi method: Contribution to policy and strategy within energy organisations: A 2013 Malaysian case study with global implications. Quality and Quantity, 48, 3195–3208.
Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2010). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. U.S. Department of Education. https://www2.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf
Meskell, P., Murphy, K., Shaw, D. G., & Casey, D. (2014). Insights into the use and complexities of the Policy Delphi technique. Nurse Researcher, 21(3), 32–39. https://doi.org/10.7748/nr2014.01.21.3.32.e342
Miller, F. A., & Alvarado, K. (2005). Incorporating documents into qualitative nursing research. Journal of Nursing Scholarship, 37(4), 348–353. https://doi.org/10.1111/j.1547-5069.2005.00060.x
Missen, K., McKenna, L., Beauchamp, A., & Larkins, J. (2016). Qualified nurses' perceptions of nursing graduates' abilities vary according to specific demographic and clinical characteristics: A descriptive quantitative study. Nurse Education Today, 45, 108–113. https://doi.org/10.1016/j.nedt.2016.07.001
The National Academies of Sciences, Engineering, and Medicine. (2016). Quality in the undergraduate experience: What is it? How is it measured? Who decides? The National Academies Press.
The National Academies of Sciences, Engineering, and Medicine. (2018). Graduate medical education outcomes and metrics: Proceedings of a workshop. The National Academies Press.
National Council of State Boards of Nursing. (2005). Clinical instruction in prelicensure nursing programs. https://www.ncsbn.org/Final_Clinical_Instr_Pre_Nsg_programs.pdf
National Council of State Boards of Nursing. (2006). A national survey of elements of nursing education [Research brief]. https://www.ncsbn.org/Vol_24_web.pdf
National Council of State Boards of Nursing. (2019). Member board profiles. https://www.ncsbn.org/2019Education.pdf
National League for Nursing Commission for Nursing Education Accreditation. (2016). Accreditation standards for nursing education programs. http://www.nln.org/docs/default-source/accreditation-services/cnea-standards-final-february-201613f2bf5c78366c709642ff00005f0421.pdf?sfvrsn=12
Nursing Education Outcomes and Metrics Committee. (2017). Report on BON approval survey – 2017. https://www.ncsbn.org/670.htm
Odom-Maryon, T., Bailey, L. A., & Amiri, S. (2018). The influences of nursing school characteristics on NCLEX-RN pass rates: A national study. Journal of Nursing Regulation, 9(3), 59–69. https://doi.org/10.1016/S2155-8256(18)30154-6
Oermann, M. (Ed.). (2017). A systematic approach to assessment and evaluation of nursing programs. Wolters Kluwer.
O'Lynn, C. (2017). Rethinking indicators of academic quality in nursing programs. Journal of Nursing Education, 56(4), 195–196. https://doi.org/10.3928/01484834-20170323-01
Papes, K., & Lopez, R. (2007). Establishing a method for tracking persistence rates of nursing students: One school's experience. Journal of Professional Nursing, 23(4), 241–246. https://doi.org/10.1016/j.profnurs.2007.01.004
Pitt, V., Powis, D., Levett-Jones, T., & Hunter, S. (2012). Factors influencing nursing students' academic and clinical performance and attrition: An integrative review. Nurse Education Today, 32(8), 903–913. https://doi.org/10.1016/j.nedt.2012.04.011
Pittman, P., Bass, E., Han, X., & Kurtzman, E. (2019). The growth and performance of nursing programs by ownership status. Journal of Nursing Regulation, 9(4), 5–21. https://doi.org/10.1016/S2155-8256(19)30011-0
Quality and Safety Education for Nurses. (2019). QSEN competencies. http://qsen.org/competencies/pre-licensure-ksas/
Randolph, P. (2013). Program Outcome Index: A measure of program effectiveness. Leader to Leader. https://www.ncsbn.org/L2L_Fall2013_v5.pdf
Randolph, P. K. (2016). Standardized testing practices: Effect on graduation and NCLEX pass rates. Journal of Professional Nursing, 33(3), 224–228. https://doi.org/10.1016/j.profnurs.2016.09.002
Rayens, M. K., & Hahn, E. J. (2000). Building consensus using the Policy Delphi method. Policy, Politics, & Nursing Practice, 1(4), 308–315. https://doi.org/10.1177/152715440000100409
Reyna, R. (2010). Complete to compete: Common college completion metrics. National Governors Association.
Robert's Rules Online. (n.d.). Robert's Rules of Order Revised (4th ed.). http://www.rulesonline.com/rror-07.htm
Rusch, L., Manz, J. A., Hercinger, M., Oertwich, A., & McCafferty, K. (2019). Nurse preceptor perceptions of nursing student progress toward readiness for practice. Nurse Educator, 44(1), 34–37. https://doi.org/10.1097/NNE.0000000000000546
Sandelowski, M. (2000). Whatever happened to qualitative description? Research in Nursing and Health, 23(4), 334–340.
Sandelowski, M. (2009). What's in a name? Qualitative description revisited. Research in Nursing and Health, 33(1), 77–84. https://doi.org/10.1002/nur.20362
Sax, S. (1978). Nurse education and training: Report of the Committee of Inquiry Into Nurse Education and Training to the Tertiary Education Commission. Tertiary Education Commission. https://www.voced.edu.au/content/ngv%3A35641
The Secretary's Recognition of Accrediting Agencies, 34 C.F.R. § 602.16 (2009). https://www2.ed.gov/policy/highered/reg/hearulemaking/hea08/34cfr602.pdf
Silvestre, J. H. (2020). Percentages of programs that are accredited: An update. Leader to Leader. https://www.ncsbn.org/LTL_Spring2020.pdf
Smiley, R. A. (2019). Survey of simulation use in prelicensure nursing programs: Changes and advancements, 2010–2017. Journal of Nursing Regulation, 9(4), 48–61. https://doi.org/10.1016/S2155-8256(19)30016-X
Spector, N., Blegen, M. A., Silvestre, J., Barnsteiner, J., Lynn, M. R., Ulrich, B., Fogg, L., & Alexander, M. (2015). Transition to practice study in hospital settings. Journal of Nursing Regulation, 5(4), 24–38. https://doi.org/10.1016/S2155-8256(15)30031-4
Spector, N., Hooper, J. I., Silvestre, J., & Qian, H. (2018). Board of nursing approval of registered nurse education programs. Journal of Nursing Regulation, 8(4), 22–29. https://doi.org/10.1016/S2155-8256(17)30178-3
Spector, N., & Woods, S. L. (2013). A collaborative model for approval of prelicensure nursing programs. Journal of Nursing Regulation, 3(4), 47–52. https://doi.org/10.1016/S2155-8256(15)30186-1
Taylor, H., Loftin, C., & Reyes, H. (2014). First-time NCLEX-RN pass rate: Measure of program quality or something else? Journal of Nursing Education, 53(6), 336–341. https://doi.org/10.3928/01484834-20140520-02
van Houwelingen, C. T., Moerman, A. H., Ettema, R. G. A., Kort, H. S., & Ten Cate, O. (2016). Competencies required for telehealth activities: A Delphi-study. Nurse Education Today, 39, 50–62. https://doi.org/10.1016/j.nedt.2015.12.025
Wellman, J., Johnson, N., & Steele, P. (2012). Measuring (and managing) the invisible costs of postsecondary attrition [Policy brief]. American Institutes for Research.

APPENDIX A
Definition of Terms

Annual report: Contains data the NRBs require from the nursing programs on a yearly basis. These data are not consistent among the NRBs but often consist of faculty, student, and program demographic data; program resources; student outcomes; clinical experiences; curriculum; etc.

Approval of nursing education programs: Official recognition of nursing education programs that go through a systematic approval process implemented by U.S. BONs, thus meeting regulatory standards and being able to make their students eligible to take the nursing licensure examination. In most states, the approval process will be designated as full approval when all requirements are met; conditional, probationary, or other designations when some of the requirements are met; or approval removal when programs fail to correct cited deficiencies (adapted from Spector et al., 2018).

Graduation rates: Number and percentage of degree-seeking students who graduate within the normal program time.

Hybrid learning modality: Blended elements of face-to-face and online instruction.

Metrics: For the purposes of this report, those measures that are used when evaluating the outcomes, quality, and warning signs of nursing programs.

NCLEX-RN predictor examinations: Examinations developed by proprietary companies external to (not related to) NCSBN. The examinations are intended to measure the readiness of a graduating nursing student to take the NCLEX-RN. They are also termed exit examinations.

Outcomes: The behaviors, characteristics, qualities, or attributes that learners display at the end of an educational program (Gaberson et al., 2015, p. 18).

Practice readiness of graduating students: Newly licensed nurses being able to deliver consistent, competent, and safe care in predictable situations, with some guidance in more complex situations (adapted from Cantlay et al., 2017; Kavanagh & Szweda, 2017).

Quality clinical experiences: Either in face-to-face clinical experiences or in simulation, under the oversight of an experienced clinical instructor, the intentional integration of knowledge, clinical reasoning, skilled know-how, and ethical comportment across the lifespan (adapted from Benner et al., 2010).

Quality indicators: As adapted from the Agency for Healthcare Research and Quality (https://www.qualityindicators.ahrq.gov/), quality indicators are evidence-based measures of nursing education outcomes that are readily available data to track program performance.

Site visit documents: Documented findings from an NRB's face-to-face visits with the program, often obtained from interviewing faculty, students, nursing program directors, administrators, and clinical facilities.

Warning signs: Negative indicators that emerge when a program is beginning to fall below the standards of graduating safe and competent students.

Note. NRB = nursing regulatory board; BON = board of nursing; NCSBN = National Council of State Boards of Nursing.

APPENDIX B1
The Johns Hopkins Evidence Levels and Quality Ratings

Level and Description

Level I: Experimental study, randomized controlled trial (RCT), or systematic review of RCTs; explanatory mixed-methods design with only Level I quantitative studies.

Level II: Quasi-experimental study; systematic review of a combination of RCTs and quasi-experimental studies, or of quasi-experimental studies alone; explanatory mixed-methods design with only a Level II quantitative study.

Level III: Nonexperimental study; systematic review of a combination of RCTs, quasi-experimental, and nonexperimental studies, or of nonexperimental studies alone; qualitative study; meta-synthesis; exploratory, convergent, or multiphasic mixed-methods study; explanatory mixed-methods design that includes only a Level III quantitative study.

Quality ratings for Level I–III evidence (quantitative studies):
A (High): Consistent, generalizable results; sufficient sample size; adequate control; definitive conclusions; consistent recommendations based on reference to scientific evidence.
B (Good): Reasonably consistent results; sufficient sample size; fairly definitive conclusions; reasonable recommendations based on a fairly comprehensive literature review.
C (Low): Little evidence with inconsistent results; insufficient sample size; conclusions cannot be drawn.

Quality ratings for Level I–III evidence (qualitative studies):
A/B (High/Good): Transparency (documentation justifying decisions; how data were reviewed by others; how themes and categories were formulated); diligence (reads and rereads data to check interpretations; seeks opportunities to find multiple sources to corroborate evidence); verification (the process of checking, confirming, and ensuring methodological coherence); self-reflection and scrutiny (being continuously aware of how a researcher's experiences, background, or prejudices might shape and bias analysis and interpretations); participant-driven inquiry (participants shape the scope and breadth of questions; analysis and interpretation give voice to those who participated); insightful interpretation (data and knowledge are linked in meaningful ways to relevant literature).
C (Low): Study contributes little to the overall review of findings and has few, if any, of the features listed above.

Level IV: Opinion of respected authorities and/or nationally recognized expert committees or consensus panels; includes consensus panels and clinical practice guidelines based on scientific evidence.

Quality ratings for Level IV evidence:
A (High): Officially sponsored by a professional, public, or private organization or a government agency; documented systematic search strategy; consistent results with sufficient numbers of well-designed studies; criteria-based evaluation of overall strength of studies and conclusions; national expertise; developed/revised within the past 5 years.
B (Good): Officially sponsored by a professional, public, private, or governmental agency; reasonably thorough and appropriate search strategy; reasonable consistency; sufficient number of well-designed studies; evaluation of strengths and limitations with fairly definitive conclusions; national expertise; developed/revised within the past 5 years.
C (Low): Not sponsored by an official agency or organization; poorly defined search strategies; no evaluation of strengths or weaknesses; insufficient evidence; conclusions cannot be drawn; older than 5 years.

Level V: Experiential and non-research evidence; includes integrative literature reviews, quality improvement, case reports, and opinion of nationally recognized experts based on experiential evidence.

Quality ratings for Level V evidence (organizational experience: QI, program, or financial evaluation):
A (High): Clear aims and objectives; consistent results across multiple settings; formal QI, financial, or program evaluation methods used; definitive conclusions; consistent recommendations with thorough reference to scientific evidence.
B (Good): Clear aims and objectives; consistent results in a single setting; formal QI, financial, or program evaluation methods; reasonably consistent recommendations with some reference to scientific evidence.
C (Low): Unclear or missing aims and objectives; inconsistent results; poorly defined QI, financial, or program evaluation methods; recommendations cannot be made.

Quality ratings for Level V evidence (integrative review, literature review, expert opinion, case report, community standard, clinician experience, consumer preference):
A (High): Expertise is clearly evident; draws definitive conclusions; provides scientific rationale; thought leaders in the field.
B (Good): Expertise appears to be credible; draws fairly definitive conclusions; provides logical argument for opinions.
C (Low): Expertise is dubious; conclusions cannot be drawn.

Note. RCT = randomized controlled trial; QI = quality improvement. Source: Dang, D., & Dearholt, S. (2017). Johns Hopkins nursing evidence-based practice: Model and guidelines (3rd ed., pp. 278–279). Sigma Theta Tau International.
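Readers who tabulate evidence the way Appendix B2 does may find it convenient to encode this hierarchy as data, so that ratings such as "III B" are recorded consistently. The sketch below is a convenience of ours, not part of the published Johns Hopkins model; the one-line level summaries paraphrase the fuller descriptions above.

```python
# Minimal encoding of the evidence hierarchy above. The class and member
# names are illustrative, not part of the published Johns Hopkins model.
from dataclasses import dataclass
from enum import Enum


class Level(Enum):
    I = "Experimental (RCT) or systematic review of RCTs"
    II = "Quasi-experimental"
    III = "Nonexperimental, qualitative, or exploratory mixed methods"
    IV = "Opinion of respected authorities or expert panels"
    V = "Experiential and non-research evidence"


@dataclass(frozen=True)
class Rating:
    level: Level
    quality: str  # "A" (high), "B" (good), "C" (low), or "A/B" for qualitative work

    def __str__(self) -> str:
        return f"{self.level.name} {self.quality}"


# Example: Hayden et al. (2014), the NCSBN simulation RCT, is rated I A in Appendix B2.
print(Rating(Level.I, "A"))  # -> I A
```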

APPENDIX B2
Evidence-Based Publications and Key Findings for Nursing Education Performance Metrics

Citation Type of Purpose Key Findings Evidence Publication Levela Accreditation ACEN standards To provide ACEN standards and Standards: (1) mission and administrative capacity, IV A* Commission for manual criteria to nursing education pro- (2) faculty and staff, (3) students, (4) curriculum, (5) Education in Nurs- grams obtaining accreditation. resources, and (6) outcomes. ing (ACEN; 2019) Alexander (2019) Editorial To evaluate nursing education Several warning signs were presented from the V A programs. regulatory perspective. Association of Survey of 45 ac- To report outcomes used by ac- Discussion of bright line outcomes used by profes- IV A vs Specialized and crediting creditors of professional sional educators. B* Professional Ac- agencies programs. creditors (2016) Barrett et al. (2016) National Coun- To report statistics on using the ap- Pass rates of the examination should not be used to IV A vs cil of Examiners plication of Fundamentals in Engi- determine curricular content of any program. B* for Engineering neering examination as an out- and Surveying comes assessment tool. report Beauvais et al. Report from the To provide gap analysis of new Several gaps were identified, such as that leader- III C (2017) Connecticut graduates with suggested curricu- ship, communication, systems-based practice, aca- Nursing Collab- lar improvements. demia, and practice did not always speak the same orative-Action language. Coalition Benner et al. Book presenting To describe the changes in nursing A major finding was that nurses are undereducated II A (2010) a longitudinal education since the Lysaught study for the current demands of practice. Other key find- mixed-methods was released 40 years ago. Spon- ings include: study of preli- sored by the The Carnegie Founda- (1) U.S. nursing programs are very effective in censure (all lev- tion for the Advancement of forming professional identity and ethical beliefs. els) RN educa- Teaching. (2) Clinical practice assignments provide powerful tion programs learning experiences, especially in programs where educators integrate clinical and classroom teaching. (3) U.S. nursing programs are not effective in teach- ing nursing science, natural sciences, social scienc- es, technology, and the humanities. Berkow et al. Peer-reviewed To describe the results of a nation- Identified 36 graduate nurse competencies. III B (2008) article al survey to a cross section of - Only 10% of nurse leaders but 90% of faculty frontline nurse leaders on new thought new graduates were prepared to practice. graduate nurse proficiency. - Nurse leaders prioritized new graduate improve- - To assess practice readiness. ment needs in a remarkably similar manner. - There can be a relatively consistent approach for addressing new graduate nurses’ greatest improve- ment needs. Bernier et al. Peer-reviewed To present a case for not using Recommended more research in using first-time V A (2005) article NCLEX first-time pass rates as a NCLEX pass rates and cautioned about using them sole indicator of quality. as a sole indicator. Canadian Council Draft report of To describe the competencies for Competencies were developed under the theme of IV B of Registered entry-level com- entry-level RN practice developed clinician, communicator, collaborator, advocate, ed- Nurse Regulators petencies of by 11 jurisdictions in Canada. ucator, leader, professional, scholar, and (2018) RNs in Canada coordinator. 
Candela & Bowles Peer-reviewed To describe a statewide study of Gaps found included insufficient pharmacology III B (2008) research 352 new graduates on their educa- content, lack of management and leadership prepa- tional preparation for practice. ration, and lack of preparation in electronic data measurement. Graduates felt educators did not prepare them for practice but instead to pass the NCLEX. A majority indicated they needed more clinical hours in the program.

S52 Journal of Nursing Regulation Citation Type of Purpose Key Findings Evidence Publication Levela Cantlay et al. Peer-reviewed To survey 183 new graduates from New graduates were weak in leadership, team III B (2017) research an accelerated prelicensure mas- management, responding to clinical emergencies, ter’s program in Australia. and recognizing abnormal laboratory findings; however, 94% felt equally or more prepared than other graduates in their work environments. Cohen & Ibrahim Higher educa- To reflect on the use of graduation A new metric was proposed: The graduation effi- IV C (2008) tion journal arti- rates as the outcome measure of ciency metric. cle but not peer choice in the assessment of the reviewed performance of higher education. College of Nurses Report from the To provide overview of nursing ed- Standards were developed in the areas of nursing IV A of Ontario (2018) College of Nurs- ucation program approval in program governance, client and student safety, es of Ontario Ontario. qualified faculty, entry to practice competencies, clinical learning opportunities, communication with preceptors, examination first-time pass rates, grad- uates’ rating of their preparation, and preceptors’ rating of students’ readiness to practice. Commission on Commission on To provide standards to nursing Standards: (1) program quality: mission and gover- IV A* Collegiate Nursing Collegiate Nurs- education programs obtaining nance, (2) institutional commitment and resources, Education (2018) ing Education accreditation. (3) program quality: curriculum and teaching-learn- accreditation ing practices, (4) program effectiveness: assess- manual ment and achievement of program outcomes. Cook & Hartle Report from the To analyze the limitations of gradu- The IPEDS calculation excludes students who begin V A (2011) American Coun- ation rates. college part time, who enroll mid-year, and who cil of Education transfer from one institution to another. Put another way, IPEDS counts only those students who enroll in an institution as full-time degree-seekers and fin- ish a degree at the same institution within a pre- scribed period of time. Cronenwett et al. Peer-reviewed To describe the QSEN initiative, Definitions of essential features of a competent IV A (2007) article which includes adapting the Insti- nurse are provided within each of the QSEN com- tute of Medicine competencies for petencies; knowledge, skills, and attitudes for each nursing. QSEN competency are identified; the QSEN compe- tencies include patient-centered care, teamwork and collaboration, evidence-based practice, quality improvement, safety, and informatics. DeAngelo et al. Report of the To provide data-driven information Private universities have the highest raw degree II A (2011) University of on how to assess institutional completion rates and public 4-year colleges have California, Los graduation rates, emphasizing the the lowest. Angeles Higher importance of taking into account By comparing expected and actual graduation Education Re- the characteristics of students rates, much of the success of private universities in search Institute whom institutions enroll. degree completion comes from the strength of the students they enroll. 
Instead of comparing raw degree completion rates, institutions can be evaluated by how they perform in moving students toward degree completion based on the characteristics and experiences of their students; in this manner, public institutions have lower overall graduation rates but more suc- cess in moving the students they enroll toward graduation. Docherty & Dieck- Peer-reviewed To describe a cross-sectional de- The majority of faculty feel confident in their grad- III B mann (2015) research scriptive study of 84 faculty on fail- ing practices; however, the findings also suggest ing and grading of students. that faculty fail to fail students in both didactic and clinical courses. Eberle-Sudré et al. Research brief To question whether college grad- Validating with statistics, the article asserts that us- III B (2015) from The Educa- uation rates benefit the diversity of ing graduation rates for student outcomes uneven- tion Trust students. ly benefits students with certain demographics.

(continued)

Volume 11/Issue 2 Supplement July 2020 www.journalofnursingregulation.com S53 Citation Type of Purpose Key Findings Evidence Publication Levela El Haddad et al. Peer-reviewed Using grounded theory, to exam- The authors strongly advocate for nursing pro- III A/B (2017) research ine practice readiness from the grams to have collaborative education-practice perspective of nurse unit manag- partnerships. ers from the acute care practice sector and nursing program coor- dinators from the education sector. Feeg & Mancino National Stu- To provide data on the 2015 job Evidence supports that the changing job market has III B (2016) dent Nurses’ market and provide insight into ed- an impact on employment rates by type of program Association ucation and healthcare trends. and type of region. It also presents data on educa- newsletter tion plans following graduation and student loan load. Feeg & Mancino National Stu- To provide data on the 2017 job Data on employment provided by region and type III B (2018) dent Nurses’ market and insight into education of program, as well as staying in their current posi- Association and health care trends. tion, job market challenges, and student debt. newsletter Ferrante (2017) Peer-reviewed Descriptive study of 24 Italian engi- To evaluate the quality of the educational process, III B research neering institutions to analyze fac- account should be taken of the human capital en- tors related to academic productiv- tering the system. ity of universities. Caution should be taken when considering employ- ment rates because they depend on employment conditions, the graduate X years from graduation, duration of the job search, pay at X years from graduation, type of the contract, relevance of the degree, and graduates’ degree of job satisfaction. Foreman (2017) Peer-reviewed To compute a 95% confidence in- Application of confidence intervals to nursing pro- III A research terval for 1,792 nursing program gram pass rates suggests that use of pass rate stan- pass rates from 2010-2014 to deter- dards to evaluate nursing program quality may not mine whether programs that met be appropriate. or failed to meet pass rate stan- dards may have done so by accident. Giddens (2009) Editorial in peer- To make a case against using only - Multiple choice favors individuals with strengths V A reviewed NCLEX pass rates as outcomes in low-context applications. journal standards. - Programs are ensuring NCLEX success by recruit- ing commercial third parties. - Need multiple indicators—graduation rates along with NCLEX pass rates. Gonzalez (2018) Peer-reviewed To develop a concept-based learn- The author developed strategies to integrate clini- V A article ing method for clinical reasoning. cal reasoning into teaching, such as focusing on documentation, diagnosis, communication, inter- ventions, prioritization, putting it all together, and reflection. Grant (2015) Doctoral thesis To identify the relationship be- The study findings include no relationship between III B tween NCLEX-RN success and the NCLEX success and prenursing GPA, final GPA, and following: (1) prenursing GPA and gender, but there was a relationship between final GPA, (2) age and gender, (3) NCLEX success and age and ATI predictor scores. ATI predictor scores. Hayden et al. 
Peer-reviewed Randomized controlled trial to in- Provides evidence that substituting high-quality I A (2014) research vestigate replacing clinical hours simulation experiences for up to half of traditional with simulation in prelicensure clinical hours produces comparable educational nursing education. outcomes and new graduates who are ready for clinical practice. Hickerson et al. Peer-reviewed To describe an integrative review The following three main themes were identified: a V A (2016) article of 50 articles related to the exis- preparation-practice gap exists; this gap is costly; tence, extent, and significance of a and closing the gap will likely rely on changes in preparation-practice gap. undergraduate education and on-the-job remedia- tion, such as nurse residencies and preceptorship programs.

S54 Journal of Nursing Regulation Citation Type of Purpose Key Findings Evidence Publication Levela Hooper & Ayars Peer-reviewed To summarize findings from a re- Three common interventions found to be extremely V A (2017) article view of 88 nursing education self- effective were (1) identifying at-risk students earlier, study reports across a 3-year peri- (2) providing timely remediation for at-risk stu- od (2013–2015) and survey the dents, and (3) enforcing program policies. programs regarding which inter- ventions were most effective. Hsu & Hsieh Peer-reviewed To conduct psychometric testing The Competency Inventory of Nursing Students III A (2013) research on a competency inventory to has satisfactory psychometric properties and could measure learning outcomes of be a useful instrument for measuring learning out- baccalaureate students using a comes of a nursing student. Ethics and accountabil- convenience sample of 599 nurs- ity were the most important factors contributing to ing students. nursing students’ competencies.

Hungerford et al. Peer-reviewed Scoping review of the literature to There were substantial differences in the require- V B (2019) research compare the number of clinical ments from 2,300 hours (U.K.) to no required hours practice hours across 4 countries (U.S.). The authors call for more research on clinical mandated for students in nursing education and conclude that it is likely that it is the education programs that lead to quality rather than quantity that matters. RN licensure. Jamshidi et al. Peer-reviewed Using a qualitative study, to identi- The following challenges exist: lack of communica- III A/B (2016) research fy how the clinical learning envi- tion skills, lack of theoretical knowledge and practi- ronment could improve students’ cal skills, stress, and inferiority complexes. readiness to practice in Iran. Kavanagh & Sz- Peer-reviewed Posthire and prestart performance- Aggregate baseline data indicate that only 23% of III C weda (2017) research based development system as- new graduate nurses demonstrate entry-level com- sessments were administered to petencies and practice readiness. more than 5,000 new graduate nurses to assess entry-level com- petency and practice readiness. Killam et al. (2011) Peer-reviewed Integrative literature review to ex- The major characteristics are ineffective interper- V A research amine the characteristics of stu- sonal interactions; knowledge, skill, and compe- dents who are unsafe in clinical tence; and unprofessional image. competencies. Kumm et al. (2016) Peer-reviewed Using nonequivalent control group No statistically significant differences in the new II C research design, to measure within-group graduate nurses’ performance between the original and between-group changes 16-week and the new 8-week clinical immersion across the original 16-week and models, indicating that it is the quality of the clini- new 8-week clinical immersion cal experience that is important. experiences. Libner & Kubala Peer-reviewed To describe the response of the Illi- Major themes include improvement in curriculum V A (2017) article nois Board of Nursing when and resources, faculty and professional develop- NCLEX pass rates fall below ment, student-to-faculty ratios, and academic and standards. administrative support. Linaker (2015) Peer-reviewed Narrative literature review of 54 ar- There are no definitive conclusions between mas- V A* research ticles to examine radiology student tery of radiology with any specific evaluation, out- evaluation and outcome assess- come, or preprofessional/clinical grades. ments including national board examinations. Longabach (2012) Master’s thesis To examine the relationship be- No statistically significant correlation was found be- III B tween NCLEX pass rates and the tween NCLEX pass rates and the number of clinical number of clinical hours complet- clock hours. Increased NCLEX pass rates were ed by a student in 92 nursing pro- found with doctorally prepared faculty, but this was grams in the states of Kansas and not statistically significant. Missouri.

(continued)

Volume 11/Issue 2 Supplement July 2020 www.journalofnursingregulation.com S55 Citation Type of Purpose Key Findings Evidence Publication Levela Luhanga (2014) Peer-reviewed Qualitative descriptive study to ex- Results include (1) failing a student is a difficult pro- III A/B research plore “failure to fail” in Canadian cess; (2) both academic and emotional support are professional programs, including required for students and field instructors/precep- nursing, education, and social tors/faculty advisors; (3) there are consequences for work. programs, faculty, and students when a student has failed a placement; (4) sometimes personal, profes- sional, and structural reasons exist for failing to fail a student; and (5) the reputation of the professional program can be diminished as a result of failing to fail a student. Missen et al. Peer-reviewed To provide descriptive quantitative New graduates were lacking in advanced clinical III B (2016) research study of qualified nurses’ (n = 201) skills (those in the last year of their programs) and perceptions of new graduate nurs- coping with nursing practice. Additionally, signifi- es’ abilities. cant differences were found in the evaluators of the new graduates’ abilities based on the evaluators’ demographic and clinical characteristics (e.g., age, length of registration/licensure, role, clinical set- ting). New graduate nurses generally were rated lower by evaluators who were older, have been registered/licensed for a longer period of time, and whose roles include nurse manager and nurse edu- cator, thus illustrating the need to establish reliabili- ty on all who evaluate new graduates. NASEM (2018) Proceedings of To provide an in-depth discussion Seven themes: (1) measuring outcomes is impor- IV A a workshop of graduate medical education out- tant for professional accountability; (2) data are comes and metrics. readily available; (3) information is needed on how they are performing in clinical and academic set- tings; (4) challenges with assessing and guiding the physician workforce were outlined; (5) gathering and using data is complicated; (6) more funding is needed; and (7) a data repository is needed. NASEM (2016) An NASEM To synthesize the available data on Quality should be based on a causal effect of an in- IV A workshop higher education outcome quality. stitution on education outcomes; quality indicators should be reliable; and quality indicators should provide improvement efforts. NASEM (2016, pp. Essay provided To outline how student and broad- Many data elements measuring quality are not V A 57–80) for an NASEM er societal outcomes should be pri- comparable across institutions due to different con- workshop on oritized as the general measure for ceptual definitions and populations. College quality quality of higher assessing quality. is multidimensional because students and society education expect many different outcomes. National League National League To provide standards to nursing Standards include program outcomes, mission IV A for Nursing (2016) for Nursing education programs obtaining governance and resources, faculty, students, curric- Commission for accreditation. ulum, and evaluation processes. Nursing Educa- tion Accredita- tion’s accredita- tion manual

S56 Journal of Nursing Regulation Citation Type of Purpose Key Findings Evidence Publication Levela NCSBN (2005) NCSBN position To provide guidance to nursing The positions include the following: IV C* paper regulatory bodies for evaluating (1) Prelicensure nursing educational experiences the clinical experience component should be across the lifespan. of prelicensure programs. (2) Prelicensure nursing education programs shall include clinical experiences with actual patients; they might also include innovative teaching strate- gies that complement clinical experiences for entry into practice competency. (3) Prelicensure clinical education should be super- vised by qualified faculty who provide feedback and facilitate reflection. (4) Faculty members retain the responsibility to demonstrate that programs have clinical experienc- es with actual patients that are sufficient to meet program outcomes. (5) Additional research needs to be conducted on prelicensure nursing education and the develop- ment of clinical competency. NCSBN (2006) NCSBN re- To describe the elements of nurs- Graduates were more likely to believe their pro- III A search brief ing education that lead to better gram adequately prepared them for practice when preparation of new nurse programs: graduates. Taught use of information technology and evi- dence-based practice. Integrated pathophysiology and critical thinking throughout the curriculum. Taught content related to the care of specific client populations as independent courses. Had a higher percentage of faculty teaching both di- dactic and clinical components of the curriculum. NCSBN (2018) NCSBN mem- To survey of all the U.S. nursing Education approval requirements for the U.S. nurs- IV B ber board regulatory bodies for their charac- ing regulatory bodies are detailed. profiles teristics and requirements.

Nursing Education NCSBN commit- To survey U.S. nursing regulatory Eight quality indicators and nine warning signs * Outcomes and tee report bodies on their quality indicators, were provided. The NCLEX first-time pass rates and Metrics Commit- warning signs, and outcomes noncompliance with rules/regulations were the tee (2017) when approving programs. leading outcome criteria used when approving programs. Odom-Maryon et Peer-reviewed National study to examine institu- The results showed a statistically significant in- III A al. (2018) research tional characteristics among nurs- crease in NCLEX pass rates with public schools, se- ing programs associated with mester schedules, larger admission cohorts, more NCLEX-RN first-time pass rates. students per didactic faculty, a higher percentage of full-time faculty, and no use of standardized pro- gression examinations. Predictor examinations did not predict success on the NCLEX. No curricular characteristics were associated with higher pass rates, including the use of simulation, an integrated curriculum (specialties are not offered as separate courses) and online environments. No significant differences were found between pass rates and progression in the program, including individual course grades, minimal course grades, clinical eval- uations, and allowing students to repeat courses. Faculty certification in nursing education did not lead to higher NCLEX pass rates.

(continued)

Volume 11/Issue 2 Supplement July 2020 www.journalofnursingregulation.com S57 Citation Type of Purpose Key Findings Evidence Publication Levela Oermann (2017) Book To provide a comprehensive guide Assists nurse educators to understand program V A to systematic and ongoing nursing evaluation beyond accreditation; decide how pro- program evaluation. gram evaluation data should be collected; develop and implement a program evaluation plan; prepare for the accreditation process; and use data to make sound program decisions. O’Lynn (2017) Editorial in peer- To provides commentary on the O’Lynn argues that the use of NCLEX pass rates as V A reviewed use of NCLEX pass rates as a proxy a proxy for program quality should be re-examined journal for nursing education program and recommends moving toward practice-based quality. competencies as an alternative. Papes & Lopez Peer-reviewed To develop a tracking system for Tracking of persistence rates over time is important V A (2007) research persistence/retention rates and use for program review, evaluation, and ongoing im- the findings as part of ongoing provement. Persistence rates can be an indicator of program improvement activities. growing issues. Pittman et al. Peer-reviewed To track and report trends in The number of for-profit nursing program gradu- III A (2019) research growth of nursing programs and ates has continued to grow. For-profit schools had graduates by school ownership the lowest NCLEX pass rates during the study type and compare first-time period. NCLEX pass rates by school ownership. Randolph (2013) NCSBN To describe the Program Outcome In 2011, the mean index was 157 (percentage V B newsletter Index, which is a nursing pro- NCLEX first-time pass rate and on-time graduation; gram’s self-reported on-time grad- maximum is 200). The mean index was 165 for bac- uation rate plus the first-time calaureate programs, 155 for associate degree pro- NCLEX pass rate within the calen- grams, and 154 for licensed practical nurse dar year. programs. Randolph (2016) Peer-reviewed To describe standardized testing The use of standardized examinations as high- III B article practices across nursing programs stakes testing is not supported. in one state and determine wheth- er a cut score or oversight remedi- ation had any effect on (1) first- time NCLEX pass rates, (2) on-time graduation, or (3) a combination of both. Reyna (2010) Report from the Describes recommendations of the The work group recommends the following com- IV C* National Gover- National Governors Association pletion metrics: nors Center’s Work Group on College Outcome metrics: (1) degrees and certificates Association Completion Metrics. awarded, (2) graduation rates, (3) transfer rates, and (4) time and credits to degree. Progress metrics: (1) enrollment in remedial educa- tion, (2) success beyond remedial education, (3) success in first-year college courses, (4) credit accu- mulation, (5) retention rates, and (6) course completion. Rusch et al. (2019) Peer-reviewed Descriptive, exploratory study of Students scored highest in professional attributes III B research 15 cohorts (N = 856) to determine but lowest in time management, prioritization, strengths and weaknesses of se- management of multiple patients, and pharmacolo- nior-level nursing students related gy knowledge. to readiness for practice before graduation. 
Sax (1978) Australian Re- Describe the recommendations The Committee considered three approaches (com- IV C port of the Com- made to the Tertiary Education mon portal of entry, common baseline, and core mittee of Inquiry Commission on possible develop- curriculum) for allowing commonality of training Into Nurse Edu- ments and changes in nurse edu- between nurses but believed that none were practi- cation and Train- cation and training, including cable at the time. ing to the Tertia- whether nurse education should ry Education take place in hospitals, education Commission institutions, or both.

S58 Journal of Nursing Regulation Citation Type of Purpose Key Findings Evidence Publication Levela The Secretary’s Federal To provide U.S. Department of Ed- Requirements include graduation rate, licensure, IV C* Recognition of Ac- regulations ucation outcome requirements for and job placement rate. crediting Agencies education accreditors. (2009) Spector & Woods Peer-reviewed To describe the collaboration with Summary of the seven models of nursing program IV A (2013) article national accrediting agencies and approval used by U.S. nursing regulatory bodies. the recommendations that Nursing regulatory bodies have the legal authority resulted. to close nursing programs; accreditors do not. Challenges of program approval include overbur- dened staff (expansion of number of programs). Concern over the cost of mandating national nurs- ing accreditation. Spector et al. Peer-reviewed To discuss results of a multisite - Although results showed few statistically signifi- I A (2015) article study of transition to practice in- cant differences between the study and control volving 105 hospitals in three groups, when the hospitals in the control group states. were categorized as having established or limited programs, differences were noted. Hospitals with established programs had higher re- tention rates, and the nurses in these programs re- ported fewer patient care errors, used fewer nega- tive safety practices, and had higher competency levels, lower stress levels, and better job satisfaction. Spector et al. Peer-reviewed To present key components of the Nursing regulatory bodies evaluate the following V A (2018) article regulation of RN education pro- key components when making initial or continuing grams, providing an overview of approval decisions: governing entity, program lead- the nursing regulatory bodies ap- ership, faculty, curriculum, clinical learning experi- proval process of RN education ences, physical and fiscal resources, and evaluation programs in the U.S. plan. The role of nursing regulatory bodies differs from the role of the national accreditors in the approval of nurse education programs across the following key areas: mission, authority, response to com- plaints, service, and structure. Taylor et al. (2014) Peer-reviewed To outline how programs respond A stronger base of evidence is needed for the use V A article to poor first-time pass rates and of first-time NCLEX pass rates as the primary indi- the unintended consequences of cator for the quality of nursing programs. using them. Wellman et al. Policy brief To advance the goal of reducing at- Nursing regulatory bodies should identify attrition IV C* (2012) commissioned trition in postsecondary programs. as a crucial indicator of program success. by the American To develop metrics for determining To assist states and institutions in generating their Institutes for where, when, and how to invest in own measures of attrition costs for use in setting Research increased graduation rates and benchmarks and goals for improvement, a pro- lower production costs. grammer’s guide to the proposed methodology of measuring attrition costs is provided in a compan- ion paper. Attrition estimated by measuring the credits taken by students who do not complete a degree or other credential. Estimates can be generated using transcript analy- sis and projections based on year-to-year rates of student persistence. Based on an estimate of the average education and related cost of credits taken that do not attach to a degree. Note. 
Note. ACEN = Accreditation Commission for Education in Nursing; RN = registered nurse; IPEDS = Integrated Postsecondary Education Data System; QSEN = Quality and Safety Education for Nurses; GPA = grade point average; ATI = Assessment Technologies Institute; NASEM = National Academies of Sciences, Engineering, and Medicine; NCSBN = National Council of State Boards of Nursing. See Table B1 for a description of evidence levels and quality ratings.
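The attrition-cost methodology summarized for Wellman et al. (2012) reduces to simple arithmetic: multiply the credits accumulated by students who leave without a credential by the average education and related cost per credit. The Python sketch below is illustrative only; the function name and the dollar figures are hypothetical and are not drawn from the companion programmer's guide.

    def estimate_attrition_cost(noncompleter_credits, cost_per_credit):
        """Estimate attrition cost as the education and related cost of
        credits taken that do not attach to a degree or other credential
        (following the approach summarized from Wellman et al., 2012)."""
        return noncompleter_credits * cost_per_credit

    # Hypothetical example: 1,200 credits accumulated by non-completers,
    # at an assumed average cost of $450 per credit.
    print(estimate_attrition_cost(1_200, 450.00))  # 540000.0

In practice, the credit and cost inputs would come from the transcript analysis and persistence projections the brief describes, not from assumed figures.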

APPENDIX C
Final Codebook for Program Analysis

Students
⦁ Recruitment
⦁ Demographics (i.e., characteristics that may present teaching and learning challenges, such as English as a foreign language, learning disabilities, etc.)
⦁ Admission criteria (or lack thereof)
⦁ Financial aid
⦁ Services
⦁ Policies
⦁ Performance
⦁ School-student communication
⦁ Postgraduation outcomes

Faculty
⦁ Credentials
   ○ Credentials management
   ○ Content expertise
⦁ Staffing
   ○ Student-to-faculty ratio
⦁ Workload
⦁ Relationships
⦁ Development/continuing education
⦁ Incentives/remuneration/compensation
⦁ Turnover
⦁ Clinical instructors and instruction
⦁ Administrative support
⦁ Teaching quality
⦁ Faculty work space
⦁ Recruitment/retention
⦁ Adequate faculty candidate pool

Leadership
⦁ Program director qualifications
⦁ Program director (not a nurse)
⦁ Physical resources to conduct the job
⦁ Administrative support (time away from teaching)
⦁ Turnover
⦁ Advocacy for resources
⦁ Administration of broader institution/company

Program Management/Implementation
⦁ Administrative support (or lack thereof)
⦁ Use of data
⦁ Poor record keeping
⦁ Faculty and curricular control
⦁ Students get materials at the beginning of the semester
⦁ Program and curriculum evaluation
⦁ Faculty and student input toward policies and procedures
⦁ Appropriate and sufficient clinical sites
⦁ Comprehensive policies

Teaching and Learning Resources
⦁ Teaching resources
   ○ NCLEX preparation materials
   ○ Online supplemental tools
⦁ Physical instructional resources
   ○ Simulation laboratory
   ○ Quality of resources
   ○ Office space
⦁ Materials prepared and managed according to internal policies

Organizational Culture
⦁ Higher administration relationships
⦁ Program leadership and faculty relationships
⦁ Student perceptions of the program and faculty
⦁ Site visit triggers

APPENDIX D
Site Visit Template

Use of the Site Visit Template:
This template was developed based on the qualitative 5-year site visit study that NCSBN conducted, looking at programs that were not fully approved by boards of nursing. Each of the items below was found to be lacking in those programs not meeting regulatory standards. Nursing regulatory bodies (NRBs) could use this template as a guide when making a focused site visit. NRBs may choose to adapt this template to customize it to their particular needs.

Date of Site Visit ______
Name of Education Consultant ______
Name of Program ______
Address of Program ______
Director of Program ______
Contact Information of Director ______
NCLEX® Program Code ______

Program
1. Current approval status ______
2. Age of program ______
3. Ownership of program (for-profit; nonprofit; public) ______
4. Trend of program's NCLEX® pass rates for 3 years
   _____ Current year
   _____ Year 2
   _____ Year 3

Administration
1. Written policies and procedures are available to faculty and students. Yes/No/Comments
2. There is evident student and faculty input into policies and procedures. Yes/No/Comments
3. Record keeping is in place for faculty credentials, course evaluations, and student records. Yes/No/Comments
4. Quality improvement strategies are in place, particularly related to student outcomes and course evaluations. Yes/No/Comments
5. Students have the educational materials (books, uniforms, software, internet access, syllabi, etc.) they need to be successful. Yes/No/Comments

6. Data are used to set admission, progression, and student performance. Yes/No/Comments (Below are some key areas to check.)
   a. Student socioeconomic status.
   b. English as a second language.
   c. Presence of children younger than 18 years in the home.
   d. Need to work while attending program.
   e. Program admission, such as grade point average (GPA), SAT®/ACT®, secondary education.
   f. Remediation programs, including remediation for clinical errors/near misses, are in place.
   g. Program progression (GPA standards, minimum course grades, pass/fail, etc.).

Program Director
1. How many directors has the program had in the past 5 years (including interim directors)? ___
2. Is the director in charge of other allied health and/or vocational programs? Yes/No
3. If the answer to #2 is yes, is there an assistant director for managing the day-to-day operations of the nursing program? Yes/No Explain ______
4. What is the highest academic degree of the program director? ______
5. Is the program director a nurse? Yes/No

Faculty
1. Total number of faculty (including full-time, part-time, adjunct clinical faculty each academic cycle, etc.) is ______
2. Number of full-time faculty ______
3. Credentials of faculty (provide separately)
4. Faculty have a basic knowledge of pedagogical methods. Yes/No/Comments
5. Workload for faculty is reasonable (average number of courses taught in an academic year). ______ Yes/No/Comments
6. All faculty teaching in clinical experiences have performed direct patient care in the past 5 years. Yes/No/Comments
7. Formal orientation plan for new full-time/part-time faculty is in place. Yes/No Explain ______
8. Formal orientation plan for adjunct faculty is in place. Yes/No Explain ______
9. There is administrative support for ongoing faculty development. Yes/No Explain ______
10. All faculty who teach simulation are certified. Yes/No
11. Faculty have control over the curriculum. Yes/No Explain ______
12. Full-time faculty turnover during the past academic year was ______

Students
1. English as a second language assistance is provided on an ongoing basis, when appropriate. Yes/No/Comments
2. Resources are available for student learning disabilities. Yes/No/Comments
3. Throughout the program, books and resources are provided. Yes/No/Comments
   a. When students can't afford books and other required resources, strategies are in place to help them.
4. Remediation strategies are in place so that students are aware of how to seek help. Yes/No/Comments
   a. Remediation strategies include errors/near misses made in clinical experiences.

Curriculum and Clinical Experiences
1. 50% or more of clinical experiences in each course are direct care with patients. Yes/No
2. Variety of clinical settings with diverse patients. Yes/No/Comments
3. Opportunities in clinical experiences for promoting safety and quality. Yes/No/Comments
   Evidence-based examples include:
   a. Delegation
   b. Emergency procedures
   c. Interprofessional communication
   d. Time management

Teaching and Learning Resources
1. The simulation laboratory is accredited. Yes/No
   a. Simulation laboratory is in working order with up-to-date equipment. Yes/No/Comments
2. Syllabi are consistent in their design and with internal policies. Yes/No/Comments
   a. Course descriptions match the course content and expected outcomes. Yes/No/Comments
3. Physical instructional resources are adequate. Yes/No
   a. Full- and part-time faculty have private office space for student meetings. Yes/No/Comments
   b. Adjunct faculty have the ability to reserve conference rooms to meet with students. Yes/No/Comments
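Item 4 of the Program section in the template above records a program's NCLEX pass rates for the current year and the 2 prior years. The following Python sketch shows one way an NRB analyst might label that 3-year trend; the classify_trend function and the sample rates are hypothetical and are not part of the template.

    def classify_trend(rates):
        """Label a chronological series of annual NCLEX pass rates (%)
        as 'rising', 'declining', or 'mixed/flat'."""
        diffs = [b - a for a, b in zip(rates, rates[1:])]
        if all(d > 0 for d in diffs):
            return "rising"
        if all(d < 0 for d in diffs):
            return "declining"
        return "mixed/flat"

    # Hypothetical example, ordered Year 3 -> Year 2 -> Current year.
    print(classify_trend([88.0, 84.5, 79.0]))  # declining

A consistently declining series is the pattern a focused site visit would flag for follow-up; the template itself leaves interpretation to the NRB.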

APPENDIX E
Annual Report Core Data Template

Prelicensure Annual Report Core Data

Introduction
In collaboration with your board of nursing (BON), the National Council of State Boards of Nursing (NCSBN) is assisting with collecting your BON's annual report data this year. The survey was designed based on the core data results of a large, mixed-methods study of nursing program quality indicators and warning signs. Your BON may include some additional questions at the end of the survey. Your BON will receive descriptive results of the nursing programs in its state/jurisdiction, as well as a report of the raw data of each program. Annually, each BON will receive an aggregate report of all participating BONs so that it can compare its programs to the aggregate. We are considering this the pilot year for collecting the BONs' annual report data, and we'll be interested in any suggestions you might have as we go forward in future years.

Directions
Please complete the following survey for each NCLEX® program code that you have. Since these are core data, all fields are required before you can proceed to the next question. You may go back and make changes, and you may stop, save the survey, and return later. You will have 30 days to complete the survey, and we'll send the results to your BON 2 weeks after the survey is due. If you have any questions, please email Qiana McIntosh at NCSBN at [email protected].

Contact Information
Full Name of Program {Free-text entry} ______
Mailing Address of the Program {Free-text entry} ______
City ______ State ______ Zip Code ______
First and Last Name of Person Completing Form {Free-text entry} ______
Direct Phone # of Person Completing Form {Numeric response} ______
NCLEX® Program Code {10-character alphanumeric code (e.g., US99999999)} ______
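The NCLEX® Program Code field above specifies a 10-character alphanumeric code (e.g., US99999999). A minimal Python validation sketch under that stated constraint; the function name is hypothetical, and the pattern enforces only the 10-character alphanumeric rule, since the survey does not say whether all codes share the two-letter, eight-digit shape of the example.

    import re

    # The field spec: exactly 10 alphanumeric characters.
    # The example given (US99999999) happens to be 2 letters + 8 digits.
    CODE_SPEC = re.compile(r"[A-Za-z0-9]{10}")

    def is_valid_program_code(code: str) -> bool:
        """Check the stated 10-character alphanumeric constraint."""
        return bool(CODE_SPEC.fullmatch(code))

    print(is_valid_program_code("US99999999"))  # True
    print(is_valid_program_code("US9999"))      # False (too short)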

Program
1. Does the program have national nursing accreditation? __Yes __No
2. What is the program's current approval status? [Board of nursing or state-designated program approval status.]
☐ Full approval
☐ Conditional/probationary approval
☐ Non-approved
☐ Other ______
3. What best describes the program's geographic location?
☐ Urban
☐ Suburban
☐ Rural
☐ Other ______
4. What is the institutional ownership?
☐ Public
☐ Private nonprofit
☐ Private for-profit
5. What is the program type?
☐ Licensed practical nurse/licensed vocational nurse
☐ Diploma
☐ Registered nurse – associate's
☐ Registered nurse – bachelor's
☐ Registered nurse – accelerated bachelor's
☐ Master's entry
☐ Other ______
6. In what year was the program founded? [Year the nursing program started (might be different than the year the college/university was founded).] {4-digit year} ______
7. Does the program have any satellite sites? __Yes {Q8} __No {Skip to Q9}
8. {If yes to Q7} How many total sites, including the home site, does the program have? ______
9. What types of learning modalities does the program offer? [Hybrid is defined as a program that combines elements of online learning and traditional in-person learning.]
☐ In-person only {Skip to Q11}
☐ Online only {Q10}
☐ Hybrid {Q10}
10. What percentage of your program is online? {Sliding scale percentage} ______
11. What best describes the program's academic schedule? [A quarter system divides the academic year into four sessions. A trimester divides the academic year into three sessions. A semester system divides the academic year into two sessions.]
☐ Quarters
☐ Trimesters
☐ Semesters
☐ Other
12. Does the program administer a formal student orientation process? [A formal student orientation is the process of introducing new nursing students to program and healthcare facility policies, procedures, and technologies. This may include, but is not limited to, the following with the student: student responsibilities/expectations, professional dress/behavior codes, etc.] __Yes __No
13. Does the program offer English as a second language services for non-native English speakers? [Program offers resources where students with English as a second language can practice reading, listening, speaking, and writing.] __Yes __No
14. Does the program offer disability support services? [Nursing program has procedures for making reasonable accommodations for students who qualify under the Americans with Disabilities Act.] __Yes __No
15. Does the program offer support services to help low socioeconomic students access available resources (e.g., peer mentoring services, tuition assistance, a work-study program, etc.)? [Students have books and resources throughout the program, and the program has strategies to help students who can't afford books and resources.] __Yes __No
16. Does the program have a formal remediation process in place for students needing academic support? [The remediation process is designed to promote success for students who are at risk of failure and should include the following elements: description of the deficient areas; an outline of specific, measurable goals to demonstrate success; an individualized plan for each student; and a time frame for completion, agreed upon by the faculty and student.] __Yes __No
17. Does the program have a formal remediation process in place for students who commit errors/near misses in their clinical experiences? [Program has policies and procedures in place for keeping track of errors and near misses in student clinical experiences and taking action to make system/educational improvements.] __Yes __No
18. Has the nursing program experienced major organizational changes over the past year? [Major organizational changes may include, but are not limited to, new director, new assistant/associate director, staff layoff, faculty layoff, change in university leadership (e.g., provost or president), collapsing programs, economic efficiencies, etc.] __Yes {Q19} __No {Skip to Q20}
19. What major organizational changes has the nursing program experienced in the past year?
☐ New director
☐ New assistant/associate director
☐ Staff layoff
☐ Faculty layoff
☐ Change in university leadership (e.g., provost or president)
☐ Collapsing programs (such as downsizing or merging programs)
☐ Economic efficiencies/budget reductions
☐ Other
20. Does the program offer simulated clinical experience? ["A technique that creates a situation or environment to allow persons to experience a representation of a real event for the purpose of practice, learning, evaluation, testing, or to gain an understanding of systems or human actions." From Healthcare Simulation Dictionary (2nd ed.) (AHRQ, 2020).] __Yes {Q21} __No {Skip to Q23}
21. {If yes to Q20} Are simulation faculty certified? [The Society for Simulation in Healthcare (SSH) provides the Certified Healthcare Simulation Educator (CHSE) certification.] __Yes __No
22. {If yes to Q20} Is the simulation laboratory accredited by the Society for Simulation in Healthcare (SSH)? __Yes __No
23. How many hours do students spend in direct client care? [Faculty-supervised care directly with clients.] {Integer} ______
24. How many hours do students spend in simulation? ["A technique that creates a situation or environment to allow persons to experience a representation of a real event for the purpose of practice, learning, evaluation, testing, or to gain an understanding of systems or human actions." From Healthcare Simulation Dictionary (2nd ed.) (AHRQ, 2020).] {Integer} ______
25. How many hours do students spend in the skills laboratory? [A skills laboratory is equipped with manikins, task trainers, and hospital equipment where students can apply basic procedural skills such as administering injections.] {Integer} ______

Program Director Data
26. Is the program director a nurse? [This would include a nurse with an active or inactive license.] __Yes {Q27} __No {Skip to Q28}
27. {If yes to Q26} What is the program director's highest nursing degree achieved?
☐ Diploma
☐ Associate degree in nursing
☐ Baccalaureate of science in nursing
☐ Master of science in nursing
☐ Doctor of nursing practice
☐ Doctor of philosophy in nursing
☐ Other ______
28. What is the program director's highest non-nursing degree achieved?
☐ Associate degree
☐ Bachelor's degree
☐ Master of education
☐ Other master's degree
☐ Doctor of education
☐ Doctor of philosophy
☐ Other doctoral degree
☐ N/A
☐ Other ______
29. In the past 5 years, how many directors, including interim directors, has the program had? {Integer} ______
30. Does the program director have administrative responsibility for allied health? [Allied health is a broad field of health care professions made up of specially trained individuals such as physical therapists and respiratory therapists.] __Yes __No
31. Does the program have an assistant/associate director? __Yes __No
32. Does the program director have dedicated administrative support? [Administrative support includes general office management such as answering phones, doing clerical work, and a variety of other tasks.] __Yes __No
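The {Qn} and {Skip to Qn} annotations in the items above encode the survey's branching: answering yes to Q20, for example, routes to Q21, while answering no skips to Q23. The Python sketch below models that routing under a simple yes/no answer model; the ROUTES table transcribes only a few of the annotated branches and is illustrative, not an NCSBN implementation.

    # Partial routing table for the yes/no branches annotated above.
    # Keys are (question, answer); values are the next question to ask.
    ROUTES = {
        (7, "yes"): 8,   (7, "no"): 9,    # satellite sites
        (18, "yes"): 19, (18, "no"): 20,  # organizational changes
        (20, "yes"): 21, (20, "no"): 23,  # simulated clinical experience
        (26, "yes"): 27, (26, "no"): 28,  # director is a nurse
    }

    def next_question(current: int, answer: str) -> int:
        """Follow a branch if one is defined; otherwise advance linearly."""
        return ROUTES.get((current, answer.lower()), current + 1)

    print(next_question(20, "No"))   # 23 (skips the simulation items)
    print(next_question(21, "yes"))  # 22 (no branch defined; advance)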

Faculty Data
33. How many full-time faculty are in the prelicensure program? [Full-time equivalent (FTE) faculty are expected to work at least 40 hours per week and to teach, participate in curriculum development, hold office hours for student advisement, attend faculty meetings, participate in campus-wide events, attend professional development events, take part in scholarly activities, etc.] {Integer} ______
34. How many clinical adjunct faculty are in the prelicensure program? [Clinical adjunct faculty are typically staff at the clinical facility that hosts students, and they supervise students during clinical rotations.]
   a. Employed by the nursing program {Integer} ______
   b. Not employed by the nursing program {Integer} ______
35. How many part-time faculty are in the prelicensure program? [Part-time faculty work less than 40 hours per week and are responsible for assuming teaching responsibilities, usually collaborating with the full-time faculty. They maintain availability to students and communicate effectively with students and colleagues.] {Integer} ______
36. How many of the full-time faculty have a graduate-level education?
☐ Master of science in nursing {Integer} ______
☐ Master of science (other than nursing) {Integer} ______
☐ Other master's {Integer} ______
☐ Doctor of nursing practice {Integer} ______
☐ Doctor of philosophy {Integer} ______
☐ Other doctoral degree {Integer} ______
37. Please specify the typical number of students to one faculty member for didactic/theory courses. {Integer} ______
38. Please specify the number of students to one clinical faculty member. [All levels of faculty (full time, part time, and clinical adjunct) in all types of clinical experiences.] {Integer} ______
39. Does the program offer formal orientation for new adjunct clinical faculty? [Formal orientation for new adjunct clinical faculty includes overview of the program and the particular course they're teaching, policies and procedures, teaching responsibilities, supervision of students, role modeling, planning post conferences, evaluation of students, etc.] __Yes __No

For questions 40-42 below, we are going to ask you about formal orientation for new faculty. Formal orientation of new faculty includes an overview of the program and faculty resources, policies and procedures, workload, faculty appraisal, curriculum and syllabus development, student assessment, didactic and clinical teaching responsibilities, student advisement, etc.

40. Does the program offer formal orientation for new part-time faculty? __Yes __No
41. Does the program offer formal orientation for new full-time faculty? __Yes __No
42. Does the program offer formal mentoring for new full-time faculty? [Formal mentoring includes assignment of a seasoned (at least 1 year of teaching) faculty member who has taught at the same level for the purpose of providing ongoing support, coaching, guidance, and faculty development for new full-time faculty.] __Yes __No

Student Data
43. How many students are enrolled in the nursing program as of the beginning of the current academic year? [Includes all prelicensure students.] {Integer} ______
44. Do you have a maximum enrollment capacity? __Yes {Q45} __No {Skip to Q46}
45. What is the maximum nursing enrollment capacity for the current academic year? {Integer} ______
46. What is the total number of students who started in your most recent graduating cohort? {Integer} ______
47. In your most recent graduating cohort, how many students graduated? {Integer} ______
48. In your most recent graduating cohort, how many students did not graduate and are still actively pursuing coursework? {Integer} ______
49. What is the average age of a student enrolled in the program as of the beginning of the current academic year? {Sliding scale, integer} ______
50. Please provide a detailed breakdown of the racial composition (number in each category) of the students currently enrolled in the program.
☐ American Indian or Alaska Native {Numeric response field, integer} ______
☐ Asian {Numeric response field, integer} ______
☐ Black or African American {Numeric response field, integer} ______
☐ Native Hawaiian or other Pacific Islander {Numeric response field, integer} ______
☐ White {Numeric response field, integer} ______
☐ Multi-racial {Numeric response field, integer} ______
☐ Other {Numeric response field, integer} ______
51. Please provide a detailed breakdown (number of students in each category) of the ethnic composition of the students currently enrolled in the program.
☐ Hispanic or Latino or Spanish origin {Numeric response field, integer} ______
☐ Non-Hispanic or Latino or Spanish origin {Numeric response field, integer} ______
52. Please provide a detailed breakdown (number of students in each category) by student sex.
☐ Female {Numeric response field, integer} ______
☐ Male {Numeric response field, integer} ______
☐ Other {Numeric response field, integer} ______
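Questions 46 through 48 above collect the raw counts from which a BON can derive cohort completion and attrition figures. A brief illustrative Python sketch of that arithmetic follows; the function and the sample cohort are hypothetical, and the survey itself collects only the counts.

    def cohort_rates(started, graduated, still_enrolled):
        """Derive completion, persistence, and attrition rates (%) from
        the raw cohort counts collected in Q46-Q48."""
        left = started - graduated - still_enrolled  # no longer enrolled
        return {
            "completion": 100 * graduated / started,
            "still_enrolled": 100 * still_enrolled / started,
            "attrition": 100 * left / started,
        }

    # Hypothetical cohort: 80 started, 62 graduated, 8 still enrolled.
    print(cohort_rates(80, 62, 8))
    # {'completion': 77.5, 'still_enrolled': 10.0, 'attrition': 12.5}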
