Session 6: Pathways to Improvement: Program Evaluation
Pat Quigley, PhD, MPH, ARNP, CRRN, FAAN, FAANP
Session objectives:
◦ Review the process of program evaluation.
◦ Formulate measurable fall-related outcomes that build upon NDNQI data.
◦ Differentiate types of falls as a basis for analysis of program effectiveness at the patient, unit, and organizational levels.
◦ Compare proactive vs. reactive approaches to fall and injury prevention.
[Diagram: inputs informing clinical decision-making: clinical question or idea, patient idea, literature review, innovation/product evaluation, networking at conferences and meetings]
Achieving agreement on the aim of a project is critical for maintaining progress.
Teams make better progress when they are very specific about their aims.
Examples of Aims
◦ Increase the percentage of staff on our unit who are educated about our falls prevention protocol to 100% in 2 months.
◦ Reduce the rate of injuries due to falls in our facility by 50% in 7 months.
◦ Reduce the rate of falls in our facility by 40% in seven months.
What are we trying to accomplish?
How will we know that a change is an improvement?
What change can we make that will result in improvement?
[Diagram: PDSA cycle: Plan, Do, Study, Act, with process and outcome measures]
Have others that have some knowledge about the change review and comment on its feasibility.
Test the change on the members of the team that helped develop it before introducing the change to others.
Conduct the test in one facility or office in the organization, or with one patient.
Conduct the test over a short time period.
Test the change on a small group of volunteers.
Changes That Result in Improvement
[Diagram: a ramp of repeated PDSA (A-P-S-D) cycles building from hunches, theories, and ideas toward changes that result in improvement]
Overall Aim: Decrease Falls Rate by 40% in 7 months
Supporting changes: develop an assessment protocol; develop knowledge of falls; develop an environmental assessment; develop specific interventions for fallers; staff and patient education.
Seek usefulness, not perfection.
Use sampling, e.g., 10 charts per week.
Don’t wait for the information system.
Report percentages and rates, not absolute numbers.
Take outcome measures at least 1 x/month.
Take process measures at least 2 x/month.
Plot data over time using run charts.
Percentage of:
◦ Patients at risk for falls and fall-related injuries with interventions in place
◦ Patients with completed intentional rounding
◦ Observation chart review
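The tips above recommend plotting data over time as run charts. A minimal sketch of the run-chart centerline calculation, using hypothetical monthly values for a rounding process measure (the function name and data are illustrative, not from the session):

```python
from statistics import median

def run_chart_summary(values):
    """Return the run-chart centerline (the median) and, for each
    point, whether it sits above, below, or on the centerline."""
    center = median(values)
    positions = [
        "above" if v > center else "below" if v < center else "on"
        for v in values
    ]
    return center, positions

# Hypothetical monthly values for a process measure such as
# "% of fall-risk patients receiving intentional rounding",
# sampled from roughly 10 charts per week.
monthly_pct = [62, 58, 65, 70, 74, 71, 80, 78, 85, 83, 88, 90]
center, positions = run_chart_summary(monthly_pct)  # center = 76.0
```

On a real run chart these points would be plotted over time with the median drawn as the centerline; a run of consecutive points on one side of the median is one common signal of nonrandom change.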
Process measures answer the question: “Are we doing the things we think will lead to improvement in outcome?”
Major measures (Fall Rate, Injury Rate) and balancing measures.
[Run chart: percentage of fall-risk patients receiving intentional rounding, plotted monthly, Jan 2011 - Sep 2012; y-axis 0-100%]
[Annotated run chart: Rate of Falls with Major Injury (# falls with major injury / BDOC x 1,000), plotted monthly, Jan 2011 - Sep 2012; annotation marks the increased rounding intervention]
What are we trying to accomplish?
How will we know that a change is an improvement?
What change can we make that will result in improvement?
[Diagram: PDSA cycle: Plan, Do, Study, Act]
Do you want to change a process, improve a practice, or test a product on your unit? (Program Evaluation/QI)
Do you want to test the strength of an association or the effectiveness of an intervention? (Research)
Process by which individuals work together to improve systems and processes with the intention to improve outcomes.*
*Committee on Assessing the System for Protecting Human Research Participants. Responsible Research: A Systems Approach to Protecting Research Participants. Washington, D.C.: The National Academies Press; 2002.
Characteristics of QI/program evaluation:
◦ Improve care for a specific population.
◦ Limited application (a clinical unit or clinic).
◦ Process of self-monitoring and self-assessment.
◦ Results are applied in an effort to improve a process or a practice.
◦ Trends are monitored using process improvement tools (run charts, standardized reports).
1. Problem: The problem should be of interest to the clinical staff, unit, and/or hospital. It should specify the population and the variables being studied, e.g., an increase in preventable falls (accidental and anticipated physiological).
2. Purpose: Determine whether standardized purposeful rounding can reduce preventable falls and is accepted by staff and patients. A quantitative component identifies the key variables, their possible interrelationships, and the nature of the population of interest. In a qualitative component, the statement of purpose indicates the nature of the inquiry, the key concept, and the group, community, or setting under study.
3. Research question/hypothesis: No specific research hypothesis or criteria for including staff or patients.
4. Intervention: Hand-off communication. Outcomes: preventable falls (quantitative); staff/patient acceptance.
5. Conceptual/theoretical framework: Planned change theory; small tests of change.
6. Literature review: What knowledge exists on the study topic? The literature review can help the researcher plan study methods and the instruments or tools used to measure the study variables.
7. Research design: The overall plan for gathering data in a research study. It is concerned with the type of data that will be collected and the means used to obtain the data. How will the practice change be implemented?
8. Population/sample/setting: Sample: not specific; the focus is on a process or practice. Sampling: convenience methods.
9. Instruments/tools: Checklists, surveys, and/or instruments that do not have established psychometric properties. The type used is determined by the data collection method(s) selected.
10. Data collection: The gathering of pieces of information or facts. What data will be collected? How will the data be collected, and who will collect them? Where will the data be collected? When will the data be collected? Data may be collected by observing, testing, measuring, questioning, recording, or a combination of methods.
11. Data analysis: Organize, reduce, and give meaning to the pieces of information. Involves the translation from common language or general research language to the language of statisticians. When the analysis is complete, the results of the process or practice change are evaluated.
12. Results and findings: The interpretations of results apply only to that setting and sample.
Data type is important for performance improvement analysis. Research and performance improvement form a continuum. The type of data determines the type of Statistical Process Control (SPC) chart.
Data or pieces of information should be:
◦ simple and understandable
◦ concise and precise
◦ compared (tables, benchmarks, targets)
◦ the basis for developing evidence-based practice
◦ used from research to improve practice
◦ collected, analyzed, and used with staff involvement
◦ used to make decisions, improvements, and take action
Benchmarking: a standard or reference against which something or some process can be measured or compared. It is used to compare the performance of providers or systems with a standard.
Managing data:
◦ Data inventory
◦ Verify data needed
◦ Assess duplication of data collected
◦ Determine use of data
Data for decision making should be:
◦ Complete
◦ Accurate
◦ Timely
◦ Balanced
◦ Trended over time
◦ Valid
◦ Reliable
◦ Prioritized
As a Magnet facility, we strive to achieve the best results and follow best practices in clinical and non-clinical situations. Statistics reveal how well we are reaching the mark. Performance improvement is a continuous process to eliminate barriers to quality care caused by system and process issues. The Nursing PI process is customer focused and contingent on teamwork. It uses data to compare and benchmark our performance before making changes in a process or practice.
Data types:
Qualitative
◦ Opinions/perceptions
◦ Themes
Quantitative
◦ Attribute (categorical): nominal, ordinal
◦ Variable (continuous): interval, ratio
[Diagram (Cook & Woods): patient, providers, and system of care: working at the “sharp” end]
Fall prevention:
◦ Organizational level: expert interdisciplinary fall team, program evaluation, leadership, environmental safety, safe patient equipment, anti-tippers on wheelchairs
◦ Unit level: education, communication/handoff, universal and population-based fall-prevention approaches
◦ Patient level: exercise, medication modification, orthostasis management, assistive mobility aids
Injury prevention:
◦ Organizational level: available helmets, hip protectors, floor mats, height-adjustable beds; elimination of sharp edges
◦ Staff level: education, adherence, communication/handoff that includes risk for injury
◦ Patient level: adherence with hip protector use, helmet use, etc.
Evaluation methods:
◦ Prevalence studies
◦ Formative and summative evaluation methods
◦ Type of falls
◦ Severity of injury: How are you assessing for injury? Duration? Extent of injury?
◦ Repeat falls
◦ Survival analysis
◦ Annotated run charts
If the focus is on falls, measure preventable falls. Otherwise, measure the effectiveness of interventions to mitigate or eliminate fall risk factors (remember the Oliver article, recommendations 2 and 3): the number (and type) of modifiable fall risk factors modified or eliminated upon discharge.
What are they?
◦ Accidental falls
◦ Anticipated physiological falls
◦ Unanticipated physiological falls
◦ What about others?
What is your formula for fall rates? Is your formula standardized? What is your formula for fall-related injury rates?
Fall Rate = (Number of Patient Falls x 1,000) / Number of Patient Bed Days
Injury Rate = (Number of Injuries x 100) / Number of Falls
Injury severity: low, moderate, severe, death. For multiple injuries, profile the number and severity of injuries.
As an example of injury rates: your facility has had 80 falls in the last month. Of the 80 falls, 5 resulted in a minor injury, 3 resulted in a major injury, such as a hip fracture, and the remainder resulted in no injury.
Minor Injury Rate = (5/80) x 100 = 6.25 per 100 falls (6.25%)
Major Injury Rate = (3/80) x 100 = 3.75 per 100 falls (3.75%)
To interpret these rates, you would communicate that 6.25% of the falls last month resulted in minor injuries and 3.75% resulted in major injuries.[i]
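The two rate formulas and the worked example above can be checked directly. A small sketch (the function names are illustrative, not from the session):

```python
def fall_rate_per_1000(falls, bed_days):
    """Fall rate = (number of patient falls x 1,000) / patient bed days."""
    return falls * 1000 / bed_days

def injury_rate_per_100(injuries, falls):
    """Injury rate = (number of injuries x 100) / number of falls."""
    return injuries * 100 / falls

# Worked example from the text: 80 falls, 5 minor and 3 major injuries.
minor_rate = injury_rate_per_100(5, 80)  # 6.25 per 100 falls
major_rate = injury_rate_per_100(3, 80)  # 3.75 per 100 falls
```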
[i] Department of Veterans Affairs, Veterans Health Administration (VHA) National Center for Patient Safety. (2004). Falls Toolkit. http://vaww.ncps.med.va.gov/FallsToolkit accessed 11/30/04.
How do you know that your program is working? How do you know that your staffing mix is appropriate? How do you compare your rates to those of other units?
◦ Repeat fallers
◦ Delays in repeat falls (time to occurrence)
◦ Diagnostic cohorts
◦ Known patients with fall-related injuries: a special-emphasis population
Post-fall huddle: conducted with the patient, where the fall occurred, within 15 minutes of the fall.
Post-fall analysis:
◦ What was different this time?
◦ When, how, why
◦ Prevention: protective action steps to redesign the plan of care
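Delays in repeat falls (time to occurrence) can be tracked by computing the gaps between successive events. A minimal sketch, using hypothetical event dates:

```python
from datetime import date

def days_between_events(event_dates):
    """Days between consecutive events (falls or serious injuries),
    a simple time-to-occurrence measure: widening gaps between
    events suggest improvement."""
    ordered = sorted(event_dates)
    return [(b - a).days for a, b in zip(ordered, ordered[1:])]

# Hypothetical serious-injury dates for one unit.
events = [date(2008, 1, 3), date(2008, 3, 10), date(2008, 5, 1)]
gaps = days_between_events(events)  # [67, 52]
```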
Post-fall huddle outcomes:
◦ Specify the root cause (proximal cause)
◦ Specify the type of fall
◦ Identify actions to prevent recurrence
◦ Changed plan of care
◦ Patient (and family) involved in learning about the fall occurrence
◦ Prevent repeat fall
Structures:
◦ Who attends (nursing and others): count them
◦ Changed plan of care: add actions to your run chart (annotated run chart; capture interventions)
Processes:
◦ Timeliness of post-fall huddle (number of minutes)
◦ Timeliness of changing the plan of care
◦ Time to implement the changed plan of care
Outcomes:
◦ Prevent repeat falls (same root cause and same type of fall)
◦ Reduce costs associated with falls and fall-related injuries
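The huddle-timeliness process measure above (minutes from fall to huddle, against the 15-minute target) can be summarized as a percentage. A sketch with hypothetical data:

```python
def huddle_timeliness(minutes_to_huddle, target_minutes=15):
    """Percent of post-fall huddles started within the target window
    (the session recommends huddling within 15 minutes of the fall)."""
    on_time = sum(1 for m in minutes_to_huddle if m <= target_minutes)
    return 100 * on_time / len(minutes_to_huddle)

# Hypothetical minutes-to-huddle for 8 falls in one month.
pct_on_time = huddle_timeliness([10, 12, 25, 8, 14, 40, 15, 9])  # 75.0
```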
What data are you collecting? Are your variables evidence-based? What about reliability and validity? What is the role of Fall Incident Reports? Process: ◦ Fall risk assessment defined by the organization (TJC.PC.01.02.08)
Outcomes:
◦ Data presented to demonstrate improvement as defined by the organization (TJC.PI.03.01.01)
Program elements: model of care delivery, skill mix, family-centered care, interdisciplinary care planning, post-fall reviews, integration of technology.
Display tools: run charts and control charts; annotate them to show your interventions.
[Control chart: Falls per 1,000 Patient Days, showing the fall rate, UCL, LCL, and national mean, plotted monthly, 2008-2009]
[Run chart: Fall Rate by Type of Fall per 1,000 Patient Days (anticipated, unanticipated, accidental, and intentional falls), plotted monthly, 2008-2009]
[Run chart: Days Between Serious Injury vs. the average, for events from Jan 3, 2008 to Nov 23, 2008; plotted values of 67, 54, 49, 51, 38, 37, 30, 30, and 18 days]
[Annotated run chart: JAH VAMC Med-Surg Falls with Harm by Quarter per 1,000 patient days (includes all harm categories: minimal, moderate, major, and death), Q1 2004 - Q2 2007. Annotated interventions include: staff education on falls (1/05 and 10/05); revised post-fall note (3-12/05); Call-Don't-Fall signs (6-9/05); updated falls HPM (9/05); education brochure on medications (11/05); computer template for Morse Fall Scale (12/03-12/05), embedded into admission and transfer notes (12/05); teach-back (piloted, then to all med-surg, 12/05); added handoff form (4/06); started AARs (safety and fall-risk huddles) on test units (10-12/06); red non-skid socks with patients (11/06); equipment purchased (alarms, mats, low beds) and high-risk falls precautions for patients with a history of falls, osteoporosis, Morse score > 50, anticoagulants, or low platelets (12/06); yellow wrist bands on high-fall-risk patients (1/07); comfort safety rounds on off-tours (evenings, nights, weekend days) (2-3/07)]
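Control charts like those above plot a center line with upper and lower control limits (UCL/LCL). For fall counts per 1,000 bed days, a u-chart is one common SPC choice; a sketch of 3-sigma u-chart limits, using hypothetical monthly counts (the function name and data are illustrative):

```python
from math import sqrt

def u_chart_limits(falls, bed_days, scale=1000):
    """Center line and per-month 3-sigma control limits for a u-chart
    of fall rates per `scale` bed days. Limits vary with each month's
    exposure (bed days); the lower limit is floored at zero."""
    u_bar = sum(falls) / sum(bed_days)  # pooled falls per bed day
    center = u_bar * scale
    limits = []
    for n in bed_days:
        sigma = sqrt(u_bar / n)
        lcl = max(0.0, (u_bar - 3 * sigma) * scale)
        ucl = (u_bar + 3 * sigma) * scale
        limits.append((lcl, ucl))
    return center, limits

# Hypothetical monthly falls and patient bed days.
center, limits = u_chart_limits([12, 9, 15, 7], [2100, 1950, 2230, 2000])
```

Months whose plotted rate falls outside their (LCL, UCL) band would be flagged as special-cause variation.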
Figure 2. Control Chart, Pilot Unit
[Control chart: nursing home fall rate, falls per 1,000 BDOC, pilot unit, Nov 2001 - May 2004, across implementation, assessment, and summary phases; the average monthly fall rate decreased 54%, from 9.75 to 4.49]
What are strategies to increase staff knowledge of their success? Is your program improving? Are your patients safer? Are your patients confident?
◦ Fall self-efficacy
◦ Patient satisfaction
◦ Family satisfaction
What is needed for prevalence studies to profile your population's fall risks?
Questions? Comments? Sharing…. Thank you!
Leadership: Culture of Safety
Fall Rounds
Signage
Frequency of Fall Risk Screening
Measurements of Effectiveness
Questions?