Human Factors to Advance Patient Safety


Human Factors to Advance Patient Safety White Paper for Intermountain-led HEN Hospital Improvement

Frank A. Drews, Ph.D., Director, Center for Human Factors in Patient Safety, Salt Lake City VA Medical Center

Associate Professor, Department of Psychology, University of Utah

Prepared in response to the needs of our HEN hospitals.

Supported by CMS Contract # HHSM-500-2012-0024C, December 2012


1. Human Factors and Patient Safety

Human Factors is an applied science that originated out of the need to address an increasing misfit between human operators and newly evolving technologies at the beginning of the 20th century.

At that time, frequent close calls and accidents during the operation of new technological systems raised awareness that a lack of fit between technology and operator was one contributor to these interaction breakdowns. To address this issue, engineers and psychologists began focusing on the delicate interplay between human operators and the technologies that were being developed and used at the time. Over the following decades, significant advances in our understanding of human-system interaction led to increases in safety and reductions in accidents in many domains, with aviation being the poster child.

A frequently used definition describes Human Factors as “That field involving research into human psychological, social, physical and biological characteristics, maintaining the information obtained from that research, and working to apply that information with respect to the design, operation or use of products or systems for optimizing human performance, health, safety and / or habitability”.

Another definition, provided by the International Ergonomics Association, is: “Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance” (www.iea.cc).

Figure 1. Central components of a human-machine system.

At the core of Human Factors contributions is an analysis of four central components that, when interacting with each other, play an important role in how effectively a system as a whole operates.

The elements of a human-machine system are human operators, with their specific skills, abilities and knowledge, and the software and hardware components that are part of the system, with each of these elements having inherent properties and vulnerabilities (see Figure 1). Finally, the interaction between these elements occurs in an environment that also has properties that can facilitate or impede performance. It is important to note that the environment is a significant contributor to how the interaction between these core elements unfolds.

Human Factors problems are commonly addressed at different levels of analysis: physical ergonomics, cognitive ergonomics, and organizational ergonomics. Each of these levels is described in the following sections. In addition, these levels provide the analytical framework applied here to show how Human Factors can be used to improve Patient Safety.

There is no generally accepted definition of what constitutes Patient Safety. For our purpose we will use the following definition of Patient Safety to identify how Human Factors can improve Patient Safety (Emanuel, Berwick et al., 2008): “Patient safety is a discipline in the Health Care sector that applies safety science methods toward the goal of achieving a trustworthy system of Health Care delivery. Patient safety is also an attribute of Health Care systems; it minimizes the incidence and impact of, and maximizes recovery from, adverse events.”

In the next sections we will discuss the different levels of Human Factors analysis and how they can contribute to improving Patient Safety.

2. Physical Ergonomics in Patient Safety

When examining the Human Factors of physical space in hospital settings, an interesting paradox emerges: the hospital is “a life-sustaining, healing environment that paradoxically contains the noxious stimuli of noise, bright lights, and frequent interruptions” (Fontaine, 2001, p. 22).

In their review of the hospital design literature, Ulrich, Quan, Zimring et al. (2004) provide a comprehensive summary. A more recent review of physical ergonomics in Health Care can be found in Alvarado (2012).

Important in this context is that it has become clear over the last couple of decades that the physical environment of healthcare facilities influences patient satisfaction as well as clinical outcomes such as pain, the likelihood of post-surgical organic delirium, and the length of the post-surgical stay (Ulrich, 1984; Ulrich, 1991; Wilson, 1992). Patient stress often results from a perceived lack of privacy and control, and from noise and crowding (Schmaker and Pequegnat, 1989). Horsburgh (1995) discusses design principles that may create an environment that is psychologically supportive, supports healing, and reduces stress for Health Care workers. In addition, an environment that reduces Health Care worker stress supports high levels of Patient Safety, since high levels of stress have a negative impact on performance. In the next sections we will focus on a number of physical ergonomics factors that impact Patient Safety.

2.1 Noise

One of the most prevalent stressors in the hospital environment is noise. The WHO recommends that the noise level in hospitals should not exceed 30 dB(A) and that peaks during night time should not exceed 40 dB(A) (Berglund et al., 1999). However, several studies indicate that noise in the ICU frequently exceeds these values, with average noise levels in the 60 to 70 dB(A) range and peak sound pressure levels of 90 dB(A). In a hospital-wide study at Johns Hopkins Hospital, Busch-Vishniac et al. (2005) found that none of the locations studied was in compliance with the WHO guideline. One source of noise is the growing number of devices in the hospital, some with measured noise levels in excess of 100 dB(A). Another source of noise in the hospital is the omnipresence of device alarms.

In the ICU almost all devices are equipped with alarm systems that notify staff about potential problems or dangers for the patient. One of the challenges clinicians face is that many of these alarms are either false or clinically insignificant. Estimates are that between 80% and 90% of all alarms in the ICU are false alarms, and that only 3% of alarms in the OR communicated an actual risk to the patient (Kestin et al., 1988).
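To make the practical consequence of these base rates concrete, the brief sketch below works through the arithmetic. The number of alarms heard per shift is an assumed, hypothetical figure; only the 80-90% false-alarm estimate comes from the literature cited above.

```python
# Hypothetical illustration of alarm base rates. The alarms-per-shift figure is an
# assumption for illustration; only the false-alarm proportions (80-90%) come from
# the estimates cited above.
def actionable_alarms(alarms_per_shift: int, false_alarm_rate: float) -> float:
    """Expected number of alarms per shift that signal a clinically meaningful problem."""
    return alarms_per_shift * (1.0 - false_alarm_rate)

for rate in (0.80, 0.90):
    n = actionable_alarms(alarms_per_shift=100, false_alarm_rate=rate)
    print(f"False-alarm rate {rate:.0%}: ~{n:.0f} of 100 alarms per shift require action")
```

Under these assumptions, staff would have to attend to roughly ten times as many alarms as actually require action, which helps explain why alarms come to be ignored.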

The omnipresence of auditory alarms has a negative impact on staff (Tsien and Fackler, 1997), resulting in nurses often ignoring alarms (Lawless, 1994) and thus running the risk of missing valid alarms.

Beyond the fact that the delivery of care is affected by high noise levels due to alarms, there are other impacts on Health Care workers. For example, there have been reports of increased levels of stress and frustration due to exposure to high noise levels (Cropp, Woods, Raney, et al., 1994). In addition, noise has the potential to mask critical auditory information in the background, thus interfering with communication. Noise can also have a negative impact on other cognitive functions such as task performance, concentration, and problem solving (Morrison et al., 2003).

Overall, noise is omnipresent in the hospital setting and poses a significant challenge to Patient Safety, be it via direct or indirect pathways.

2.2 Unit Layout

Another factor that has an impact on Patient Safety is the spatial layout of units. Next, we will discuss some of the progress that has been made in the context of ICU unit layout to illustrate the importance of this aspect of physical ergonomics.

Rashid (2006) pointed out that “the layout of an ICU is arguably the most important design feature affecting all aspects of intensive care services including patient privacy, comfort and safety, staff working conditions and family integration”. The current design of ICU space often poses a challenge because the space was designed many decades ago and consequently does not accommodate all of the technology that is being used today. As a result, many ICUs do not provide the recommended minimum space, with potentially negative impact on Patient Safety. For example, the review of the 2003 SARS outbreak in Canada partly attributes the outbreak to unit designs that did not meet basic infection control guidelines (Naylor et al., 2004). Conversely, there is evidence that single patient rooms can reduce the likelihood of patients acquiring infections (Teltsch et al., 2011).

Arrangement of physical space can be guided by recommendations from Sanders and McCormick (1993). Among these recommended principles are (1) the importance principle, which states that important components need to be placed in convenient locations so that they are easily available to the user; (2) the frequency of use principle, reflecting the fact that frequently used components should be placed in convenient locations; (3) the function principle, which states that components of a system should be grouped in clusters organized by the function they serve; and (4) the sequence of use principle, stating that items should be provided in the sequence in which they are used while performing the task (Drews, in press).
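As a rough illustration of how the importance and frequency-of-use principles could be operationalized when planning a unit layout, the sketch below ranks components by a weighted combination of the two ratings so that the highest-ranked items are assigned the most convenient locations. The component names, ratings, and weights are hypothetical and are not taken from Sanders and McCormick (1993).

```python
# Hypothetical sketch: rank components for convenient placement using the importance
# and frequency-of-use principles. Ratings and weights are assumed for illustration;
# Sanders and McCormick (1993) do not prescribe this particular formula.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    importance: float     # 0-1: consequence of delayed access to the component
    use_frequency: float  # 0-1: how often the component is needed

def placement_priority(c: Component, w_importance: float = 0.6, w_frequency: float = 0.4) -> float:
    """Higher score -> place in a more convenient (closer, easier-to-reach) location."""
    return w_importance * c.importance + w_frequency * c.use_frequency

components = [
    Component("suction setup", importance=0.9, use_frequency=0.3),
    Component("infusion pump controls", importance=0.7, use_frequency=0.8),
    Component("linen storage", importance=0.2, use_frequency=0.5),
]

for c in sorted(components, key=placement_priority, reverse=True):
    print(f"{c.name}: priority {placement_priority(c):.2f}")
```

The function and sequence-of-use principles would enter as additional constraints, e.g., grouping components that serve the same function and ordering items along the sequence in which a procedure uses them.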

2.3 Recommendations for Spatial Layout

Several critical care organizations provide recommendations on how to create a positive environment in the ICU that supports patient recovery, and these principles are compatible with good Human Factors practice. For example, the Canadian Association of Critical Care Nurses outlines a number of factors that can contribute to improved patient outcomes (Rashid, 2006; Schmalenberg & Kramer, 2007).

Among the recommended factors are the following:

• A spatial design of patient rooms that allows for the constant ability to observe and monitor critically ill patients, while also providing sufficient privacy for the patient and family (Rashid, 2006; Schmalenberg & Kramer, 2007; Surrey Memorial Hospital, 2006; White, 2006; Williams & Wilkins, 1995).

• Adequate spacing in patient rooms to allow the presence of equipment and the ability to perform procedures at the bedside, while also accommodating family presence at the bedside (Rashid, 2006; Surrey Memorial Hospital, 2006; Williams & Wilkins, 1995).

• A design of space that accommodates the currently used equipment but also provides the capability to accommodate advanced monitoring and therapy equipment in the future (Brown & Gallant, 2006; Schmalenberg & Kramer, 2007; Surrey Memorial Hospital, 2006; Williams & Wilkins, 1995).

• Information technology that is highly integrated (see discussion above) so that all information is available at the bedside, providing access to laboratory, pharmacy, diagnostic imaging, health records, and other departments/services (Lapinsky, Holt, Hallett, Abdolell, & Adhikari, 2008; Surrey Memorial Hospital, 2006; White, 2006; Williams & Wilkins, 1995).

• Equipment and supplies organized to ensure patient and caregiver safety and quick and easy accessibility (Gurses & Carayon, 2007; Rashid, 2006; Rosenberg & Moss, 2004; Runy, 2004; Surrey Memorial Hospital, 2006; White, 2006; Williams & Wilkins, 1995).

• The ability to provide care for patients who are in isolation for infection control (Rashid, 2006; Rosenberg & Moss, 2004; Surrey Memorial Hospital, 2006; White, 2006).

• Patient rooms designed in such a way that they lead to noise reduction, privacy, improved sleep quality, and a reduction in nosocomial infection rates (Brown & Gallant, 2006; Gurses & Carayon, 2007; Rashid, 2006; Surrey Memorial Hospital, 2006).

• Natural light to facilitate well-being for patients, family members, and staff (Rashid, 2006; Surrey Memorial Hospital, 2006; White, 2006).

• Waste disposal that minimizes the risk of exposure to contaminants (Rashid, 2006; Surrey Memorial Hospital, 2006).

• Minimization of traffic flow past individual patient care areas (White, 2006).

Above, only a small number of topics related to physical ergonomics in Patient Safety was discussed. These examples serve as an illustration of how applying Human Factors can help change the physical properties of the health care environment, and how this in turn can positively affect Patient Safety.

3. Cognitive Ergonomics in Patient Safety

Next we will focus on cognitive ergonomics and its contributions to Patient Safety. The focus of this discussion will be Human Error, since Human Error is a significant contributor to Patient Safety challenges.

3.1 Human Error

The individual contribution to human error

For healthcare, the most comprehensive definition of Human Error is provided by the revised definition of medical error based on the first IOM report and expanded by the Quality Interagency Coordination Task Force: “An error is defined as the failure of a planned action to be completed as intended, or the use of a wrong plan to achieve an aim. Errors can include problems in practice, products, procedures, and systems.” (Kohn, Corrigan, Donaldson, et al., 1999)

The purpose of models of Human Error is to provide insight into the psychological and organizational contributors that lead to near misses and adverse events in many contexts. Over the last two decades a literature has emerged that analyzes Human Error in a wide range of contexts (Wiegmann and Shappell, 2003; Helmreich, 1997; Lee, Kim, Kim, Kim, Chung, and Jung, 2004; Wagenaar, Groeneweg, Hudson, and Reason, 1994). Overall, work from a range of domains has led to a better understanding of how and under which conditions error occurs. One of the advantages of the existing literature on Human Error in non-Health Care domains is that a significant portion of it can be applied to the Health Care context, despite the fact that the types of systems being “operated” differ in some important aspects (Durso and Drews, 2010).

One perspective on Human Error proposes that error is a result of limitations of the human cognitive architecture (Reason, 1990) that, in conjunction with contextual factors (operationalized as the environment in Figure 1), can result in adverse events. Following Rasmussen (Rasmussen and Jensen, 1974), it is possible to distinguish levels of human performance that correspond to different cognitive control modes of behavior. According to Rasmussen and Jensen (1974), human performance can be described on three levels: skill-based, rule-based, or knowledge-based.

Skill-based behavior is highly automatic and learned. It is driven by patterns that were acquired earlier as part of the process of developing expertise. A person acting at this level is not required to allocate significant amounts of attention to the execution of a task. Because behavior at the skill-based level can be automated, attention can either shift occasionally to another task or be completely allocated to such an additional task. The withdrawal of attention from the skill-based task may result in error when changes in the environment or changes in the tools used occur. For skill-based performance to be successful, it is important that the operator, at least occasionally, monitors the progress of the behavior to keep it on track.

Rule-based performance is present when a person deals with a familiar problem for which a solution is available in the form of a stored production rule (if X then Y) that was learned during the development of expertise. Here a person is required to allocate attention to the task to recognize the current conditions and to match these conditions, via an association, to the conditional statement of a production rule. In case of such a match, a particular rule is applied and an action is executed. The cognitive demand involved at this level of performance is higher than the demand when performing at the skill-based level. Error in this context can result from mismatching the conditional elements of a rule to a situation.

Finally, knowledge-based behavior occurs when a person deals with a new and unfamiliar problem. Performance at this level requires undivided attention, and the cognitive processes involved in dealing with this type of problem are slow and sequential. Error at this level is a result of a lack of comprehension of a system or a problem. Reason (1990) applies this classification of human behavior to develop a framework that integrates different literatures on Human Error. The result of this work is the Generic Error Modeling System.

The Generic Error Modeling System (GEMS) distinguishes three stages of cognitive processing (planning, storage, and execution) and three levels of control, which vary according to the intensity of cognitive effort (automatic, mixed, and effortful). A combination of stages and levels of cognitive control leads to different modes that result in behavior being skill-based, rule-based, or knowledge-based. The skill-based mode corresponds to habitual activities that are nearly effortless with respect to cognitive load; the rule-based mode depends on pattern-matching against a set of internal problem-solving rules; the knowledge-based mode applies to novel, difficult, dangerous, or critical problems, or when automatic control has to be overridden to prevent performance of a task in the habitual way. Thus, the level of control depends on the complexity and novelty of the task. For a given task, humans typically gravitate toward the lowest mode possible to minimize cognitive effort (Fiske & Taylor, 1991).

The GEMS model makes the prediction that the same cognitive processes that govern everyday activities influence the occurrence of error. Each level and stage of cognitive processing corresponds to a different type of error. Slips are the result of errors at the execution phase of a skill-based activity, whereas lapses are skill-based errors at the storage stage. An example of a slip is writing a routine medication refill for digoxin but inadvertently switching the “qod” frequency to “qd.” Mistakes are categorized as errors of rule- or knowledge-based modes of control. An example of a mistake in Health Care would be the failure to suspect a retropharyngeal abscess in an adult who presents with painful swallowing and fever because the wrong diagnostic evaluation rule, “rule out streptococcal pharyngitis,” is applied.
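The mapping at the heart of this taxonomy can be summarized compactly: the control mode and processing stage at which a failure occurs determine whether it is a slip, a lapse, or a mistake. The sketch below is an illustrative encoding of that mapping; the function and its names are our own and are not part of Reason's (1990) formulation.

```python
# Illustrative encoding of the GEMS error taxonomy described above. The function and
# its names are a sketch of our own, not part of Reason's (1990) formulation.
def classify_error(control_mode: str, stage: str) -> str:
    """Map a (control mode, processing stage) pair to a GEMS error type."""
    if control_mode == "skill-based" and stage == "execution":
        return "slip"    # e.g., writing "qd" when "qod" was intended
    if control_mode == "skill-based" and stage == "storage":
        return "lapse"   # e.g., forgetting a step in a routine task
    if control_mode in ("rule-based", "knowledge-based"):
        return "mistake" # e.g., applying the wrong diagnostic evaluation rule
    raise ValueError(f"combination not covered by this sketch: {control_mode}, {stage}")

print(classify_error("skill-based", "execution"))  # slip
print(classify_error("rule-based", "planning"))    # mistake
```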

What is important in this context is to understand that all of these types of error constitute instances of active failures. Active failures are occasions where performance at the “sharp end” breaks down; they occur where human operators interact with a system.

Latent conditions

Latent conditions are factors that are present in organizations for long periods of time and that are important contributors to Human Error. However, their contribution is less apparent than the contribution of active failures. Good examples of devices with latent conditions in Health Care are provided by Lin, Isla, Doniz, Harkness, Vicente, and Doyle (1998) for an infusion pump and by Leveson and Turner (1993) for a dual-mode linear accelerator. Relevant in this context is that latent conditions may be present for many years before, in conjunction with active failures, a loss occurs. Latent conditions exist as a result of high-level decisions in an organization and the contributions of regulators, manufacturers, designers, and organizational managers. Latent conditions are present in all systems and organizations, and their existence is unavoidable.

Error-producing conditions

The goal of including error-producing conditions (EPC) in a framework of Human Error is to help identify conditions under which the occurrence of error is more likely and, by removing those conditions, to implement measures that reduce the likelihood of error. This approach requires the identification of precursors, or EPC, that contribute to the creation of hazards. Adopting a framework based on Williams’s (1988) work on the Human Error Assessment and Reduction Technique (HEART), the most important EPC are the following: unfamiliarity with a situation, time pressure in error detection, a low signal-to-noise ratio, a mismatch between an operator's mental model and that imagined by a device designer, impoverished quality of information, ambiguity in performance standards, disruption of normal work-sleep cycles, and unreliable instrumentation. The foundation of these EPC is Williams’s comprehensive review of the Human Factors literature on performance-shaping factors.

Unfortunately, this work has not yet received the attention it deserves in the context of Health Care. However, to provide some evidence for the benefit of this approach as an analytical tool to identify problems in the Health Care context, we will discuss one Health Care application of this approach.

Drews et al. (2007) studied the prevalence of EPC associated with specific devices in the ICU. The selection of devices in this study was driven by their function (therapeutic or monitoring devices) and their criticality for Patient Safety (Samore et al., 2004).

To identify the impact of EPC, the authors developed a questionnaire that included 121 items grouped according to general EPC and device-specific EPC. The instrument was administered to 25 experienced ICU nurses who were asked to rate their level of agreement with general statements (“You always can tell whether something is malfunctioning or just difficult to use”) and device-specific statements concerning EPC (“You feel you have received adequate training to use [device]”). Ratings were obtained on a scale from 1 to 9, with lower scores indicating higher agreement with the statement (1 = “strongly agree”; 9 = “strongly disagree”). Next, the average ratings were aggregated for individual EPC. The results indicate that a low signal-to-noise ratio, unreliable instrumentation, a mismatch between a nurse’s mental model and the mental model of the designer of a device, and shortage of time rank high in their importance as EPC. These rankings can be contrasted with the EPC ambiguity in performance standards, which does not seem to be perceived as an important contributor to conditions that increase the likelihood of error.
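A minimal sketch of the kind of aggregation described above is shown below: item-level agreement ratings (1 = strongly agree, 9 = strongly disagree) are grouped by the EPC they probe and averaged per EPC. The items, groupings, and numbers are invented for illustration and do not reproduce the Drews et al. (2007) instrument or its data.

```python
# Minimal sketch of aggregating questionnaire items per error-producing condition (EPC).
# Items, groupings, and ratings are invented for illustration; this is not the
# instrument or the data from Drews et al. (2007).
from collections import defaultdict
from statistics import mean

# Each tuple: (EPC category, ratings from several nurses on a 1-9 scale,
#              where 1 = "strongly agree" and 9 = "strongly disagree").
responses = [
    ("low signal-to-noise ratio", [3, 2, 4, 3]),
    ("low signal-to-noise ratio", [2, 3, 3, 2]),
    ("unreliable instrumentation", [4, 3, 5, 4]),
    ("ambiguity in performance standards", [7, 6, 8, 7]),
]

ratings_by_epc = defaultdict(list)
for epc, item_ratings in responses:
    ratings_by_epc[epc].extend(item_ratings)

# Lower mean = stronger agreement with the item statements; how that maps onto the
# importance of an EPC depends on how the individual items are worded.
for epc, ratings in sorted(ratings_by_epc.items(), key=lambda kv: mean(kv[1])):
    print(f"{epc}: mean rating {mean(ratings):.2f} (n={len(ratings)})")
```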

Violations

Violations are often considered the intentional or erroneous deviation from a protocol, procedure, or rule (Reason, 1997). A violation can be defined as a deviation from a safe operating procedure, standard, or rule (Reason, Parker, and Free, 1994). Both organizational and individual factors contribute to the occurrence of violations. Often violations are described as behavior that is deliberate but non-malevolent (Reason, 1990). Reason (1990) distinguished between routine, optimizing, and necessary violations. Others identified additional types of violations, e.g., situational violations (Lawton, 1998). Runciman, Merry, and Walton (2007) distinguish between four types of violations: routine violations, corporate violations, exceptional violations, and necessary violations.

Routine violations occur when a person perceives an alternative, more efficient way of dealing with a task than that required by a policy or protocol. As a result of this behavior, safety is sacrificed. Often, external pressures reinforce routine violations, which are then repeated on a routine basis. Corporate violations result from decisions at the administrative level that create a situation that supports the violation of procedures (e.g., excessive working hours). Exceptional violations occur in unusual or exceptional circumstances where a routine cannot be followed and an exceptional response is required. What makes these violations problematic is that they are risky because of their deviation from routine and may not produce the desired outcome; violations like these occur under novel conditions. Finally, optimizing violations are those violations where additional motives are involved that go beyond task-specific considerations and where these motives supersede the primary motivation to engage in a task.

Violations in health care

In Health Care, the types of violations outlined above can be observed in many contexts. For example, a very common routine violation for Health Care workers is to not sanitize their hands when entering a patient room. Violations like this might be partly driven by high levels of time pressure.

Optimizing violations in the Health Care context are violations where the maximization of a personal goal leads to the administration of a procedure that is not indicated. For example, a provider who performs a surgical procedure because the procedure is well compensated, when another, more qualified provider could also perform the procedure, commits an optimizing violation. Nurses and doctors who omit steps in a procedure because of tight scheduling of patients and limited available time are committing necessary violations. For example, a clinician who turns his back to a sterile field during a central line insertion is, according to protocol, required to reestablish the sterile field by re-draping the patient. However, under time pressure, the clinician may not re-drape the patient to reestablish sterility and therefore commits a necessary violation. An example of a corporate violation in Health Care is chronic understaffing of a unit, so that nurses working on the unit provide care to more than the optimal number of patients, with predictable negative consequences.

Violation Producing Conditions

In addition to EPC, Williams (1997) identified a number of violation producing conditions (“VPC,” see also Croskerry, 2005). In the presence of a VPC, individuals are more likely to violate procedures or protocols at the workplace. Among the conditions that Williams identified are a perceived low probability of detection, the inconvenience of performing a procedure according to protocol, the authority to violate a procedure, copying the behavior of others who are violating procedures, the absence of a disapproving authority figure, the perceived requirement to obey an authority figure, being male, and group pressure to violate existing procedures. VPC have been identified in a number of contexts, such as plant management (Rasmussen and Petersen, 1999) and traffic violations (Jason and Liotta, 1982; Van Elslande and Fouquet, 2007). Croskerry and Chisholm (2005) applied the concept of VPC to the Health Care context (see also Croskerry and Wears, 2003).

VPC in health care

Patterson et al. (2006) report the findings of a study that examined bar code medication administration in clinical settings. They observed a total of 28 nurses during the process of medication administration with the goal of identifying workaround strategies related to problems with the barcode scanning method. The workarounds they identified can be classified as routine violations, since they involve disabling the technology-based defenses that were put in place to reduce the likelihood of misadministration of drugs to patients. Identified workarounds in the context of patient identification included entering patient information (the Social Security number) by hand instead of scanning the patient’s wristband, and scanning surrogate wristbands that were not located on the patient. One of the many interesting findings of this study was that in an acute care setting only 53% of the patient identification strategies were in compliance with the protocol for drug administration.

Overall, the findings of this study indicate that violations of procedures are frequent in the context of patient identification and medication administration when using bar code technology.

However, the context in which these tasks are performed plays an important role. The authors express skepticism about the effectiveness of sanctions, training, or policies and procedures aimed at increasing compliance. The main reason for this skepticism is that the observed nurses stated that the high time pressure present in the unit stood in the way of complying with the procedures. Similar findings on violations in the context of drug administration are reported by Koppel, Wetterneck, Telles, and Karsh (2008).

Hazards

The result of an unsafe act is a hazard. A hazard increases the likelihood of an adverse event as an outcome of an action. The difference between hazards and adverse events is that an adverse event is the realization of the potential for harm that is associated with a hazard. This transition from potential into reality can have several contributors. One contributor is that the defenses that are normally in place to protect patients and Health Care workers are not effective. Another is that the defenses are intentionally disabled. Finally, it is possible that the conditions that lead to a particular hazard were not foreseen, and defenses were not available. Thus, the lack of effective defenses in conjunction with an unsafe act can lead to an adverse event.

Defenses

Even with potentially high rates of Human Error in Health Care, not all errors result in patient harm. This is due to the fact that in Health Care defenses are in place to protect patients from harm. Defenses are protective measures put in place to reduce the likelihood of negative outcomes resulting from an unsafe act. Defenses are designed to serve one or more functions. These functions may aim at creating a better understanding and awareness of the hazards present and/or at providing guidance by having people follow protocols or checklists. Defenses can also provide alarms when dangers are imminent by utilizing direct feedback, although this might be difficult or impossible in some contexts. Two types of defenses that serve the above functions can be distinguished: hard and soft defenses (Reason, 1997). Hard defenses include technical devices (e.g., alarms on patient monitors, ventilators, infusion pumps), whereas soft defenses rely on people and organizational factors (e.g., procedures, training, licensing).

Adverse events

There are a number of important differences in terminology between Health Care and other industries when describing and analyzing Human Error. These differences in terminology point toward important domain-specific differences that need to be taken into account when investigating Human Error in Health Care.

In industries other than Health Care, the terminology to describe the consequences of Human Error frequently uses the term “losses” to describe the endpoint of error. In Health Care, “losses” are described as events that are related to patient injury. However, since most of the work focuses on patient-related losses, the terms “iatrogenic injury” or “adverse event” are used commonly. Often an adverse event is defined as any injury to a patient due to medical management as opposed to some underlying disease (Rothschild, Landrigan, Cronin, et al., 2005). Accordingly, a non-preventable adverse event is an unavoidable injury despite appropriate medical care, whereas a preventable adverse event is an injury due to a non-intercepted serious medical error, i.e., a failure of defenses in conjunction with an unsafe act.

3.2 An Integrative Model of Human Error in Health Care

Figure 2 illustrates a theory of Human Error in Health Care that incorporates the differences outlined above. Behavior-shaping internal and situational influences are represented as arrows that affect an action by pulling it toward the outcome of an unsafe act or of successful behavior. The overall forces present in professional contexts are usually asymmetrical, i.e., they favor safe acts. However, since this asymmetry also reflects the organization having learned how to structure particular acts, innovation and newly required types of acts may initially, due to a lack of experience, favor safe acts less than unsafe acts. Finally, there are strong compensatory mechanisms in place that support accomplishing successful behavior.

Figure 2. An integrative model of Human Error.

4. Organizational Ergonomics in Patient Safety

In the next sections we will describe two areas of organizational ergonomics and how they relate to Patient Safety. The first area discussed is teamwork in Health Care; the second is safety culture in Health Care.

4.1 Teamwork

Among the most important communication events that facilitate teamwork in Health Care are handoffs and rounds. The goal of rounds and handovers is to transfer patient information across successive shifts, to facilitate a smooth transfer of authority, and to assure a clear assignment of responsibilities to providers (Kripalani et al., 2007; Miller et al., 2009).

Handoffs (also called handover, change of shift report, inter-shift report, or sign-out) occur during shift changes between nurses, or while transferring patients from one context to another. A definition of handoffs is provided by Abraham, Kannampallil, and Patel (2012), who describe handoffs as events that “… refer to the transfer of care from one clinician to the next and involve a transfer of information, responsibility, and authority for patient care” (p. 240). Handoffs have been studied using different indicators of quality and breakdowns, for example by investigating sentinel events (Croteau, 2005), critical incidents (Pezzolesi et al., 2010), and errors and near misses (Ebright et al., 2004).

The goal of a smooth handoff is to transfer patient-relevant information that allows the responsible nurse to make appropriate clinical decisions and to develop a work plan for the upcoming shift, which in turn allows prioritization of the tasks that need to be performed. Finally, a well-performed handoff also supports nurses in developing a mutual understanding of the patient’s condition and needs, which affects clinical decision making.

During the handoff there is an expectation that information is communicated that provides the larger context in which patient care is being provided. This is particularly important when a nurse comes on shift and is not familiar with a particular patient. Other information included in the handoff is the plan of care for a patient. This plan includes physician orders, a summary of important events that occurred during the previous shift, and tasks that need to be performed during the upcoming shift. The majority of information exchanged during handoffs consists of an overview of current patient status, including a summary of the patient's momentary clinical status and the progression of this status during the last shift (Johnson, Jefferies, & Nicholls, 2012; Lamond, 2000; Mayor, Bangerter, & Aribot, 2012; Staggers et al., 2012). Finally, patient-related safety information (e.g., code status, allergies, etc.) is also communicated during the handoff (Collins et al., 2012; Staggers et al., 2012). In the context of recent literature on resilience engineering, Patterson and Wears (2010) discuss handoffs as a corrective measure that increases the resilience of patient care.

The above summary of information that needs to be considered during the handoff represents the optimal case. In the clinical context, the handoff is performed relatively quickly, since it occurs at the end of one nurse’s shift and the beginning of another’s. Problems with handovers often result from the omission of information, with consequences ranging from failure to treat problems (Gandhi, 2005) to adverse patient events (Donchin et al., 1995; Nast et al., 2005) with significant impact on Patient Safety.

The need for improvements in handoff performance was underlined by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), which made improving the effectiveness of handoff communication one of its Patient Safety improvement goals. One suggestion of JCAHO was to standardize the handover process (Consultants, 2005).

Other work indicates the importance and effectiveness of interdisciplinary handovers (see Miller et al., 2009) and rounds, with improvements on numerous measures such as quality of care, reduction in cost, length of patient stay, and variability of care between providers (Halm et al., 2003; O’Mahony, Mazur, Charney, Wang, & Fine, 2007).

Rounds in the ICU provide the Health Care team with a formal opportunity to discuss patient care on a daily basis. Elements of a round are: review of a patient’s progress, discussion of evidence in favor or against the current diagnosis or diagnoses, discussion of successful and failed treatments, and planning of patient care.

It is common for rounds to occur in the morning, but delays or rescheduling might be required due to conflicting needs. The location of rounds can vary; for example, rounds may be held in the patient's room or at other locations without the patient being present. Often, rounds are intensivist-led, and among the round team members are ICU attending physicians and fellows, anesthesia and surgical residents, nurse practitioners, nurses, and a pharmacist. Patient visits during a round usually take between 15 and 25 minutes.

Pronovost et al. (2003) aimed at reducing miscommunication during rounds in the ICU by introducing a daily goals form for patients. The initial motivation for the design of a daily goals form was the insight that rounds often failed to develop explicit patient goals: discussions frequently involved physiology, pharmacology, and other aspects of the patient, but failed to produce clearly articulated plans. The daily goals form was intended to create a clear route toward the formulation of explicit patient goals. Among many other items, it included questions related to the activities that needed to be performed to allow discharge of the patient from the ICU, Patient Safety risks, and interventions to reduce these risks. The prospective cohort study was conducted in a 16-bed surgical ICU over an approximately 12-month period. As a result of the implementation of the daily goals form, residents' and nurses' understanding of the goals of care improved from initially 10% to more than 95%. In addition, the average length of stay of patients decreased from 2.2 days before to 1.1 days after implementation of the form.

Dodek and Raboud (2003) evaluated an intervention with the goal of improving the assignment of responsibilities and reporting during bedside rounds, because an initial analysis had identified inconsistencies, repetition, and omission of important information during daily bedside rounds in the ICU. The intervention involved the design of a flowchart that represented an ideal round, aiming to "maximize the appropriateness, effectiveness, and efficiency of care for our patients and their families, and to provide an optimal learning environment" (p. 1585). The impact of the intervention was measured by conducting pre- and post-intervention surveys of the Health Care workers involved in the round. After the implementation of the explicit approach, an increase in agreement was observed with regard to having a long-term plan for patients in place and having this plan articulated clearly. Overall, the results of this study support the idea that a clear and structured approach that guides communication during a round can improve the effectiveness and dissemination of knowledge of the goals of patient care.

4.2 Safety Culture in Health Care

Over the last decades, organizational factors have been identified as important in the context of the safe operation of human-machine systems, as outlined in the above section on Human Error. Reason (1993) developed a framework that includes organizational climate as one contributor to the creation of latent conditions that increase the likelihood of Human Error and violations in organizations.

The concept of safety culture originates in analyses of the Chernobyl nuclear power accident, concluding that the lack of a safety culture was one important contributor to this disaster (IAEA, 1991).

Definitions of what constitutes safety culture include several elements (see ACSNI, 1993 for the most commonly used safety culture definition): values, attitudes, perceptions, competencies, and behaviors among the organization's members. A high level of safety culture finds its expression in the organization members' belief that safety is important and can be managed. In addition, personal relationships are based on mutual trust among employees.

Safety culture in Health Care has only recently become a focus of research. However, several surveys have assessed safety culture in a number of countries in recent years. For example, the Agency for Healthcare Research and Quality (AHRQ) started collecting data on safety culture in Health Care in the early 2000s; these data have been published cumulatively, most recently in the 2012 report (Sorra et al., 2012). Other surveys were conducted in the United Kingdom (Mannion et al., 2009) and in Japan (Itoh and Andersen, 2008).

More specifically, detailed safety culture data for the ICU are available in the Sorra et al. (2012) report. Among the findings is that support among ICU staff is rated as high, with 89% of respondents expressing a positive response. Similarly, ICU members express a high level of team support (90%). With regard to supervisor actions promoting Patient Safety, 71% of the ICU staff respondents agree that the supervisor provides positive feedback when Patient Safety procedures are followed, and 75% agree that the manager does not ignore frequently occurring Patient Safety problems. Interestingly, attitudes toward organizational learning are less positive, with 62% of the respondents expressing agreement that mistakes have led to positive changes, and perceived management support for Patient Safety is lower still, with 47% of the respondents expressing the attitude that management is interested in Patient Safety only after an adverse event happens.

Pronovost et al. (2003) report data on the use of the Safety Climate Survey. The results of this survey indicate that staff perceive supervisors as more committed to improvements in Patient Safety than senior leadership, and that nurses are more positive toward Patient Safety than physicians.

Improvements in staff perceptions of the safety climate are related to decreases in error, patient length of stay, and employee turnover, suggesting that staff perceptions are sensitive to these changes. Overall, one of the conclusions of this work was that there is a need to improve the strategic planning of Patient Safety.

Itoh and Andersen (2008) performed a multi-national comparison of different hospital work units regarding safety culture. Particularly interesting are the data collected from nurses comparing the different work unit groups. With regard to ICU nurses, the results indicate that morale, motivation, safety awareness, and recognition of the importance of communication for Patient Safety were lower than in any of the other units assessed. Interestingly, ICU nurses expressed that they perceived a lower power differential and a more realistic recognition of how stress and Human Error affect clinical performance.

Overall, there are several important results of the work on safety culture in Health Care. First, there is a relatively strong relationship between safety culture and actual safety performance. Second, the assessment of safety culture allows for proactive management of Patient Safety, rather than the more traditional reactive approach. As outlined by Reason (1997), both approaches are important to assure that improvements in safety are sustainable. Finally, all members of an organization contribute actively to the safety culture. Thus, any intervention with the goal of improving safety culture will need to include all members of an organization.

5. Summary

This white paper discusses the contribution Human Factors can make to improving Patient Safety in Health Care. These contributions can occur at different levels; they can involve the physical environment, human cognition, and organizational factors. Above, some examples were provided of how Human Factors can help change how health care is delivered to patients and how it can significantly improve Patient Safety. By involving Human Factors Engineers in the analysis of breakdowns in human-system interaction and of breakdowns in Patient Safety, expertise and knowledge can be leveraged that has not yet contributed to the much-needed transformation of our health care delivery system.

References

Abraham, J., Kannampallil, T. G., & Patel, V. L. (2012). Bridging gaps in handoffs: a continuity of care based approach. J Biomed Inform, 45(2), 240-254.

ACSNI. (1993). ACSNI Human Factors Study Group. Third Report: Organising for Safety: Health and Safety Commission.

Alvarado, M. (2012). Advocating for safety in the workplace. J Emerg Nurs, 38(1), 5.

Berglund, B., Lindvall, T., & Schwela, D. (1999). Guidelines for Community Noise. Geneva: World Health Organization.

Brown, K. K., & Gallant, D. (2006). Impacting patient outcomes through design: acuity adaptable care/universal room design. Crit Care Nurs Q, 29(4), 326-341.

Busch-Vishniac, I. J., West, J. E., Barnhill, C., Hunter, T., Orellana, D., & Chivukula, R. (2005). Noise levels in Johns Hopkins Hospital. J Acoust Soc Am, 118(6), 3629-3645.

CISCA. (2006). Acoustics in Healthcare Environments.

Collins, S. A., Mamykina, L., Jordan, D., Stein, D. M., Shine, A., Reyfman, P., et al. (2012). In search of common ground in handoff documentation in an Intensive Care Unit. J Biomed Inform, 45(2), 307-315.

Consultants, A. (2005). JCAHO to look closely at patient handoffs: communication lapses will be key focus. Healthcare Benchmarks Qual Improv, 12(12), 143-144.

Cropp, A. J., Woods, L. A., Raney, D., & Bredle, D. L. (1994). Name that tone. The proliferation of alarms in the intensive care unit. Chest, 105(4), 1217-1220.

Croskerry, P., & Chisholm, C. (2005). What does human factors ergonomics need to know about front-line medicine? Retrieved from http://www.crnns.ca/documents/Pat%20Croskerry%20Article%20May2005.pdf.

Croskerry, P., & Wears, R. L. (2003). Safety Errors in Emergency Medicine. In V.J.

Croteau, R. (2005). JCAHO comments on handoff requirement. OR Manager, 21(8), 8.

Dodek, P. M., & Raboud, J. (2003). Explicit approach to rounds in an ICU improves communication and satisfaction of providers. Intensive Care Med, 29(9), 1584-1588.

Donchin, Y., Gopher, D., Olin, M., Badihi, Y., Biesky, M., Sprung, C. L., et al. (1995). A look into the nature and causes of human errors in the intensive care unit. Crit Care Med, 23(2), 294-300.

Drews, F. A., Picciano, P., Agutter, J., Syroid, N., Westenskow, D. R., & Strayer, D. L. (2007). Development and evaluation of a just-in-time support system. Human Factors, 49, 543-551.

Durso, F., & Drews, F. A. (2010). Healthcare, Aviation, and Ecosystems - A socio-natural systems perspective. Computers, Informatics, Nursing, 29(12), 706-713.

Ebright, P. R., Urden, L., Patterson, E., & Chalko, B. (2004). Themes surrounding novice nurse near-miss and adverse-event situations. J Nurs Adm, 34(11), 531-538.

Emanuel, L., Berwick, D., Conway, J., Combes, J., Hatlie, M., Leape, L., et al. (2008). What Exactly Is Patient Safety? Assessment.

Fiske, S. T., & Taylor, S. E. (1991). Social Cognition (2nd ed.). New York: McGraw-Hill.

Fontaine, D. K., Briggs, L. P., & Pope-Smith, B. (2001). Designing humanistic critical care environments. Crit Care Nurs Q, 24(3), 21-34.

Gandhi, T. K. (2005). Fumbled handoffs: one dropped ball after another. Ann Intern Med, 142(5), 352-358.

Gurses, A. P., & Carayon, P. (2007). Performance obstacles of intensive care nurses. Nurs Res, 56(3), 185-194.

Halm, M. A., Gagner, S., Goering, M., Sabo, J., Smith, M., & Zaccagnini, M. (2003). Interdisciplinary rounds: impact on patients, families, and staff. Clin Nurse Spec, 17(3), 133-142.

Horsburgh, C. R., Jr. (1995). Healing by design. N Engl J Med, 333(11), 735-740.

IAEA. (1991). Safety Culture, Safety Series. Vienna, Austria: IAEA, International Atomic Energy Agency

IEA. (2013). IEA Ergonomics human centered design from www.iea.cc.

Itoh, K., & Andersen, H. B. (2008). A National Survey on Healthcare Safety Culture in Japan: Analysis of 20,000 Staff Responses from 84 Hospitals. Paper presented at the Proceedings of the International Conference on Healthcare Systems Ergonomics and Patient Safety HEPS, Strasbourg.

Jason, L. A., & Liotta, R. (1982). Pedestrian jaywalking under facilitating and nonfacilitating conditions. J Appl Behav Anal, 15(3), 469-473.

Johnson, M., Jefferies, D., & Nicholls, D. (2012). Developing a minimum data set for electronic nursing handover. J Clin Nurs, 21(3-4), 331-343.

Kestin, I. G., Miller, B. R., & Lockhart, C. H. (1988). Auditory alarms during anesthesia monitoring. Anesthesiology, 69(1), 106-109.

Kohn, L. T., Corrigan, J. M., & Donaldson, M. S. (1999). To err is human: building a safer health system. Washington, D.C.: National Academy Press.

Koppel, R., Wetterneck, T., Telles, J. L., & Karsh, B. T. (2008). Workarounds to barcode medication administration systems: their occurrences, causes, and threats to patient safety. J Am Med Inform Assoc, 15(4), 408-423.

Kripalani, S., LeFevre, F., Phillips, C. O., Williams, M. V., Basaviah, P., & Baker, D. W. (2007). Deficits in communication and information transfer between hospital-based and primary care physicians: implications for patient safety and continuity of care. JAMA, 297(8), 831-841.

Lamond, D. (2000). The information content of the nurse change of shift report: a comparative study. J Adv Nurs, 31(4), 794-804.

Lapinsky, S. E., Holt, D., Hallett, D., Abdolell, M., & Adhikari, N. K. (2008). Survey of information technology in Intensive Care Units in Ontario, Canada. BMC Med Inform Decis Mak, 8(5).

Lawless, S. T. (1994). Crying wolf: false alarms in a pediatric intensive care unit. Crit Care Med, 22(6), 981-985.

Lawton, R. (1998). Not working to rule: understanding procedural violations at work. Safety Science 28, 77-95.

Lee, Y. S., Kim, Y., Kim, S. H., Kim, C., Chung, C. H., & Jung, W. D. (2004). Analysis of human error and organizational deficiency in events considering risk significance. Nuclear Engineering and Design, 230(11), 61-67.

Leveson, N., & Turner, C. S. (1993). An investigation of the Therac-25 accidents. IEEE Computer, 26(7), 18-41.

Lin, L., Isla, R., Doniz, K., Harkness, H., Vicente, K. J., & Doyle, D. J. (1998). Applying human factors to the design of medical equipment: patient-controlled analgesia. J Clin Monit Comput, 14(4), 253-263.

Mannion, R., Konteh, F. H., & Davies, H. T. (2009). Assessing organisational culture for quality and safety improvement: a national survey of tools and tool use. Qual Saf Health Care, 18(2), 153-156.

Mayor, E., Bangerter, A., & Aribot, M. (2012). Task uncertainty and communication during nursing shift handovers. J Adv Nurs, 68(9), 1956-1966.

Morrison, W. E., Haas, E. C., Shaffner, D. H., Garrett, E. S., & Fackler, J. C. (2003). Noise, stress, and annoyance in a pediatric intensive care unit. Crit Care Med, 31(1), 113-119.

Nast, P. A., Avidan, M., Harris, C. B., Krauss, M. J., Jacobsohn, E., Petlin, A., et al. (2005). Reporting and classification of patient safety events in a cardiothoracic intensive care unit and cardiothoracic postoperative care unit. J Thorac Cardiovasc Surg, 130(4), 1137.

Naylor, C. D., Chantler, C., & Griffiths, S. (2004). Learning from SARS in Hong Kong and Toronto. JAMA, 291(20), 2483-2487.

O'Mahony, S., Mazur, E., Charney, P., Wang, Y., & Fine, J. (2007). Use of multidisciplinary rounds to simultaneously improve quality outcomes, enhance resident education, and shorten length of stay. J Gen Intern Med, 22(8), 1073-1079.

Patterson, E. S., Rogers, M. L., Chapman, R. J., & Render, M. L. (2006). Compliance with intended use of Bar Code Medication Administration in acute and long-term care: an observational study. Hum Factors, 48(1), 15-22.

Patterson, E. S., & Wears, R. L. (2010). Patient handoffs: standardized and reliable measurement tools remain elusive. Jt Comm J Qual Patient Saf, 36(2), 52-61.

Pezzolesi, C., Schifano, F., Pickles, J., Randell, W., Hussain, Z., Muir, H., et al. (2010). Clinical handover incident reporting in one UK general hospital. Int J Qual Health Care, 22(5), 396-401.

Pronovost, P. J., Weast, B., Holzmueller, C. G., Rosenstein, B. J., Kidwell, R. P., Haller, K. B., et al. (2003). Evaluation of the culture of safety: survey of clinicians and managers in an academic medical center. Qual Saf Health Care, 12(6), 405-410.

Rashid, M. (2006). A decade of adult intensive care unit design: a study of the physical design features of the best-practice examples. Crit Care Nurs Q, 29(4), 282-311.

Rasmussen, B., & Petersen, K. E. (1999). Plant functional modelling as a basis for assessing the impact of management on plant safety. Reliability Engineering and System Safety, 64(2), 201-207.

Rasmussen, J., & Jensen, A. (1974). Mental procedures in real-life tasks: a case study of electronic trouble shooting. Ergonomics, 17(3), 293-307.

Reason, J. (1990). Human Error. New York: Cambridge University Press.

Reason, J. (1993). Managing the Management Risk: New approaches to organisational safety. In B. Wilpert & T. Qvale (Eds.), Reliability and Safety in Hazardous Work Systems: Approaches to Analysis and Design Manchester, UK.

Reason, J. (1997). Managing the risks of organizational accidents. Brookfield: Ashgate.

Reason, J., Parker, D., & Free, R. (1994). Bending the rules: The varieties, origins, and management of safety violations. Leiden: University of Leiden.

Rosenberg, D. I., & Moss, M. M. (2004). Guidelines and levels of care for pediatric intensive care units. Crit Care Med, 32(10), 2117-2127.

Rothschild, J. M., Landrigan, C. P., Cronin, J. W., Kaushal, R., Lockley, S. W., Burdick, E., et al. (2005). The Critical Care Safety Study: The incidence and nature of adverse events and serious medical errors in intensive care. Crit Care Med, 33(8), 1694-1700.

Runciman, B., Merry, A., & Walton, M. (2007). Safety and Ethics in Healthcare: A Guide to Getting it Right. Aldershot: Ashgate Publishing.

Runy, L. A. (2004). Best practices and safety issues in the ICU. Hosp Health Netw, 78(4), 45-51.

Samore, M. H., Evans, R. S., Lassen, A., Gould, P., Lloyd, J., Gardner, R. M., et al. (2004). Surveillance of medical device-related hazards and adverse events in hospitalized patients. JAMA, 291(3), 325- 334.

Sanders, M. S., & McCormick, E. J. (1993). Human Factors in Engineering and Design (7th ed.). McGraw-Hill Science/Engineering/Math.

Schmaker, & Pequegnat. (1989). Hospital design, health providers and the delivery of effective health care. In E. H. Zube & G. T. Moore (Eds.), Advances in Environment, Behavior, and Design. New York: Plenum Press.

Schmalenberg, C., & Kramer, M. (2007). Types of intensive care units with the healthiest, most productive work environments. Am J Crit Care, 16(5), 458-468; quiz 469.

Sorra, J., Famolaro, T., Dyer, N., Mardon, R., & Famolaro, T. (2012). Hospital Survey on Patient Safety Culture 2012 user comparative database report (Prepared by Westat, Rockville, MD, under contract No. HHSA 290200710024C). Rockville, MD: Agency for Healthcare Research and Quality.

Staggers, N., & Blaz, J. W. (2012). Research on nursing handoffs for medical and surgical settings: an integrative review. J Adv Nurs.

Staggers, N., Clark, L., Blaz, J. W., & Kapsandoy, S. (2012). Nurses' information management and use of electronic tools during acute care handoffs. West J Nurs Res, 34(2), 153-173.

Surrey Memorial Hospital, R. P. G. I. (2006). Redevelopment Project, Phase 1A. Surrey, BC.

Teltsch, D. Y., Hanley, J., Loo, V., Goldberg, P., Gursahaney, A., & Buckeridge, D. L. (2011). Infection acquisition following intensive care unit room privatization. Arch Intern Med, 171(1), 32-38.

Tsien, C. L., & Fackler, J. C. (1997). Poor prognosis for existing monitors in the intensive care unit. Crit Care Med, 25(4), 614-619.

Ulrich, R., Quan, X., & Zimring, C. (2004). The role of the physical environment in the hospital of the 21st century: a once-in-a-lifetime opportunity.

Ulrich, R. S. (1984). View through a window may influence recovery from surgery. Science, 224(4647), 420-421.

Ulrich, R. S. (1991). Effects of interior design on wellness: theory and recent scientific research. J Health Care Inter Des, 3, 97-109.

Van Elslande, P., & Fouquet, K. (2007). Analyzing 'human functional failures' in road accidents. Information Society Technologies.

Wagenaar, W. A., Groeneweg, J., Hudson, P. T. W., & Reason, J. T. (1994). Promoting safety in the oil industry. Ergonomics, 37(12), 1999-2013.

Wiegmann, D. A., & Shappell, S. A. (2003). A human error approach to aviation accident analysis. Burlington, VT: Ashgate.

Williams, J. C. (1997). Assessing and reducing the likelihood of violation behaviour - a preliminary investigation. Paper presented at the International Conference on the Commercial & Operational Benefits of Probabilistic Safety Assessments, Institute of Nuclear Engineers, Edinburgh, Scotland.

Williams, M. A. (1988). The physical environment and patient care. Annu Rev Nurs Res, 6, 61-84.

Wilson, P. R. (1992). Clinical practice guideline: acute pain management. Clin J Pain, 8(3), 187-188.

